AI crawlers blocked by robots.txt — how to fix it
When robots.txt disallows trusted AI bots, your pages become unreadable to AI search workflows even if they are publicly indexed by Google. This usually happens because of broad `Disallow: /` rules or legacy bot-blocking snippets copied from old SEO templates. To fix it, define a clear allow policy per user agent and test the final file in production, not only in staging.
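The failure mode usually looks like this: a single catch-all group that disallows everything, which trusted AI crawlers must obey like any other bot. A minimal illustration (your live file will differ):

```
# Legacy catch-all copied from an old SEO template:
# blocks every crawler, including trusted AI bots.
User-agent: *
Disallow: /
```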
Start by listing the bots you actually want to serve: GPTBot, ChatGPT-User, PerplexityBot, ClaudeBot, Google-Extended, and Bingbot. Then write explicit user-agent sections for those bots and avoid catch-all rules that accidentally override your intent. Keep your anti-scraper policy for unknown bots in a separate section so you do not lose security coverage.
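A minimal sketch of that structure, assuming you want to open the whole site to the six bots above; the disallowed paths in the catch-all are placeholders for your real anti-scraper rules:

```
# Explicit sections for trusted AI crawlers.
# A compliant crawler obeys its most specific matching group,
# so these sections override the catch-all below.
User-agent: GPTBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: Bingbot
Allow: /

# Separate catch-all keeps your anti-scraper policy for unknown bots.
# The paths here are placeholders; keep your real rules.
User-agent: *
Disallow: /private/
Disallow: /internal/
```

Because a compliant crawler picks the single group whose user-agent line matches it best, nothing in the catch-all applies to GPTBot once a `User-agent: GPTBot` group exists. This is why a named section is safer than trying to carve exceptions into the `*` group.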
How to fix robots.txt for AI crawlers
- Open your live robots.txt file at /robots.txt and copy the current rules.
- Remove or narrow broad disallow blocks that affect trusted AI user agents.
- Add explicit user-agent sections for GPTBot, PerplexityBot, ClaudeBot, Google-Extended, and Bingbot.
- Deploy and verify the exact file served on the production domain (see the verification sketch after this list).
- Re-check server logs to confirm trusted bots receive successful responses rather than 403 blocks.
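Steps 4 and 5 are the ones most often skipped, and the file check is easy to script. Here is a minimal verification sketch using Python's standard library; the domain is a placeholder, and the bot list mirrors the checklist above. Note that urllib's matcher is simplified relative to how major crawlers parse robots.txt, so treat this as a sanity check, not proof:

```python
from urllib.robotparser import RobotFileParser

# Placeholder domain; substitute your production host.
SITE = "https://example.com"

# Trusted AI user agents from the checklist above.
AI_BOTS = [
    "GPTBot",
    "ChatGPT-User",
    "PerplexityBot",
    "ClaudeBot",
    "Google-Extended",
    "Bingbot",
]

parser = RobotFileParser(f"{SITE}/robots.txt")
parser.read()  # Fetches and parses the file actually served in production.

for bot in AI_BOTS:
    verdict = "allowed" if parser.can_fetch(bot, f"{SITE}/") else "DISALLOWED"
    print(f"{bot}: {verdict}")
```

Any `DISALLOWED` line means a rule in the live file still overrides your intent. For step 5, search your access logs for the same user-agent strings and confirm they receive 200s rather than 403s.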
You'll get an HTML report showing which AI user agents are disallowed in your live robots.txt.
Run Express Check

Related questions
- Cloudflare blocking AI crawlers — how to fix it. Broader edge-level allowlist pattern for trusted AI bots.
- GPTBot blocked by Cloudflare — how to fix it. Most frequent root cause for ChatGPT crawl failures.
- ChatGPT cannot crawl my website — how to fix it. End-to-end checklist when ChatGPT still cannot crawl.
