AI crawlers blocked by robots.txt — how to fix it

When robots.txt disallows trusted AI bots, your pages become unreadable for AI search workflows even if they are publicly indexed in Google. This usually happens because of broad `Disallow: /` rules or legacy bot-blocking snippets copied from old SEO templates. To fix it, define a clear allow policy per user agent and test the final file in production, not only in staging.
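For illustration, this is the kind of legacy pattern that causes the problem; it is a generic sketch, not rules taken from any specific site:

```
# A catch-all block like this silently covers every AI crawler:
User-agent: *
Disallow: /

# Copied "bad bot" snippets sometimes single out AI user agents by name too:
User-agent: GPTBot
Disallow: /
```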

Start by listing the bots you actually want to serve: GPTBot, ChatGPT-User, PerplexityBot, ClaudeBot, Google-Extended, and Bingbot. Then write explicit user-agent sections for those bots and avoid catch-all rules that accidentally override your intent. Keep your anti-scraper policy for unknown bots separate so you do not lose security coverage.
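A minimal sketch of what the resulting file could look like, assuming you want to allow all six trusted bots site-wide while keeping a restrictive default for everything else (the `/private/` path is a placeholder for whatever your existing anti-scraper rules protect):

```
# Trusted AI crawlers: explicitly allowed
User-agent: GPTBot
User-agent: ChatGPT-User
User-agent: PerplexityBot
User-agent: ClaudeBot
User-agent: Google-Extended
User-agent: Bingbot
Allow: /

# Default policy for all other bots: keep your existing restrictions here
User-agent: *
Disallow: /private/
```

Because a crawler obeys only the most specific user-agent group that matches it, the catch-all `*` rules no longer apply to the bots listed above.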

How to fix robots.txt for AI crawlers

  1. Open your live robots.txt file at /robots.txt and copy current rules.
  2. Remove or narrow broad disallow blocks that affect trusted AI user agents.
  3. Add explicit user-agent sections for GPTBot, PerplexityBot, ClaudeBot, Google-Extended, and Bingbot.
  4. Deploy and verify the exact file served on the production domain (see the verification sketch after this list).
  5. Re-check server logs to confirm that trusted bots crawl your pages again and receive successful responses.
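As a quick spot check for steps 1 and 4, a short script along these lines can fetch the file served in production and report which trusted user agents may crawl a representative page. This is a sketch using Python's standard `urllib.robotparser`; the domain and test URL are placeholders:

```python
from urllib.robotparser import RobotFileParser

# Placeholders: replace with your production domain and a representative public page.
SITE = "https://example.com"
TEST_URL = f"{SITE}/some-public-page/"

TRUSTED_BOTS = [
    "GPTBot",
    "ChatGPT-User",
    "PerplexityBot",
    "ClaudeBot",
    "Google-Extended",
    "Bingbot",
]

# Fetch and parse the robots.txt actually served in production.
parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()

# Report whether each trusted AI user agent may fetch the test URL.
for bot in TRUSTED_BOTS:
    allowed = parser.can_fetch(bot, TEST_URL)
    print(f"{bot}: {'allowed' if allowed else 'DISALLOWED'}")
```

Note that this only evaluates the published rules; whether the bots actually return and fetch pages is what the server-log check in step 5 confirms.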

You'll get an HTML report showing which AI user agents are disallowed in your live robots.txt.

Run Express Check
