AEO · Diagnostics · ChatGPT

Why ChatGPT doesn't recommend your brand (and how to fix it)

Six concrete reasons ChatGPT names your competitor instead of you, ranked by how often we see them in 1,000+ FixAEO scans. With a fix for each.

5/13/2026  •  by FixAEO Team  •  7 min read

You ask ChatGPT "best [your category]" and it confidently lists three competitors. Your brand isn't even mentioned. You've done the SEO work — you rank top 5 on Google. Why doesn't ChatGPT see you?

We've analysed 1,000+ scans on [INTERNAL LINK: / "FixAEO"] and the answer is almost always one of six specific problems. This post lists them in descending order of frequency, with the exact fix for each.

1. Your robots.txt blocks AI crawlers (≈22% of cases)

The #1 cause of AEO invisibility — and the most embarrassing one — is a site that explicitly tells GPTBot, ClaudeBot, Google-Extended, or PerplexityBot to go away.

Many sites added these blocks in 2023 during the brief "AI is stealing our content" panic. Most teams never removed them. The result: a site Google indexes happily but that ChatGPT literally cannot read.

The fix. Check your robots.txt:

curl -s https://yoursite.com/robots.txt

If you see a block like this, delete it [1]:

User-agent: GPTBot
Disallow: /

A permissive User-agent: * group already covers AI crawlers; the explicit per-bot blocks are the trap.
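If you'd rather script this check than eyeball it, here's a minimal Python sketch. The parser and the bot list are our own illustration (the four user-agents named above), not a standard library:

```python
# Sketch: scan robots.txt text for rules that blanket-block known AI crawlers.
# The bot names are the four mentioned in this post; extend as needed.
AI_BOTS = {"gptbot", "claudebot", "google-extended", "perplexitybot"}

def find_ai_crawler_blocks(robots_txt: str) -> list[str]:
    """Return the AI user-agents that get a blanket `Disallow: /`."""
    blocked, agents, seen_directive = set(), [], False
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments
        if ":" not in line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            if seen_directive:                # a directive ended the last group
                agents, seen_directive = [], False
            agents.append(value.lower())
        else:
            seen_directive = True
            if field == "disallow" and value == "/":
                blocked.update(a for a in agents if a in AI_BOTS)
    return sorted(blocked)
```

Feed it the output of the curl command above; an empty list means no explicit AI-crawler blocks.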

2. No JSON-LD Organization schema on the homepage (≈19%)

When an AI assistant lands on your homepage for the first time, it has to figure out what your company actually is from scratch. If you have no structured data, it has to guess from your H1, meta description, and OG tags — which are often marketing copy, not factual descriptions.

Guessing leads to weak recommendations. "FixAEO is some kind of SEO product?" instead of "FixAEO is an Answer Engine Optimization checker that audits how brands appear across ChatGPT, Claude, Gemini, Perplexity, Grok, and DeepSeek."

The fix. Add this JSON-LD block to your <head> (substitute your details):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Your Brand",
  "url": "https://yoursite.com",
  "logo": "https://yoursite.com/logo.png",
  "description": "[one-sentence factual description]",
  "email": "hello@yoursite.com",
  "sameAs": [
    "https://linkedin.com/company/yourbrand",
    "https://x.com/yourbrand",
    "https://github.com/yourbrand"
  ]
}
</script>

Validate it with the Schema.org validator (validator.schema.org) before shipping.
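To catch regressions in CI, you can also sanity-check the block programmatically. A rough sketch; the regex extraction and the required-keys list are our house rules, not part of any spec:

```python
import json
import re

# Keys we treat as required. Schema.org itself mandates none of these,
# so this list is a house rule, not a spec requirement.
REQUIRED = {"@context", "@type", "name", "url", "description"}

def check_organization_jsonld(html: str) -> list[str]:
    """Return a list of problems with the page's Organization JSON-LD."""
    blocks = re.findall(
        r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
        html, re.DOTALL | re.IGNORECASE,
    )
    if not blocks:
        return ["no JSON-LD block found"]
    problems = []
    for block in blocks:
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            problems.append("JSON-LD block is not valid JSON")
            continue
        if isinstance(data, dict) and data.get("@type") == "Organization":
            missing = REQUIRED - data.keys()
            if missing:
                problems.append(f"Organization schema missing {sorted(missing)}")
            return problems
    problems.append("no Organization block among the JSON-LD scripts")
    return problems
```

An empty list means the homepage ships a parseable Organization block with the basics filled in.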

3. Your site has no llms.txt (≈18%)

Sites without llms.txt are not automatically excluded from AI answers, but having one moves you up the retrieval ranking when models do a comparative lookup [2].

The fix. Five-minute job. [INTERNAL LINK: /blogs/how-to-add-llms-txt "Step-by-step llms.txt tutorial"].
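If you want a feel for the format before reading the tutorial, here's a minimal skeleton in the shape the llmstxt.org spec describes (an H1 name, a blockquote summary, then H2 sections of annotated links); every detail below is a placeholder:

```markdown
# Your Brand

> One-sentence factual description of what Your Brand does and who it is for.

## Docs

- [Getting started](https://yoursite.com/docs/start): setup in five minutes
- [Pricing](https://yoursite.com/pricing): plans and limits
```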

4. Your top SEO pages are product-marketing pages, not answer pages (≈15%)

This is the subtle one. Your homepage probably has an H1 like "The simplest CRM for teams" — strong brand positioning, weak retrieval bait. AI assistants tend to surface pages whose content pre-answers the user's prompt.

Compare:

Bad (marketing H1) → Good (answer H1)

  • "The simplest CRM for teams" → "What is the best CRM for a 5-person team?"
  • "AI-powered code review" → "How do I get AI-assisted code review in my CI pipeline?"
  • "Beautiful expense tracking" → "What's the easiest expense tracker for an indie SaaS founder?"

When a user asks ChatGPT "best CRM for a 5-person team", the model's retrieval layer looks for pages whose content literally answers that question. If yours doesn't, your competitor wins.

The fix. Pick your three highest-intent SEO keywords. For each, publish a comparison post or how-to whose H1 is the question itself. You don't have to demote your existing homepage; just give the retrieval layer something better to find.

5. No authoritative third-party citations (≈14%)

AI models cross-reference. When you claim "the best CRM for indie SaaS" on your own site, that's a self-citation — weak signal. When Indie Hackers, Hacker News, and a Wikipedia paragraph also mention you in that context, the model's confidence increases sharply.

The most powerful third-party citations, in roughly this order [3]:

  1. Wikipedia (if you're eligible for a page)
  2. High-authority industry publications (NYT, WSJ, TechCrunch tier for B2C; trade pubs for B2B)
  3. Curated lists (Awesome lists on GitHub, 10 best X roundups by recognised reviewers)
  4. Forum threads with high upvote/award counts (Reddit, Hacker News, Stack Overflow)
  5. Niche subreddits with active moderation (very high signal for B2B SaaS)

The fix. Pick one. Don't try to do all five at once. Reddit AMAs and well-written tutorial blog posts that other sites cite are the fastest path for most companies.

6. Your homepage hasn't been crawled recently (≈12%)

This is rare but devastating when it happens. Modern AI assistants use retrieval-augmented generation (RAG): they fetch fresh pages at query time [4]. If your site returns 5xx errors, redirects in a loop, or blocks the crawler in robots.txt, the model falls back to its training data, which is likely 6-18 months stale.

The fix. Check:

curl -sI https://yoursite.com
# Expect: HTTP/2 200

If you see a redirect, make sure it's a single 301 to the canonical URL — not a chain. If your site is behind a heavy bot-protection layer (e.g., Cloudflare's strictest WAF), you may be soft-blocking AI crawlers without realising it.

The remaining cases: the long tail

A scatter of less common but real causes:

  • Negative sentiment in training data — if Reddit threads from 2024 trashed your brand, the model has absorbed that. Hard to fix; counteract by publishing better, more recent third-party coverage.
  • Brand name collision — "Stripe" is unambiguous; "Vibes" matches a hundred products. Disambiguate with category context in your llms.txt and Organization schema.
  • Site is single-page React/Vue with no SSR — older AI crawlers don't always execute JavaScript. Most do now, but verify by curling your URL and checking that your H1 and key content are in the raw HTML [5].
  • You renamed your product — models lag on rebrands by 12+ months. Update llms.txt, schema, and try to earn a citation that includes both names ("Foo (formerly Bar)").
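The SSR point in the list above is easy to verify mechanically: fetch the raw HTML (no JavaScript execution) and check whether a non-empty <h1> is already there. A quick sketch, using a deliberately naive regex:

```python
import re

def has_server_rendered_h1(raw_html: str) -> bool:
    """True if the raw, un-executed HTML already contains a non-empty <h1>."""
    match = re.search(r"<h1[^>]*>(.*?)</h1>", raw_html, re.DOTALL | re.IGNORECASE)
    if not match:
        return False
    # Strip nested tags; require some visible text to remain.
    return bool(re.sub(r"<[^>]+>", "", match.group(1)).strip())
```

Run it over the output of curl -s https://yoursite.com; a bare <div id="root"></div> app shell returns False.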

How to find which one is hurting you

Run a free FixAEO scan. It checks each of these heuristics, then asks Gemini live whether it actually recognises your brand. You get a 0-100 score plus a ranked list of fixes in 30 seconds.

If you'd rather diagnose manually:

  1. curl -s https://yoursite.com/robots.txt — look for AI crawler blocks
  2. View source on your homepage — search for application/ld+json. Zero blocks = problem.
  3. curl -sI https://yoursite.com/llms.txt — 404 = problem.
  4. Look at your top 3 page titles. Are they marketing or are they questions? Marketing = problem.
  5. Search "yourbrand site:reddit.com OR site:news.ycombinator.com" on Google. Zero results = problem.

Each fix takes 5-60 minutes. None require a developer for more than a single PR.

FAQ

How long until ChatGPT updates after I fix something?

For retrieval-based fixes (llms.txt, schema, robots.txt) — within days. For training-data fixes (Wikipedia mention, new citations) — 6 to 12 months for the next training cycle, but immediately for retrieval-augmented queries.

Does this work for Claude and Gemini too?

Yes. The six causes apply across all major AI assistants. The relative weights differ slightly (Gemini leans harder on schema; Perplexity leans harder on real-time retrieval) but the diagnostic list is the same [6].

Is there a way to "prompt my way" into AI answers?

Not really. You can occasionally bait models with very specific prompts ("according to FixAEO…"), but recommendations from generic "best X" queries are driven by the six factors above, not prompt engineering.

How often should I re-audit?

Monthly. AI engines change their retrieval models constantly. A site that scored 90 in January can drift to 70 by June if a competitor publishes 10 new citations.

What's the single highest-leverage fix?

For most sites: adding Organization JSON-LD (cause #2). It's 10 minutes of work and almost universally missing. Wikipedia mentions are higher-impact but much harder to engineer.

In one paragraph

ChatGPT doesn't recommend your brand because of one of six fixable problems: blocked crawlers, missing schema, no llms.txt, marketing-shaped (vs answer-shaped) content, no third-party citations, or a flaky homepage. Each has a concrete fix; most take under an hour. Run a free FixAEO scan and we'll tell you exactly which ones are biting you.

Footnotes

  1. Google Search Central: Introduction to robots.txt.

  2. llmstxt.org: The /llms.txt file.

  3. Stanford Web Credibility Project: How do users evaluate web credibility?

  4. Lewis et al.: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.

  5. Google Search Central: Understand JavaScript SEO basics.

  6. Anthropic: Claude's content sources.

Want to see how your brand scores?

FixAEO runs all the checks in this post automatically — free, no signup.

Run a free scan