Free tool · No signup · 60 seconds

GEO Audit

Generative Engine Optimization scorecard. 12 questions, 4 pillars, per-pillar fixes linked to the tool that solves each.


Content shape

Generative engines chunk pages into passages. Clear sectioning + answer-shape paragraphs let the right passage rise.

Do top pages start with an H1 question + a self-contained answer paragraph?

Generative engines extract the first 200 words. Buried answers get skipped in favour of Reddit threads.

Do you use clear H2 / H3 hierarchy with one idea per section?

Retrieval models chunk by heading. Mixed-topic sections muddy the embedding and the AI picks a worse chunk.
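As a rough illustration of why this matters, here is a minimal Python sketch of heading-based chunking (the function name and the toy page are illustrative, not any engine's actual pipeline):

```python
import re

def chunk_by_heading(markdown: str) -> list[dict]:
    """Split a Markdown page into one chunk per H2/H3 section.

    Each chunk keeps its heading, so the retriever sees a clean
    semantic boundary. A sketch, not a production chunker.
    """
    chunks = []
    current = {"heading": "(intro)", "text": []}
    for line in markdown.splitlines():
        if re.match(r"^#{2,3}\s", line):  # a new H2/H3 starts a new chunk
            if current["text"]:
                chunks.append({"heading": current["heading"],
                               "text": " ".join(current["text"])})
            current = {"heading": line.lstrip("# ").strip(), "text": []}
        elif line.strip():
            current["text"].append(line.strip())
    if current["text"]:
        chunks.append({"heading": current["heading"],
                       "text": " ".join(current["text"])})
    return chunks

page = """## What is GEO?
GEO structures content for AI answer engines.

## How is it scored?
Twelve questions across four pillars."""

chunks = chunk_by_heading(page)
```

A mixed-topic section would land in one chunk here, blending two ideas into one embedding; one idea per section keeps each chunk's embedding sharp.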

Do important pages have FAQ sections with FAQPage schema?

FAQ schema is independently quotable: each Q&A is a self-contained chunk. Highest extraction rate of any content shape.
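For reference, FAQPage markup might look like this (the question and answer text are placeholders; swap in your actual FAQ content):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is GEO?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Generative Engine Optimization: structuring content so AI answer engines can retrieve and cite it."
    }
  }]
}
</script>
```

Each Question/Answer pair is a self-contained unit an engine can quote without surrounding context.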

Do your images have descriptive alt text + captions?

Multi-modal retrieval is shipping. Without alt text, images are invisible to the generative pipeline.
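A sketch of an image with descriptive alt text plus a caption (the filename and wording are hypothetical):

```html
<figure>
  <img src="geo-pillars.png"
       alt="Diagram of the four GEO pillars: content shape, retrieval signals, trust, and surface readiness">
  <figcaption>The four pillars of the GEO audit.</figcaption>
</figure>
```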

Retrieval signals

What makes your page findable when the AI does vector search: schema, llms.txt, semantic boundaries, freshness.

Do your most-important pages have type-appropriate schema markup?

Schema is the most direct retrieval signal: complete markup can be the difference between being cited roughly twice as often and being cited at the baseline rate.
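As an illustration, type-appropriate markup for an article might look like this (all values are placeholders; the right @type depends on the page, e.g. Product, HowTo, or Article):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example headline",
  "datePublished": "2025-01-15",
  "author": {
    "@type": "Person",
    "name": "Jane Example",
    "sameAs": ["https://www.linkedin.com/in/jane-example"]
  }
}
</script>
```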

Do you have an llms.txt at your root listing top pages?

Generative engines (Perplexity especially) consume llms.txt as a curated AI sitemap.
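A minimal sketch following the llms.txt convention (site name and paths are hypothetical):

```text
# Example Corp

> One-line description of what the site covers.

## Top pages

- [GEO Audit](https://example.com/geo-audit): Free 12-question GEO scorecard
- [Pricing](https://example.com/pricing): Plans and pricing
```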

Are your top pages updated within the last 90 days?

Generative engines weight fresh content heavily — especially Perplexity. Stale pages lose to refreshed competitors.

Trust (E-E-A-T)

Experience, Expertise, Authoritativeness, Trustworthiness — the signals AI uses to decide whether to cite YOUR page over Reddit.

Do your articles have author bylines with credentials + sameAs profiles?

A named author with verifiable expertise is one of the strongest E-E-A-T signals. Anonymous content is treated as commodity.

Do you have presence on 3+ authoritative third-party sites (Wikipedia, G2, Crunchbase, etc.)?

External citations confirm you exist. AI engines treat sites with no external footprint as 'might be fake' and downgrade.

Do you have 20+ third-party reviews across at least 2 platforms?

Review density is both an authority signal and a citation surface — review pages are cited as 'what users say'.

Generative surface readiness

Specific to surfaces like AI Overviews, Perplexity instant answers, ChatGPT browsing. Each has format quirks.

Are your top pages structured for Google's AI Overviews (concise answers, lists, comparisons)?

AI Overviews quote pages that mirror their preferred answer shape: tight paragraphs, bulleted lists, comparison tables.

Does robots.txt explicitly allow live-fetch AI bots (OAI-SearchBot, Claude-SearchBot, PerplexityBot)?

Some templates block AI bots by default. Blocked bots can't cite — even if everything else is perfect.
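A robots.txt fragment that explicitly allows the three bots named above (a sketch; verify current bot names against each vendor's crawler documentation):

```text
User-agent: OAI-SearchBot
Allow: /

User-agent: Claude-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /
```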

4 pillars, 12 checks

Specifically tuned for generative answer surfaces.

Content shape

Generative engines chunk pages. Clear H1-H2-H3 hierarchy, answer-shape paragraphs, FAQ schema, and alt text on images all lift extraction.

Retrieval signals

Schema + llms.txt + freshness make your page findable in the AI's vector search. Without these, you don't even enter the candidate pool.

Trust + surface fit

E-E-A-T signals (authors, citations, reviews) decide whether you're cited *over* Reddit. Surface fit ensures your content matches what each generative surface wants.

Frequently asked questions

What's the difference between GEO and AEO?
Mostly the same discipline with different emphasis. AEO (Answer Engine Optimization) covers being cited by any AI assistant. GEO (Generative Engine Optimization) emphasises the *generative* answer surfaces specifically — Google AI Overviews, ChatGPT instant answers, Perplexity quick replies. Same toolkit, slightly different priorities (content shape + retrieval signals matter most for GEO).
Why 12 questions instead of 15?
GEO is narrower than full AEO. We focus on the four pillars that move generative-answer citations specifically: content shape, retrieval signals, trust (E-E-A-T), and generative surface readiness. The Citation Readiness scorecard is the broader 15-question version.
What's 'retrieval signals'?
How findable your page is during the AI's vector retrieval step. Schema completeness, llms.txt, semantic boundaries (clear H2/H3 sections), and freshness all influence whether your passage is one of the top-k retrieved chunks.
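A toy sketch of that retrieval step, using a bag-of-words stand-in for the dense embeddings real engines use (chunk text and function names are illustrative):

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real engines use dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank chunks by similarity to the query; keep the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "GEO audit: 12 questions across four pillars.",
    "Our office dog is named Biscuit.",
    "Retrieval signals include schema markup and llms.txt freshness.",
]
results = top_k("what retrieval signals does the audit check", chunks, k=1)
```

Only the top-k chunks ever reach the generative model, which is why a page that never enters the candidate pool cannot be cited no matter how good its prose is.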
Generative surface readiness — what's that?
Pages structured to match what each generative surface prefers. Google AI Overviews like concise opening + bullets + tables. Perplexity likes tight factual paragraphs with citations. ChatGPT browsing prefers structured answers with clear sections. Audit your top pages against these patterns.

Want the broader AEO scorecard?

15 questions, 5 pillars — covers structured data, discoverability, entity citations, content shape, and brand authority.

AI Citation Readiness