We tested 270+ GEO signals. The data speaks for itself.
Most GEO advice is based on assumptions. We built a research pipeline that tests content signals against real AI engine citations from ChatGPT, Gemini, and Google AI Overviews. Then we published what works, what doesn’t, and why.
What makes this research different?
GEO is a new field. Most recommendations come from reasonable-sounding intuition, small case studies, or SEO patterns that may not apply to AI engines. We wanted something more rigorous.
We built a pipeline that collects live AI engine responses, measures hundreds of page-level features, and runs statistical tests on each one. A signal only becomes a recommendation if the data supports it. If a popular tactic fails our tests, we say so.
We also re-run this analysis regularly. AI engines change their models, and the signals that drive citations shift with them. Our recommendations reflect how AI engines behave today, not six months ago.
270+ signals tested across three AI engines
3 AI engines tested independently (ChatGPT, Gemini, Google AI Overviews)
7 GEO signal categories (format, quality, credibility, extractability, freshness, technical, vertical)
average citation lift from top GEO signals
How do we know which signals actually work?
Collect
We query real AI engines with the same questions users ask and record every source cited, every link given, and every brand mentioned.
Measure
For every page, we measure 270+ features: content structure, schema, freshness signals, authority markers, technical health, and more.
Test
For each signal, we run a logistic regression, compute odds ratios, and apply Benjamini-Hochberg correction to control the false discovery rate.
Validate
A signal only becomes a recommendation if it has a corrected q-value below 0.10. Popular advice that fails this threshold gets cut.
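The Validate step above can be sketched in code. This is a minimal illustration, not our actual pipeline: the signal names and p-values are placeholders, and the only real technique shown is the Benjamini-Hochberg correction with the q < 0.10 cutoff.

```python
# Sketch of the Validate step: take per-signal p-values from the
# logistic regressions, BH-adjust them, and keep signals with q < 0.10.
# Signal names and p-values below are hypothetical examples.

def benjamini_hochberg(pvals: dict[str, float]) -> dict[str, float]:
    """Return a BH-adjusted q-value for every signal."""
    m = len(pvals)
    ranked = sorted(pvals.items(), key=lambda kv: kv[1])  # ascending p
    # Raw BH values: p * m / rank
    raw = [p * m / (i + 1) for i, (_, p) in enumerate(ranked)]
    # Enforce monotonicity from the largest rank downward
    qvals = raw[:]
    for i in range(m - 2, -1, -1):
        qvals[i] = min(qvals[i], qvals[i + 1])
    return {name: min(q, 1.0) for (name, _), q in zip(ranked, qvals)}

def validated_signals(pvals: dict[str, float], q_threshold: float = 0.10) -> list[str]:
    """Signals that survive the corrected q-value threshold."""
    qs = benjamini_hochberg(pvals)
    return sorted(name for name, q in qs.items() if q < q_threshold)

# Hypothetical per-signal p-values
pv = {
    "question_headings": 0.001,
    "answer_first": 0.02,
    "keyword_density": 0.20,
    "emoji_count": 0.90,
}
print(validated_signals(pv))  # only the first two survive q < 0.10
```

With these example inputs, "keyword_density" and "emoji_count" fail the corrected threshold and would be cut, which is exactly how a popular-but-unsupported tactic drops out of the recommendations.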
What signals make AI engines cite your content?
Our research identified seven categories of signals statistically associated with AI citation. Here are the three pillars where the strongest signals cluster.
Content Structure and Extractability
When someone asks ChatGPT a question, the AI breaks it into multiple sub-queries and searches for the best answer to each one. Pages that cover the specific sub-queries an AI generates are dramatically more likely to be cited. Question-format headings, answer-first paragraphs, numbered lists, and FAQ sections all help AI match your content to what it’s looking for.
Measured Impact
2-5x
higher citation odds for top content signals
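One concrete way to expose a question-format FAQ section to crawlers is FAQPage structured data. The snippet below is an illustration only, not part of the research: the question and answer text are placeholders, and the answer-first pattern from the section above is applied in the answer body.

```python
import json

# Hypothetical example: emitting FAQPage JSON-LD for a question-format
# section. Question and answer text are placeholders.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is Generative Engine Optimization (GEO)?",
            "acceptedAnswer": {
                "@type": "Answer",
                # Answer-first: lead with the direct answer, then elaborate.
                "text": "GEO is the practice of structuring content so AI "
                        "engines can extract and cite it.",
            },
        }
    ],
}

print(json.dumps(faq, indent=2))
```

The same question-first structure works in visible HTML headings; the JSON-LD simply makes the question/answer pairing machine-readable.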
What we publish, and what we don’t.
We publish what works and what doesn’t. If a popular GEO tactic fails our statistical tests, we say so openly.
We acknowledge the limitations of our research. These are observational associations, not proof of causation. The data represents a point-in-time snapshot. AI engines evolve, and we re-test accordingly.
Each engine is analyzed independently. A signal that works on ChatGPT might have no effect on Google AI Overviews. We surface which signals matter where, so your optimization is targeted.
We believe the data speaks for itself. If a signal doesn’t pass statistical validation, it doesn’t become a recommendation.
Questions about our research.
30 minutes. Ranked actions. Zero obligation.
Free audit. Real data. Yours to keep.
What you get
30-minute review
A live walkthrough of the pages AI is reading and the gaps costing you citations.
Ranked next steps
Top fixes prioritized by measured impact, not generic SEO advice.
Written handoff
A clean summary your team can act on even if you never hire us.
Last updated: March 2026