CITED RESEARCH

We tested 270+ GEO signals. The data speaks for itself.

Most GEO advice is based on assumptions. We built a research pipeline that tests content signals against real AI engine citations from ChatGPT, Gemini, and Google AI Overviews. Then we published what works, what doesn’t, and why.

OUR APPROACH

What makes this research different?

GEO is a new field. Most recommendations come from reasonable-sounding intuition, small case studies, or SEO patterns that may not apply to AI engines. We wanted something more rigorous.

We built a pipeline that collects live AI engine responses, measures hundreds of page-level features, and runs statistical tests on each one. A signal only becomes a recommendation if the data supports it. If a popular tactic fails our tests, we say so.

We also re-run this analysis regularly. AI engines change their models, and the signals that drive citations shift with them. Our recommendations reflect how AI engines behave today, not six months ago.

RESEARCH AT A GLANCE
270+

signals tested across three AI engines

Cited Research, March 2026
3

AI engines tested independently (ChatGPT, Gemini, Google AI Overviews)

7

GEO signal categories (format, quality, credibility, extractability, freshness, technical, vertical)

0x

average citation lift from top GEO signals

OUR METHODOLOGY

How do we know which signals actually work?

01

Collect

We query real AI engines with the same questions users ask and record every source cited, every link given, and every brand mentioned.

02

Measure

For every page, we measure 270+ features: content structure, schema, freshness signals, authority markers, technical health, and more.

03

Test

For each signal, we run logistic regression, compute odds ratios, and apply Benjamini-Hochberg correction to control the false discovery rate.

04

Validate

A signal only becomes a recommendation if it has a corrected q-value below 0.10. Popular advice that fails this threshold gets cut.
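To make the validation step concrete, here is a minimal Python sketch of the Benjamini-Hochberg adjustment and the q &lt; 0.10 cutoff. The signal names and p-values are hypothetical placeholders, and the real pipeline fits full logistic regressions before this step; this only illustrates how raw p-values become q-values and which signals survive.

```python
# Hypothetical sketch of the validation step. Signal names and
# p-values below are illustrative, not real research results.

Q_THRESHOLD = 0.10  # a signal must have q < 0.10 to become a recommendation

def bh_qvalues(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, ascending p
    q = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotone q-values.
    for offset, idx in enumerate(reversed(order)):
        rank = m - offset  # 1-based rank of this p-value
        running_min = min(running_min, pvals[idx] * m / rank)
        q[idx] = running_min
    return q

signals = ["question_headings", "faq_section", "author_bio", "keyword_density"]
pvals   = [0.0004,              0.012,         0.04,         0.31]

qvals = bh_qvalues(pvals)
validated = [s for s, q in zip(signals, qvals) if q < Q_THRESHOLD]
print(validated)  # → ['question_headings', 'faq_section', 'author_bio']
```

In this toy run, three of the four hypothetical signals pass the threshold; "keyword_density" fails and would be cut from the recommendations.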

KEY FINDINGS

What signals make AI engines cite your content?

Our research identified seven categories of signals statistically associated with AI citation. Here are the three pillars where the strongest signals cluster.

Deep Dive

Content Structure and Extractability

When someone asks ChatGPT a question, the AI breaks it into multiple sub-queries and searches for the best answer to each one. Pages that cover the specific sub-queries an AI generates are dramatically more likely to be cited. Question-format headings, answer-first paragraphs, numbered lists, and FAQ sections all help AI match your content to what it’s looking for.

Measured Impact

2-5x

higher citation odds for top content signals
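For teams acting on the extractability signals above, FAQ content is commonly marked up with schema.org's FAQPage JSON-LD so parsers can pull out question-and-answer pairs directly. A minimal sketch follows; the question and answer text are placeholders, not recommended copy.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is Generative Engine Optimization (GEO)?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "GEO is the practice of structuring content so AI engines such as ChatGPT, Gemini, and Google AI Overviews can find, extract, and cite it."
      }
    }
  ]
}
</script>
```

Pairing this markup with question-format headings and answer-first paragraphs gives AI parsers the same structure in both the visible page and the machine-readable layer.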

OUR COMMITMENT

What we publish, and what we don’t.

We publish what works and what doesn’t. If a popular GEO tactic fails our statistical tests, we say so openly.

We acknowledge the limitations of our research. These are observational associations, not guaranteed causes. The data represents a point-in-time snapshot. AI engines evolve, and we re-test accordingly.

Each engine is analyzed independently. A signal that works on ChatGPT might have no effect on Google AI Overviews. We surface which signals matter where, so your optimization is targeted.

We believe the data speaks for itself. If a signal doesn’t pass statistical validation, it doesn’t become a recommendation.

Frequently Asked

Questions about our research.

How often is the research updated?
We re-run our analysis regularly as AI engines update their models. The signals that drive citations shift over time, and our recommendations reflect current AI engine behavior. Each update tests the full signal set against fresh data.

Is the research peer-reviewed?
Our methodology uses established statistical methods (logistic regression, Benjamini-Hochberg correction) applied to proprietary data. While not submitted to academic journals, our approach applies the same rigor: transparent methodology, reproducible analysis, and clear reporting of limitations.

Why is each AI engine analyzed separately?
Each AI engine uses different criteria to decide what to cite. A signal that drives citations on ChatGPT might have no effect on Gemini. By analyzing each engine independently, we can give targeted recommendations instead of generic advice that averages across engines.

Do these signals guarantee citations?
No. Our research identifies statistical associations between page signals and AI citations. Pages with these signals are significantly more likely to be cited, but AI engines make their own decisions and change their models over time. We are transparent about this.

How large is the dataset?
Our dataset spans thousands of pages across hundreds of domains in multiple industries. The exact numbers grow with each research cycle. Rather than anchoring to a specific count that changes, we focus on having enough statistical power to detect meaningful effects.

How do GEO signals differ from SEO signals?
SEO signals help you rank in Google’s search results: keywords, backlinks, page speed, domain authority. GEO signals help AI engines decide whether to cite your content: content extractability, sub-query coverage, freshness markers, schema markup for AI parsers. Many signals overlap, but the ones unique to GEO are what most companies are missing.
Free Audit

30 minutes. Ranked actions. Zero obligation.

See how AI engines see your content.
Our audit tests your content against every validated signal and shows you exactly where the gaps are.

Free audit. Real data. Yours to keep.

What you get

A practical GEO action plan your team can use immediately.

30-minute review

A live walkthrough of the pages AI is reading and the gaps costing you citations.

Ranked next steps

Top fixes prioritized by measured impact, not generic SEO advice.

Written handoff

A clean summary your team can act on even if you never hire us.

Last updated: March 2026