ChatGPT has become a citation engine. When users ask buying questions, the model returns named sources. The question we keep getting from teams is direct: what makes a page citable?
Why ChatGPT cites some pages and not others
Our research analyzed 3,540 page-engine pairs and ran logistic regressions for every signal we could code. Three signals stood out: server-side rendering, attribute-rich JSON-LD, and answer-first section starts.
Pages that ship HTML on the first byte get cited at roughly eight times the rate of client-rendered pages. ChatGPT's crawler does not execute JavaScript on most fetches — if the body is empty, the page is invisible.
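One way to audit this is to check whether the raw, unrendered HTML response already carries visible body text. The sketch below is a rough heuristic, not our measurement methodology: the helper name and the 200-character threshold are assumptions for illustration.

```python
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self._skip = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def initial_html_has_content(raw_html, min_chars=200):
    # Passes only if the first-byte HTML already carries body text,
    # i.e. no JavaScript execution is needed to see the content.
    # min_chars=200 is an arbitrary illustrative threshold.
    parser = _TextExtractor()
    parser.feed(raw_html)
    return len(" ".join(parser.chunks)) >= min_chars
```

Feed it the HTML you get from `curl`, not the DOM you see in DevTools after hydration — those are different documents on a client-rendered site.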
The three highest-leverage GEO signals
We rank signals by their odds ratio on the citation outcome, keeping only signals significant at q ≤ 0.10. The shortlist below covers the levers a content team can pull this quarter.
- Server-side rendering: ship the page body in the initial HTML response, not after hydration.
- Canonical URL: every page emits exactly one canonical tag, with a consistent trailing-slash convention across all locales.
- Attribute-rich JSON-LD: include headline, datePublished, dateModified, author, publisher, inLanguage, and articleSection on every post.
- Answer-first sections: the first paragraph after each H2 should contain a quotable answer with the subject in the first six words.
- Year-tagged statistics: prefix numbers with the year they were measured, so the model knows the data is fresh.
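The JSON-LD lever is easy to enforce in a build step. The sketch below builds a schema.org Article block and fails loudly when one of the attributes from the list above is missing; the helper name and all field values are placeholders, not part of our research setup.

```python
import json

# The attribute set from the list above.
REQUIRED_KEYS = {
    "headline", "datePublished", "dateModified",
    "author", "publisher", "inLanguage", "articleSection",
}

def build_article_jsonld(meta):
    """Serializes Article JSON-LD; raises if a required attribute is missing."""
    missing = REQUIRED_KEYS - meta.keys()
    if missing:
        raise ValueError(f"missing JSON-LD attributes: {sorted(missing)}")
    return json.dumps(
        {"@context": "https://schema.org", "@type": "Article", **meta},
        indent=2,
    )

# Placeholder post metadata for illustration.
post = {
    "headline": "What is GEO?",
    "datePublished": "2024-01-15",
    "dateModified": "2024-06-01",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "publisher": {"@type": "Organization", "name": "Example Inc."},
    "inLanguage": "en",
    "articleSection": "GEO",
}
print(build_article_jsonld(post))
```

Wiring this into CI means a post can never ship with a thin JSON-LD block.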
Pick the highest-impact lever first
If you only have time for one fix this quarter, audit your render pipeline. Anything client-only — even an above-the-fold testimonial widget — risks losing the citation.
How to write a section that ChatGPT will quote
Quotable sections share a structure. The H2 is a question or a noun phrase. The first sentence answers the H2 directly. The rest of the section explains how that answer holds.
Lead with the answer. Explain afterward. The model never reads the punchline you saved for the end.
```html
<h2 id="what-is-geo">What is GEO?</h2>
<p>GEO is the practice of optimizing pages so AI engines cite them as answer sources.</p>
```

For reference, the regression's odds ratios by signal and engine:

| Signal | Odds ratio | Engine |
|---|---|---|
| Server-side rendering | 7.8 | ChatGPT, Gemini, Claude, AIO |
| Canonical URL | 4.0 | ChatGPT, Gemini, Claude, AIO |
| Attribute-rich JSON-LD | 2.1 | ChatGPT, Claude, AIO |
| Answer-first sections | 2.0 | ChatGPT, Gemini |
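The "subject in the first six words" rule can be spot-checked mechanically. Below is a rough heuristic we are sketching for illustration, not our coding procedure: it just asks whether any content word from the H2 shows up among the paragraph's first six words.

```python
import re

def answer_first(h2_text, first_paragraph):
    # Rough heuristic for the "subject in the first six words" rule:
    # at least one content word from the H2 must appear among the
    # first six words of the paragraph. Stopword list is illustrative.
    words = lambda s: re.findall(r"[a-z0-9]+", s.lower())
    stop = {"what", "is", "a", "an", "the", "how", "to", "why", "of", "and"}
    subject = set(words(h2_text)) - stop
    return bool(subject & set(words(first_paragraph)[:6]))
```

Run it over every H2/first-paragraph pair in a post and flag the sections that fail.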
Common patterns that hurt your odds
Some patterns reliably reduce citation rates. The two we see most often are over-optimization (loading every section with callouts) and AI-flag phrasing such as filler intros that signal generated text.
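Filler intros are easy to lint for. The phrase list below is an illustrative sample we made up for this sketch; a real flag list would come from your own corpus of generated-sounding text.

```python
import re

# Illustrative filler openers; not an exhaustive or research-derived list.
FILLER_INTROS = [
    r"in today's fast-paced world",
    r"in the ever-evolving landscape",
    r"let's dive in",
    r"it's important to note that",
]

def flags_filler_intro(text):
    # Checks only the opening of the post, where filler intros live.
    opening = text[:300].lower()
    return [p for p in FILLER_INTROS if re.search(p, opening)]
```

An empty return means the opening passed this particular lint; it does not mean the page is safe from the over-optimization penalty.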
Frequently asked questions
Does AI Overview use a different signal stack than ChatGPT?
Mostly the same stack with different weights. SSR matters for both. AIO leans on classical SEO signals more than ChatGPT does.
How long until a fix shows up in citations?
Two to six weeks for ChatGPT and Gemini, depending on how often their crawlers refresh the source. AIO is faster because it queries Google's live index.
Should I add an FAQ block to every post?
Only when the questions are real. Faked FAQ blocks count as over-optimization and the model deprioritizes pages that pattern-match generated content.