We got tired of guessing. So we built a way to know.
Cited exists because the GEO industry was giving advice nobody had tested. We decided to test it.
The problem that wouldn't let us sleep.
Here's what bothered us: AI engines were changing how people find information, and the advice companies were getting about it was almost entirely unverified. "Add schema markup." "Write longer content." "Use more headings." Sounds reasonable. But nobody had actually measured whether any of it worked.
We started by asking a simple question: which content signals actually make AI engines cite a page? Not "which signals sound like they should help" — which ones measurably increase citation rates across ChatGPT, Gemini, and Google AI Overviews?
To answer that, we built a research pipeline. We collected AI responses to real queries, matched them against the pages being cited, and measured over 270 features on each page. Then we ran the statistics. Logistic regression per signal. Odds ratios. Multiple-testing correction. The full rigor you'd expect from academic research, applied to a business problem nobody else was measuring.
What we found surprised us. Some of the most popular GEO advice had zero measurable effect. And some of the most impactful signals were things almost nobody was talking about. That gap between popular advice and measured reality became the foundation for Cited.
Convictions that shape everything we build.
Measurement over opinion
The GEO industry runs on assumptions. We think that’s lazy and dangerous. If you can’t measure whether a tactic works, you shouldn’t recommend it. Every signal in our system has a number behind it.
Actionability over monitoring
Dashboards that show you’re invisible are interesting for about five minutes. The question that matters is: what do I do about it? That’s the question we exist to answer.
Honesty over hype
We’ll tell you when a signal doesn’t work. We’ll tell you when our data has limitations. We’d rather lose a deal by being honest than win one by overselling. GEO is real, but it’s not magic.
How we work.
Research-First
We don’t recommend things we haven’t tested. Our research covers thousands of pages across thousands of domains, validated against three AI engines with real statistical methods. If a signal doesn’t pass, it doesn’t ship — no matter how intuitive it seems.
Engine-Specific
ChatGPT, Gemini, and Google AI Overviews each use different criteria to decide what to cite. Generic "AI optimization" advice ignores this. We test each engine independently and tell you exactly which signals matter where.
Implementation, not reports
We don’t hand you a PDF and disappear. We work with your team to make the changes, track the impact, and adjust when AI engines evolve. The value is in the doing, not the documenting.
30 minutes. Ranked actions. Zero obligation.
What you get
30-minute review
A live walkthrough of the pages AI is reading and the gaps costing you citations.
Ranked next steps
Top fixes prioritized by measured impact, not generic SEO advice.
Written handoff
A clean summary your team can act on even if you never hire us.
Last updated: March 2026