# 25% of Our Traffic Comes from ChatGPT and Claude — Here Is What We Learned About AI Citations

We looked at our analytics last week and found something I genuinely did not expect: 25% of our traffic is now coming from ChatGPT and Claude.
We run FreeAcademy.ai, an AI education platform with 120+ free courses. We have never paid for ads, never run an AI optimization campaign, and never tried to get our content into any LLM's training data or retrieval pipeline. But somewhere along the way, ChatGPT and Claude started citing us — a lot.
What surprised me more than the volume was which content got cited, and what that says about how LLMs actually decide what to reference. If you write content for a living, this matters. A lot.
## The numbers
Here is the breakdown of our recent traffic window:
| Source | Sessions | Share of traffic |
|---|---|---|
| Direct | 10,362 | ~33% |
| ChatGPT referrals | 5,414 | ~17% |
| Google organic | 4,380 | ~14% |
| Claude.ai referrals | 1,641 | ~5% |
| Combined LLM referrals | ~7,500 | ~25% |
A few things jump out:
- LLM referrals are bigger than Google organic. ChatGPT alone sends more traffic than Google does.
- Direct traffic is unusually high. 10,362 sessions is a lot for a site that has never run a brand campaign. Our best guess: people are hearing "FreeAcademy" inside ChatGPT and Claude conversations and typing the name directly into a browser.
- The winning content is not what we would have guessed. More on that below.
If you had asked me a year ago where our traffic would come from in 2026, I would have said Google. Instead, Google is our third-largest source behind direct and ChatGPT.
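If you want to reproduce this kind of breakdown from your own server logs, the core step is bucketing referrer hostnames. Here is a minimal sketch; the host lists are assumptions (the exact hostnames your analytics records may differ, e.g. `chat.openai.com` for older ChatGPT sessions), so adjust them to what you actually see.

```python
from urllib.parse import urlparse

# Assumed host lists -- adjust to whatever your analytics actually records.
LLM_HOSTS = {"chatgpt.com", "chat.openai.com", "claude.ai"}
SEARCH_HOSTS = {"google.com", "bing.com", "duckduckgo.com"}

def classify_referrer(referrer: str) -> str:
    """Bucket a raw Referer header into llm / search / direct / other."""
    if not referrer:
        # No referrer at all is what analytics tools report as "direct".
        return "direct"
    host = urlparse(referrer).netloc.lower().removeprefix("www.")
    if host in LLM_HOSTS:
        return "llm"
    if host in SEARCH_HOSTS:
        return "search"
    return "other"
```

Run this over a log export, count the buckets, and you get the table above without waiting on a dashboard.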
## The content ChatGPT and Claude actually cite
Here is where it gets interesting. We are an AI education platform. We have 120+ courses. We have landing pages built for "learn AI free," "free AI certification," and every other variation you would expect a student to search for.
None of that is what the LLMs are citing.
Our #2 most-visited page in this window is a single blog post: a Claude Free vs Pro vs Max comparison with 6,446 pageviews. Our top 15 Search Console queries by impressions are all variations of Claude pricing:
- claude pro vs max — 9,303 impressions
- claude free vs pro — thousands more
- is claude pro worth it
- claude max price
- ...and so on
Not a single one is a course query. Not "learn AI free." Not "free AI course with certificate." Just people trying to decide which Claude plan to buy, and ChatGPT sending them to our comparison post to help them figure it out.
This is the part that broke my mental model of SEO.
## Five things we learned about what makes content LLM-citable
These are patterns we noticed in our own data, not generic advice copied from a GEO explainer. Your mileage may vary, but everything below is grounded in what is actually driving our AI referrals.
### 1. LLMs cite specificity, not comprehensiveness
Our broad "learn AI" landing pages get almost zero AI traffic. Our tight, micro-focused posts (one tool, one question, one comparison) get cited constantly. A 600-word post answering "is Claude Pro worth it" outperforms a 3,000-word guide to "how to use AI."
Why this probably works: when a user asks ChatGPT a specific question, the model retrieves chunks of text that directly answer that question. A comprehensive guide dilutes the signal. A specific page concentrates it.
If you want to read more on the mechanics of this, we wrote a broader explainer in our guide to generative engine optimization.
### 2. Comparison content is disproportionately citable
Every single one of our top AI-driven pages is a versus post. Claude Free vs Pro vs Max. ChatGPT vs Claude vs Gemini. Claude Code vs Copilot CLI vs Gemini CLI.
LLMs love comparison tables for a reason: they are structured, parseable, and contain the exact format the model needs to produce a comparison answer. When a user asks "which is better, X or Y," the model can extract a row from your table almost verbatim.
If you sell a product or cover a category, the single highest-leverage piece of content you can write right now is a clean comparison post with a table.
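The "extract a row almost verbatim" claim is easy to see mechanically. This toy sketch pulls one labeled row out of a markdown table — the table and prices below are illustrative, not real plan data — which is exactly the kind of self-contained chunk a model can quote:

```python
def extract_row(markdown_table: str, plan: str) -> dict:
    """Pull one row out of a markdown comparison table as a column->value dict."""
    lines = [ln.strip() for ln in markdown_table.strip().splitlines()]
    headers = [c.strip() for c in lines[0].strip("|").split("|")]
    for line in lines[2:]:  # skip the |---|---| separator row
        cells = [c.strip() for c in line.strip("|").split("|")]
        if cells[0].lower() == plan.lower():
            return dict(zip(headers, cells))
    return {}

# Illustrative table -- not actual plan pricing.
table = """
| Plan | Price | Usage limit |
|---|---|---|
| Free | $0 | Low |
| Pro | $20/mo | Standard |
| Max | $100/mo | High |
"""
```

Each row is already a complete, labeled answer to "what does the Pro plan cost?" — prose rarely packages information that cleanly.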
### 3. Opinions get cited. Hedging does not.
We noticed a pattern where our most-cited posts end with clear, opinionated conclusions. "Get the Pro plan if X, skip to Max only if Y." Not: "there are many factors to consider and it depends on your use case."
LLMs extract opinions because that is what users are asking for. When someone asks ChatGPT "should I buy Claude Max," they do not want a pros-and-cons list — they want a verdict. If your post has a verdict, it gets cited. If it hedges, a more opinionated source wins.
This was uncomfortable for us to accept because SEO culture rewards "balanced" writing. AI culture rewards commitment.
### 4. Headings that mirror exact user questions
Look at the Search Console queries again: claude pro vs max, is claude pro worth it, claude free plan limits. These are not keyword-optimized SEO phrases. They are the actual questions people type.
The pages of ours that get cited tend to have H2 and H3 headings that match these questions almost word-for-word. "Is Claude Pro Worth It in 2026?" "What Are the Claude Free Plan Limits?" The model pattern-matches the user question to the heading, then extracts the paragraph underneath.
This is a small change but it meaningfully shifts what gets pulled. If your subheadings say "Key Considerations" or "Important Factors," you are leaving citations on the table.
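A crude way to audit this on your own pages: check whether every content word of a target query appears in at least one heading. This is a toy proxy, not how retrieval actually works, and the stop-word list is an assumption:

```python
import re

# Assumed filler words to ignore when comparing -- tune for your queries.
STOP = {"a", "an", "the", "is", "are", "in", "it", "what", "for"}

def content_words(text: str) -> set:
    """Lowercase, strip punctuation, drop filler words."""
    return {w for w in re.findall(r"[a-z0-9]+", text.lower()) if w not in STOP}

def query_matches_heading(query: str, heading: str) -> bool:
    """True when every content word of the query appears in the heading."""
    q = content_words(query)
    return bool(q) and q <= content_words(heading)
```

By this test, "Is Claude Pro Worth It in 2026?" matches the query "is claude pro worth it", while "Key Considerations" matches nothing a user would actually type.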
### 5. Consistent brand-name repetition
Our direct traffic jumping to 10,362 sessions — exceeding Google organic — tells us something important: people are hearing our name and searching for it separately. That only works if the name shows up consistently inside the pages the LLM is citing.
In our most-cited posts, "FreeAcademy" appears multiple times in the body — not stuffed, but naturally repeated in examples, CTAs, and closing sections. We think this matters because it trains the model's co-occurrence statistics to associate our name with the topic.
If you want AI to eventually recommend you by name, your brand has to appear in the citable chunks, not just in the footer.
## Why this is a 12 to 18 month window
Traditional SEO traffic is projected to decline 25%+ by the end of 2026 as Google's AI Overviews absorb more of the click-through volume. Most publishers are panicking about this. The ones who should be panicking the least are the ones figuring out LLM citability now.
If 25% of our traffic already comes from LLMs without any intentional optimization, the obvious question is: what happens when we actually try? That is what the next six months of our content strategy are about.
Two years from now, "getting cited by ChatGPT" will be a mature discipline with agencies, tools, and benchmarks. Right now, it is still a window where a small team can meaningfully shift outcomes by rewriting a dozen posts with the five patterns above.
If you want the full framework for this, we built a free course on Generative Engine Optimization that walks through the measurement side, the content side, and the monitoring side.
## Key takeaways
- 25% of our total traffic now comes from ChatGPT and Claude referrals, exceeding our Google organic traffic.
- Direct traffic (10,362 sessions) also exceeds Google organic, which we interpret as AI-driven brand recall — people hearing our name inside ChatGPT and typing it in.
- AI referrals are concentrated on comparison posts and specific question pages, not broad landing pages or category hubs.
- The content patterns that correlate with citations are: specificity over comprehensiveness, comparison tables, opinionated conclusions, question-shaped headings, and consistent brand mentions.
- Publishers have roughly a 12 to 18 month head start window before LLM citability becomes a crowded, agency-optimized discipline.
## We are building a tool for this
We are quietly building a free tool that checks whether ChatGPT and Claude mention your website — and if they do not, tells you what to fix. It is not ready yet, but if you want early access, drop your email below.
Get early access to our LLM citation checker
We will email you when it ships. No pitch, no upsell — just a quiet heads-up.

