A junior at Harvard turned in a senior thesis with seven citations to papers that did not exist. ChatGPT had invented the authors, the journals, the page numbers — all of it, in plausible academic format. The student didn't check. The advisor did. The student is no longer at Harvard.
This is not a hypothetical horror story. It's happening every week, on every campus. Fake citations are the single most common way smart students get themselves in serious academic trouble with AI. You need a research workflow that makes this impossible.
Why ChatGPT alone is dangerous for citations
When you ask ChatGPT for sources, it isn't searching anything. It's predicting what a citation in your topic area would probably look like, based on patterns from its training data. Most of the time the patterns produce something plausible that happens to be real. Sometimes the output is plausible but completely fabricated; in informal student tests, that happened roughly a third of the time.
The author names sound right. The journal exists. The page numbers are in a reasonable range. Even the topic of the paper matches what you needed. None of it is real. You can't verify a citation by reading it back to ChatGPT either — it will confidently confirm its own hallucination.
Same problem with Claude, Gemini, and any other general-purpose chatbot. They are not search engines. They are pattern completers. Treat any citation they produce as a guess until you have personally located the paper on the actual journal's website.
Perplexity: the right starting point
Perplexity is a search engine wearing AI's clothes. It actually queries the web for every answer, and every claim in its response links to a real webpage. This is a fundamentally different mode of operation from ChatGPT, and it solves the fake-citation problem at the source.
For research, Perplexity is your first stop. Always.
Find me peer-reviewed studies published since 2018 on the effectiveness
of cognitive behavioral therapy for adolescent anxiety. Prioritize
meta-analyses. Include effect sizes if available.
Perplexity returns a synthesized answer with footnotes pointing to actual sources. You click them. They are real papers. This is the difference.
For deeper academic search, Perplexity AI for Research is worth the hour it takes to learn.
The 3-source rule
Never cite something you've only seen in one place. Once you have a claim from Perplexity, find it confirmed in at least two more sources before relying on it in your work. This rule is mostly about catching errors — not just AI errors, but the errors that creep into all summaries, including human ones.
The pipeline:
- Find a candidate claim in Perplexity. It points to a paper.
- Open the paper. Actually open it. Not the abstract Perplexity already showed you. The full text.
- Search Google Scholar for the paper title. Look at how many people have cited it and what they say.
- Find two more sources that either confirm, complicate, or rebut the claim. A claim that nobody else has tested is a weak foundation.
- Cite all three. This is what real research looks like.
The whole pipeline takes maybe twenty minutes per claim. Compare that to the alternative: cite the first source, get challenged by your professor, and scramble. The students who run the pipeline write papers that nobody can poke holes in. The students who skip it produce work that falls apart under the lightest pressure.
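The 3-source rule is simple enough to enforce mechanically. Here's a minimal Python sketch of the habit, nothing more; the `Claim` class and its field names are illustrative, not part of any real tool:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A claim you want to make, plus the sources you've verified for it."""
    text: str
    sources: list = field(default_factory=list)  # only add sources you've opened

    def add_source(self, citation: str) -> None:
        self.sources.append(citation)

    def citable(self) -> bool:
        # The 3-source rule: don't rely on a claim until at least
        # three independently verified sources support it.
        return len(self.sources) >= 3
```

Usage is the point: a claim with two sources stays out of the draft until a third confirms, complicates, or rebuts it.

```python
claim = Claim("CBT reduces adolescent anxiety symptoms")
claim.add_source("Smith 2019")
claim.add_source("Chen 2021")
print(claim.citable())   # still a weak foundation
claim.add_source("Lopez 2023")
print(claim.citable())   # now it can go in the paper
```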
How to spot a hallucinated paper
Before you cite anything an AI produced, run the smell test:
Does the journal exist? Search the journal name on Google. Look at its website. If you can't find a publisher, the journal is probably fake.
Does the paper exist on the journal's website? Search the journal's archive for the title. Real papers will be there. Fake ones won't.
Does the DOI resolve? Every real paper has a DOI (digital object identifier). Paste it into doi.org. If it returns the actual paper, it's real. If it returns an error or a different paper, you have a problem.
Does Google Scholar know the paper? Search the title in quotes. Real papers show up. Fake ones don't. If the only place the paper appears online is in your AI's output, it doesn't exist.
Are the authors real, in the right field? Look up the lead author. They should have other publications in adjacent areas. If they have no online footprint at all, or if their other work is in unrelated fields, the citation is probably fabricated.
You should run this check on every AI-suggested citation before it goes in your bibliography. It takes thirty seconds per source. It is the difference between graduating and not graduating.
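The DOI step of the smell test is easy to script. A sketch using only the standard library: the format check follows CrossRef's published DOI pattern (`10.` plus a 4–9 digit registrant code, a slash, and a suffix), and the resolution check asks doi.org directly. The `User-Agent` string is a placeholder, and some publishers reject HEAD requests after the redirect, so treat the network check as a first pass, not a verdict:

```python
import re
import urllib.error
import urllib.request

# CrossRef's recommended shape for modern DOIs.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    """Cheap offline format check before bothering the network."""
    return bool(DOI_RE.match(doi.strip()))

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Ask doi.org whether the DOI resolves to anything.
    Requires network access; returns False on any HTTP error."""
    req = urllib.request.Request(
        f"https://doi.org/{doi}",
        method="HEAD",
        headers={"User-Agent": "citation-check/0.1"},  # placeholder UA
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False
```

Even if the DOI resolves, still confirm it lands on the paper you meant to cite; a real DOI attached to the wrong title is one of the sneakier hallucination patterns.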
A complete research workflow
End to end, for a paper or thesis chapter:
1. Frame the question. Use ChatGPT or Claude to refine your research question.
I want to write about [topic]. My question is currently [vague version].
Help me sharpen this into a researchable question. Suggest three
narrower formulations, and tell me which one is most likely to have
existing literature.
2. Initial mapping. Use Perplexity to map the literature.
3. Verify. Run the smell test on every paper Perplexity surfaces. Open the real paper. Read the abstract and conclusion.
4. Snowball. Each real paper cites others; follow its bibliography backward. Then use Google Scholar's "Cited by" link to follow the conversation forward and see who has cited the paper since publication.
5. Build a notebook. Upload the real PDFs to NotebookLM. Ask cross-source questions like "Where does Smith 2019 disagree with Chen 2021?"
6. Draft. When you make a claim, the citation should be a paper you've read enough of to defend in conversation.
7. Final citation pass. Before submission, go through every citation once more. Confirm page numbers. Confirm quotes are actually in there.
You can do all seven steps for a 15-page paper in two evenings. Compare that to the pre-AI version, which involved walking physical library stacks and chasing dead ends.
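Part of the final citation pass can also be scripted against CrossRef's public REST API, which lets you search scholarly metadata by title. A hedged sketch, standard library only; CrossRef asks polite clients to include a contact address in the User-Agent, and the `mailto` below is a placeholder you'd replace with your own:

```python
import json
import urllib.parse
import urllib.request

def crossref_title_search(title: str, rows: int = 3) -> list:
    """Query the public CrossRef API for works matching a title.
    Requires network access."""
    url = ("https://api.crossref.org/works?"
           + urllib.parse.urlencode({"query.bibliographic": title, "rows": rows}))
    req = urllib.request.Request(
        url,
        headers={"User-Agent": "citation-check/0.1 (mailto:you@example.edu)"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return extract_matches(json.load(resp))

def extract_matches(data: dict) -> list:
    """Pull (title, DOI) pairs out of a CrossRef /works response."""
    items = data.get("message", {}).get("items", [])
    return [((item.get("title") or [""])[0], item.get("DOI", ""))
            for item in items]
```

If a citation's title returns no close match here and no hit in Google Scholar, it fails the smell test and comes out of the bibliography.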
When AI is the only place a fact lives
Don't cite the chatbot. Find a real source, or rewrite the section so it doesn't depend on the fact. If no primary source exists, the fact is probably wrong. AI fabricates plausible-sounding statistics constantly.
The discipline you're building
The point of research is to learn the habit of verifying claims before you act on them. AI threatens to break the verification reflex by making information feel authoritative when it isn't. The students who develop the reflex — open the paper, check the DOI, follow the citation — graduate with a skill that's getting rarer and more valuable every year.

