Detecting Hallucinations & AI Misinformation
A hallucination is AI output that sounds correct but isn't. Learning to spot one is among the most important responsible-AI skills you can develop, because every chatbot will sometimes make things up. The skill is not to stop using AI but to know exactly when to verify.
What You'll Learn
- Why all major AI models hallucinate (and probably always will)
- Five types of hallucination and how each typically shows up
- A four-step verification workflow you can run in 60 seconds
- How to use Perplexity, Claude, ChatGPT, and Gemini together to cross-check facts
Why Hallucinations Happen
Large language models do not look up answers. They generate the most likely next token based on patterns in their training data. If the training data does not have a precise answer — or if the model's pattern matching slightly misfires — it confidently produces a plausible-sounding answer that may be wrong.
This is not a bug to be fixed in the next version. It is fundamental to how the architecture works. Newer models hallucinate less, but no model is hallucination-free.
Five Types of Hallucination
| Type | Example |
|---|---|
| Fabricated facts | "The 2019 Nobel Prize in Physics was awarded to..." (wrong recipient) |
| Fake citations | A made-up academic paper, court case, or URL |
| Confused entities | Mixing two different people with similar names |
| Reasoning errors | Math or logic that looks right but breaks under inspection |
| Outdated information | Confidently stating last year's CEO is still CEO |
Famous example: in 2023, New York lawyers were sanctioned for submitting a brief citing six court cases ChatGPT had fabricated outright, complete with fake citations and fake quotes. The case is now a standard cautionary tale in law schools.
The Four-Step Verification Workflow
Run this every time you plan to act on AI output for anything that matters.
Step 1: Ask "Is this verifiable?"
Some content can be verified (facts, citations, statistics, code that runs). Some cannot (opinions, brainstorming, creative writing). If it cannot be verified, you do not need to verify it — but you should not present it as fact.
Step 2: Spot the high-risk patterns
These four patterns are the most common hallucination triggers:
- Specific numbers ("85% of companies..." — where did that number come from?)
- Named sources ("according to a 2022 Harvard study..." — does that study exist?)
- Quotes ("As Sundar Pichai said..." — did he?)
- URLs (chatbots are notorious for inventing URLs that look real)
Anytime you see one of these, treat it as suspect until verified.
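If you review a lot of AI output, a rough first-pass filter can mark the spots worth checking. The sketch below is illustrative only: the pattern names and regexes are my own, and a match means "verify this," not "this is wrong."

```python
import re

# Coarse regexes for the four high-risk patterns above. This is a triage
# tool, not a fact checker: a hit just means "verify this by hand."
HIGH_RISK_PATTERNS = {
    "specific number": r"\b\d+(\.\d+)?\s*(%|percent|million|billion)",
    "named source": r"\b(according to|researchers at|a \d{4} study)\b",
    "quote": r'"[^"]{20,}"',
    "url": r"https?://\S+",
}

def flag_risky_claims(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs worth manual verification."""
    hits = []
    for name, pattern in HIGH_RISK_PATTERNS.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append((name, match.group(0)))
    return hits

answer = ("According to a 2022 Harvard study, 85% of companies already use AI. "
          "Full report: https://example.com/ai-report")
for name, snippet in flag_risky_claims(answer):
    print(f"[{name}] {snippet}")
```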
Step 3: Cross-check with a search-grounded tool
Out of the box, ChatGPT and Claude do not browse the web unless you enable a search tool. Perplexity, Gemini's Google Search grounding, and ChatGPT's web search mode do browse, and they cite their sources. Use them to verify the suspicious claim.
Try this prompt in Perplexity:
"I read that [PASTE THE CLAIM]. Verify this claim. Provide citations and indicate whether it is correct, incorrect, or partially correct."
Read the cited sources, not just the summary. Sometimes the summary sounds plausible but the underlying source actually says something different.
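You can also run the same check from a script. Perplexity exposes an API that works with the standard OpenAI Python client; the sketch below assumes that compatibility, a PERPLEXITY_API_KEY environment variable, and the "sonar" model name, any of which may change over time.

```python
# Minimal sketch: verify a claim through Perplexity's search-grounded API.
import os
from openai import OpenAI

claim = "85% of companies adopted generative AI in 2023."

client = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],
    base_url="https://api.perplexity.ai",  # Perplexity's OpenAI-compatible endpoint
)
response = client.chat.completions.create(
    model="sonar",  # example model name; substitute whatever is current
    messages=[{
        "role": "user",
        "content": (
            f"I read that {claim} Verify this claim. Provide citations and "
            "indicate whether it is correct, incorrect, or partially correct."
        ),
    }],
)
print(response.choices[0].message.content)
# Then open the cited sources yourself; don't stop at the summary.
```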
Step 4: Sanity-check the source
Even cited sources can be wrong, paid placements, or AI-generated content. Check:
- Is the source a known publisher (peer-reviewed journal, major news org, official body)?
- Does the source actually say what was claimed?
- When was it published? Has it been retracted or updated?
A Cross-Model Verification Trick
If a fact is borderline, run the same question through three different models and compare. Genuinely well-known facts (Einstein won the Nobel Prize, Paris is the capital of France) will match across all three. Hallucinations rarely match. For example, try asking each model:
"What was the exact title and date of the paper Yoshua Bengio, Geoffrey Hinton, and Yann LeCun won the Turing Award for in 2018?"
If ChatGPT says one thing, Claude says another, and Gemini says a third — at least one is hallucinating. Verify with the official Turing Award page.
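Here is what that cross-check can look like as a script, assuming you have API keys for all three providers in the usual environment variables. The model names are only examples and will drift; swap in whatever is current.

```python
# pip install openai anthropic google-generativeai
# Assumes OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY are set.
import os
from openai import OpenAI
import anthropic
import google.generativeai as genai

QUESTION = ("What was the exact title and date of the paper Yoshua Bengio, "
            "Geoffrey Hinton, and Yann LeCun won the Turing Award for in 2018?")

def ask_openai(q: str) -> str:
    client = OpenAI()
    r = client.chat.completions.create(
        model="gpt-4o-mini", messages=[{"role": "user", "content": q}])
    return r.choices[0].message.content

def ask_claude(q: str) -> str:
    client = anthropic.Anthropic()
    r = client.messages.create(
        model="claude-3-5-sonnet-latest", max_tokens=300,
        messages=[{"role": "user", "content": q}])
    return r.content[0].text

def ask_gemini(q: str) -> str:
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")
    return model.generate_content(q).text

# Print the three answers side by side. Disagreement means at least one
# model is hallucinating, so check the official Turing Award page.
for name, ask in [("ChatGPT", ask_openai), ("Claude", ask_claude), ("Gemini", ask_gemini)]:
    print(f"--- {name} ---\n{ask(QUESTION)}\n")
```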
Specific Drills You Can Run This Week
Drill 1 — Fake citation hunt. Ask ChatGPT: "Give me five academic papers about the ethics of facial recognition, with full citations." Then check whether the papers exist. Search Google Scholar for the title and authors.
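A quick way to do that check programmatically is to query the public Crossref API and see whether anything with a similar title actually exists. A fuzzy match is not proof the citation is accurate, and a miss is not proof it is fake (not everything is indexed), but it tells you which papers need a closer look. The function below is my own sketch.

```python
# Rough existence check for a citation using the public Crossref API.
import requests

def crossref_lookup(citation: str, rows: int = 3) -> list[dict]:
    """Return the closest indexed works for a free-text citation string."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {"title": (item.get("title") or ["<no title>"])[0],
         "doi": item.get("DOI"),
         "year": item.get("issued", {}).get("date-parts", [[None]])[0][0]}
        for item in items
    ]

# Paste each citation the chatbot gave you and compare titles and authors.
for hit in crossref_lookup("The ethics of facial recognition technology"):
    print(hit)
```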
Drill 2 — Outdated information check. Ask any chatbot for the current CEO of a company in the news. Compare to the company's actual website. Note the chatbot's training cutoff.
Drill 3 — Math validation. Ask: "If a startup raised $5M at a $20M valuation and gives 4% equity to early hires across 8 people equally, what does each hire end up with at IPO if the company sells for $1B and there's no further dilution?" Then redo the math by hand. Models often get this wrong in subtle ways.
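For reference, here is the arithmetic done explicitly, under the prompt's own assumption of no further dilution (which makes the $5M raise at a $20M valuation a distractor):

```python
# Redoing Drill 3 by hand: the numbers come straight from the prompt.
equity_pool = 0.04          # 4% of the company, shared by early hires
num_hires = 8
sale_price = 1_000_000_000  # $1B exit, no further dilution (as stated)

per_hire_stake = equity_pool / num_hires       # 0.5% each
per_hire_payout = per_hire_stake * sale_price  # $5,000,000

print(f"Each hire holds {per_hire_stake:.2%} and receives ${per_hire_payout:,.0f}")
# With no further dilution, the raise amount and valuation do not change
# what the 4% pool is worth at the exit.
```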
Drill 4 — Quote verification. Ask: "Give me three quotes from Sam Altman about AI safety, with the source and year." Then search the actual quote in Google. You will find that some are paraphrased, some misattributed, and some entirely invented.
When Models Admit Uncertainty
Some models, especially Claude, will say "I'm not sure" or "I don't have reliable information about that." That is good behavior: treat it as a feature, not a failure, and don't push the model to guess anyway. The worst hallucinations come from models that confidently produce wrong answers.
You can also lower hallucination risk by adding to your prompt:
"If you are not sure of any specific fact, citation, statistic, or quote, say so explicitly rather than inventing one."
A Workflow For School Assignments
Many universities now allow AI-assisted research with disclosure. The responsible workflow is:
- Brainstorm with the chatbot — broad topic exploration, identifying themes.
- Get a list of starting points — keywords, authors, subfields.
- Verify each citation in Google Scholar or your library catalog before using it.
- Read the original sources — don't trust the chatbot's summary of someone else's paper.
- Disclose AI use as your university requires.
Skip step 3 and you become the next case study. Follow it and AI becomes a powerful research accelerator.
Key Takeaways
- Hallucinations are intrinsic to large language models, not a bug to be patched.
- Five types: fabricated facts, fake citations, confused entities, reasoning errors, outdated info.
- The four-step workflow (verifiable? high-risk pattern? cross-check? sanity-check source?) takes under a minute.
- Perplexity, Gemini's grounded search, and ChatGPT's web mode are useful verification partners.
- Disclose AI use, verify every citation, and never publish without checking.

