Privacy, Hallucinations & When Not to Trust AI
This is the most important lesson in the course. You can build great budgets, plans, and projections with AI, but if you trust it on the wrong question — at the wrong moment — you can lose real money.
In this lesson, we look at the two main risks: privacy (what data you should never share) and hallucinations (when AI confidently makes things up). You will leave with a "trust map" you can apply to any AI tool, plus a checklist for high-stakes financial questions.
What You'll Learn
- What information to never share with an AI tool
- What hallucinations are, how to spot them, and how to defend against them
- The "trust map" — when to trust AI alone, when to verify, and when to never use it
- How to handle AI "confident wrong" answers gracefully
Privacy: What Never Goes in a Prompt
Treat AI tools like a chatty stranger on a train. They are smart, helpful, and can answer your money questions — but you would not show them your driver's license.
Never share:
- Your full Social Security Number, Aadhaar, NIN, or government ID number
- Bank account numbers or routing numbers
- Credit/debit card numbers (full or partial), including the CVV
- Tax filing numbers (EIN, full ITIN)
- Account login credentials, passwords, OTPs
- Full birth date combined with full name and address
- Insurance policy numbers and member IDs (some are full PII)
Be cautious sharing:
- Combined identifiers that could ID you (full name + employer + city)
- Detailed medical or legal situations that could be sensitive
- Recovery questions or codes from any account
Safe to share:
- Approximate income, expenses, and savings amounts
- Goals and timeline
- General job/industry, country, age range
- Anonymized account details ("a US savings account paying 4.5%")
Why This Matters
Three reasons:
1. Conversations may be used for training. Free tiers of consumer chat AI may use your conversations to improve the model unless you turn that off in settings. Companies generally say they de-identify training data, but the safest policy is still to minimize what you share.
2. Data breaches happen. Any service can be breached. The best defense is to never give them sensitive data in the first place.
3. Phishing patterns. Scammers build fake AI tools and messages that look legitimate. If your habit is to never paste sensitive info into any chat, you cannot be tricked into doing it here.
Turning Off Training (Quick Guide)
For consumer free tiers:
- ChatGPT: Settings → Data Controls → "Improve the model for everyone" → Off
- Claude: Anthropic states that consumer Claude does not train on your chats by default; verify the latest in Settings → Privacy
- Gemini: Settings → Data & Privacy → Gemini Apps Activity → can pause/delete
- Perplexity: Settings → AI Data Retention
These change occasionally — check the latest from each provider.
Hallucinations: When AI Makes Things Up
A hallucination is when an AI generates content that sounds correct but is factually wrong. This is not a rare glitch. It happens often enough that you must plan for it.
In personal finance, hallucinations look like:
- A made-up IRS Publication number ("Publication 523-A")
- A made-up fund ticker ("VTSAX-X")
- A made-up tax credit ("the Young Adult Investor Credit")
- A subtly wrong contribution limit ($7,200 instead of $7,000)
- A correct concept attributed to the wrong country
- A confidently stated calculation with a math error in step 3
The dangerous part is that the tone never changes: hallucinations arrive with exactly the same confidence as correct answers. You cannot rely on how an answer feels; you have to verify it.
The Trust Map
Use this to decide how much to trust AI on any question:
Trust alone (low stakes, common knowledge):
- "What does compounding mean?"
- "Is a Roth IRA the same as a Traditional IRA?"
- "How does dollar-cost averaging work?"
- General concept explanations and definitions
Trust after a quick cross-check (medium stakes):
- "What is a typical APR for credit cards?"
- "What are some strategies to pay down debt?"
- "What is the math on this scenario?"
Pattern: paste the same question into a second tool. If both agree, you are probably fine.
Trust only after verifying on the official source (high stakes):
- Any specific contribution limit, tax bracket, deduction amount
- Any specific fund/ETF ticker, fee, or holding
- Any government rule or eligibility threshold
- Any account-specific rate or fee
Pattern: get the AI answer, then go directly to the official source — IRS.gov, your tax authority, the bank's site, the fund's site — to confirm.
Never use AI for (no matter how confident it sounds):
- Specific stock picks or sell signals
- Predicting future market direction
- Personalized legal advice
- Personalized medical advice (sometimes mixed with finance, e.g., HSA decisions)
- Filing your tax return without human review
How to Spot a Hallucination
Some quick tells:
1. The "too clean" reference. A made-up IRS publication will have a clean number that fits the real numbering pattern, which is exactly what makes it convincing. Check IRS.gov directly.
2. Disagreement across tools. If ChatGPT, Claude, and Gemini give three different numbers, at least two are wrong.
3. A specific number with no source. "The current Roth IRA limit is $7,200." Ask: "What is your source for that number, with a link?"
4. Math that doesn't add up. Quickly run the calculation yourself or in a calculator. Models occasionally err in the middle of long arithmetic.
5. Country confusion. "Your 401(k) limit in the UK is..." — wrong country. Read carefully.
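Tell #4 is the easiest one to act on: rerun the arithmetic yourself. Here is a minimal sketch in Python that checks a compound-growth claim; every number in it is an illustrative placeholder, not advice.

```python
# Sanity-check an AI's compound-growth claim by recomputing it.
# All inputs below are illustrative placeholders, not real figures.

principal = 10_000      # starting balance you gave the AI
monthly = 200           # monthly contribution
annual_rate = 0.07      # assumed annual return
years = 10

balance = principal
for month in range(years * 12):
    balance *= 1 + annual_rate / 12   # monthly compounding
    balance += monthly                # contribution at end of month

print(f"Balance after {years} years: ${balance:,.0f}")
```

If the AI's stated result differs from yours by more than a rounding error, one of you made a mistake in the middle, and it is usually not the calculator.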
A Defensive Workflow
For any high-stakes financial question, run this:
- Ask the question to one AI tool. Get the answer.
- Ask "What is your source for this? Link please." If it makes one up, you will often catch it because Perplexity (or Gemini with browsing) cannot retrieve it.
- Cross-check on a second tool. Ideally Perplexity, since it cites live sources.
- Verify on the official source. For tax: IRS.gov. For SEC: SEC.gov. For your bank: the bank's own site.
- Only act after step 4.
This takes 5–10 minutes for a question that affects thousands of dollars. Worth it.
What to Do When AI Is Confidently Wrong
You will eventually catch an AI in a clear mistake. Useful follow-ups:
"That number looks wrong to me. Let me check IRS.gov... Yes, the actual 2026 Roth IRA limit is $7,000, not $7,200. Please don't guess specific numbers — say 'verify the current limit on IRS.gov' instead."
The model will usually adjust for the rest of the conversation. Correcting it like this is not wasted effort: it steers the remainder of the session toward better answers.
Special Risks to Know About
1. AI scams targeting your finances. Scammers use AI to write convincing phishing emails and fake "AI financial advisor" services. If something arrives unsolicited and asks for money or login info, treat it as a scam. Use AI to help you analyze the message:
"I got this message claiming to be from [bank]. Walk me through the red flags that suggest it is a scam: [paste]"
2. AI investment advice scams. If a service says "our AI predicts the next big stock," it is selling lottery tickets. Real investing is index funds and time, not predictions.
3. Voice and video deepfakes. Scammers can clone a relative's voice. If anyone asks for emergency money over the phone, hang up and call back on a known number.
A Pre-Decision Checklist
Before any high-stakes financial decision, run these five questions:
- Is the AI's number traceable to an official source?
- Do two AI tools agree on this answer?
- Is the source dated within the last 12 months?
- Does this involve a real, verifiable institution and product?
- If I am wrong, is the cost recoverable?
If any answer is "no," slow down. Verify.
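If it helps to make the gate explicit, here is a toy sketch of the checklist as code. The function name and the short labels are illustrative only, not part of any real tool.

```python
# Toy sketch: the five-question checklist as an all-or-nothing gate.
# Labels and the function name are illustrative, not a real product.

CHECKLIST = [
    "traceable to an official source",
    "two AI tools agree",
    "source is less than 12 months old",
    "real, verifiable institution and product",
    "cost is recoverable if wrong",
]

def ready_to_act(answers):
    """Proceed only if every one of the five answers is 'yes' (True)."""
    return len(answers) == len(CHECKLIST) and all(answers)

print(ready_to_act([True] * 5))                        # True: act
print(ready_to_act([False, True, True, True, True]))   # False: slow down, verify
```

The design choice matters: the gate is all-or-nothing on purpose. A single "no" is enough to pause, because four green lights do not offset one red one.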
Key Takeaways
- Never share account numbers, SSNs, passwords, OTPs, or full government IDs with AI.
- Hallucinations are confident wrong answers — they sound correct but are factually incorrect.
- Build a trust map: trust alone for concepts, cross-check for medium-stakes, verify on official sources for high stakes, and never use AI for specific stock picks or filing taxes.
- Always ask for sources and use Perplexity to verify high-stakes numbers.
- Use AI itself to spot phishing and investment scams aimed at you.
- The 5-question pre-decision checklist saves money and prevents painful mistakes.

