Privacy & Prompt Hygiene
Every prompt you send to an AI is data about you, your work, and possibly other people. "Prompt hygiene" is the discipline of being thoughtful about what goes into the chat box. This single lesson can prevent the kind of privacy mistake that has cost real people their jobs, reputations, and even legal cases.
What You'll Learn
- What actually happens to your data after you send a prompt
- The five categories of information you should never paste into a public chatbot
- A "redact-then-prompt" technique you can use immediately
- The difference between consumer chatbots and enterprise / privacy-protected versions
What Happens When You Hit Send
This varies by tool, but in general:
- Your prompt and the response are stored at least temporarily.
- They may be reviewed by humans for safety or quality reasons.
- They may be used to train future models — depending on your settings and the tool.
- They may be visible to system administrators if you use a school or company account.
- They may be subject to subpoena.
This is not paranoia; it is publicly documented in the privacy policies of every major model provider. In 2023, Samsung banned employees from using ChatGPT after engineers pasted confidential source code into it, code that may then have become part of OpenAI's training data.
The Five Categories You Should Never Paste
Treat these as bright lines. Even if you "trust the company," do not put these into a public chatbot:
- Personal information about other people — addresses, phone numbers, medical info, financial info — especially without consent.
- Credentials and secrets — passwords, API keys, access tokens, SSH keys, OAuth secrets.
- Confidential work data — unreleased product info, source code under NDA, internal financials, customer data.
- Legal or compliance-sensitive content — health records, immigration documents, legal strategy, GDPR-protected EU resident data.
- Anything you would not want screenshotted on Reddit — embarrassing personal content, things you'd be fired for, things that would identify a third party in a sensitive situation.
A quick gut check: "Would I be comfortable if this prompt and its response showed up in a news story tomorrow?"
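These bright lines can even be checked mechanically before you hit send. The sketch below is a minimal, illustrative pre-send checker; the patterns are my own simplified assumptions, nowhere near the rule sets of a real secret scanner, but enough to catch the obvious cases while a prompt is still on your machine.

```python
import re

# Illustrative never-paste patterns -- deliberately simple, not exhaustive.
PATTERNS = {
    "email address":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api key/secret":      re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S+"),
    "ssh private key":     re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "account-like number": re.compile(r"\b\d{4}-\d{4}\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of any never-paste categories found in `text`."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

hits = check_prompt("Email maria.g@acmebank.com about account 8829-1124.")
print(hits)  # -> ['email address', 'account-like number']
```

An empty result does not mean a prompt is safe; it only means none of these crude patterns fired. The gut check above still applies.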
A Real Cost of Bad Prompt Hygiene
In 2024, a healthcare worker pasted a redacted-but-not-really patient case into a free chatbot to "help write a discharge summary." The patient was identifiable from the combination of details. The hospital faced a HIPAA violation report and the worker was disciplined.
The lesson is not "don't use AI in healthcare." The lesson is "use the right tool" — in that case a HIPAA-compliant medical AI tool, or carefully redacted text in a privacy-protected enterprise account.
The Redact-Then-Prompt Technique
This is the most important practical skill in the lesson. Before pasting anything that might be sensitive, redact it.
Original (DO NOT SEND):
"Help me write a follow-up email to Maria Gonzalez, our customer at Acme Bank, about the failed payment of $4,300 on her account 8829-1124. Her email is maria.g@acmebank.com."
Redacted (safe to send):
"Help me write a polite follow-up email to a banking customer about a failed payment. Use placeholders for the customer name, amount, and account number. Tone should be empathetic and professional."
Then you fill in the real details yourself, locally, after the AI returns the draft. The AI never sees the sensitive information.
This works for almost everything: legal drafts, medical notes, code, financials, HR letters, performance reviews.
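The manual steps above can also be sketched in code. This is a minimal illustration using the made-up customer details from the example: keep a local map from real values to placeholders, redact before sending, and refill on your own machine after the draft comes back.

```python
# Redact-then-prompt sketch. The sensitive values below are the fictional
# example details from this lesson; in practice you would build this map
# per task, locally, and never send the real values anywhere.
SENSITIVE = {
    "[NAME]":    "Maria Gonzalez",
    "[COMPANY]": "Acme Bank",
    "[AMOUNT]":  "$4,300",
    "[ACCOUNT]": "8829-1124",
}

def redact(text: str) -> str:
    """Swap each real value for its placeholder before prompting."""
    for placeholder, value in SENSITIVE.items():
        text = text.replace(value, placeholder)
    return text

def refill(draft: str) -> str:
    """Restore the real values locally, after the AI returns its draft."""
    for placeholder, value in SENSITIVE.items():
        draft = draft.replace(placeholder, value)
    return draft

prompt = redact("Follow up with Maria Gonzalez at Acme Bank about the "
                "failed $4,300 payment on account 8829-1124.")
print(prompt)  # the AI only ever sees [NAME], [COMPANY], [AMOUNT], [ACCOUNT]
```

The key property: `redact` runs before anything leaves your machine, and `refill` runs only on the returned draft, so the sensitive values never enter the prompt at all.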
A Prompt Template You Can Reuse
Save this in your notes app:
"I want help with [TASK]. The real content involves sensitive information I will not share. Treat the input I'm about to give you as redacted, with placeholders like [NAME], [DATE], [AMOUNT], [COMPANY]. Produce output that I can fill in myself afterward. Below is the redacted version: \n\n[paste]"
This single habit prevents the majority of consumer-AI privacy incidents.
Consumer vs Enterprise: Why It Matters
| Tier | Examples | Privacy properties |
|---|---|---|
| Free / consumer | Free ChatGPT, free Claude, free Gemini | May train on your data; subject to provider's general privacy policy |
| Paid consumer | ChatGPT Plus, Claude Pro, Gemini Advanced | Often opt-out of training by default, but check current policies |
| Enterprise / API | OpenAI Enterprise, Anthropic for Work, Google Workspace AI | Contractual data protection; do not train by default |
| Domain-specialized | HIPAA-compliant clinical AI, legal AI, financial AI | Built for sensitive data with compliance certifications |
Rule of thumb: the higher the tier, the better the privacy protections — but always check the current policy. Policies change.
Your Account Settings Matter
If you only do one thing after this lesson, do this:
- ChatGPT: Settings → Data Controls → "Improve the model for everyone" → toggle off if you do not want your chats used for training.
- Claude: Settings → Privacy → review training settings.
- Gemini: Activity → Gemini Apps Activity → review and adjust.
Each provider currently lets you opt out of having your prompts used for training, but the exact path changes. Search "[provider name] training opt out" and follow the official instructions.
Also turn on or review: chat history retention, third-party app access, and any "memory" or "personalization" features that store information about you across conversations.
Other People's Privacy
Privacy is not just about your own data. When you paste someone else's information — a colleague's CV, a friend's medical question, a customer's complaint — you are making a privacy decision for them.
This is one of the under-discussed responsibilities of AI use. A good rule: if you would not feel comfortable telling the person "I shared your details with an AI to help me figure out what to do," do not share them.
Hands-on: Audit Your Last Week of Prompts
Open your chatbot's history. Scroll back through the last week of prompts. For each prompt:
- Is there any personally identifiable information you would not have wanted leaked?
- Is there any confidential work or school content?
- Did you paste content about another person without their consent?
Most students are surprised by how much "low-grade" sensitive content they have shared. The exercise is not meant to scare you — it is meant to calibrate your future prompts.
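If your provider lets you export your chat history, you can partially automate this audit. The sketch below assumes a simple export format, a list of objects with a "prompt" field; real export formats differ by provider, so adapt the loading step, and treat the patterns as illustrative only.

```python
import re

# Audit sketch for an exported chat history.
# ASSUMPTION: the export is a list of {"prompt": ...} dicts -- real
# provider exports vary, so adapt this to whatever format you receive.
FLAGS = {
    "email":               re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone":               re.compile(r"\b\d{3}[-. ]\d{3}[-. ]\d{4}\b"),
    "possible credential": re.compile(r"(?i)\b(password|api[_-]?key|token)\b"),
}

def audit(history: list[dict]) -> list[tuple[int, list[str]]]:
    """Return (index, matched categories) for every flagged prompt."""
    flagged = []
    for i, entry in enumerate(history):
        found = [name for name, pat in FLAGS.items()
                 if pat.search(entry["prompt"])]
        if found:
            flagged.append((i, found))
    return flagged

history = [
    {"prompt": "Rewrite this paragraph in plainer English."},
    {"prompt": "Debug this: api_key = sk-test, user bob@example.com"},
]
print(audit(history))  # -> [(1, ['email', 'possible credential'])]
```

As with the pre-send checker, a clean automated pass is not a clean bill of health; it just narrows down which prompts deserve a manual look.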
Key Takeaways
- Every prompt is data; assume it could be reviewed, retained, or leaked.
- Five never-paste categories: third-party info, credentials, confidential work, legal/health data, anything embarrassing.
- The redact-then-prompt technique solves most real privacy problems.
- Tier matters — consumer vs enterprise versions have very different protections.
- Audit your account settings and your prompt history at least once a quarter.

