Secure Prompting and Attorney-Client Privilege
In an introductory legal AI course you learn the headline rule: "don't paste confidential client information into ChatGPT." That rule is correct but incomplete. In active litigation, you face questions that introductory rules cannot answer. What about a redacted deposition transcript? A document already produced to opposing counsel? A hypothetical that closely tracks the facts? Each of these involves real judgment calls.
This lesson gives you a working framework for secure prompting in litigation, grounded in the way courts and bar associations have been writing about confidentiality and privilege through 2025 and 2026.
What You'll Learn
- The three-tier confidentiality model used by most modern firms
- When AI use can waive attorney-client privilege or work product protection
- How to prompt safely without sanitizing away the useful context
- A practical pre-prompt checklist you can apply in seconds
The Three-Tier Confidentiality Model
Most modern firms place every AI tool into one of three tiers.
Tier A: Open / consumer. Examples: free ChatGPT, free Claude.ai, free Gemini, public versions of Perplexity. Default settings may use prompts to train future models. Acceptable only for fully generic queries with no client-identifying or matter-specific content.
Tier B: Enterprise sandboxed. Examples: ChatGPT Enterprise, Claude for Work, Gemini for Workspace, Microsoft Copilot with data protection. Your firm holds a contract that excludes prompts from training and limits retention. Acceptable for internal work product, generic research, and most non-privileged drafting.
Tier C: Legal-grade grounded. Examples: Westlaw Precision AI, Lexis+ AI with Protégé, CoCounsel, Harvey, Everlaw, Relativity aiR. These run in a single-tenant or VPC environment under a SaaS contract with explicit confidentiality language, often with on-premises or jurisdiction-bound hosting options. Appropriate for privileged content and active matters.
The basic rule: never put data into a tool whose tier is below the data's sensitivity. The harder rule: do not default to a tier above what the task needs, either, because Tier C tools are slower and far more expensive.
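The tier rule reduces to a small lookup: pick the lowest tier whose protections meet the data's sensitivity, which enforces both halves of the rule at once. A minimal sketch; the sensitivity labels and the `Tier` enum are illustrative assumptions, not a real firm taxonomy.

```python
from enum import IntEnum

class Tier(IntEnum):
    """Firm confidentiality tiers, ordered by strength of protection."""
    A = 1  # open / consumer
    B = 2  # enterprise sandboxed
    C = 3  # legal-grade grounded

# Hypothetical mapping from data sensitivity to the minimum acceptable tier.
# In practice this classification is a judgment call made per prompt.
MINIMUM_TIER = {
    "generic": Tier.A,      # no client or matter content at all
    "internal": Tier.B,     # de-identified, non-privileged work
    "privileged": Tier.C,   # privileged content or active-matter documents
}

def choose_tier(sensitivity: str) -> Tier:
    """Return the lowest tier that still meets the data's sensitivity.

    Choosing the *minimum* acceptable tier captures both rules: never go
    below the sensitivity, and don't pay for Tier C when Tier B will do.
    """
    return MINIMUM_TIER[sensitivity]
```

Because `Tier` is an `IntEnum`, tiers compare numerically, so "a tier below the sensitivity of the data" is a simple `<` check.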
When AI Can Waive Privilege
Attorney-client privilege protects confidential communications between a lawyer and client made for legal advice. Work product doctrine protects materials prepared in anticipation of litigation. Both can be waived.
In the AI context, three waiver scenarios are realistic in 2026:
- Voluntary disclosure to a third party. If your AI vendor's terms allow it to use your prompts to train models, you may be voluntarily disclosing protected content to a third party. Tier A tools default to this; Tier B and C contracts exclude it.
- Disclosure during discovery. Opposing counsel can request your AI prompts and outputs if they are relevant. A few 2025 and early 2026 cases have addressed whether prompts and outputs are work product. The conservative practice: assume both are discoverable unless your firm has structured a deliberate work product claim.
- Inadvertent disclosure via a shared environment. Multi-tenant AI environments without proper isolation can leak content between organizations through retrieval-augmented generation or shared embeddings. Tier C tools are designed to prevent this; lower tiers are not.
Prompting Without Sanitizing Away the Signal
A common mistake is over-redacting prompts to the point that the AI produces useless output. The reverse mistake is leaving identifying details in place. The middle path uses role substitution.
Replace specific identifying facts with role-based placeholders that preserve the legal structure of the problem.
Bad (leaks identifying detail):
Acme Industries terminated Jane Smith on June 3, 2025, after she filed
an OSHA complaint about lead exposure at the Cleveland plant. She now
wants to sue for retaliation under Ohio Revised Code 4113.52.
Good (preserves legal structure, removes identifying detail):
A manufacturing employer terminated an at-will employee approximately
30 days after the employee filed an OSHA complaint about a workplace
safety hazard at the employer's facility. The employee wants to bring
a state retaliatory discharge claim in Ohio. What are the elements,
key evidentiary considerations, and recent appellate decisions?
The legal substance is identical. The second version produces equally useful research while keeping the matter de-identified in a Tier B environment.
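Role substitution can even be pre-staged as a mechanical find-and-replace pass over a draft prompt. A minimal sketch, assuming a hand-built, per-matter substitution table; the patterns below are hypothetical and simply track the example above, and in practice the attorney, not an automated scrubber, decides what goes in the table.

```python
import re

# Hypothetical per-matter substitution table:
# specific identifiers -> role-based placeholders that keep the legal structure.
SUBSTITUTIONS = {
    r"\bAcme Industries\b": "a manufacturing employer",
    r"\bJane Smith\b": "an at-will employee",
    r"\bJune 3, 2025\b": "approximately 30 days after the protected activity",
    r"\bthe Cleveland plant\b": "the employer's facility",
}

def role_substitute(prompt: str) -> str:
    """Replace identifying facts with role-based placeholders."""
    for pattern, placeholder in SUBSTITUTIONS.items():
        prompt = re.sub(pattern, placeholder, prompt)
    return prompt

raw = "Acme Industries terminated Jane Smith at the Cleveland plant."
print(role_substitute(raw))
# a manufacturing employer terminated an at-will employee at the employer's facility.
```

The table is the attorney's work product in miniature: building it forces you to decide, identifier by identifier, what is signal and what is merely identifying.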
When to Skip Sanitization
Inside Tier C tools, you generally do not need to sanitize: the vendor contract covers the confidential content. Sanitizing in Tier C is over-engineering, and it can actually hurt output quality, because the model cannot see the real document set you want it to reason over.
The decision tree is simple:
- Tier A: never use for matter content, no exceptions.
- Tier B: always sanitize to role-based facts.
- Tier C: do not sanitize unless the prompt would expose facts outside the vendor's contracted scope.
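The decision tree above is small enough to encode directly. A sketch, with a hypothetical `within_contract_scope` flag standing in for the Tier C carve-out:

```python
def sanitization_decision(tier: str, matter_content: bool,
                          within_contract_scope: bool = True) -> str:
    """Apply the three-branch decision tree: 'block', 'sanitize', or 'send'.

    within_contract_scope is an assumption for illustration: it models
    whether the facts fall inside the Tier C vendor's contracted scope.
    """
    if tier == "A":
        # Tier A: never use for matter content, no exceptions.
        return "block" if matter_content else "send"
    if tier == "B":
        # Tier B: always sanitize matter content to role-based facts.
        return "sanitize" if matter_content else "send"
    if tier == "C":
        # Tier C: sanitize only if facts fall outside the contracted scope.
        return "send" if within_contract_scope else "sanitize"
    raise ValueError(f"unknown tier: {tier}")
```

Note that a fully generic query carries no matter content, so it can go to any tier unmodified, which is consistent with the Tier A rule above.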
The Pre-Prompt Checklist
Before you press send on any litigation-related prompt, run this five-second checklist.
- Tier check. Is this tool the right tier for the data I am about to enter?
- Identifier scan. Does the prompt include any names, account numbers, addresses, or unique facts?
- Privilege check. Would this prompt, if read aloud in court, constitute or describe a privileged communication?
- Output handling. Where will I save the output? Is it going into a matter-managed system with an audit trail?
- Verification plan. What is my plan to verify every factual claim and citation the model produces?
If you cannot answer all five quickly, do not send the prompt.
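The checklist is a gate: a prompt goes out only when all five questions have been answered, all affirmatively. A minimal sketch; the questions are rephrased from the list above so that "yes" is always the safe answer (the original identifier scan, for example, is safe on "no").

```python
# The five pre-prompt questions, phrased so "yes" is always the safe answer.
CHECKLIST = (
    "Tier check: is this tool the right tier for the data I am about to enter?",
    "Identifier scan: is the prompt free of names, account numbers, "
    "addresses, and unique facts?",
    "Privilege check: is the prompt clear of privileged communications?",
    "Output handling: will the output go into a matter-managed system "
    "with an audit trail?",
    "Verification plan: do I have a plan to verify every claim and citation?",
)

def ready_to_send(answers: list[bool]) -> bool:
    """True only if every question was answered, and every answer is yes.

    Anything short of five confident yeses means: do not send the prompt.
    """
    return len(answers) == len(CHECKLIST) and all(answers)
```

Encoding "cannot answer quickly" as a missing answer makes the rule mechanical: an incomplete checklist fails the gate just as a "no" does.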
A Note on Shared Drives and Copilot
Microsoft 365 Copilot is a special case worth flagging. When a firm enables Copilot, the model can read across SharePoint, OneDrive, and Teams. That power is also a risk: Copilot can surface privileged matter content from one team to a user in another team if permissions are sloppy.
Before turning on Copilot at a firm, audit your existing SharePoint and OneDrive permissions. A 2026 best practice is to lock down matter-related sites to need-to-know access before Copilot is allowed to index them.
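The pre-Copilot audit boils down to one question per site: does any firm-wide group have access to matter content? A toy sketch over a hypothetical permissions export; there are no real Microsoft Graph or SharePoint API calls here, and the data shape, site names, and group names are all assumptions for illustration.

```python
# Hypothetical export of SharePoint site permissions: one dict per site.
sites = [
    {"name": "Matter-1042-Litigation", "groups": ["Everyone", "Litigation Team"]},
    {"name": "Matter-2210-Appeal", "groups": ["Appeal Team"]},
]

# Broad, firm-wide groups that should never hold access to matter sites.
BROAD_GROUPS = {"Everyone", "Everyone except external users", "All Company"}

def flag_overexposed(sites: list[dict]) -> list[str]:
    """Return matter sites whose access list includes a firm-wide group.

    These are the sites to lock down to need-to-know access *before*
    Copilot is allowed to index them.
    """
    return [site["name"] for site in sites
            if BROAD_GROUPS.intersection(site["groups"])]

print(flag_overexposed(sites))  # ['Matter-1042-Litigation']
```

The point of the sketch is the ordering: the audit runs, the flagged sites get locked down, and only then does Copilot indexing get switched on.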
Key Takeaways
- Use a three-tier model: consumer, enterprise sandboxed, legal-grade grounded.
- Match the tier to the sensitivity of the data, then stop.
- Avoid waiver by reviewing your vendor terms, treating outputs as potentially discoverable, and using isolated environments for privileged work.
- Use role substitution to preserve legal structure while removing identifying facts in Tier B.
- Audit your existing document permissions before enabling firmwide Copilot.

