AI Ethics, HIPAA & Client Confidentiality
This is the most important lesson in the course. Every other lesson assumes you handle AI ethically and protect client confidentiality; this one tells you exactly how. If you internalize only one lesson from the course, make it this one.
What You'll Learn
- The four ethical frameworks that govern AI use in social work
- HIPAA, FERPA, 42 CFR Part 2, and state confidentiality rules as they apply to AI tools
- The identifier-stripping discipline that protects you and your clients
- When to disclose AI use to clients, supervisors, and external parties
- A simple decision tree for "can I use AI for this task?"
The Four Ethical Frameworks
Social workers using AI are governed by four overlapping standards:
1. The NASW Code of Ethics (as amended through 2021)
Standards 1.04 (Competence), 1.05 (Cultural Competence), 1.07 (Privacy and Confidentiality), 1.08 (Access to Records), and 3.04 (Client Records) all apply directly to AI use. The 2017 revisions added explicit language about technology in practice, carried forward in the current 2021 Code. Read these standards.
2. State Licensing Board Rules
State licensing boards (California's Board of Behavioral Sciences, New York's State Board for Social Work, and their counterparts in other states) are beginning to issue AI guidance. Some states require disclosure of AI use in clinical documentation. Check your state's most recent guidance.
3. Federal Health Privacy Law
- HIPAA governs Protected Health Information held by healthcare providers and their business associates
- 42 CFR Part 2 governs substance use treatment records in federally assisted SUD programs — more protective than HIPAA
- FERPA governs school records, including those held by school social workers
4. Agency Policy and Funder Requirements
Your agency may have its own AI policy. Many county and state contracts now include AI clauses. Read your agency's policy. If your agency doesn't have one, ask your supervisor or compliance officer.
The Identifier-Stripping Discipline
This is the single behavior that keeps you out of trouble.
Before you paste anything into a free or non-BAA-covered AI tool, strip:
- Names — full, first, last, nicknames, family member names
- Addresses — full or partial
- Dates — date of birth, dates of treatment encounters, dates of incidents (use ages and intervals instead: "age 34", "6 weeks ago")
- Phone numbers, email addresses, fax numbers
- Medical record numbers, Medicaid numbers, Social Security numbers, case numbers
- Vehicle license plates, device serial numbers
- Biometric identifiers, photos, voice recordings
- Internal facility identifiers (specific cottage, unit, room, classroom number)
- Any combination of details that could identify the client (e.g., "the only Eritrean family on our caseload")
The HIPAA Safe Harbor method specifies 18 categories of identifiers to remove. When in doubt, remove it.
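To make the stripping habit concrete, here is a minimal Python sketch of a pattern-based scrubber. Everything in it — the patterns, the function name, the placeholders — is illustrative, not an approved de-identification tool: regexes catch only identifiers with predictable formats, so a human read-through for names, places, and identifying combinations of details is still mandatory.

```python
import re

# Patterns for identifiers that follow predictable formats. Illustrative only:
# regexes cannot catch names, addresses, or identifying combinations of details.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),  # e.g., 3/14/2024
    "mrn": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
}

def scrub(text: str) -> str:
    """Replace pattern-matchable identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

print(scrub("Client (MRN: 48213) called 555-867-5309 on 3/14/2024."))
# -> Client ([MRN REMOVED]) called [PHONE REMOVED] on [DATE REMOVED].
```

A scrubber like this is a seatbelt, not a substitute for judgment: run it first, then reread the output asking "could anyone recognize this client?"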
What Counts as an AI Tool That Needs De-Identification?
- ChatGPT (free or paid) — requires de-identification on standard plans
- Claude (free or paid) — requires de-identification
- Google Gemini (free) — requires de-identification
- Perplexity (free or paid) — requires de-identification
- Any AI tool not covered by a Business Associate Agreement with your agency
- Any free transcription service (Otter.ai free, Fireflies free) — requires de-identification
What does not require de-identification (typically):
- AI features inside your EHR if your EHR vendor is a HIPAA business associate
- ChatGPT Enterprise, Claude Enterprise, Gemini for Workspace if your agency has signed a BAA
- Microsoft 365 Copilot under your agency's enterprise BAA
If you're not sure whether a tool is covered, assume it isn't.
42 CFR Part 2 — Substance Use Records
If your client receives substance use treatment in a federally assisted program, 42 CFR Part 2 applies. This regulation is stricter than HIPAA. You generally need specific, written client consent to disclose information about substance use treatment — and under most interpretations that includes entering it into a non-BAA-covered AI tool. Strip aggressively when working with SUD content. Ask your compliance officer when in doubt.
FERPA — School Social Workers
School records (and notes by school social workers about students) are governed by FERPA. Disclosure to AI vendors falls under FERPA's "school official" exception only if the vendor has been formally designated and the disclosure is for legitimate educational interests. Most school districts have not yet designated ChatGPT or Claude as approved vendors. Treat school records like PHI — strip identifiers.
Disclosure: Who, When, What
To clients: Most states do not yet require advance disclosure that you might use AI to draft documentation. Best practice (rapidly emerging) is to disclose during informed consent: "Like many providers, I sometimes use AI tools to help organize and draft my clinical notes. I review and verify everything before it becomes part of your record. No identifying information about you is shared with these tools."
To supervisors: Disclose your AI use openly and routinely. Supervisors are accountable for your work; they need to know what tools you're using.
To courts and external bodies: When AI was used substantively (e.g., to draft a court report), disclose if asked. Some jurisdictions now require it. The honest framing: "I authored this document. I used AI to organize and summarize source content, similar to using a template or word processor. I verified every fact and authored all clinical opinions and recommendations."
To funders: Increasing numbers of funders ask. Disclose honestly and specifically.
The Decision Tree
Before using AI for any task involving client content, ask:
1. Is the content already de-identified? If yes, proceed with any AI tool.
2. If not, is the AI tool covered by a BAA? If yes (your EHR's AI feature, your enterprise tool under a BAA), proceed.
3. If neither, can I de-identify before pasting? If yes, do it, then proceed.
4. If I cannot de-identify without losing the meaning, stop. Do not use the tool. Do the work manually or wait until your agency provides a BAA-covered option.
That four-step check, run before every AI interaction with client content, will keep you safe.
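If it helps to see the tree as plain logic, here is a minimal sketch. The function and flag names are hypothetical, not part of any compliance standard:

```python
def may_use_ai(content_deidentified: bool,
               tool_has_baa: bool,
               can_deidentify: bool) -> str:
    """The four-step check as logic. Flag names are illustrative."""
    if content_deidentified:
        return "Proceed with any AI tool."
    if tool_has_baa:
        return "Proceed with the BAA-covered tool."
    if can_deidentify:
        return "De-identify first, then proceed."
    return "Stop: work manually or wait for a BAA-covered option."

# Real client content, a free consumer tool, and stripping would gut the meaning:
print(may_use_ai(content_deidentified=False, tool_has_baa=False,
                 can_deidentify=False))
# -> Stop: work manually or wait for a BAA-covered option.
```

Note the order: the checks relax from the safest condition downward, and the default when nothing passes is always "stop."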
Specific Risk Scenarios
You paste a real client name into ChatGPT. Treat it as a potential breach. Notify your supervisor and your agency's privacy officer the same day. Most agencies require an internal incident report and possibly a HIPAA breach assessment. This is fixable if reported quickly; it becomes much worse if it surfaces later.
A coworker shows you a Custom GPT that already contains client documents. Decline to use it. Notify your supervisor. The Custom GPT may have already created a breach.
Your agency hasn't given you guidance. Ask, in writing. Email your supervisor: "I want to use AI tools to support my documentation. What is our agency's current policy on AI use, and which tools are approved for content involving PHI?" Save the response.
A funder asks if you used AI in a proposal. Disclose honestly. Specify what AI did (organized, drafted) and what you did (verified statistics, authored claims, finalized).
Voice Dilution: A Subtler Risk
Beyond privacy, there's a slow-burn ethical risk: if all your documentation is AI-drafted with the same prompt, your distinctive clinical voice fades. Your notes start sounding like every other AI-drafted note. Over time this can flatten the nuance that makes good clinical documentation valuable. Counter this by editing AI output in your own voice rather than accepting it verbatim.
Common Pitfalls
- Treating "I removed the name" as full de-identification (ages, addresses, and combinations of details still identify)
- Forgetting that 42 CFR Part 2 is stricter than HIPAA
- Assuming agency policy when none exists (always ask in writing)
- Not reporting a paste-error breach because it "feels small" (the cover-up is always worse than the mistake)
- Letting your clinical voice flatten through over-reliance on AI drafts
Key Takeaways
- Four overlapping frameworks govern AI use: NASW Code, state licensing rules, federal privacy law (HIPAA/42 CFR Part 2/FERPA), and agency policy
- The identifier-stripping discipline is the single most important habit; the HIPAA Safe Harbor method specifies 18 categories
- 42 CFR Part 2 is stricter than HIPAA for substance use records; FERPA governs school records
- Disclose AI use to supervisors routinely, to clients in informed consent, to courts when substantive, to funders when asked
- Use the four-step decision tree before any AI interaction involving client content
- Report any identifier-paste error to your supervisor and privacy officer the same day — early reporting protects everyone

