AI Ethics & Data Privacy for Nonprofits
Nonprofits hold a sacred trust with donors, beneficiaries, volunteers, and funders. That trust is the foundation of your mission. AI tools, used without care, can put that trust at real risk: through data leaks, biased output, misattributed quotes, or simply sloppy work presented as polished. This lesson walks through the practical ethics and privacy framework every nonprofit manager should adopt before scaling AI use across their organization.
What You'll Learn
- The five biggest ethical and privacy risks when using AI at a nonprofit
- A practical data-handling checklist you can apply before any AI task
- How to write an internal AI-use policy in an afternoon
- How to talk to your board, funders, and donors about your AI use
The Five Risk Categories
1. Donor and Beneficiary Data
The single biggest risk. Pasting donor names, giving histories, addresses, or — most seriously — beneficiary data (health records, immigration status, housing status, financial hardship) into consumer AI tools can violate donor agreements, grant contracts, and in some cases federal laws like HIPAA.
Rule of thumb: treat consumer AI chat windows (free ChatGPT, free Claude, free Gemini) like a public forum. Do not paste anything you would not post on your Facebook page.
2. Hallucinated Facts
AI tools confidently make up statistics, funder names, regulation citations, and historical facts. If this content reaches a funder or donor unchecked, the reputational damage can be severe.
Mitigation: verify every factual claim against an authoritative source before it leaves your organization.
3. Invented Beneficiary Voices
Covered in the Impact Storytelling lesson but worth repeating: AI must never put words in a real beneficiary's mouth, fabricate a person, or present a composite as a single individual without clear labeling.
4. Algorithmic Bias
AI models are trained on the open internet. They carry biases — around race, gender, disability, language fluency, region, socioeconomic class. For nonprofits working with marginalized communities, this bias can show up as:
- Deficit-framing language about your beneficiaries
- Subtle assumptions about who "counts" as a donor
- Volunteer screening rankings that amplify unfair proxies
Mitigation: always review AI output for bias. Involve your program staff and beneficiaries in reviewing AI-generated content about them.
5. Voice Dilution
If all your content is AI-drafted without human editing, your organization's distinctive voice disappears. Funders, donors, and community partners can tell. This is a slower, quieter risk than privacy breaches but equally damaging to your mission over time.
The Data-Handling Checklist
Before pasting anything into an AI tool, run through this checklist:
- Is this my data to share? Do any contracts, NDAs, or grant agreements restrict how this data is handled?
- Is this personally identifying? Does it include names, addresses, full giving histories, beneficiary identifiers?
- Is this protected? Health, immigration, housing status, financial hardship — these deserve stricter handling.
- Could I redact or anonymize it? Most AI tasks work just as well with first names only and redacted identifiers (a minimal redaction sketch follows this checklist).
- What is my plan's data policy? ChatGPT Team and Enterprise, Claude Team, and Google Workspace with Gemini Business have stronger data protections than free tiers.
- Would I be comfortable showing this to a funder? If not, rethink.
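To make the redaction step concrete, here is a minimal sketch in Python. The patterns, placeholder tokens, and example note are all illustrative assumptions, not a complete PII scrubber; adapt them to the identifiers that actually appear in your records, and still review the result by hand before pasting it anywhere.

```python
import re

# Illustrative regex patterns for common identifiers. These are NOT a
# complete PII scrubber; extend them for the data your organization holds.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "[AMOUNT]": re.compile(r"\$\s?\d[\d,]*(?:\.\d{2})?"),
    "[ADDRESS]": re.compile(r"\b\d+\s+[A-Z][a-z]+\s+(?:St|Ave|Rd|Blvd|Lane|Dr)\b"),
}

def redact(text: str) -> str:
    """Replace emails, phone numbers, dollar amounts, and simple street
    addresses with placeholder tokens before the text leaves your hands."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    # Hypothetical donor note, not real data.
    note = ("Maria Lopez (maria@example.org, 555-123-4567) gave $2,500 "
            "last year and lives at 42 Oak St.")
    print(redact(note))
    # Maria Lopez ([EMAIL], [PHONE]) gave [AMOUNT] last year and lives at [ADDRESS].
```

Note that the sketch does not touch names; per the checklist, keep those to first names only or remove them manually.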
Tier Your Tools by Risk
Build a simple tool tier system (a code sketch follows this list):
- Tier 1 (public/generic): Free ChatGPT, Free Claude, Free Gemini, Perplexity. Use only for non-sensitive, generic content: brainstorming, general grant research, public information.
- Tier 2 (internal): ChatGPT Plus/Team, Claude Pro/Team, Google Workspace with Gemini. Use for internal drafts that include organizational context but redacted individual data.
- Tier 3 (restricted): Only enterprise-grade tools with a signed BAA (Business Associate Agreement) may touch protected health data, and only if the tool is explicitly configured for it. Most consumer AI tools do not meet this bar.
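One way to keep the tiers enforceable is to write them down as a simple lookup that staff guidance, or any internal script, can reference before data goes anywhere. A minimal sketch, assuming Python; the tool names and data categories are placeholders for your own approved-tools list.

```python
# A minimal sketch of the tool tiers as a lookup table. Tool names and data
# categories here are placeholders; replace them with your approved-tools list.

TOOL_TIERS = {
    "free_chatgpt": 1, "free_claude": 1, "free_gemini": 1, "perplexity": 1,
    "chatgpt_team": 2, "claude_team": 2, "workspace_gemini": 2,
    "enterprise_baa_tool": 3,  # hypothetical BAA-covered, explicitly configured tool
}

# Minimum tier a tool must meet before each kind of data may go into it.
DATA_MIN_TIER = {
    "public_or_generic": 1,   # brainstorming, public info, general research
    "internal_redacted": 2,   # organizational context, individual identifiers removed
    "protected": 3,           # health, immigration, housing, financial hardship
}

def is_allowed(tool: str, data_category: str) -> bool:
    """True if the tool's tier meets or exceeds the data's minimum tier."""
    return TOOL_TIERS.get(tool, 0) >= DATA_MIN_TIER[data_category]

assert is_allowed("chatgpt_team", "internal_redacted")
assert not is_allowed("free_chatgpt", "protected")
```

The lookup only captures the tier itself; the Tier 3 caveats about a signed BAA and explicit configuration still have to be verified tool by tool.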
Write Your AI-Use Policy
A good nonprofit AI-use policy is short. One or two pages. It covers:
- Principles — why you are adopting AI and what values guide your use (mission, beneficiary dignity, donor trust, staff capacity).
- Approved tools — which AI tools are allowed, which are not.
- Data rules — what can and cannot be pasted, tiered by data type.
- Review requirements — what AI output always requires human review before leaving the organization.
- Disclosure — when and how you disclose AI use to funders, donors, and the public.
- Attribution — how you cite AI assistance when appropriate.
- Training — what training new staff receive.
- Violations — what happens when the policy is breached.
Draft it with Claude or ChatGPT, refine with your leadership team and a board member (ideally one with legal or technology experience), and review annually.
Disclosing AI Use
A growing number of funders are adding questions about AI use to their grant applications. Some require disclosure of any AI use; others only require disclosure of generative AI use in the proposal itself.
Principles for disclosure:
- Be honest and specific. "We used AI to help draft an initial version of this proposal, which staff then heavily reviewed and edited" is widely accepted.
- Check funder requirements. Read each RFP carefully — some funders explicitly prohibit AI-drafted content; others explicitly welcome it.
- Preserve accountability. Whatever AI produced, a human on your team must own every claim and be able to defend it.
Some donors may ask you directly. A simple answer: "We use AI tools to save staff time on drafting, research, and repetitive tasks, which lets more of our budget flow into programs. Every communication that reaches a donor or beneficiary is reviewed by a human on our team."
Attribution in Content
For most nonprofit content (appeals, social posts, thank-yous, grant narratives), you do not need to publicly disclose AI assistance — any more than you would disclose that your grant writer used spell-check. The output is your organization's.
For content where AI use is material (a report whose conclusions were derived from AI analysis, or a beneficiary story generated almost entirely by AI with minimal human oversight), disclose clearly.
Building Organizational Literacy
Every staff member who uses AI should understand:
- What can and cannot go into consumer AI tools
- How to verify AI-generated facts
- How to recognize and correct biased output
- When to escalate to leadership
Schedule a 60-minute all-staff training when you first roll out AI widely. Refresh annually.
Special Note on Community and Beneficiary Voice
If your nonprofit serves communities that have historically been described about rather than heard from, AI presents a particular risk of perpetuating paternalism. Run a simple filter on AI-generated content:
- Who is the subject? Who is the object?
- Is the community speaking, or being spoken about?
- Would a community member recognize themselves in this content?
When in doubt, share AI-drafted community content with community members before publication.
Key Takeaways
- Treat consumer AI chat windows like public forums; paste nothing you would not post on Facebook
- Tier your AI tools by risk level and match sensitive data only to tools with appropriate protections
- Every nonprofit using AI at scale needs a short, clear AI-use policy reviewed annually
- Disclose AI use to funders per their requirements; preserve human accountability for every claim
- Protect beneficiary voice by reviewing AI output for bias, paternalism, and deficit-framing before publication

