Deploying AI Safely with Insurance Regulatory Compliance
Insurance is one of the most heavily regulated industries on the planet. State insurance departments, the NAIC, federal laws such as HIPAA and GLBA, and a growing patchwork of state privacy and AI-specific statutes all govern how AI can be used. This final lesson gives you a practical compliance framework for using AI in your daily work and for building it into a team or department.
What You'll Learn
- The regulatory frameworks that govern AI in insurance
- The NAIC Model Bulletin on AI and what it requires
- A practical compliance checklist for individual AI use
- How to build an AI governance program for a team or department
The Regulatory Landscape
The major frameworks that touch AI in insurance:
NAIC Model Bulletin on the Use of AI Systems by Insurers (2023)
Adopted by a growing number of states, this bulletin sets expectations for how insurers govern AI:
- Maintain a written AI governance program
- Document AI use cases and risk classifications
- Monitor AI systems for accuracy, fairness, and bias
- Test third-party AI systems before deployment
- Provide consumer disclosures where required
- Notify the commissioner of material AI-related issues
If you are a licensed insurance professional, your carrier's AI governance program almost certainly applies to your AI use. Read it.
Colorado SB 21-169 and Regulation 10-1-1
Colorado's groundbreaking law on AI and external consumer data in insurance requires insurers to test their algorithms for unfair discrimination on the basis of race and other protected classes. As of 2026, the implementing regulation is in force for life insurance underwriting.
New York DFS Circular Letter 7 (2024)
New York's Department of Financial Services has issued guidance on the use of AI by insurers, focused on data quality, model risk management, and consumer protection.
California Insurance Commissioner Bulletin 2022-5
California's bulletin on the use of AI in insurance underwriting and pricing emphasizes anti-discrimination and explainability requirements.
State Privacy Laws
CCPA/CPRA, VCDPA, CPA, TDPSA, and others may apply to consumer-facing uses of AI. Most insurance data is carved out under these laws' GLBA exemptions, but customer interactions outside of policy administration may not be.
HIPAA and GLBA
Federal frameworks for PHI and financial information. AI use must respect Business Associate Agreements (HIPAA), Safeguards Rule (GLBA), and contractual data protections.
EU AI Act
If you write any European business, the EU AI Act classifies AI used for risk assessment and pricing of individuals in life and health insurance as "high risk," triggering documentation, transparency, and human-oversight requirements.
The NAIC Model Bulletin's Core Concepts
The bulletin introduces several concepts you should know:
- AIS (Artificial Intelligence Systems): Any system using AI to make or support decisions affecting consumers.
- Predictive Models: A subset of AIS used in underwriting, pricing, marketing, and claims.
- AI Governance: The carrier's framework for overseeing AIS use.
- High-Risk AIS: Systems that materially affect consumer access to insurance, pricing, or claim outcomes.
- Third-Party AIS: Systems sourced from vendors (which most consumer AI tools are).
The bulletin asks carriers to inventory AIS, classify by risk, document governance, test for bias, and disclose to consumers where appropriate.
A Practical Compliance Checklist for Individual Use
Before you use AI for any insurance task:
- Approved tool? Is the AI tool on your carrier's approved list? If not, ask before using.
- Right tier? Are you using a tier (Free / Plus / Team / Enterprise) appropriate for the data category?
- Right data? Is the data you are about to paste de-identified to the level your carrier policy requires?
- Right purpose? Is this an approved use case? (Drafting and analysis are usually approved; decisions usually are not.)
- Right oversight? Will a licensed human review the output before it leaves the carrier?
- Right documentation? Are you logging the tool, prompt, input, output, and review per carrier audit-trail policy?
If any answer is "no" or "I don't know," stop and ask.
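To make the checklist concrete, here is a minimal sketch of the six questions encoded as a pre-flight gate. Everything in it (the `ComplianceCheck` class, its field names, the message) is illustrative, not part of any carrier's actual policy:

```python
from dataclasses import dataclass

@dataclass
class ComplianceCheck:
    """Answers to the six pre-flight questions. Every field defaults to
    False so an unanswered question blocks use rather than permitting it."""
    approved_tool: bool = False        # Tool is on the carrier's approved list
    right_tier: bool = False           # Subscription tier fits the data category
    right_data: bool = False           # Input is de-identified per carrier policy
    right_purpose: bool = False        # Use case is approved (drafting, not deciding)
    right_oversight: bool = False      # A licensed human will review the output
    right_documentation: bool = False  # Tool, prompt, I/O, and review are logged

def may_proceed(check: ComplianceCheck) -> bool:
    """Return True only if every answer is an explicit yes."""
    return all(vars(check).values())

check = ComplianceCheck(approved_tool=True, right_tier=True)  # four answers unknown
if not may_proceed(check):
    print("Stop and ask before using AI for this task.")
```

Defaulting every field to False mirrors the rule above: "I don't know" is treated the same as "no."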
A Practical Governance Framework for a Team
If you are leading a claims, underwriting, or service team adopting AI, you need a lightweight governance framework. Most carriers already have one; if yours does not, this template gives you a starting point.
1. Inventory
Maintain a list of AI tools in use:
- Tool name and vendor
- Owner
- Use cases
- Data categories permitted
- Risk classification (Low / Medium / High)
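A spreadsheet works fine for this, but if your team keeps tooling in code, the inventory can be one small structured record per tool. A sketch, with hypothetical field values:

```python
from dataclasses import dataclass, field
from enum import Enum

class Risk(Enum):
    LOW = "Low"
    MEDIUM = "Medium"
    HIGH = "High"

@dataclass
class AIToolRecord:
    """One row in the team's AI tool inventory."""
    tool_name: str
    vendor: str
    owner: str                                       # A named person, not a team alias
    use_cases: list[str] = field(default_factory=list)
    permitted_data: list[str] = field(default_factory=list)
    risk: Risk = Risk.HIGH                           # Default to High until classified

# Hypothetical entry
inventory = [
    AIToolRecord(
        tool_name="ChatGPT Enterprise",
        vendor="OpenAI",
        owner="J. Rivera, Claims Ops",
        use_cases=["Claim note summarization", "Customer letter drafts"],
        permitted_data=["De-identified claim text"],
        risk=Risk.MEDIUM,
    ),
]
```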
2. Use Case Approval
For each use case, document:
- Description of the workflow
- Inputs and outputs
- Human-in-the-loop steps
- Risk classification
- Approver
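A use case approval record can be equally lightweight; every value in this sketch is a placeholder:

```python
# Hypothetical use case approval record; all values are placeholders.
use_case = {
    "description": "Draft first-response letters for auto claims",
    "inputs": ["De-identified claim summary", "Coverage type"],
    "outputs": ["Draft letter for adjuster review"],
    "human_in_the_loop": [
        "Licensed adjuster edits and approves every draft",
        "Nothing is sent to a customer without sign-off",
    ],
    "risk_classification": "Medium",
    "approver": "VP, Claims",
    "approved_on": "2025-01-15",
}
```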
3. Data Governance
For each tool, document:
- Permitted data categories
- Required de-identification steps
- Vendor BAA / DPA in place (if applicable)
- Data residency
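Parts of the de-identification step can be automated. Below is a crude sketch of a regex-based redactor; the patterns (including the policy-number format) are illustrative only, and a production pipeline would need a vetted PII/PHI detection tool plus human review, since regexes miss things like names and addresses:

```python
import re

# Illustrative patterns only; production de-identification needs a vetted
# PII/PHI detection library, and regexes will not catch names or addresses.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "POLICY_NO": re.compile(r"\bPOL-\d{6,10}\b"),  # hypothetical carrier format
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Insured reachable at 555-867-5309 re: policy POL-0012345."))
# -> Insured reachable at [PHONE] re: policy [POLICY_NO].
```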
4. Bias and Fairness Monitoring
For tools that affect consumer outcomes:
- Sample testing for outputs across demographics
- Drift monitoring
- Annual bias audit
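For sample testing, one widely used screening statistic is the adverse impact ratio (the "four-fifths rule" borrowed from employment law). It is a screen that triggers deeper review, not a legal determination. A sketch, assuming you can sample outcomes labeled by demographic group:

```python
from collections import defaultdict

def favorable_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (demographic_group, got_favorable_outcome) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        favorable[group] += ok
    return {g: favorable[g] / totals[g] for g in totals}

def adverse_impact_flags(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups whose favorable-outcome rate is below `threshold` times
    the best-off group's rate (the four-fifths screening rule)."""
    best = max(rates.values())
    return [g for g, rate in rates.items() if rate < threshold * best]

# Hypothetical sample of AI-assisted triage outcomes
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = favorable_rates(sample)
print(rates)                        # approx. {'A': 0.67, 'B': 0.33}
print(adverse_impact_flags(rates))  # ['B'] -> escalate for a proper bias audit
```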
5. Audit Trail
For each AI use:
- Tool, version, prompt, input, output
- Reviewer and approval
- Disposition
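One minimal way to implement this is an append-only JSON Lines log. In this sketch, inputs and outputs are stored as hashes; whether you must retain the full text instead depends on your carrier's retention policy:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_use(path: str, tool: str, version: str, prompt: str,
               input_text: str, output_text: str,
               reviewer: str, disposition: str) -> None:
    """Append one audit record per AI use as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "version": version,
        "prompt": prompt,
        "input_sha256": hashlib.sha256(input_text.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output_text.encode()).hexdigest(),
        "reviewer": reviewer,
        "disposition": disposition,  # e.g. "approved", "edited", "rejected"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_use("ai_audit.jsonl", "ChatGPT Enterprise", "gpt-4o",
           "Summarize this claim note for the file.",
           "<de-identified claim note>", "<draft summary>",
           "J. Rivera", "edited")
```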
6. Incident Response
If something goes wrong:
- Internal escalation path
- Customer notification protocol
- DOI notification protocol
- Post-incident review and tool changes
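The escalation path is worth writing down as data rather than tribal knowledge. A sketch with placeholder roles and timelines; your carrier's actual incident response plan governs:

```python
# Placeholder roles and timelines; replace with your carrier's actual plan.
ESCALATION = {
    "low":    {"notify": ["team lead"],                   "within_hours": 72},
    "medium": {"notify": ["team lead", "compliance"],     "within_hours": 24},
    "high":   {"notify": ["compliance", "legal", "CISO"], "within_hours": 4},
}

def escalate(severity: str) -> None:
    step = ESCALATION[severity]
    print(f"Notify {', '.join(step['notify'])} within {step['within_hours']}h; "
          "assess customer and DOI notification duties; schedule post-incident review.")

escalate("high")
```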
7. Training
For each team member:
- Initial AI training
- Annual refresher
- Sign-off on AI use policy
What Regulators Are Watching
Across recent state and federal communications, regulators are paying close attention to:
- Underwriting decisions that disadvantage protected classes
- Pricing that uses opaque AI features
- Claims handling speed and consistency
- Fraud screening that disproportionately flags certain demographics
- Consumer disclosures about AI use
- Cybersecurity of AI vendor pipelines
Expect enforcement to grow. Carriers that document their AI governance well are positioned to defend their practices; those that do not are exposed.
Building a Personal AI Workflow That Will Hold Up
A few habits that protect you over a career:
- Use only approved tools. Even for personal-feeling tasks, default to your carrier's approved AI.
- De-identify by reflex. Build the muscle memory to redact before pasting.
- Document by reflex. Note in your file or workflow tool which AI you used and what you produced.
- Review by reflex. Treat every AI output as a draft to be reviewed, not a finished work.
- Flag concerns. If you see a use case that does not fit your governance policy, raise it. The downside of staying silent is enormous; the cost of asking is one conversation.
What This Course Covered
You now have:
- A foundational understanding of how modern AI works in insurance contexts
- Specific prompts for policy summarization, customer communications, claims documentation, and underwriting
- Workflows for triage, risk assessment, fraud signals, and renewals
- The skills to build a custom GPT for your specialty
- An advanced toolkit for coverage analysis
- A compliance framework for safe deployment
Key Takeaways
- The NAIC Model Bulletin, state insurance department guidance, HIPAA, GLBA, and state privacy laws all govern AI in insurance.
- Individual AI use should pass the 6-question checklist: approved tool, right tier, right data, right purpose, right oversight, right documentation.
- Team-level governance needs inventory, use case approval, data governance, bias monitoring, audit trail, incident response, and training.
- The carriers that document and govern AI well will be positioned to use it more aggressively over the next decade. The ones that do not will face enforcement.

