Ethical Considerations for AI in Law
What You'll Learn
In this module, you will learn:
- How the ABA Model Rules of Professional Conduct apply to AI use in legal practice
- The duty of technology competence under ABA Model Rule 1.1 Comment 8
- Confidentiality obligations when using AI tools under Rule 1.6
- Supervisory responsibilities for AI use by staff under Rules 5.1 and 5.3
- Unauthorized practice of law risks with AI-generated legal work
- How to identify and mitigate bias in AI outputs
- Best practices for informed consent and developing an AI use policy for your firm
8.1 Why Ethics Must Come First
AI is transforming legal practice, but the legal profession is fundamentally different from other industries. Lawyers are fiduciaries bound by ethical rules that carry the force of law. Violations can result in discipline, malpractice liability, and harm to clients.
Every time you use an AI tool, you are making ethical decisions. Pasting client facts into a chatbot implicates confidentiality. Relying on AI-generated citations implicates competence. Letting a paralegal use AI without oversight implicates supervision. This module is the foundation that everything else in this course rests on.
8.2 The ABA Model Rules and AI
The ABA Model Rules were not written with AI in mind, but they apply with full force. Several rules are directly implicated every time a lawyer uses AI.
Rule 1.1: Competence and the Duty of Technology Competence
Model Rule 1.1 requires competent representation. Comment 8, amended in 2012, adds that lawyers must keep abreast of "the benefits and risks associated with relevant technology" -- the duty of technology competence.
For AI, this means:
- You must understand how AI tools work at a functional level -- that large language models generate text probabilistically and can produce fabricated information
- You must verify AI outputs. Submitting an AI-generated brief without checking citations is an ethical violation
- You must stay informed about AI developments in your practice areas
As of 2026, at least 42 states have adopted some version of this duty. Ignorance of AI is not a defense.
Rule 1.6: Confidentiality
Rule 1.6 prohibits revealing information relating to the representation of a client without the client's informed consent, subject to limited exceptions. When entering client data into AI tools, ask:
- Where does the data go? Many platforms use inputs to train future models
- Who has access? Third-party employees and systems may access your data
- Is data stored? Tools may log conversations for debugging or compliance
- Is transmission secure? Data in transit must be encrypted
To protect confidentiality: read each tool's terms of service, use enterprise versions with data processing agreements, anonymize client information before inputting it, and consider on-premises deployments for sensitive matters.
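As a very rough illustration of the anonymization step, the sketch below uses Python's standard `re` module to replace a few obvious identifier patterns with placeholder tokens before text leaves the firm's systems. The patterns and the `redact` function are hypothetical examples, not a vetted tool: simple regexes catch formatted identifiers like Social Security numbers but miss names, addresses, and matter-specific facts, so regex redaction alone does not satisfy Rule 1.6.

```python
import re

# Illustrative patterns only -- real client data contains identifiers
# (names, addresses, account numbers) that simple regexes will miss.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                    # US Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),            # email addresses
    (re.compile(r"\b\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"), "[PHONE]"),  # US phone numbers
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens before the
    text is sent to any third-party AI tool."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

sample = "Client Jane Roe (jroe@example.com, 555-867-5309, SSN 123-45-6789) disputes the lien."
print(redact(sample))
```

Note that the client's name still appears in the output, which is exactly why pattern-based redaction is only a first layer: a lawyer must still review what is being sent and, for sensitive matters, prefer enterprise or on-premises deployments over redaction alone.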
Rules 5.1 and 5.3: Supervisory Responsibilities
Rule 5.1 requires lawyers with managerial or supervisory authority to make reasonable efforts to ensure that other lawyers conform to the Rules. Rule 5.3 extends this duty to nonlawyer assistants. You cannot hand AI tools to staff and walk away. You must:
- Establish clear policies about approved AI tools and permitted uses
- Provide training on ethical constraints
- Review AI-generated work product before it reaches clients or courts
- Monitor compliance and take corrective action when needed
If a paralegal pastes confidential information into a consumer AI chatbot, the supervising lawyer may face discipline.
8.3 Unauthorized Practice of Law
AI can draft contracts, summarize legal rights, and recommend strategies. But only licensed lawyers can practice law.
The key distinction: providing legal advice to a specific person about their specific situation is practicing law; providing general legal information is not. As a lawyer using AI:
- Never allow AI to communicate legal advice directly to clients without your review
- Do not delegate your professional judgment to a machine
- Be cautious with AI-powered client intake tools that cross from gathering information into providing guidance
- Remember that you are the lawyer. AI is a tool. The judgment must come from you.
8.4 Bias in AI Outputs
AI reflects biases in its training data. In legal contexts, this has concrete implications for justice.
Sources of bias include historical disparities in legal data, selection bias in training datasets, and automation bias -- the human tendency to over-rely on computer outputs.
To mitigate bias: critically evaluate AI outputs in areas with documented disparities (sentencing, hiring, lending, housing), cross-reference with multiple sources, be transparent about AI's limitations, and consider the impact on vulnerable populations.
8.5 Informed Consent and Client Communication
Clients should know:
- That you are using AI and how you are using it
- What safeguards protect their information
- That a licensed attorney reviews all AI-generated work
Disclose AI use when clients ask, when courts require it, when it materially affects cost or approach, and when AI processes sensitive information. Consider including AI disclosure language in engagement letters and retainer agreements.
8.6 Developing an AI Use Policy
Every firm using AI should have a written policy covering:
- Approved tools -- vetted platforms with specific versions and restrictions
- Prohibited uses -- no unredacted client data in unapproved tools, no unreviewed AI work product
- Confidentiality protocols -- anonymization requirements and data handling procedures
- Quality control -- review processes, verification steps, and documentation requirements
- Training requirements -- initial and ongoing training for all staff
- Incident response -- procedures for AI-related errors or breaches
- Regular review -- quarterly policy updates given the pace of AI development
Key Takeaways
- The duty of technology competence (ABA Model Rule 1.1, Comment 8) requires lawyers to understand AI benefits and risks -- ignorance is not an option
- Confidentiality under Rule 1.6 demands careful evaluation of how AI tools handle client data, including training, storage, and access policies
- Supervisory duties under Rules 5.1 and 5.3 mean you are responsible for AI use by attorneys and staff you supervise
- Unauthorized practice concerns require that AI never replaces your professional judgment or delivers unreviewed legal advice to clients
- AI bias is a real risk in legal applications and must be actively monitored and mitigated
- Informed consent and transparency with clients about AI use build trust and satisfy ethical obligations
- A written AI use policy is essential, covering approved tools, prohibited uses, confidentiality, quality control, and training
- Ethics is not a constraint on innovation -- it is the framework that enables responsible innovation in legal practice