AI in Legal Practice - Opportunities and Limits
What You'll Learn
By the end of this module, you will be able to:
- Explain how large language models work at a conceptual level relevant to legal practice
- Identify the major AI tools available to legal professionals, both general-purpose and legal-specific
- Distinguish the legal tasks AI performs well from those where it falls short
- Recognize the hallucination problem and why it is critically dangerous in legal work
- Evaluate current adoption trends and what they mean for your career
Estimated Time: 1-2 hours
1.1 How AI Works (Explained for Lawyers)
Understanding Large Language Models
You do not need a computer science degree to use AI effectively, but understanding the fundamentals will make you a better user. Think of it as understanding how a legal database works — you don't need to know the code behind Westlaw, but understanding how search algorithms function helps you write better queries.
What Are Large Language Models?
Large Language Models (LLMs) are AI systems trained on enormous quantities of text — books, articles, websites, legal documents, court opinions, and much more. During training, the model learns patterns in language: how words relate to each other, how arguments are structured, how legal reasoning flows from premise to conclusion.
When you ask an LLM a question, it does not look up an answer in a database. Instead, it generates a response word by word, predicting what text would be most appropriate given your input and the patterns it learned during training.
A Legal Analogy: Think of an LLM as an extraordinarily well-read associate who has read millions of documents but has never practiced law. This associate can draft eloquent prose, identify relevant legal concepts, and structure persuasive arguments. But this associate also has no ability to verify whether a specific case citation is real, cannot access current court filings, and may confidently state something that is simply wrong.
How LLMs Process Your Requests
When you submit a prompt to an AI tool, the following process occurs:
- Tokenization: Your input is broken into smaller units (tokens) the model can process
- Context Analysis: The model considers your input alongside any prior conversation history
- Pattern Matching: It identifies relevant patterns from its training data
- Sequential Generation: It produces output one token at a time, each influenced by everything that came before
- Output Delivery: You receive the completed response
Critical Implication for Legal Work: Because LLMs generate text by prediction rather than retrieval, they can produce text that looks authoritative but is fabricated. This is the hallucination problem, and it is the single most important limitation for legal professionals to understand.
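The generation loop described above can be illustrated with a deliberately tiny sketch. This is not how a real LLM is implemented — the vocabulary, probability table, and greedy decoding below are invented for demonstration — but it shows the essential point: each token is chosen from learned statistics, and nothing in the loop ever checks a database of facts.

```python
# Hypothetical "learned statistics": given the last token, how likely is each
# next token? (A real model conditions on the entire context, not one token,
# and its vocabulary has tens of thousands of entries. All values here are
# invented for illustration.)
NEXT_TOKEN_PROBS = {
    "See":    {"Smith": 0.6, "Jones": 0.4},
    "Smith":  {"v.": 1.0},
    "v.":     {"Jones,": 0.7, "Acme,": 0.3},
    "Jones,": {"123": 1.0},
    "123":    {"F.3d": 1.0},
    "F.3d":   {"456": 1.0},
}

def generate(prompt_token: str, max_tokens: int = 6) -> str:
    """Greedily pick the most probable next token until the table runs out."""
    tokens = [prompt_token]
    for _ in range(max_tokens):
        choices = NEXT_TOKEN_PROBS.get(tokens[-1])
        if not choices:
            break
        # Greedy decoding: take the highest-probability continuation.
        tokens.append(max(choices, key=choices.get))
    return " ".join(tokens)

print(generate("See"))  # prints "See Smith v. Jones, 123 F.3d 456"
```

Notice what the sketch produces: something that looks exactly like a case citation, assembled purely from statistical patterns. At no point did the loop ask whether *Smith v. Jones* exists — which is precisely why real LLM output can contain authoritative-looking fabrications.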
1.2 Key AI Tools Available to Legal Professionals
General-Purpose AI Assistants
These are the foundational tools most legal professionals start with:
ChatGPT (OpenAI)
- The most widely known AI assistant
- Strong at drafting, summarization, and general research
- GPT-4 and later models offer improved reasoning and accuracy
- Available in free and paid tiers
Claude (Anthropic)
- Excellent at long-document analysis — can process documents up to 200,000 tokens (roughly 150,000 words)
- Strong nuanced reasoning capabilities
- Particularly good at following complex, detailed instructions
- Available in free and paid tiers
Microsoft Copilot
- Integrated with Microsoft 365 (Word, Outlook, Excel, PowerPoint)
- Useful for professionals already embedded in the Microsoft ecosystem
- Can assist with document drafting directly within familiar applications
Legal-Specific AI Tools
A growing ecosystem of tools is purpose-built for legal work:
CoCounsel (Thomson Reuters)
- Integrated with Westlaw, the leading legal research platform
- Designed specifically for legal research, document review, and drafting
- Provides citations linked directly to Westlaw's verified database
- Reduces (but does not eliminate) the hallucination risk for legal research
Harvey AI
- Built specifically for legal professionals
- Used by major law firms including Allen & Overy
- Trained on legal-specific data with guardrails for legal accuracy
- Offers specialized workflows for different practice areas
Lexis+ AI (LexisNexis)
- AI assistant integrated with the LexisNexis research platform
- Links generated responses to verified legal sources
- Designed to complement traditional legal research workflows
Other Notable Tools:
- Spellbook — AI contract review and drafting tool
- Casetext (now part of Thomson Reuters) — AI-powered legal research
- EvenUp — AI for personal injury case documentation
- Luminance — AI for contract analysis and due diligence
1.3 What AI Does Well in Legal Practice
Research and Information Synthesis
AI excels at processing large volumes of information and identifying relevant patterns. For legal professionals, this translates to:
- Summarizing lengthy documents — Condensing a 100-page deposition into key points in minutes
- Identifying relevant legal concepts — Helping you spot issues you might not have considered
- Cross-jurisdictional analysis — Quickly comparing how different jurisdictions handle the same legal issue
- Regulatory research — Synthesizing complex regulatory frameworks into understandable summaries
Drafting and Writing
AI is remarkably capable at generating well-structured legal text:
- First drafts — Producing initial drafts of contracts, motions, memos, and correspondence
- Template customization — Adapting standard templates to specific fact patterns
- Plain-language translations — Converting complex legal concepts into client-friendly explanations
- Proofreading and improvement — Identifying unclear passages and suggesting improvements
Organization and Analysis
- Document categorization — Sorting and classifying documents during discovery or due diligence
- Issue spotting — Identifying potential legal issues from a set of facts
- Argument analysis — Evaluating strengths and weaknesses of legal arguments
- Timeline construction — Organizing facts chronologically from multiple sources
1.4 What AI Cannot Do
AI Cannot Provide Legal Advice
This is the most fundamental limitation. AI tools generate text based on pattern matching, not professional judgment. They do not understand the specific circumstances of your client, the strategic considerations of your case, or the nuances of your local practice.
AI output is always a starting point, never a final product.
AI Cannot Guarantee Citation Accuracy
This deserves special emphasis. LLMs can and do generate case citations that do not exist. The cases sound plausible — they have realistic party names, accurate-sounding reporter citations, and logical holdings — but they are entirely fabricated.
The Landmark Warning: In 2023, a New York attorney submitted a brief containing six case citations generated by ChatGPT (Mata v. Avianca, Inc., S.D.N.Y. 2023). None of the cases existed. The attorney was sanctioned, and the incident became a cautionary tale for the entire profession. This was not an isolated incident — similar situations have occurred in multiple jurisdictions since then.
Why This Happens: LLMs do not retrieve citations from a database. They generate text that looks like a citation based on patterns they learned during training. The model has no mechanism to verify whether a specific case exists.
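One practical safeguard is to mechanically pull every citation-like string out of an AI draft and treat each one as unverified until it has been checked in Westlaw or Lexis. The sketch below is illustrative only: the regular expression covers a few simple reporter formats, not the full range of real citation styles, and the party names in the sample draft are invented.

```python
import re

# Matches simple reporter citations like "123 F.3d 456" or "12 U.S. 99".
# Illustrative only: real citation formats are far more varied, and this
# pattern would miss many of them (state reporters, F. Supp., pin cites, ...).
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.\d?d|S\. Ct\.)\s+\d{1,4}\b"
)

def citation_checklist(ai_output: str) -> list[str]:
    """Return every citation-like string found; each needs manual verification."""
    return CITATION_PATTERN.findall(ai_output)

# Invented draft text with invented citations, for demonstration.
draft = ("The court rejected a similar claim in Smith v. Jones, 925 F.3d "
         "1339, and again in Doe v. Roe, 12 U.S. 99.")
for cite in citation_checklist(draft):
    print(f"VERIFY BEFORE FILING: {cite}")
```

A tool like this can build the checklist, but it cannot do the verification — that step still requires looking each citation up in an official database.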
AI Cannot Replace Professional Judgment
- It cannot assess credibility of witnesses
- It cannot evaluate the strategic implications of a legal position
- It cannot understand the full context of a client relationship
- It cannot navigate the political dynamics of a courtroom or negotiation
- It cannot make ethical judgment calls about conflicts or privilege
AI Has Knowledge Limitations
- Training data cutoffs — Models do not know about recent cases, statutory amendments, or regulatory changes after their training cutoff date
- Jurisdiction gaps — Training data may be skewed toward certain jurisdictions (typically US federal law) and thin in others
- Confidentiality concerns — Anything you input into a general-purpose AI tool may be used for model training, raising serious client confidentiality issues
1.5 Current Adoption Trends
Where the Profession Stands
AI adoption in legal practice is accelerating rapidly:
- Large law firms are leading adoption, with most Am Law 100 firms now using or piloting AI tools
- Mid-size firms are increasingly adopting AI, often starting with legal research and document drafting
- Solo practitioners and small firms are finding AI particularly transformative, as it allows them to compete on capabilities previously available only to larger firms
- In-house legal departments are using AI to manage increasing workloads without proportional headcount increases
What the Data Shows
- Surveys indicate that over 75% of legal professionals have used AI tools in some capacity
- The legal AI market is growing at over 25% annually
- Bar associations across the country are issuing guidance on AI use, signaling that adoption is mainstream enough to require regulatory attention
- Law schools are integrating AI training into their curricula
What This Means for Your Career
The professionals who thrive in this environment will be those who develop what we might call AI-augmented competence — the ability to use AI tools effectively while maintaining the professional judgment, ethical awareness, and critical thinking that remain uniquely human.
This course is designed to build exactly that competence.
Key Takeaways
- LLMs generate text by prediction, not retrieval. Understanding this fundamental mechanism helps you use AI tools more effectively and recognize their limitations.
- Both general-purpose and legal-specific AI tools are available. General-purpose tools like ChatGPT and Claude are powerful and accessible. Legal-specific tools like CoCounsel and Harvey AI offer additional safeguards for legal work.
- AI excels at research synthesis, drafting, and organization. These are areas where you can achieve significant efficiency gains immediately.
- AI cannot provide legal advice, guarantee citations, or replace professional judgment. These limitations are not bugs to be fixed — they are fundamental characteristics of the technology.
- The hallucination problem is critically dangerous in legal practice. Every AI-generated citation must be verified in official databases. No exceptions.
- Adoption is accelerating across the profession. Learning to use AI effectively now is an investment in your career that will compound over time.