Ethics, Liability, and Professional Responsibility with AI
The stamp on a set of drawings is not just a signature — it is a legal attestation that the work meets the standard of care of a licensed professional. AI does not change that. Using AI without understanding the ethical and liability implications is one of the fastest ways to damage your career and your firm, and to harm the people who rely on the buildings you design. This final lesson covers the ethics, liability, insurance, and AIA/NSPE guidance every practicing professional needs to understand.
What You'll Learn
- How the AIA, NSPE, and state boards treat AI use
- The standard-of-care implications of AI-assisted work
- Practical firm policies for AI use and documentation
- Data privacy, copyright, and confidentiality rules
- How to think about client disclosure and contract language
The Core Principle
Use of AI does not reduce the licensed professional's responsibility for the work.
Every major professional body has converged on this principle. The AIA Code of Ethics, the NSPE Code of Ethics, state licensing boards, and professional liability (PL) insurance carriers all assume the human professional remains fully responsible for anything that bears a stamp or seal.
This has three immediate implications:
- An AI-generated error is your error if it reaches construction.
- The standard of care is unchanged — your work must meet the care a reasonably prudent licensed professional would apply.
- You cannot contractually disclaim responsibility for AI-assisted work without explicit client consent, and even then most state boards will hold you responsible anyway.
AIA and NSPE Guidance
AIA has published guidance emphasizing that architects must exercise their professional judgment when using AI. Practical AIA-aligned rules:
- Do not use AI as the sole source for any design decision
- Disclose AI use to clients when it materially affects the design or deliverables
- Verify all AI-generated content against established authoritative sources
- Do not share confidential client or project information with public AI without consent
NSPE guidance for engineers is parallel: AI is a tool, not a substitute for professional engineering judgment. The "Competence" and "Professional Responsibility" canons of the NSPE code apply directly.
The Standard of Care, Updated for AI
Traditional standard of care: "the degree of care and skill ordinarily exercised by members of the same profession practicing in similar circumstances." With AI in the profession, the standard of care now includes:
- Knowing what AI can and cannot do reliably
- Verifying AI outputs before applying them
- Documenting the verification
- Maintaining competence in AI-augmented practice
In litigation, if you relied on AI and something failed, the opposing expert will ask: "Did you verify the AI output? How? Can you produce the record?" The honest answer "I just trusted it" is career-ending.
Firm-Level AI Policy
Every AEC firm should have a written AI policy covering:
- Approved tools: which AI tools are approved for project use, and at what subscription tier (enterprise data protection matters)
- Prohibited uses: which tasks are off-limits for AI
- Verification requirements: what must be verified, by whom, and documented
- Confidentiality: what client information can be uploaded, what cannot
- Record retention: what prompts and outputs are archived in the project file
- Training: minimum AI literacy training for licensed professionals
- Disclosure: when and how the firm discloses AI use to clients
If your firm does not have this, draft one. Most PL carriers now ask about it at renewal.
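As a hypothetical illustration, the policy elements above can also be kept in a small machine-readable template that the firm versions alongside its QA documents, so tooling (or a reviewer) can check a proposed use against it. All tool names, tiers, and prohibited uses below are placeholders, not recommendations:

```python
from dataclasses import dataclass, field

@dataclass
class AIPolicy:
    """Sketch of a firm AI policy record. Values are illustrative only."""
    # Approved tools mapped to the minimum acceptable subscription tier.
    approved_tools: dict = field(default_factory=lambda: {
        "ChatGPT": "Enterprise",  # enterprise data protection required
        "Claude": "Team",
    })
    # Tasks that are off-limits for AI regardless of tool or tier.
    prohibited_uses: list = field(default_factory=lambda: [
        "final structural calculations",
        "life-safety code determinations",
    ])
    # Where archived prompts/outputs live in each project file.
    retention_folder: str = "AI-assistance"

    def is_approved(self, tool: str, tier: str) -> bool:
        # Simplified to an exact tier match; a real policy might
        # rank tiers and accept anything at or above the minimum.
        return self.approved_tools.get(tool) == tier

policy = AIPolicy()
print(policy.is_approved("ChatGPT", "Enterprise"))  # True
print(policy.is_approved("ChatGPT", "Free"))        # False
```

The point is not the code itself but the discipline: a written, versioned policy that can be checked, not a verbal understanding.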
Data Privacy and Confidentiality
Client project data can include:
- Confidential owner strategy documents
- Security-sensitive drawings (federal, utility, data center)
- HIPAA-covered healthcare information
- Financial information and budgets under NDA
- Trade-secret manufacturing processes
- Unstamped work from consultants
Public free-tier AI (ChatGPT Free, Claude Free, Gemini Free) may use your inputs for training or model improvement. This is a direct confidentiality breach if client data is involved.
The rule: Use an enterprise or team-tier AI account (ChatGPT Team/Enterprise, Claude Team/Enterprise, Gemini Enterprise, Microsoft Copilot in M365 Enterprise). These tiers contractually commit to not using your inputs for training. Verify the current terms of service — they do change.
For highly sensitive work (security, defense, proprietary IP), check whether your contract prohibits external AI use entirely.
Copyright and Generative Output
AI-generated images, renderings, and text raise copyright questions:
- In the US, the Copyright Office has ruled that purely AI-generated works are not copyrightable
- Human-directed and edited AI works may be copyrightable in the human-authored portions
- Using AI images in marketing is generally fine; using them as representative of as-built design may create misleading-advertising exposure
- Training data copyright disputes are ongoing — do not rely on AI images for any use that would require a proven, unchallenged copyright chain
For AEC firms, the practical rule: AI images are acceptable for marketing with disclosure; they should never be presented as depictions of actual built work in place of photographs.
Client Disclosure and Contracts
When do you disclose AI use to a client? A reasonable standard:
- Always disclose if AI-generated content appears in a deliverable (narrative, image, report)
- Always disclose if AI is used in cost estimating that informs a budget decision
- Generally disclose if AI is used in significant volume for design decisions
- Disclosure is not typically needed for productivity tasks like drafting internal emails or meeting minutes
Some clients, especially federal, healthcare, and defense, may require specific AI-use clauses in contracts. Read the contract before deploying AI on the project.
Sample contract language a firm might propose:
Architect may use artificial intelligence tools to assist with the production of the Services. Architect remains responsible for the Services and will verify AI-generated content consistent with the applicable professional standard of care. No confidential Owner information will be submitted to public AI systems.
Whether you negotiate that in depends on the client.
Insurance Considerations
Professional Liability (PL) insurance carriers are still calibrating to AI risk. Current typical positions:
- Using AI does not automatically trigger a coverage exclusion
- The carrier expects the professional to maintain the standard of care
- Some carriers require disclosure at renewal of how AI is used
- Cyber liability coverage may apply separately if an AI-related data breach occurs
Confirm with your firm's broker before assuming coverage. Some construction-focused carriers are more flexible than general PL carriers.
The Documentation Discipline
If you do one thing from this lesson, do this:
Retain in the project file: (1) the AI prompt, (2) the AI response, (3) the verification you performed, (4) the final human-reviewed output.
This can be as simple as a folder called "AI-assistance" with dated text files. If something ever goes to litigation, this record is the difference between "we used AI reasonably" and "we cannot reconstruct what happened."
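That folder discipline is simple enough to automate. A minimal sketch in Python, with illustrative project, folder, and file names (the four parts mirror the retention list above):

```python
import datetime
import pathlib

def archive_ai_record(project_dir, prompt, response, verification, final_output):
    """Save the four-part AI record as dated text files in the project's
    'AI-assistance' folder. Names and layout are illustrative only."""
    stamp = datetime.date.today().isoformat()
    folder = pathlib.Path(project_dir) / "AI-assistance"
    folder.mkdir(parents=True, exist_ok=True)
    parts = {
        "prompt": prompt,                # (1) what was asked of the AI
        "response": response,           # (2) what the AI returned
        "verification": verification,   # (3) what was checked, and against what
        "final": final_output,          # (4) the human-reviewed result
    }
    written = []
    for name, text in parts.items():
        path = folder / f"{stamp}-{name}.txt"
        path.write_text(text, encoding="utf-8")
        written.append(path)
    return written

files = archive_ai_record(
    "demo-project",
    prompt="Summarize wind-load requirements for the basis-of-design narrative.",
    response="(model output pasted here)",
    verification="Checked against the governing code edition by the EOR; see markup.",
    final_output="(reviewed narrative text)",
)
print(len(files))  # 4
```

A same-day second record would overwrite the first in this sketch; a real version would add a sequence number or timestamp. Either way, the archive is what lets you answer "can you produce the record?" with yes.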
A Personal Ethics Checklist
Before every AI use, ask:
- Am I using the correct subscription tier for this content?
- Is the task appropriate for AI (not life-safety, not final stamping)?
- Am I verifying the output against an authoritative source?
- Am I documenting the verification?
- If this project went to litigation tomorrow, could I defend how I used AI?
Answer no to any of these and stop.
Thinking About the Profession
AI is reshaping how architects and engineers work, but not what we are responsible for. The value of our profession is the combination of design judgment, technical competence, ethical commitment, and accountability. AI can make us faster, sharper, and more thorough. It cannot make us less accountable.
Use AI to do more of the work that matters — design, coordination, client service — and less of the work that does not. Do that, and the profession gets better. Ignore the ethics and liability realities, and the profession gets worse.
Key Takeaways
- Use of AI does not reduce the licensed professional's responsibility or standard of care
- AIA and NSPE guidance, state boards, and PL carriers all assume full human accountability
- Firms need a written AI policy covering approved tools, uses, confidentiality, and documentation
- Use enterprise-tier AI for any project-confidential content; never free tier
- Retain prompts, outputs, and verification records in the project file for every meaningful AI use

