What AI Can and Cannot Do on a Project
An architect who trusts an AI spec section without reading it will ship a specification that contradicts the drawings. An engineer who accepts a beam size without verifying the load path can miss a connection failure. Knowing where AI fails is just as important as knowing where it helps — especially when your license, insurance, and reputation are attached to the work.
This lesson gives you a blunt map of what AI is reliable at, what it is dangerous at, and how to build a personal rule for when to trust it.
What You'll Learn
- The six categories of AEC tasks AI is reliable at
- The five failure modes that trap architects and engineers
- Why the "licensed professional standard of care" still rules everything
- A personal trust policy you can apply to every AI interaction
Where AI Is Genuinely Reliable
As of 2026, these categories are reliable enough that the time savings outweigh the review burden:
- Drafting prose — design narratives, meeting minutes, client emails, project descriptions, award submissions, marketing write-ups.
- Summarizing long documents — condensing a 300-page geotech report, a 500-page spec, or a 50-page owner program into the key clauses.
- Translating between formats — turning a hand-marked punch list photo into a structured CSV, or an email thread into an RFI log entry.
- Explaining codes and standards — walking through IBC occupancy classification, IECC energy requirements, or ASCE 7 wind provisions as explanation. (The citations themselves still need verification.)
- Pattern-matching and comparison — comparing two versions of a drawing, finding inconsistencies between a spec and a schedule, catching common coordination errors.
- Basic calculations with code interpreter — unit conversions, area takeoffs from a table, tributary area calcs, simple beam sizing as a starting check.
For these tasks, AI will save you hours per week with a manageable review burden.
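A starting check like the last bullet describes can be sketched in a few lines of Python. This is the kind of script a code interpreter might produce; the loads, tributary width, and span below are made-up illustration values, not design inputs, and the result is a first pass to verify by hand, never a final calc:

```python
# Illustrative starting-point check only, NOT a design calculation.
# All input values are hypothetical; verify against the governing code and hand calcs.

def tributary_load_plf(dead_psf: float, live_psf: float, trib_width_ft: float) -> float:
    """Uniform line load on a beam (plf) from dead + live area loads (psf)."""
    return (dead_psf + live_psf) * trib_width_ft

def simple_beam_moment_kipft(w_plf: float, span_ft: float) -> float:
    """Max moment of a simply supported beam under uniform load: M = w*L^2 / 8."""
    return (w_plf * span_ft ** 2 / 8) / 1000  # lb-ft -> kip-ft

w = tributary_load_plf(dead_psf=20, live_psf=50, trib_width_ft=10)  # 700 plf
M = simple_beam_moment_kipft(w, span_ft=24)                         # 50.4 kip-ft
print(f"w = {w:.0f} plf, M = {M:.1f} kip-ft")
```

Notice the units are spelled out in every function name and comment; that discipline is what makes the output checkable at a glance.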
The Five Failure Modes That Trap AEC Professionals
1. Hallucinated Code Citations
AI will confidently tell you "per IBC 1607.12.1.3 the live load is 100 psf" when that section number does not exist or says something different. The citation looks authoritative and the number is plausible — and that is exactly the problem.
Mitigation: Never trust a code section without opening the code document. Better yet, paste the relevant code PDF into the chat and ask the AI to cite from that document.
2. Wrong Code Edition or Jurisdiction
The AI may answer with IBC 2018 when your jurisdiction has adopted IBC 2021 with local amendments. California, New York City, Florida (including its High-Velocity Hurricane Zone provisions), and Chicago all have significant amendments that generic AI does not apply unless you name them explicitly.
Mitigation: Always state the edition and jurisdiction in your prompt. Ask the AI "what amendments might this jurisdiction have adopted?" as a second check.
3. Silent Unit Errors
An AI might mix imperial and metric mid-calculation. You ask for kip-feet of moment and it quietly returns kN-m. Or it might assume psi instead of psf. These errors are catastrophic in structural and mechanical work and almost impossible to catch by skim-reading.
Mitigation: State units explicitly ("report all loads in psf, moments in kip-ft, stresses in ksi"), and always spot-check at least one value against hand calculation.
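One way to make that spot-check routine is a tiny comparison helper. The conversion factors below are standard; the AI-reported load and moment values are hypothetical examples:

```python
# Minimal sketch of a unit sanity check. Conversion factors are standard;
# the "AI-reported" numbers below are invented for illustration.

PSF_TO_KPA = 0.04788    # 1 psf = 0.04788 kPa
KIPFT_TO_KNM = 1.35582  # 1 kip-ft = 1.35582 kN*m

def check_close(label: str, ai_value: float, hand_value: float, tol: float = 0.01) -> bool:
    """Flag any AI-reported number that differs from the hand check by more than tol (relative)."""
    ok = abs(ai_value - hand_value) <= tol * abs(hand_value)
    print(f"{label}: {'OK' if ok else 'MISMATCH'} (AI={ai_value}, hand={hand_value:.3f})")
    return ok

# The AI quoted a 100 psf live load as 4.79 kPa; confirm the conversion.
check_close("live load (kPa)", ai_value=4.79, hand_value=100 * PSF_TO_KPA)
# The AI quoted a 50.4 kip-ft moment as 68.3 kN*m; confirm that too.
check_close("moment (kN*m)", ai_value=68.3, hand_value=50.4 * KIPFT_TO_KNM)
```

A mismatch here does not tell you which number is wrong, only that the AI's arithmetic and your hand check disagree, which is exactly the trigger to stop and recompute.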
4. Confidently Wrong Spec Coordination
AI will draft a spec section that reads beautifully but references an ASTM standard that does not apply, or calls for a product that is no longer manufactured, or conflicts with the drawings. The spec will still look professional.
Mitigation: Always cross-reference the spec AI drafts against a current master. Check that every referenced standard exists and is current. Never issue an AI-drafted spec without your spec writer or senior review.
5. Invented Product or Manufacturer Data
Ask an AI for "typical U-values of a triple-glazed curtain wall system" and it might invent a specific product model and number that does not exist. This is especially dangerous in design development when consultants use the numbers for performance modeling.
Mitigation: For any product data, require the AI to say "generic / typical range" rather than quoting a manufacturer model. Confirm actual performance from the manufacturer's data sheet before committing.
The Standard of Care Still Rules
The AIA and NSPE have issued guidance (and many state boards have reinforced this) that using AI does not lower the standard of care expected of a licensed professional. Translation: if your stamped drawings contain an error, "the AI suggested it" is not a defense. You are still responsible for every number, every note, every callout, every code reference.
This has practical implications:
- Keep a record of how AI was used on the project (prompts + outputs) so you can demonstrate reasonable review
- Do not delegate "final check" tasks to AI — they must be done by a qualified human
- Confirm your Professional Liability insurance carrier's position on AI-assisted work (most carriers are fine as long as the standard of care is met, but some require disclosure)
- If the project has a client-specific AI restriction in the contract, follow it
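The record-keeping bullet above can be as simple as one structured log entry per AI-assisted task. A minimal sketch, with field names and project details invented for illustration (adapt them to your firm's QA/QC documentation):

```python
# Hypothetical AI-usage log entry; field names and values are invented
# for illustration, not a standard. Adapt to your firm's QA/QC process.
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AIUsageRecord:
    project: str
    task: str            # what the AI was asked to do
    tool: str            # which AI tool/model was used
    prompt_archive: str  # pointer to the archived prompts + outputs
    reviewed_by: str     # the qualified human who verified the output
    review_date: str
    verification: str    # how the output was checked

record = AIUsageRecord(
    project="2026-014 Riverside Clinic",
    task="Draft Section 07 42 13 spec from office master",
    tool="LLM chat assistant",
    prompt_archive="prompts/2026-014/spec-074213.txt",
    reviewed_by="J. Alvarez, RA",
    review_date=str(date(2026, 3, 12)),
    verification="Cross-checked against current master; every ASTM reference opened",
)
print(json.dumps(asdict(record), indent=2))
```

The point is not the format; it is that "reviewed_by" and "verification" are filled in by a human before the output goes into the project record, which is the evidence of reasonable review you may later need.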
Tasks Where AI Is Usually Not Worth the Risk
Avoid or severely limit AI for:
- Final stamping-stage calculations without full hand verification
- Life-safety code analysis without opening the code book
- Life-safety plan review (egress, exit signage, fire-rated assemblies) without drawing-level review by a qualified person
- ADA and accessibility compliance (the nuances are too jurisdictional; one wrong dimension is a lawsuit)
- Geotechnical interpretation of a specific site report (have the AI summarize the report; do not let it interpret the recommendations)
- Structural connection design that has not been independently verified
Your Personal Trust Policy
Adopt a simple three-tier policy for every AI interaction:
- Green (trust with spot-check): Prose, summaries, document comparisons, explanations of familiar codes.
- Yellow (trust, but verify every number): Trial beam sizes, spec drafts, unit conversions, RFI responses.
- Red (never trust without independent verification): Anything that will be stamped, any code citation you cannot open the book to confirm, any product model number, anything life-safety related.
Write this on a sticky note near your monitor. Re-read it every time you are tempted to accept an AI output.
A Useful Mental Model
Think of AI as a brilliant but hung-over junior engineer — fast, articulate, and occasionally alarming. You would never stamp a junior engineer's work without reviewing it. Same rule applies here.
Key Takeaways
- AI is reliable for prose, summaries, comparisons, explanations, and starting-point calcs
- The five failure modes: hallucinated citations, wrong code edition, unit errors, bad spec coordination, invented product data
- The licensed professional standard of care does not change because AI was used
- Adopt a green/yellow/red trust policy and apply it consistently
- Treat AI like a junior engineer: useful, fast, and always reviewed before the stamp

