Prompting LLMs for Engineering Calculations Safely
The first time most engineering students ask ChatGPT to "calculate the bending stress on this cantilever beam", they get an answer that looks beautiful — clean LaTeX, perfect units, confident final number — and is wrong. Maybe it picked the wrong section modulus. Maybe it used the wrong yield strength for the alloy. Maybe it confused metric and imperial midway through.
This lesson teaches you the prompting patterns that actually work for engineering calculations: how to set up the problem, force the model to show its work, catch hallucinations early, and never sign off on a number you have not independently verified.
What You'll Learn
- The five rules of engineering prompts: state givens, fix units, demand derivation, force assumptions to surface, and ask for a sanity check
- Why LLMs hallucinate material properties and how to prevent it
- A reusable prompt template for any back-of-the-envelope calculation
- How to use the LLM as a "second pair of eyes" rather than a calculator
Why LLMs Get Engineering Calculations Wrong
LLMs are pattern matchers, not calculators. Many modern models can hand arithmetic off to a real Python interpreter when they recognize the need, but they still make errors that students and junior engineers should be able to spot:
- They pull material properties from training data that may be outdated or generic ("aluminum yield strength is around 270 MPa" — for what alloy temper?).
- They confuse principal axes, sign conventions, or factor-of-safety conventions between American and European textbooks.
- They silently switch units in the middle of a derivation.
- They produce a "final answer" that averages two contradictory textbook approaches.
None of these errors are random. They are predictable, which means they are preventable.
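The third failure mode, a silent unit switch, is the easiest to demonstrate in numbers. A minimal sketch using the beam cross-section from the next section (50 mm x 100 mm): leave even one dimension in millimeters while computing a second moment of area in meters, and the result is off by nine orders of magnitude while still "looking" like a plausible number.

```python
# Failure mode: silently mixing mm and m in the same formula.
b_mm, h_mm = 50.0, 100.0            # cross-section givens, in mm
b, h = b_mm / 1000, h_mm / 1000     # convert to m BEFORE any formula

I_correct = b * h**3 / 12           # ~4.17e-6 m^4 (consistent SI units)
I_mixed = (b_mm / 1000) * h_mm**3 / 12  # h left in mm: wrong by 1e9

print(I_mixed / I_correct)          # factor-of-a-billion error
```

This is exactly the class of error Rule 2 below is designed to prevent: fixing one unit system up front, and converting everything before any formula is evaluated.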
The Five Rules of Engineering Prompts
Rule 1 — State all givens explicitly, with units. Do not say "a steel beam under load". Say "a simply supported AISI 1018 steel beam, length 2.0 m, rectangular cross-section 50 mm wide x 100 mm deep, with a point load of 5 kN at midspan".
Rule 2 — Fix the unit system at the top of the prompt. "Work in SI units throughout. Convert any non-SI inputs to SI before proceeding. Report all final answers in SI plus the imperial equivalent."
Rule 3 — Demand the full derivation, not just the answer. "Show every formula symbolically before substituting numbers. List the source textbook or standard for each formula."
Rule 4 — Force assumptions to the surface. "Before solving, list every assumption you are making, including material isotropy, boundary conditions, neglected effects, and assumed factor of safety."
Rule 5 — Ask for a sanity check. "After you compute the answer, compare it against a back-of-the-envelope estimate using a simpler formula or rule of thumb. Flag any discrepancy greater than 20 percent."
Used together, these rules turn a black-box answer into something you can actually review.
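Rule 5's sanity check is also something you can script for yourself. A minimal sketch, working the Rule 1 example beam end to end in SI units (standard Euler-Bernoulli bending formulas; the safety-factor comment assumes a typical cold-drawn AISI 1018 yield strength of roughly 370 MPa):

```python
# Independent check of the Rule 1 example beam, in SI throughout.
# Simply supported beam with a point load P at midspan:
#   M_max = P * L / 4            (maximum bending moment)
#   I     = b * h**3 / 12        (rectangular second moment of area)
#   sigma = M_max * (h / 2) / I  (maximum bending stress)

P = 5_000.0   # N  (5 kN point load)
L = 2.0       # m  (span)
b = 0.050     # m  (section width)
h = 0.100     # m  (section depth)

M_max = P * L / 4             # N*m
I = b * h**3 / 12             # m^4
sigma = M_max * (h / 2) / I   # Pa

print(f"max bending stress = {sigma / 1e6:.1f} MPa")  # 30.0 MPa

# Sanity check: ~370 MPa yield (assumed, cold-drawn 1018) against
# a 30 MPa working stress implies a bending safety factor near 12,
# which is plausible for a lightly loaded beam of this size.
assert 25e6 < sigma < 35e6
```

If the model's derivation lands more than 20 percent away from a check like this, Rule 5 says it should flag the discrepancy itself; either way, you now have an independent number to compare against.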
A Reusable Engineering Prompt Template
Copy this template and adapt it for any homework problem, design check, or sanity calculation:
ROLE: You are a careful engineering tutor checking my work, not doing it for me.
PROBLEM:
{Paste the problem statement, including all numerical givens with units.}
REQUIREMENTS:
1. Restate the problem in your own words and list every given with units.
2. List every assumption you are making and which would change the answer if relaxed.
3. State the governing equation symbolically with a textbook reference (e.g. "Hibbeler, Mechanics of Materials, Ch. 6").
4. Substitute numbers step by step, showing unit cancellations.
5. Give the final answer with units and an appropriate number of significant figures.
6. Sanity-check the answer against a back-of-the-envelope estimate.
7. List 2-3 ways this calculation could be wrong in practice.
I will independently verify every number before using it.
The last line — "I will independently verify every number before using it" — is not just a note to yourself. It is a prompt that nudges the model toward more conservative, traceable reasoning.
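If you use the template often, it is worth filling it programmatically so no requirement is ever dropped. A minimal sketch (the `build_prompt` helper and its example problem string are hypothetical; the text mirrors the template above):

```python
# Hypothetical helper that fills the reusable template so every prompt
# sent to the model carries all seven requirements and the closing line.

REQUIREMENTS = """\
1. Restate the problem in your own words and list every given with units.
2. List every assumption you are making and which would change the answer if relaxed.
3. State the governing equation symbolically with a textbook reference.
4. Substitute numbers step by step, showing unit cancellations.
5. Give the final answer with units and an appropriate number of significant figures.
6. Sanity-check the answer against a back-of-the-envelope estimate.
7. List 2-3 ways this calculation could be wrong in practice."""

def build_prompt(problem: str) -> str:
    """Assemble the full engineering-check prompt for one problem."""
    return (
        "ROLE: You are a careful engineering tutor checking my work, "
        "not doing it for me.\n\n"
        f"PROBLEM:\n{problem}\n\n"
        f"REQUIREMENTS:\n{REQUIREMENTS}\n\n"
        "I will independently verify every number before using it."
    )

print(build_prompt(
    "Simply supported AISI 1018 steel beam, length 2.0 m, "
    "50 mm x 100 mm rectangular section, 5 kN point load at midspan. "
    "Find the maximum bending stress."
))
```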
Hallucinated Material Properties: The Most Common Trap
If you ask an LLM "what is the yield strength of 6061-T6 aluminum?", you will usually get a number close to 276 MPa (40 ksi), which is correct for that alloy and temper. But ask about "Inconel 718 at 650 °C" and you may get a number that is off by 30 percent, because the answer depends heavily on heat treatment and test direction, and the model is averaging conflicting training sources.
Three defenses:
- Treat any material property you get from an LLM as a hypothesis, not a fact. Look it up in MMPDS (Metallic Materials Properties Development and Standardization), MIL-HDBK-5, or your supplier's datasheet.
- Ask the model to cite a source. It will sometimes make one up — but the act of asking surfaces uncertainty.
- Paste the datasheet into the prompt yourself. This is the highest-leverage move. If the model has the supplier's PDF in context, it cannot guess.
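The first defense can be made mechanical. A minimal sketch, assuming the well-known 6061-T6 numbers from above (the `check_property` helper and its 5 percent tolerance are illustrative choices, not a standard):

```python
# Treat an LLM-quoted property as a hypothesis; accept it only if it
# agrees with a value you looked up yourself in MMPDS or a datasheet.

def check_property(llm_value: float, datasheet_value: float,
                   rel_tol: float = 0.05) -> bool:
    """True if the LLM value is within rel_tol of the datasheet value."""
    return abs(llm_value - datasheet_value) / datasheet_value <= rel_tol

llm_yield = 270e6        # Pa, what the model said ("around 270 MPa")
datasheet_yield = 276e6  # Pa, 6061-T6 per the supplier datasheet

if not check_property(llm_yield, datasheet_yield):
    raise ValueError("LLM property disagrees with datasheet; stop and verify")
```

A 2 percent disagreement passes here; a 30 percent error of the Inconel-at-temperature kind would be caught immediately.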
When NOT to Use an LLM for Calculations
There are problem types where the cost of being wrong is higher than the time saved. Skip the LLM and go straight to a verified tool when:
- The result will appear on a stamped drawing or signed analysis.
- You are doing certification work (FAA, EASA, NRC).
- The calculation feeds directly into a flight test, structural test, or production decision.
- The geometry is complex enough that hand calculations cannot bound the answer.
For these, use Ansys, Abaqus, NASTRAN, or hand calculations in your verified spreadsheet template — covered in later lessons.
The "Second Pair of Eyes" Workflow
The safest, highest-value use of an LLM for calculations is not to do the calculation, but to check yours.
- You work the problem on paper or in a spreadsheet.
- You paste your solution into the LLM and say: "I am an engineering student. Find any errors in this work. Specifically check unit consistency, sign conventions, the formulas used, and whether the answer is the right order of magnitude."
- You investigate every flag the model raises, then decide whether it is a real error or a false alarm.
This flips the failure mode. Instead of accepting a fluent wrong answer, you are stress-testing a known answer. False positives waste five minutes. False negatives are caught by you, not the LLM.
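The comparison in step 3 can itself be scripted, reusing the 20 percent threshold from Rule 5. A minimal sketch (the helper name and example numbers are illustrative):

```python
# Compare your hand calculation against the checker's recomputation
# and flag disagreements larger than a chosen relative threshold.

def flag_discrepancy(my_answer: float, check_answer: float,
                     threshold: float = 0.20) -> tuple[bool, float]:
    """Return (needs_investigation, relative_difference)."""
    rel = abs(my_answer - check_answer) / abs(check_answer)
    return rel > threshold, rel

my_sigma = 30.0e6     # Pa, your spreadsheet result
check_sigma = 30.2e6  # Pa, the independent recomputation

flagged, rel = flag_discrepancy(my_sigma, check_sigma)
print(f"relative difference = {rel:.1%}, investigate = {flagged}")
```

Anything flagged goes back to you for investigation; anything unflagged is still your number, verified twice.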
Key Takeaways
- LLMs are pattern matchers, not solvers — they will produce confident, wrong answers if you let them.
- Use the five rules: state givens, fix units, demand derivation, surface assumptions, demand sanity checks.
- Material properties are the single most-hallucinated category — verify against datasheets or MMPDS.
- Never use an LLM as the sole source of a number that ends up on a stamped drawing or in certification work.
- The "second pair of eyes" pattern — you solve, AI checks — is safer than the reverse.

