Prompt Engineering Patterns for Clinical Use
Once you have basic prompts working, the next level is patterns — repeatable techniques that consistently improve AI output across many veterinary tasks. The patterns in this lesson are drawn from how experienced clinical-AI users prompt, adapted to veterinary workflows. Internalize these and you stop fighting the model and start steering it.
What You'll Learn
- Six high-value prompting patterns for clinical use: Role + Constraints, Few-Shot, Chain-of-Thought, Self-Critique, Format-First, Multi-Pass
- When to reach for each pattern
- Combined patterns for high-stakes tasks (specialist letters, complex SOAP, board-quality referral notes)
Pattern 1 — Role + Constraints
Always assign a role and add at least one explicit constraint. The role narrows the model's voice; the constraint narrows the format.
Generic: "Help me explain CKD to an owner."
Role + Constraints: "You are a senior small-animal vet known for explaining things clearly to scared pet owners. Explain stage 2 CKD in a 12-year-old DSH cat. Constraints: 5th-grade reading level, under 200 words, no medical jargon, end with 'here's what we recommend' and a 3-bullet plan."
The output gap is consistently large. Role plus constraints alone lifts the quality of most routine clinical writing.
Pattern 2 — Few-Shot Examples
Show the model how you write, not just what you want. This is the single most powerful pattern for matching your clinic's voice.
"Here are two examples of how I write discharge instructions. Example 1: [paste a real one with identifying info removed]. Example 2: [paste another]. Now write a discharge in the same voice for: [new case]."
Two examples are usually enough; three is plenty. The model picks up cadence, sentence length, your typical sign-off, your voice quirks. Few-shot prompting is why your team's Custom GPT (Module 4) gradually feels more like you — its instructions are essentially institutionalized few-shot examples.
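If your team ever scripts prompts against a chat-style API rather than typing them into a chat window, the few-shot pattern maps directly onto a message list: each example becomes a user/assistant turn the model can imitate. A minimal sketch; the helper name and example text are hypothetical, and the message format shown is the common role/content convention used by chat APIs.

```python
def build_few_shot_messages(examples, new_case):
    """Assemble a chat-style message list that teaches the model your
    discharge-instruction voice. (Hypothetical helper; example content
    is placeholder text, not real clinical guidance.)"""
    messages = [{
        "role": "system",
        "content": "You write discharge instructions in the clinic's voice.",
    }]
    for case, written in examples:
        # Each (case, finished text) pair becomes one user/assistant turn.
        messages.append({"role": "user", "content": f"Write a discharge for: {case}"})
        messages.append({"role": "assistant", "content": written})
    # The new case goes last, so the model completes it in the learned voice.
    messages.append({"role": "user", "content": f"Write a discharge for: {new_case}"})
    return messages

msgs = build_few_shot_messages(
    [("spay recheck", "Keep the incision dry for 10 days..."),
     ("dental extraction", "Offer soft food for 5 days...")],
    "cruciate repair",
)
```

The structure, not any API call, is the point: two worked examples in front of the new request is the few-shot pattern, whatever tool sends it.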
Pattern 3 — Chain-of-Thought Reasoning
For diagnostic or workup questions, explicitly ask the model to reason step by step. This reduces shortcuts and surfaces logic you can audit.
Generic: "What's the most likely diagnosis for this case?"
Chain-of-Thought: "Walk through this case step by step. First, summarize the key findings. Then list the body systems involved. Then build a differential. Then for each top differential, list the supporting and refuting evidence. Finally, recommend the next single most useful diagnostic. Case: [paste]."
The model's reasoning becomes visible — and you can spot the step where its logic goes off the rails before you trust the conclusion. This pattern is essential for any case where the answer matters clinically.
Pattern 4 — Self-Critique
After getting an answer, ask the model to critique its own response. It will often surface gaps in the answer it just gave.
Step 1: "Build a treatment plan for [case]."
Step 2: "Now critique that plan. What did you miss? What is the weakest part? What would a board-certified internist add?"
Step 3: "Now revise the plan incorporating the critique."
This three-step pattern often produces a noticeably better final plan than a single-pass prompt. It works especially well for specialist letters, treatment plans, and any prose where you'll be judged on completeness.
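Scripted rather than typed, the three steps chain naturally, with each output threaded into the next prompt. A sketch under one assumption: `ask` is a stand-in for whatever model call you use (it just takes a prompt string and returns text), and the stub below exists only to show the chaining.

```python
def self_critique(ask, case):
    """Draft -> critique -> revise, feeding each answer into the next prompt.
    `ask` is any function mapping a prompt string to model text (stand-in)."""
    draft = ask(f"Build a treatment plan for {case}.")
    critique = ask(
        "Critique this plan. What did you miss? What is the weakest part? "
        f"What would a board-certified internist add?\n\n{draft}"
    )
    # The revision prompt carries both the draft and the critique forward.
    return ask(
        f"Revise the plan incorporating the critique.\n\n"
        f"Plan:\n{draft}\n\nCritique:\n{critique}"
    )

# Stub model call for illustration: records prompts, returns numbered answers.
log = []
def stub(prompt):
    log.append(prompt)
    return f"answer-{len(log)}"

final = self_critique(stub, "a diabetic cat")
```

Note that the third prompt contains both earlier outputs; that carried context is what makes the revision better than a fresh single-pass attempt.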
Pattern 5 — Format-First
Specify the exact output structure before asking for content. The model treats format like a template — everything else fills in around it.
"Build the following table about NSAID options for canine OA. Columns: Drug, Mechanism, Dose Range, Top 3 Side Effects, Monitoring, Approximate Cost (USD), Best Patient Profile. Rows: carprofen, meloxicam, deracoxib, robenacoxib, grapiprant. After the table, add a 1-paragraph summary on how to choose between them for a 9-year-old MN Lab with normal bloodwork."
Format-first prompting is dramatically cleaner than asking for the same information in prose and then trying to extract it into a table.
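When you reuse the same table spec across many drug classes, it can help to build the format-first prompt from lists rather than retyping it. A small sketch; the function name is hypothetical and the strings are illustrative, but it shows the pattern's key move: the structure is stated before the content ask.

```python
def format_first_prompt(topic, columns, rows, followup):
    """Assemble a format-first prompt: columns and rows are specified
    up front, the content request comes last. (Hypothetical helper.)"""
    return (
        f"Build the following table about {topic}. "
        f"Columns: {', '.join(columns)}. "
        f"Rows: {', '.join(rows)}. "
        f"After the table, {followup}"
    )

p = format_first_prompt(
    "NSAID options for canine OA",
    ["Drug", "Dose Range", "Monitoring"],
    ["carprofen", "meloxicam", "grapiprant"],
    "add a 1-paragraph summary on how to choose between them.",
)
```

Keeping the spec in one place means every table you request comes back in the same shape, which matters once you start pasting them into records.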
Pattern 6 — Multi-Pass Refinement
For any document longer than a paragraph, write in passes — each pass with a single goal. Trying to get a 600-word client letter "perfect" in one prompt is much harder than three deliberate passes.
Pass 1 — content: "Draft a 500-word letter to a client summarizing the workup, diagnosis, and treatment plan for [case]. Cover what we found, what we did, what we recommend next, and what to watch for. Tone: warm but clinical."
Pass 2 — voice: "Now rewrite that to sound more like how I talk to my regular clients. Drop any phrasing that sounds corporate. Read it back to yourself out loud and adjust the cadence."
Pass 3 — accuracy: "Now read the letter and flag any clinical claim that isn't strictly supported by the case data I provided. Mark them with [VERIFY] and stop — do not invent supporting facts."
Each pass focuses the model on one thing. The cumulative quality is materially higher.
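The pass structure above is just a fold: each pass rewrites the previous draft with a single goal. A minimal sketch, assuming `ask` stands in for your model call; the pass templates are abbreviated versions of the prompts above, and the stub exists only to demonstrate the flow.

```python
# Multi-pass refinement: each pass template receives the previous output.
PASSES = [
    "Draft a 500-word client letter for this case. Tone warm but clinical: {text}",
    "Rewrite to match my voice; drop any corporate phrasing:\n{text}",
    "Flag any clinical claim unsupported by the case data with [VERIFY]:\n{text}",
]

def multi_pass(ask, case):
    """Run the case through each pass in order; `ask` maps a prompt
    string to model text (stand-in for a real model call)."""
    text = case
    for template in PASSES:
        text = ask(template.format(text=text))
    return text

# Stub model call for illustration: records prompts, returns numbered drafts.
calls = []
def stub(prompt):
    calls.append(prompt)
    return f"draft-{len(calls)}"

result = multi_pass(stub, "case summary")
```

Because each prompt sees only the previous draft plus one instruction, the model never has to juggle content, voice, and accuracy at once, which is exactly why the passes outperform one "perfect" prompt.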
Combined Patterns for High-Stakes Tasks
Here are three real combinations experienced veterinary AI users reach for.
The board-quality referral letter
Role + Constraints + Few-Shot + Self-Critique.
"You are a small-animal GP veterinarian writing a referral letter to a board-certified internist. Here is an example of my voice in a previous referral letter: [paste]. Now write a letter for: [new case]. After drafting, critique the letter — what would the internist want to know that I haven't included? Then output the revised final version."
The complex SOAP for an ICU patient
Role + Format-First + Chain-of-Thought.
"You are a veterinary ICU scribe. Output a SOAP with a numbered problem list and a problem-organized plan. Reason step by step through the case before producing the SOAP — list the active problems first, then map findings to each problem, then build the plan. Case: [paste]."
The plain-language client educational handout
Role + Constraints + Multi-Pass.
"You are a small-animal vet known for clear teaching. Write a 1-page client handout explaining [condition] for an owner whose pet was just diagnosed. Constraints: 5th-grade reading level, no jargon, warm tone, includes a glossary box for the 4 terms they'll Google later, ends with 'here's what we'll do' as a 4-bullet plan."
Then run a voice-matching pass and an accuracy-flag pass.
Three Habits That Compound
1. Save your best prompts. Build a Notes file or shared doc of your highest-ROI prompts. The first time a prompt works beautifully, save it. Within a few months you have a personal prompt library that covers most of your daily writing.
2. Steal from each other. Vets in colleague groups, social media, and conferences increasingly share useful prompts. Borrow shamelessly. There's no IP in a good prompt structure.
3. Track time saved. For one week, jot down the minutes you saved on each AI-assisted task. The total is usually larger than expected and gives you the data point to expand AI use across your clinic.
What Not to Do
Three anti-patterns to drop.
Asking for "the best" answer. AI outputs are probabilistic. Ask for a defensible answer or a ranked list — not "the best."
Giving a wall of context with no question. Pasting an entire boarded specialist consult and writing "thoughts?" produces unfocused output. State the specific task.
Ignoring the format request. If you ask for a 60-word answer and get 200 words, do not just accept it. Reply "shorten to 60 words" and the model will comply. Reinforce your constraints; the model follows explicit corrections.
Key Takeaways
- Six patterns: Role + Constraints, Few-Shot, Chain-of-Thought, Self-Critique, Format-First, Multi-Pass
- Few-Shot is the single most powerful voice-matching technique
- Multi-pass writing produces materially better long documents
- Save your best prompts in a personal library and refine over time
- Combining patterns is what separates novice from expert AI users in clinical work

