Prompt Engineering for Support Agents
You've been writing prompts throughout this course. This lesson is about leveling up from basic prompts to structured, reliable, reusable prompts that consistently produce high-quality output for support work. Think of it as the difference between driving an automatic and driving a manual -- once you understand the gears, you extract far more from the same engine.
What You'll Learn
- The six advanced prompt patterns that work best in support
- Chain-of-thought prompting for tricky policy decisions
- Few-shot prompting: teaching AI your brand voice with examples
- How to test and iterate on prompts systematically
The Six Advanced Prompt Patterns
Pattern 1: Role + Rules + Task
This is the workhorse. Three blocks:
ROLE: You are a senior support agent at [Company], known for [trait].
RULES:
- [Rule 1]
- [Rule 2]
- [Rule 3]
TASK: [What you want done]
Example:
ROLE: You are a senior support agent at HealthyMeals, known for warmth and practical problem-solving.
RULES:
- Reply under 120 words
- Always use the customer's first name once
- Never say "unfortunately" or "per our policy"
- If you don't know a policy, say so and flag for human review
TASK: Draft a reply to this ticket: [paste ticket]
Pattern 2: Few-Shot Examples (Your Brand Voice, Encoded)
AI learns tone fastest by example. Include 2-3 past exemplar replies:
Here are three replies that match our brand voice perfectly:
EXAMPLE 1:
Customer: "My order hasn't arrived."
Reply: "Hi Sarah -- really sorry your order is running late. I just checked and it's stuck in transit; I've reshipped it with priority shipping today, and you should have it by Friday. Thanks for your patience!"
EXAMPLE 2:
[another exemplar]
EXAMPLE 3:
[another exemplar]
Now, using the same voice and style, draft a reply to this ticket: [paste]
Few-shot is more powerful than any amount of tone description. Paste 3 great replies and AI will match them with surprising accuracy.
Pattern 3: Chain-of-Thought for Policy Decisions
For tricky decisions where you want AI to reason carefully:
A customer is requesting a refund. Think step-by-step:
1. What did the customer order and when?
2. What is our policy on this type of product/time window?
3. Are there any flagged exceptions (bereavement, repeated defects, etc.)?
4. What does the customer's history suggest (VIP? churn risk?)?
5. What is the right recommendation?
6. Draft the reply.
Write out your reasoning for steps 1-5, then the draft reply for step 6.
Customer ticket: [paste]
Our refund policy: [paste]
Customer history: [paste]
Chain-of-thought makes AI reason more carefully and makes its reasoning auditable. You can see exactly why it recommended what it did.
Pattern 4: Structured Output (JSON / Markdown)
When you need to plug AI output into another system or a template:
Analyze this ticket and return a JSON object with exactly these fields:
- category: string
- urgency: "low" | "medium" | "high"
- sentiment: "positive" | "neutral" | "frustrated" | "angry"
- recommendedReply: string (under 130 words)
Return only valid JSON, no commentary.
Ticket: [paste]
Consistent structured output makes automation possible.
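If you're feeding that JSON into a script or helpdesk automation, it's worth validating it before use -- models occasionally return extra fields or off-schema values. Here's a minimal sketch of that check; the field names match the prompt above, and the validation thresholds are assumptions you'd tune to your own prompt:

```python
import json

# Allowed values, mirroring the enums specified in the prompt above.
ALLOWED_URGENCY = {"low", "medium", "high"}
ALLOWED_SENTIMENT = {"positive", "neutral", "frustrated", "angry"}

def parse_triage(raw: str) -> dict:
    """Parse the model's JSON reply and reject anything off-schema."""
    data = json.loads(raw)  # raises json.JSONDecodeError on invalid JSON
    if set(data) != {"category", "urgency", "sentiment", "recommendedReply"}:
        raise ValueError(f"unexpected fields: {sorted(data)}")
    if data["urgency"] not in ALLOWED_URGENCY:
        raise ValueError(f"bad urgency: {data['urgency']}")
    if data["sentiment"] not in ALLOWED_SENTIMENT:
        raise ValueError(f"bad sentiment: {data['sentiment']}")
    if len(data["recommendedReply"].split()) >= 130:
        raise ValueError("recommendedReply over the 130-word limit")
    return data
```

When validation fails, re-prompt or route the ticket to a human rather than passing a malformed object downstream.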
Pattern 5: Constrained Length & Style
Without constraints, AI writes long and wanders. Always specify:
- Word count: "Under 120 words"
- Reading level: "8th-grade reading level"
- Format: "Three short paragraphs, no bullet points"
- Voice: "First person singular, active voice"
Example:
Draft a reply to [ticket]. Hard constraints:
- Under 100 words
- 8th-grade reading level
- Three short paragraphs
- No em-dashes
- One apology only
Pattern 6: Self-Critique
Get better output by asking the model to review its own draft:
Step 1: Draft a reply to the ticket below.
Step 2: Critique your own draft against these criteria:
- Empathy: Does it acknowledge feelings?
- Accuracy: Any invented facts?
- Brand voice: Uses contractions, no corporate phrases?
- Length: Under 130 words?
Step 3: Rewrite the draft incorporating your critique.
Return only the final rewritten draft.
Ticket: [paste]
Self-critique often catches issues you'd have to edit out manually.
Few-Shot Deep Dive: Encoding Your Exemplars
Spend 30 minutes pulling your 10 highest-CSAT replies from the last quarter. Anonymize and save them as your "exemplar library." Every time you prompt for a reply, paste 2-3 relevant exemplars.
Organize exemplars by category:
- Billing apologies
- Shipping delays
- Bug acknowledgements
- Feature request responses
- Cancellation responses
- Refund grants
- Refund denials (gracious)
Over time, this library becomes your team's most valuable prompting asset.
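If someone on your team is comfortable with a little scripting, the exemplar library can live in code instead of a document. This is a hypothetical sketch -- the category names and sample text are placeholders for your own anonymized replies:

```python
# Exemplar library keyed by the categories above; replace the placeholder
# text with your own anonymized high-CSAT replies.
EXEMPLARS = {
    "shipping_delay": [
        "Hi Sarah -- really sorry your order is running late. I just checked "
        "and it's stuck in transit; I've reshipped it with priority shipping "
        "today, and you should have it by Friday. Thanks for your patience!",
    ],
    "billing_apology": [
        "Hi Marcus -- you're right, that charge shouldn't be there. I've "
        "refunded it just now; you'll see it back within 3-5 business days.",
    ],
}

def build_few_shot_prompt(category: str, ticket: str, n: int = 3) -> str:
    """Assemble a few-shot prompt from up to n exemplars in a category."""
    examples = EXEMPLARS.get(category, [])[:n]
    blocks = [f"EXAMPLE {i + 1}:\n{text}" for i, text in enumerate(examples)]
    return (
        "Here are replies that match our brand voice:\n\n"
        + "\n\n".join(blocks)
        + f"\n\nNow, using the same voice and style, draft a reply to this ticket:\n{ticket}"
    )
```

The payoff is consistency: everyone on the team pulls exemplars from the same place, so the few-shot examples (and therefore the voice) stay uniform.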
Testing Prompts Systematically
Good prompts are built by iteration, not inspiration. A simple test protocol:
Step 1: Create a test set
Pick 10 anonymized real tickets that span your common scenarios.
Step 2: Define success criteria
For each ticket, what does a "great" reply look like? (Could be your team lead's own reply.)
Step 3: Run your prompt across the test set
For each ticket, paste prompt + ticket into AI, save the output.
Step 4: Score outputs
For each output, grade: Accurate? On-brand? Right length? Right tone?
Step 5: Adjust the prompt
Wherever you saw systematic failures, tweak the prompt. Re-run.
After 3-4 iterations, your prompt will be rock-solid. Save it. Reuse it.
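The five steps above can be sketched as a small harness. `call_model` is a stand-in for however you reach the model (an API call, or manual copy-paste), and the automatic checks are assumptions -- they catch length and banned-phrase failures, while tone and accuracy still need a human grade:

```python
def call_model(prompt: str) -> str:
    """Stand-in: replace with your API call or manual copy-paste step."""
    raise NotImplementedError

def score_reply(reply: str, max_words: int = 130) -> dict:
    """Cheap automatic checks; tone and accuracy still need a human grade."""
    return {
        "right_length": len(reply.split()) <= max_words,
        "no_banned_phrases": not any(
            phrase in reply.lower()
            for phrase in ("unfortunately", "per our policy")
        ),
    }

def run_test_set(prompt_template: str, tickets: list[str]) -> list[dict]:
    """Run one prompt across the whole test set and record scored outputs."""
    results = []
    for ticket in tickets:
        reply = call_model(prompt_template.replace("[paste]", ticket))
        results.append({"ticket": ticket, "reply": reply, **score_reply(reply)})
    return results
```

After each run, look for systematic failures in the scores (step 5), tweak the prompt, and re-run the same ten tickets so results stay comparable across iterations.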
Common Prompt Engineering Mistakes
Mistake 1: Overly vague tone descriptions
"Be friendly" is vague. "Use contractions. Start with an acknowledgement of the customer's feeling. End warmly. Maximum one apology per reply" is specific and works.
Mistake 2: Too many instructions at once
If your prompt has 15 rules, the model will forget some. Pick your top 5-8, make them explicit, drop the rest.
Mistake 3: Not specifying what NOT to do
Explicit "never" rules are as important as "always" rules. Especially:
- Never invent policies
- Never promise specific dates
- Never use "unfortunately" or "per our policy"
- Never sign off without your name
Mistake 4: Mixing tasks in one prompt
"Triage this ticket and write a reply and suggest a KB article" -- splits AI's attention. Do each as a separate prompt.
Mistake 5: Not including the policy you care about
AI will invent policies if you don't give it the real ones. Always paste the policy text.
The Reusable Prompt Template for Your Team
Here's a filled template you can adapt and paste anytime:
ROLE: You are a senior support agent at [YOUR COMPANY], known for warmth and clarity.
RULES:
- Reply in under 130 words
- Use the customer's first name once if provided
- Start with one sentence acknowledging their feeling
- Never say "unfortunately," "per our policy," "we value your feedback"
- Use contractions
- Sign off as: "-- [YOUR NAME], [COMPANY] Support"
- If you don't know something, say so; never invent policies or dates
EXEMPLARS (match this tone):
[paste 2 exemplar replies]
POLICY (use only this info for policy claims):
[paste relevant policy excerpt]
TICKET:
[paste]
TASK: Draft a reply.
Save this as a text file. Every reply you draft starts from this template.
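If the team keeps the template as a text file, filling in its placeholders can also be scripted. This hypothetical helper assumes the placeholder strings exactly as written in the template above:

```python
from pathlib import Path

def fill_template(template_path: str, ticket: str, exemplars: str, policy: str) -> str:
    """Fill the saved prompt template with today's ticket, exemplars, and policy."""
    template = Path(template_path).read_text()
    return (
        template
        .replace("[paste 2 exemplar replies]", exemplars)
        .replace("[paste relevant policy excerpt]", policy)
        .replace("[paste]", ticket)  # the remaining placeholder is the ticket
    )
```

The output is the finished prompt, ready to paste into whatever model interface your team uses.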
Prompt Libraries for Support
If you want a head start, public prompt libraries like PromptHero, FlowGPT, and Anthropic's prompt library have support-specific prompts. They're uneven in quality -- use them as inspiration, not gospel. The best prompts are the ones you've refined on your own tickets.
Key Takeaways
- Use the Role + Rules + Task structure for 80% of prompts
- Few-shot examples (pasting 2-3 exemplars) are the fastest way to lock in brand voice
- Chain-of-thought makes tricky policy decisions transparent and reviewable
- Always include what NOT to do -- explicit "never" rules matter as much as positive rules
- Build a reusable prompt template for your team and iterate on it over time

