Advanced Prompt Engineering for Consultants
The basics of prompting get you to the 70th percentile. The next 30 percentage points come from techniques that, in 2026, are still little known outside the circle of professional AI users. This lesson teaches the patterns that consistently turn good AI output into client-ready output.
What You'll Learn
- The five advanced patterns: role priming, chain-of-thought, few-shot, self-critique, and decomposition
- How to build reusable system prompts for consulting work
- Multi-step prompt chains for complex deliverables
- How to debug when AI gives you what you asked for but not what you wanted
Pattern 1: Role Priming with Specificity
A vague role ("you are a consultant") gives you vague output. A specific role gives you the voice you actually want.
Weak: "You are a strategy consultant."
Strong: "You are a senior partner at a top-3 strategy firm with 15 years' experience in industrial-goods M&A. You write the way Roger Martin does — clear, structured, slightly contrarian, no buzzwords. You believe most strategy is wishful thinking and you are unafraid to say so."
The second prompt produces noticeably sharper writing because it gives AI a specific voice to imitate. Build a small library of strong role-primes for your common situations.
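One lightweight way to keep that library is a plain dictionary of named role primes plus a helper that prepends the chosen prime to any task. A minimal Python sketch (the role keys, wording, and the `primed_prompt` helper are illustrative, not part of any particular tool):

```python
# A small library of reusable role primes, keyed by situation.
ROLE_PRIMES = {
    "ma_partner": (
        "You are a senior partner at a top-3 strategy firm with 15 years' "
        "experience in industrial-goods M&A. You write clearly, structure "
        "everything, and are unafraid to be contrarian."
    ),
    "ops_director": (
        "You are an operations director who has run lean transformations "
        "at three mid-cap manufacturers. You distrust averages and always "
        "ask to see the distribution."
    ),
}

def primed_prompt(role_key: str, task: str) -> str:
    """Prepend a saved role prime to a task prompt."""
    return f"{ROLE_PRIMES[role_key]}\n\n{task}"

print(primed_prompt("ma_partner",
                    "Draft three opening lines for a board memo."))
```

Paste the result into any chat tool, or feed it to an API client; the point is that the prime is written once and reused everywhere.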
Pattern 2: Chain-of-Thought ("Think Step by Step")
Modern models reason better when you ask them to work through a problem before answering. For consulting work this is gold.
A regional grocery chain asks why their margin has dropped 200 bps over 2 years despite rising same-store sales. Before answering: (1) list the 4-5 candidate causes, (2) for each, the data we would expect to see if it were the cause, (3) the data we would expect if it were not the cause. Then state your most likely diagnosis with confidence level and what would change your mind.
This forces explicit reasoning, which both improves quality and gives you a paper trail you can challenge.
Pattern 3: Few-Shot Examples
For any deliverable where you have a known "good" example, show it to the AI before asking for new output.
Below are 2 examples of the action titles we want for our deck. Notice the structure: a specific claim with a number and a so-what. Now generate 10 action titles for the topics that follow, in the same style.
Example 1: "Customer churn has risen from 8% to 14% in 18 months — driven entirely by the 25–34 cohort, where neobanks now capture 60% of new accounts."
Example 2: "DTC investments produced 14% revenue growth in 2025 but at -180bps margin impact — the breakeven path requires consolidating fulfillment within 12 months."
Topics: [list]
Few-shot is the single most powerful technique for matching your firm's voice. Two examples often outperform a paragraph of stylistic instructions.
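Because the structure of a few-shot prompt is always the same (instruction, numbered examples, then the new topics), it is worth templating. A small sketch; the `few_shot_prompt` helper is illustrative:

```python
def few_shot_prompt(instruction: str, examples: list[str],
                    topics: list[str]) -> str:
    """Assemble a few-shot prompt: instruction, numbered examples, topics."""
    lines = [instruction, ""]
    for i, ex in enumerate(examples, 1):
        lines.append(f"Example {i}: {ex}")
    lines.append("")
    lines.append("Topics:")
    lines.extend(f"- {t}" for t in topics)
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Generate action titles in the style of the examples below: "
    "a specific claim with a number and a so-what.",
    examples=["Churn rose from 8% to 14% in 18 months...",
              "DTC grew revenue 14% but cost 180bps of margin..."],
    topics=["pricing", "fulfillment"],
)
```

Swap the examples list per engagement; the scaffolding never changes.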
Pattern 4: Self-Critique Loops
Get AI to grade its own work and improve it. Two critique-and-rewrite rounds routinely produce output far tighter than a single-shot draft.
Step 1: Draft an executive summary of [topic].
Step 2: Now critique your own draft as a McKinsey engagement manager would. Identify the 3 weakest sentences and the single missing 'so what'.
Step 3: Rewrite the executive summary applying your critique.
Step 4: Repeat the critique-and-rewrite cycle one more time.
You can run this in a single conversation with simple "now do step 2" prompts. The output is dramatically tighter than a single draft.
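The same loop can be scripted against any chat API. In this sketch, `call_model` is a placeholder for whatever client your stack uses (it is not a real library function); the control flow, draft then critique then rewrite, is the point:

```python
def call_model(prompt: str) -> str:
    # Placeholder: swap in your actual chat-API client here.
    return f"[model response to: {prompt[:40]}...]"

def critique_loop(topic: str, rounds: int = 2) -> str:
    """Draft, then run N critique-and-rewrite cycles."""
    draft = call_model(f"Draft an executive summary of {topic}.")
    for _ in range(rounds):
        critique = call_model(
            "Critique this draft as a McKinsey engagement manager would. "
            "Identify the 3 weakest sentences and the single missing "
            f"'so what'.\n\nDraft:\n{draft}"
        )
        draft = call_model(
            "Rewrite the draft applying this critique.\n\n"
            f"Critique:\n{critique}\n\nDraft:\n{draft}"
        )
    return draft
```

With a real client plugged in, `critique_loop("Q3 margin decline")` reproduces steps 1 through 4 without any manual "now do step 2" prompting.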
Pattern 5: Task Decomposition
For any complex deliverable, break it into the smallest stable steps. Each step gets its own focused prompt; the outputs compose into the final deliverable.
A board memo, for example, is not one prompt. It is:
- Capture the situation in 5 lines
- Frame the central question
- Generate 3 candidate answers
- Pick the strongest answer with reasoning
- List the 3 supporting arguments
- List the strongest counter-argument and the response
- Propose 3 implications and the recommended next step
- Compose the memo from all of the above
- Critique and refine
Decomposition prevents AI from skipping steps under the weight of a complicated single prompt. It also lets you save and reuse each step as a building block.
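The memo decomposition above can be captured as an ordered list of step templates, each step receiving the accumulated context of earlier steps. A sketch under stated assumptions: the step wording is abbreviated, and `ask` stands in for your chat client:

```python
# The board-memo decomposition as an ordered list of step prompts.
# {brief} is the engagement brief; {context} is filled with the
# accumulated outputs of earlier steps.
MEMO_STEPS = [
    "Capture the situation in 5 lines: {brief}",
    "Given this context, frame the central question.\n\nContext:\n{context}",
    "Generate 3 candidate answers to the central question.\n\nContext:\n{context}",
    "Pick the strongest answer, with reasoning.\n\nContext:\n{context}",
    # ...the remaining steps (supporting arguments, counter-argument,
    # implications, compose, critique) follow the same shape.
]

def run_decomposition(steps, brief, ask):
    """Run each step prompt in order, feeding prior outputs forward."""
    context = ""
    for template in steps:
        prompt = template.format(brief=brief, context=context)
        context += "\n\n" + ask(prompt)   # ask() wraps your chat client
    return context.strip()
```

Because each step is a named entry in a list, refining one step after an engagement does not disturb the others.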
Building Reusable System Prompts
Modern tools let you save persistent system prompts (Custom GPTs, Claude Projects' system instructions, the Custom Instructions field in ChatGPT). Use them. A consulting-tuned system prompt looks like this:
You are an AI assistant for [Firm Name], a strategy consulting practice focused on industrial goods. Always:
- Write at board-level abstraction unless asked otherwise
- Use action titles for any slide content (not topic titles)
- Avoid the words: synergy, leverage, unlock, journey, ecosystem, holistic
- When a claim depends on a number, ask me for the source rather than inventing one
- When proposing recommendations, always list the strongest counter-argument and how to handle it
- Default tone: confident, direct, slightly contrarian, no hedging
You only write this once. After that, every interaction starts on a higher floor.
Multi-Step Prompt Chains
For repeated workflows, document a prompt chain you can re-run. Example for a market scan:
- Definition prompt → defines the market and segmentation
- Research prompt (Perplexity) → gathers data points
- Triangulation prompt (Claude) → produces sized estimate with confidence
- Critique prompt → exposes weak assumptions
- Visualization prompt → suggests slide-friendly visual concepts
- Action-title prompt → converts findings into deck-ready titles
You will run this chain for every market scan you do. Save it. Refine it after each engagement.
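A saved chain is just data: an ordered list of (tool, prompt-template) pairs that you expand per engagement. A sketch; the tool names are the labels from the chain above, and `render_chain` is an illustrative helper, not a feature of those tools:

```python
# A saved market-scan chain: each step names the tool the operator
# runs it in, plus the prompt template to paste there.
MARKET_SCAN_CHAIN = [
    ("Claude",     "Define the market for {topic} and propose a segmentation."),
    ("Perplexity", "Gather the key data points on {topic}, with sources."),
    ("Claude",     "Triangulate a sized estimate for {topic} with confidence."),
    ("Claude",     "Critique the estimate: which assumptions are weakest?"),
    ("Claude",     "Suggest slide-friendly visual concepts for the findings."),
    ("Claude",     "Convert the findings into deck-ready action titles."),
]

def render_chain(chain, topic):
    """Expand the saved chain into concrete prompts for one engagement."""
    return [(tool, prompt.format(topic=topic)) for tool, prompt in chain]

for tool, prompt in render_chain(MARKET_SCAN_CHAIN, "EV charging"):
    print(f"[{tool}] {prompt}")
```

Refining the chain after an engagement means editing one list, not hunting through chat history.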
Debugging Bad Output
When AI gives you what you literally asked for but not what you wanted, the diagnosis is usually one of these:
- Underspecified output format. Add: "Output as a markdown table with columns X, Y, Z."
- Missing perspective. Add: "From the perspective of [audience]."
- No constraints. Add what to avoid: "Do not use the word 'strategic.' Do not list more than 5 items."
- Single-shot when you needed reasoning. Add: "Before answering, think through X."
- Missing examples. Show one good example and one bad example.
If applying two of these fixes still does not work, you are probably asking AI for something it cannot reliably do. Escalate to a human or restructure the task.
Common Pitfalls
- Treating prompts as one-shot magic. The best prompts come from 2–3 rounds of iteration.
- Hoarding prompts in chat history. Save them in a permanent place — Notion, Obsidian, or your firm's wiki.
- Skipping the role prime. This is one of the highest-leverage two-line additions.
- Ignoring the system-prompt feature. Custom Instructions and Project instructions compound across every future chat.
Key Takeaways
- Five patterns to master: specific role priming, chain-of-thought, few-shot examples, self-critique loops, and task decomposition.
- Save reusable system prompts in Custom GPTs, Claude Projects, or Custom Instructions — they raise the baseline of every interaction.
- Decompose complex deliverables into the smallest stable steps; each step becomes a reusable building block.
- When debugging weak output, check for underspecified format, missing perspective, no constraints, no reasoning step, or no examples.
- The difference between 70th-percentile and 95th-percentile AI output is iteration — three focused refinement cycles, not one big prompt.

