Module 6: Multi-Agent Systems
Agents That Work Together
Introduction: Why One Agent Isn't Enough
In previous modules, we built powerful single agents that can reason, use tools, remember context, and search documents. But real-world tasks often require different kinds of expertise.
Consider creating a research report:
- A researcher needs to find and verify information
- A writer needs to craft clear, engaging prose
- An editor needs to check for accuracy and quality
Trying to make a single agent do all three well is like hiring one person to be your researcher, writer, and editor simultaneously. It can work, but specialized agents collaborating on separate responsibilities usually produce far better results.
In this module, you'll learn how to build multi-agent systems where agents communicate, delegate, and collaborate.
6.1 Agent Communication Patterns
Pattern 1: Sequential (Pipeline)
Agents pass work to each other in a fixed order:
Agent A → Agent B → Agent C → Final Output
Best for: Linear workflows where each step builds on the previous one.
```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

llm = ChatOpenAI(model="gpt-4o")

def researcher(topic: str) -> str:
    """Research agent gathers information."""
    response = llm.invoke([
        SystemMessage(content="You are a research assistant. Gather key facts and data about the topic. Be thorough and cite specific details."),
        HumanMessage(content=f"Research this topic: {topic}")
    ])
    return response.content

def writer(research: str, topic: str) -> str:
    """Writer agent creates content from research."""
    response = llm.invoke([
        SystemMessage(content="You are a professional writer. Create a well-structured article from the research provided. Use clear language and logical flow."),
        HumanMessage(content=f"Write an article about '{topic}' using this research:\n\n{research}")
    ])
    return response.content

def editor(article: str) -> str:
    """Editor agent reviews and improves the content."""
    response = llm.invoke([
        SystemMessage(content="You are a senior editor. Review this article for clarity, accuracy, grammar, and engagement. Return the improved version."),
        HumanMessage(content=f"Edit and improve this article:\n\n{article}")
    ])
    return response.content

# Sequential pipeline
topic = "The impact of AI agents on software development"
research = researcher(topic)
draft = writer(research, topic)
final_article = editor(draft)
print(final_article)
```
Pattern 2: Supervisor/Worker
A supervisor agent delegates tasks to specialized workers:
```
              ┌─→ Worker A ─┐
Supervisor ───┼─→ Worker B ─┼──→ Supervisor → Final Output
              └─→ Worker C ─┘
```
Best for: Complex tasks where the supervisor needs to decide which specialist to engage.
Pattern 3: Collaborative (Debate)
Agents discuss and refine ideas together:
```
Agent A ←→ Agent B ←→ Agent C
   ↕          ↕          ↕
        Shared State
```
Best for: Decision-making, brainstorming, quality assurance.
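The rest of this module implements the first two patterns in full, but the debate pattern deserves at least a sketch. The loop below is a minimal illustration, not part of the course project: the `ask` callable and the fixed round count are assumptions made for this example, and `ask` stands in for any LLM call (such as a wrapper around `llm.invoke`).

```python
from typing import Callable

def debate(question: str, ask: Callable[[str, str], str], rounds: int = 2) -> str:
    """Run a fixed number of critique/revise rounds between two roles.

    `ask(system_prompt, user_prompt)` can be any LLM call; a stub works
    for testing the control flow without an API key.
    """
    # Proposer produces an initial answer
    answer = ask("You are a proposer. Answer the question.", question)
    for _ in range(rounds):
        # Critic attacks the current answer
        critique = ask(
            "You are a critic. Point out flaws or gaps in this answer.",
            f"Question: {question}\n\nAnswer: {answer}",
        )
        # Proposer revises in light of the critique
        answer = ask(
            "You are a proposer. Revise your answer to address the critique.",
            f"Question: {question}\n\nAnswer: {answer}\n\nCritique: {critique}",
        )
    return answer

# With LangChain, `ask` could be:
# ask = lambda sys, usr: llm.invoke([SystemMessage(content=sys),
#                                    HumanMessage(content=usr)]).content
```

Capping the number of rounds matters: without it, two agents can critique each other indefinitely.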
6.2 Multi-Agent Systems with LangGraph
The Supervisor Pattern
LangGraph is ideal for multi-agent systems because its graph-based architecture naturally models agent interactions. Let's build a supervisor that routes tasks to specialized workers.
```python
from typing import Literal, TypedDict, Annotated

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage, BaseMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

llm = ChatOpenAI(model="gpt-4o")

# --- Define State ---
class AgentState(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]
    next_agent: str
    research: str
    draft: str
    final_output: str

# --- Define Worker Agents ---
def research_agent(state: AgentState) -> dict:
    """Research specialist that gathers information."""
    messages = state["messages"]
    last_message = messages[-1].content
    response = llm.invoke([
        SystemMessage(content="""You are an expert researcher. Your job is to:
1. Identify the key aspects of the topic
2. Gather relevant facts, statistics, and examples
3. Organize your findings clearly
Output your research as structured notes."""),
        HumanMessage(content=f"Research the following topic thoroughly:\n\n{last_message}")
    ])
    return {"research": response.content}

def writing_agent(state: AgentState) -> dict:
    """Writing specialist that creates content."""
    research = state.get("research", "")
    response = llm.invoke([
        SystemMessage(content="""You are a skilled technical writer. Your job is to:
1. Transform research notes into a polished article
2. Use clear headings and logical structure
3. Make complex topics accessible
4. Include an introduction and conclusion"""),
        HumanMessage(content=f"Write a comprehensive article based on this research:\n\n{research}")
    ])
    return {"draft": response.content}

def editing_agent(state: AgentState) -> dict:
    """Editing specialist that reviews and improves content."""
    draft = state.get("draft", "")
    response = llm.invoke([
        SystemMessage(content="""You are a meticulous editor. Your job is to:
1. Fix any grammatical or stylistic issues
2. Improve clarity and readability
3. Ensure factual consistency
4. Enhance engagement and flow
5. Return the final polished version"""),
        HumanMessage(content=f"Review and improve this article:\n\n{draft}")
    ])
    return {"final_output": response.content}

def supervisor(state: AgentState) -> dict:
    """Supervisor that decides which agent should work next."""
    research = state.get("research", "")
    draft = state.get("draft", "")
    final_output = state.get("final_output", "")

    if not research:
        return {"next_agent": "researcher"}
    elif not draft:
        return {"next_agent": "writer"}
    elif not final_output:
        return {"next_agent": "editor"}
    else:
        return {"next_agent": "done"}

def route_to_agent(state: AgentState) -> Literal["researcher", "writer", "editor", "__end__"]:
    """Route to the next agent based on supervisor decision."""
    next_agent = state.get("next_agent", "")
    if next_agent == "done":
        return "__end__"
    return next_agent

# --- Build the Graph ---
workflow = StateGraph(AgentState)

# Add nodes
workflow.add_node("supervisor", supervisor)
workflow.add_node("researcher", research_agent)
workflow.add_node("writer", writing_agent)
workflow.add_node("editor", editing_agent)

# Define edges
workflow.add_edge(START, "supervisor")
workflow.add_conditional_edges(
    "supervisor",
    route_to_agent,
    {
        "researcher": "researcher",
        "writer": "writer",
        "editor": "editor",
        "__end__": END,
    }
)

# Workers always return to supervisor
workflow.add_edge("researcher", "supervisor")
workflow.add_edge("writer", "supervisor")
workflow.add_edge("editor", "supervisor")

# Compile
multi_agent = workflow.compile()

# --- Run ---
result = multi_agent.invoke({
    "messages": [HumanMessage(content="The future of AI agents in enterprise software")],
    "next_agent": "",
    "research": "",
    "draft": "",
    "final_output": "",
})
print(result["final_output"])
```
How It Works
- The supervisor checks the state and decides which agent should work next
- The conditional edge routes execution to the chosen worker
- Each worker performs its specialized task and updates the state
- Control returns to the supervisor to decide the next step
- When all work is done, the supervisor routes to END
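The supervisor's control flow is worth seeing in isolation. This sketch simulates the loop above in pure Python, with stub workers standing in for the LLM calls (the stub outputs are placeholders, not real model output), so you can trace the routing order without an API key:

```python
def supervisor(state: dict) -> str:
    """Mirror of the supervisor node: route based on what's still missing."""
    if not state.get("research"):
        return "researcher"
    elif not state.get("draft"):
        return "writer"
    elif not state.get("final_output"):
        return "editor"
    return "done"

# Stub workers: each fills in its slice of the state
workers = {
    "researcher": lambda s: s.update(research="stub notes"),
    "writer": lambda s: s.update(draft="stub draft"),
    "editor": lambda s: s.update(final_output="stub article"),
}

state = {"research": "", "draft": "", "final_output": ""}
visited = []
while (nxt := supervisor(state)) != "done":
    visited.append(nxt)
    workers[nxt](state)

print(visited)  # ['researcher', 'writer', 'editor']
```

This is exactly the cycle LangGraph executes: supervisor, worker, back to supervisor, until the routing function returns the end signal.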
6.3 Creating Specialized Agents
Agent Specialization Best Practices
Each agent should have:
- A clear role: One responsibility, well-defined
- A focused system prompt: Detailed instructions for its specialty
- Appropriate tools: Only the tools relevant to its task
- Defined output format: Consistent structure other agents can consume
```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

llm = ChatOpenAI(model="gpt-4o")

# Specialized agent factory
def create_specialist(role: str, instructions: str):
    """Create a specialized agent with a specific role."""
    def agent(input_text: str) -> str:
        response = llm.invoke([
            SystemMessage(content=f"You are a {role}.\n\n{instructions}"),
            HumanMessage(content=input_text)
        ])
        return response.content
    return agent

# Create specialists
fact_checker = create_specialist(
    role="fact-checking specialist",
    instructions="""Your job is to review content for factual accuracy.
For each claim, mark it as:
- VERIFIED: If the claim is widely accepted as true
- QUESTIONABLE: If the claim needs citation or verification
- INCORRECT: If the claim is demonstrably false
Provide a brief explanation for each assessment."""
)

seo_optimizer = create_specialist(
    role="SEO optimization specialist",
    instructions="""Your job is to optimize content for search engines.
1. Suggest a compelling title tag (under 60 characters)
2. Write a meta description (under 160 characters)
3. Identify 5-7 target keywords
4. Suggest header structure improvements
5. Recommend internal/external linking opportunities"""
)

tone_analyst = create_specialist(
    role="tone and audience analyst",
    instructions="""Analyze the content's tone and audience fit.
1. Identify the current tone (formal, casual, technical, etc.)
2. Assess reading level (beginner, intermediate, expert)
3. Suggest tone adjustments for the target audience
4. Flag any jargon that should be explained
5. Rate engagement level (1-10) with suggestions to improve"""
)
```
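Because each specialist is just a callable from string to string, they compose naturally. As one hedged sketch (the `review_panel` helper is a hypothetical name introduced here, not part of the course code), you can fan the same draft out to every specialist and collect their reports:

```python
def review_panel(draft: str, specialists: dict) -> dict:
    """Run each named specialist over the same draft and collect a report.

    `specialists` maps a section name to any callable str -> str, such as
    the agents returned by create_specialist.
    """
    return {name: agent(draft) for name, agent in specialists.items()}

# Usage with the specialists defined above (runs three LLM calls):
# report = review_panel(article_text, {
#     "facts": fact_checker,
#     "seo": seo_optimizer,
#     "tone": tone_analyst,
# })
# for section, notes in report.items():
#     print(f"--- {section} ---\n{notes}\n")
```

Keeping specialists as plain functions with a uniform signature is what makes this kind of composition cheap, whether you wire them into a pipeline, a panel, or a LangGraph node.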
6.4 Framework Comparison: CrewAI and AutoGen
CrewAI Overview
CrewAI is a framework specifically designed for multi-agent collaboration with role-playing agents:
```python
# CrewAI example (for reference, not used in our main project)
# pip install crewai
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in AI",
    backstory="You are a veteran researcher at a leading tech think tank.",
    verbose=True,
)

writer = Agent(
    role="Tech Content Writer",
    goal="Create engaging technical content",
    backstory="You are a renowned technical writer known for clear explanations.",
    verbose=True,
)

research_task = Task(
    description="Research the latest trends in AI agents for 2025",
    expected_output="A comprehensive research brief with key findings",
    agent=researcher,
)

writing_task = Task(
    description="Write a blog post based on the research",
    expected_output="A polished blog post ready for publication",
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True,
)

result = crew.kickoff()
```
CrewAI Strengths:
- Simple, intuitive API for defining agent teams
- Built-in role-playing and backstory support
- Task delegation happens automatically
- Good for rapid prototyping of multi-agent workflows
CrewAI Limitations:
- Less control over execution flow
- Limited state management compared to LangGraph
- Fewer options for custom routing logic
AutoGen Overview
AutoGen (by Microsoft) focuses on conversational multi-agent patterns:
```python
# AutoGen example (for reference)
# pip install pyautogen  -- the package that provides the classic
# `from autogen import ...` API used below (the newer autogen-agentchat
# package has a different API)
from autogen import AssistantAgent, UserProxyAgent

assistant = AssistantAgent(
    name="assistant",
    llm_config={"config_list": [{"model": "gpt-4o"}]},
    system_message="You are a helpful AI assistant.",
)

user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3,
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

user_proxy.initiate_chat(
    assistant,
    message="Write a Python function to calculate fibonacci numbers.",
)
```
AutoGen Strengths:
- Excellent for code generation and execution workflows
- Supports human-in-the-loop naturally
- Agents can execute code in sandboxed environments
- Good for pair programming scenarios
AutoGen Limitations:
- More complex setup for non-conversational workflows
- Less intuitive for non-linear agent interactions
- Heavier framework with more dependencies
When to Use What
| Use Case | Recommended Framework |
|---|---|
| Complex stateful workflows | LangGraph |
| Quick team-based prototypes | CrewAI |
| Code generation and execution | AutoGen |
| Production applications | LangGraph |
| Simple sequential pipelines | LangChain |
We use LangGraph in this course because it gives you the most control and is best suited for production systems.
Project: Content Creation Crew
Let's build a multi-agent content creation system where a researcher, writer, and editor collaborate to produce high-quality articles.
Setup
```bash
mkdir content-crew
cd content-crew
python -m venv venv
source venv/bin/activate
pip install langchain langchain-openai langgraph python-dotenv
```
Create .env:
```
OPENAI_API_KEY=your_api_key_here
```
The Complete Multi-Agent System
Create content_crew.py:
```python
from typing import Literal, TypedDict, Annotated

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage, BaseMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

load_dotenv()
llm = ChatOpenAI(model="gpt-4o")

# --- State Definition ---
class ContentState(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]
    topic: str
    research_notes: str
    first_draft: str
    editor_feedback: str
    final_article: str
    revision_count: int
    status: str

# --- Agent Definitions ---
def research_node(state: ContentState) -> dict:
    """Researcher agent: gathers comprehensive information on the topic."""
    topic = state["topic"]
    print(f"\n[Researcher] Investigating: {topic}")

    response = llm.invoke([
        SystemMessage(content="""You are an expert research analyst. Your task is to:
1. Break down the topic into key subtopics
2. Provide specific facts, statistics, and recent developments
3. Identify expert opinions and notable quotes
4. Note any controversies or different perspectives
5. List potential sources and references
Structure your output as clear, organized research notes with headers.
Be thorough but focused. Aim for depth over breadth."""),
        HumanMessage(content=f"Conduct thorough research on: {topic}")
    ])

    print("[Researcher] Research complete.")
    return {"research_notes": response.content, "status": "researched"}

def writer_node(state: ContentState) -> dict:
    """Writer agent: creates a polished article from research notes."""
    research = state["research_notes"]
    topic = state["topic"]
    feedback = state.get("editor_feedback", "")
    print(f"\n[Writer] {'Revising' if feedback else 'Drafting'} article...")

    prompt_parts = [f"Topic: {topic}\n\nResearch Notes:\n{research}"]
    if feedback:
        prompt_parts.append(f"\n\nEditor Feedback to Address:\n{feedback}")
        prompt_parts.append(f"\n\nPrevious Draft:\n{state.get('first_draft', '')}")

    response = llm.invoke([
        SystemMessage(content="""You are a talented technical writer. Create a compelling article that:
1. Opens with a strong hook that draws readers in
2. Uses clear headings and subheadings for structure
3. Explains complex concepts with relatable analogies
4. Includes specific examples and data points from the research
5. Maintains a professional yet engaging tone
6. Ends with a thought-provoking conclusion
If editor feedback is provided, revise the article to address all feedback points.
The article should be 600-800 words."""),
        HumanMessage(content="\n".join(prompt_parts))
    ])

    print("[Writer] Draft complete.")
    return {"first_draft": response.content, "status": "drafted"}

def editor_node(state: ContentState) -> dict:
    """Editor agent: reviews the draft and provides feedback or approves."""
    draft = state["first_draft"]
    revision_count = state.get("revision_count", 0)
    print(f"\n[Editor] Reviewing draft (revision #{revision_count + 1})...")

    response = llm.invoke([
        SystemMessage(content=f"""You are a senior editor with high standards. Review this article carefully.
This is revision #{revision_count + 1}. If this is revision 2 or higher, be more lenient.
Evaluate:
1. Factual accuracy and consistency
2. Writing quality and clarity
3. Structure and flow
4. Engagement and readability
5. Grammar and style
If the article meets your standards (or this is revision 2+), respond with:
APPROVED
Followed by the final version with any minor corrections.
If it needs significant revision (only on first review), respond with:
NEEDS_REVISION
Followed by specific, actionable feedback."""),
        HumanMessage(content=f"Review this article:\n\n{draft}")
    ])

    content = response.content
    if "APPROVED" in content:
        # Extract the final version (everything after APPROVED)
        final = content.split("APPROVED", 1)[1].strip()
        print("[Editor] Article approved!")
        return {
            "final_article": final if final else draft,
            "editor_feedback": "",
            "revision_count": revision_count + 1,
            "status": "approved",
        }
    else:
        # Extract feedback
        feedback = content.split("NEEDS_REVISION", 1)[1].strip() if "NEEDS_REVISION" in content else content
        print("[Editor] Requesting revisions.")
        return {
            "editor_feedback": feedback,
            "revision_count": revision_count + 1,
            "status": "needs_revision",
        }

# --- Routing Logic ---
def should_continue(state: ContentState) -> Literal["writer", "__end__"]:
    """Decide if the article needs more revision or is ready."""
    if state.get("status") == "approved":
        return "__end__"
    return "writer"

# --- Build the Graph ---
workflow = StateGraph(ContentState)

# Add nodes
workflow.add_node("researcher", research_node)
workflow.add_node("writer", writer_node)
workflow.add_node("editor", editor_node)

# Define flow
workflow.add_edge(START, "researcher")
workflow.add_edge("researcher", "writer")
workflow.add_edge("writer", "editor")

# Conditional: editor either approves or sends back for revision
workflow.add_conditional_edges(
    "editor",
    should_continue,
    {
        "writer": "writer",
        "__end__": END,
    }
)

# Compile
content_crew = workflow.compile()

# --- Run ---
def main():
    print("=" * 60)
    print(" Content Creation Crew")
    print(" Researcher -> Writer -> Editor")
    print("=" * 60)

    topic = input("\nEnter a topic for the article: ").strip()
    if not topic:
        topic = "How AI agents are transforming customer service"

    print(f"\nCreating article about: {topic}")
    print("-" * 60)

    result = content_crew.invoke({
        "messages": [],
        "topic": topic,
        "research_notes": "",
        "first_draft": "",
        "editor_feedback": "",
        "final_article": "",
        "revision_count": 0,
        "status": "started",
    })

    print("\n" + "=" * 60)
    print(" FINAL ARTICLE")
    print("=" * 60)
    print(result["final_article"])
    print("\n" + "=" * 60)
    print(f"Completed after {result['revision_count']} revision(s)")

if __name__ == "__main__":
    main()
```
Running the Crew
```bash
python content_crew.py
```
Example Output
```
============================================================
 Content Creation Crew
 Researcher -> Writer -> Editor
============================================================

Enter a topic for the article: The rise of AI coding assistants

Creating article about: The rise of AI coding assistants
------------------------------------------------------------

[Researcher] Investigating: The rise of AI coding assistants
[Researcher] Research complete.

[Writer] Drafting article...
[Writer] Draft complete.

[Editor] Reviewing draft (revision #1)...
[Editor] Requesting revisions.

[Writer] Revising article...
[Writer] Draft complete.

[Editor] Reviewing draft (revision #2)...
[Editor] Article approved!

============================================================
 FINAL ARTICLE
============================================================
(The polished article appears here)

============================================================
Completed after 2 revision(s)
```
Key Takeaways
- Multi-agent systems outperform single agents on complex tasks by leveraging specialization
- Sequential pipelines work best for linear workflows; supervisor patterns work best for dynamic routing
- LangGraph is ideal for production multi-agent systems because of its explicit state management and routing
- CrewAI is great for rapid prototyping; AutoGen excels at code generation workflows
- Agent specialization requires clear roles, focused prompts, and well-defined output formats
- Revision loops (editor sends back to writer) are a powerful pattern for improving output quality
Exercise: Extend the Content Crew
Before moving to Module 7, try these enhancements:
- Add a fact-checker agent that verifies claims in the article before the editor reviews it
- Add an SEO optimizer agent that suggests title tags, meta descriptions, and keywords
- Implement a maximum revision limit to prevent infinite loops
- Give the researcher agent a web search tool so it can gather real information
Next up: Module 7, where we prepare our agents for production with logging, error handling, and deployment strategies.

