Module 3: Orchestration with LangGraph.js
Managing Complexity
Introduction: When Simple Loops Break
In Module 2, we built agents that could use tools. But we relied on the LLM to manage the workflow autonomously with maxSteps.
This works for simple tasks, but what about complex workflows that need:
- Conditional branching: "If the stock price is above $100, send an alert, otherwise do nothing"
- Human approval: "Before sending this email, ask me to review it"
- Persistent state: "Remember the user's preferences across multiple interactions"
- Parallel execution: "Research these 5 companies simultaneously"
- Error recovery: "If the API fails, try an alternative data source"
For these scenarios, we need stateful orchestration. Enter LangGraph.
3.1 Why Chains Break
The Linear Chain Problem
Traditional LLM chains follow a simple pattern:
Input → LLM → Tool → LLM → Tool → Output
This works until you need:
Conditional logic:
// Can't do this with simple chains:
if (stockPrice > 100) {
  sendAlert()
} else {
  logToDatabase()
}
Loops with exit conditions:
// Can't do this either:
while (!taskComplete && retries < 3) {
  result = attemptTask()
  taskComplete = evaluateResult(result)
}
Human-in-the-loop:
// Or this:
draft = generateEmail()
approval = await askHuman("Send this email?")
if (approval) {
  sendEmail(draft)
}
The Graph Solution
LangGraph treats your agent as a state machine with:
- Nodes: Functions that process state
- Edges: Connections between nodes
- Conditional Edges: Branching logic
- State: A shared object passed between nodes
      ┌─────────────┐
      │    Start    │
      └──────┬──────┘
             │
      ┌──────▼──────┐
      │  Research   │
      └──────┬──────┘
             │
      ┌──────▼──────┐
   ┌──┤  Evaluate   │
   │  └──────┬──────┘
   │         │ (good result)
   │  ┌──────▼──────┐
   │  │    Draft    │
   │  └──────┬──────┘
   │         │
   │  ┌──────▼──────┐
   └─►│Human Review ├──┐
      └──────┬──────┘  │
             │ (approved)
      ┌──────▼──────┐  │ (rejected)
      │    Send     │  │
      └──────┬──────┘  │
             │         │
      ┌──────▼──────┐  │
      │     End     │◄─┘
      └─────────────┘
3.2 Introduction to LangGraph
Installation
npm install @langchain/langgraph @langchain/core @langchain/openai
Core Concepts
1. State
State is a TypeScript object that flows through the graph:
interface AgentState {
  messages: BaseMessage[]
  nextAction?: string
  toolResults?: any[]
  requiresApproval?: boolean
}
2. Nodes
Nodes are functions that read and modify state:
async function researchNode(state: AgentState): Promise<Partial<AgentState>> {
  const query = state.messages[state.messages.length - 1].content
  const results = await searchWeb(query)
  return {
    toolResults: results,
    nextAction: 'analyze'
  }
}
3. Edges
Edges define the flow:
// Fixed edge: always go from A to B
graph.addEdge('research', 'analyze')
// Conditional edge: choose next node based on state
graph.addConditionalEdges(
  'evaluate',
  (state) => state.confidence > 0.8 ? 'proceed' : 'retry'
)
A Simple Graph Example
import { StateGraph } from '@langchain/langgraph'
import { BaseMessage, HumanMessage, AIMessage } from '@langchain/core/messages'
// Define state shape
interface SimpleState {
  messages: BaseMessage[]
  count: number
}

// Create nodes
async function stepOne(state: SimpleState): Promise<Partial<SimpleState>> {
  console.log('Step 1: Processing...')
  return {
    count: state.count + 1,
    messages: [...state.messages, new AIMessage('Completed step 1')]
  }
}

async function stepTwo(state: SimpleState): Promise<Partial<SimpleState>> {
  console.log('Step 2: Finalizing...')
  return {
    count: state.count + 1,
    messages: [...state.messages, new AIMessage('Completed step 2')]
  }
}

// Build the graph. Every key in the state needs a channel definition;
// `value: null` creates a last-value channel, which fits here because
// each node returns the full updated value for every key it touches.
const workflow = new StateGraph<SimpleState>({
  channels: {
    messages: { value: null },
    count: { value: null }
  }
})
workflow.addNode('step1', stepOne)
workflow.addNode('step2', stepTwo)
workflow.addEdge('__start__', 'step1')
workflow.addEdge('step1', 'step2')
workflow.addEdge('step2', '__end__')

const app = workflow.compile()

// Run it
const result = await app.invoke({
  messages: [new HumanMessage('Start the process')],
  count: 0
})

console.log('Final count:', result.count)
console.log('Messages:', result.messages)
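A note on the channels option: it tells LangGraph how to merge each node's partial return into shared state. Besides last-value channels, a channel can carry a reducer so nodes return only their additions. The sketch below illustrates the idea; `appendReducer` and `channelsSketch` are illustrative names, and the exact option shape varies across `@langchain/langgraph` versions, so check the docs for the version you have installed.

```typescript
// A channel reducer merges a node's partial return into the existing
// channel value. With a concat reducer, nodes could return ONLY their
// new messages instead of spreading the whole history themselves:
const appendReducer = <T>(current: T[], update: T[]): T[] =>
  current.concat(update)

// Hypothetical channels spec using the reducer (keys follow SimpleState;
// string[] stands in for BaseMessage[] to keep the sketch self-contained):
const channelsSketch = {
  messages: { value: appendReducer, default: () => [] as string[] },
  count: { value: null } // last-value: each write replaces the previous one
}

console.log(appendReducer(['step 1 done'], ['step 2 done']))
// → ['step 1 done', 'step 2 done']
```

With a reducer in place, `stepOne` could return `{ messages: [new AIMessage('Completed step 1')] }` and LangGraph would append it for you.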
3.3 Building a ReACT Agent
The ReACT Pattern
ReACT = Reason + Act + Observe
1. REASON: Think about what to do
2. ACT: Execute a tool or make a decision
3. OBSERVE: Review the result
4. REPEAT until task complete
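Stripped of the framework, the loop above is easy to state in plain TypeScript. In this sketch, `reasonOnce` and `runTool` are stand-in stubs for the LLM call and tool execution:

```typescript
// Plain-TypeScript skeleton of the ReACT loop.
type Thought = { toolToCall?: string; answer?: string }

function reasonOnce(observations: string[]): Thought {
  // Stub "reasoning": request a tool until we have one observation,
  // then produce a final answer
  return observations.length === 0
    ? { toolToCall: 'getStockPrice' }
    : { answer: `Done after ${observations.length} observation(s)` }
}

function runTool(name: string): string {
  return `${name} result` // Stub tool execution
}

function reactLoop(maxSteps = 5): string {
  const observations: string[] = []
  for (let step = 0; step < maxSteps; step++) {
    const thought = reasonOnce(observations)        // REASON
    if (thought.answer) return thought.answer       // task complete
    observations.push(runTool(thought.toolToCall!)) // ACT + OBSERVE
  }
  return 'Gave up after max steps'
}

console.log(reactLoop()) // → 'Done after 1 observation(s)'
```

The LangGraph implementation below replaces the stubs with a real LLM call and tool dispatch, and turns the `for` loop into edges between nodes.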
Implementation with LangGraph
import { StateGraph } from '@langchain/langgraph'
import { ChatOpenAI } from '@langchain/openai'
import { BaseMessage, HumanMessage, AIMessage, ToolMessage } from '@langchain/core/messages'
interface ReactState {
  messages: BaseMessage[]
  nextStep: 'reason' | 'act' | 'respond' | 'end'
  toolCalls: any[]
}
// Node 1: Reasoning
async function reasonNode(state: ReactState): Promise<Partial<ReactState>> {
  // Bind your tool definitions (from Module 2) so the model can emit
  // tool calls; without bindTools, response.tool_calls stays empty
  const llm = new ChatOpenAI({
    modelName: 'gpt-4-turbo',
    temperature: 0
  }).bindTools(tools)

  const response = await llm.invoke(state.messages)

  // Check if the LLM wants to use a tool
  if (response.tool_calls && response.tool_calls.length > 0) {
    return {
      messages: [...state.messages, response],
      toolCalls: response.tool_calls,
      nextStep: 'act'
    }
  }

  // No tool needed, just respond
  return {
    messages: [...state.messages, response],
    nextStep: 'respond'
  }
}
// Node 2: Action (execute tools)
async function actNode(state: ReactState): Promise<Partial<ReactState>> {
  const toolMessages: ToolMessage[] = []

  for (const toolCall of state.toolCalls) {
    // Execute the tool
    const result = await executeToolByName(toolCall.name, toolCall.args)
    toolMessages.push(
      new ToolMessage({
        content: JSON.stringify(result),
        tool_call_id: toolCall.id
      })
    )
  }

  return {
    messages: [...state.messages, ...toolMessages],
    nextStep: 'reason' // Loop back to reasoning
  }
}
// Node 3: Respond
async function respondNode(state: ReactState): Promise<Partial<ReactState>> {
  // The final AI message is already in state; just signal completion
  return {
    nextStep: 'end'
  }
}
// Routing function
function routeNext(state: ReactState): string {
  switch (state.nextStep) {
    case 'reason': return 'reason'
    case 'act': return 'act'
    case 'respond': return 'respond'
    case 'end': return '__end__'
    default: return '__end__'
  }
}
// Build the graph
const workflow = new StateGraph<ReactState>({
  channels: {
    messages: { value: null }, // one last-value channel per state key
    nextStep: { value: null },
    toolCalls: { value: null }
  }
})
workflow.addNode('reason', reasonNode)
workflow.addNode('act', actNode)
workflow.addNode('respond', respondNode)
workflow.setEntryPoint('reason')
workflow.addConditionalEdges('reason', routeNext)
workflow.addConditionalEdges('act', routeNext)
workflow.addEdge('respond', '__end__')
const app = workflow.compile()
Using the ReACT Agent
const result = await app.invoke({
  messages: [
    new HumanMessage('What is the current price of Apple stock and is it a good buy?')
  ],
  nextStep: 'reason',
  toolCalls: []
})
console.log(result.messages)
What happens:
1. REASON: "I need to get Apple's current stock price"
→ Calls getStockPrice tool
2. ACT: Executes getStockPrice('AAPL')
→ Returns: { price: 178.25, change: +2.3% }
3. REASON: "I have the price. Now I need to analyze if it's a good buy"
→ Calls analyzeBuySignal tool
4. ACT: Executes analyzeBuySignal('AAPL', 178.25)
→ Returns: { signal: 'HOLD', reasoning: '...' }
5. RESPOND: "Apple is trading at $178.25, up 2.3% today.
Based on technical indicators, I'd suggest holding..."
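The actNode above calls `executeToolByName`, which the snippets leave undefined. A minimal sketch using a name-keyed registry follows; the stub tools and their return shapes here are placeholders for illustration, not real implementations or market data.

```typescript
// Hypothetical tool registry: maps a tool name to its implementation.
type ToolFn = (args: Record<string, unknown>) => Promise<unknown>

const toolRegistry: Record<string, ToolFn> = {
  // Stub implementations for illustration only
  getStockPrice: async (args) => ({ symbol: args.symbol, price: 178.25 }),
  analyzeBuySignal: async (args) => ({ signal: 'HOLD' })
}

async function executeToolByName(
  name: string,
  args: Record<string, unknown>
): Promise<unknown> {
  const tool = toolRegistry[name]
  if (!tool) {
    // Return unknown tools as data the LLM can see, rather than crashing
    return { error: `Unknown tool: ${name}` }
  }
  try {
    return await tool(args)
  } catch (err) {
    // Same idea for runtime failures: feed the error back as an observation
    return { error: `Tool ${name} failed: ${String(err)}` }
  }
}
```

Returning errors as values (instead of throwing) lets the reason node see the failure and try something else, which is the error-recovery behavior this module opened with.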
3.4 Human-in-the-Loop
Why Human Approval Matters
Some actions are too sensitive to fully automate:
- Sending emails
- Making purchases
- Posting to social media
- Deleting data
You need a pause point where the human reviews and approves.
Implementation Pattern
interface ApprovalState {
  messages: BaseMessage[]
  pendingAction?: {
    type: string
    details: any
  }
  approved?: boolean
}
// Node: Draft Email
async function draftEmailNode(state: ApprovalState): Promise<Partial<ApprovalState>> {
  const llm = new ChatOpenAI({ modelName: 'gpt-4-turbo' })
  const draft = await llm.invoke([
    new HumanMessage('Draft an email to the CEO summarizing our Q4 results')
  ])

  return {
    messages: [...state.messages, draft],
    pendingAction: {
      type: 'send_email',
      details: {
        to: 'ceo@company.com',
        subject: 'Q4 Results Summary',
        body: draft.content
      }
    }
  }
}
// Node: Human Review (interrupts execution)
async function humanReviewNode(state: ApprovalState): Promise<Partial<ApprovalState>> {
  console.log('\n==== PENDING ACTION ====')
  console.log('Type:', state.pendingAction?.type)
  console.log('Details:', state.pendingAction?.details)
  console.log('=======================\n')

  // In a real app, this would pause and wait for user input via UI
  const approved = await promptUser('Approve this action? (yes/no): ')

  return {
    approved: approved === 'yes'
  }
}
// Node: Execute or Cancel
async function executeNode(state: ApprovalState): Promise<Partial<ApprovalState>> {
  if (state.approved) {
    // Execute the action
    await sendEmail(state.pendingAction!.details)
    return {
      messages: [...state.messages, new AIMessage('Email sent successfully')]
    }
  } else {
    return {
      messages: [...state.messages, new AIMessage('Action cancelled by user')]
    }
  }
}
// Build graph with approval checkpoint
const workflow = new StateGraph<ApprovalState>({
  channels: {
    messages: { value: null }, // one last-value channel per state key
    pendingAction: { value: null },
    approved: { value: null }
  }
})
workflow.addNode('draft', draftEmailNode)
workflow.addNode('review', humanReviewNode)
workflow.addNode('execute', executeNode)
workflow.addEdge('__start__', 'draft')
workflow.addEdge('draft', 'review')
workflow.addEdge('review', 'execute')
workflow.addEdge('execute', '__end__')
const app = workflow.compile()
With Interrupt Pattern (Advanced)
LangGraph supports checkpoints that let you pause execution:
import { MemorySaver } from '@langchain/langgraph'
const checkpointer = new MemorySaver()
const app = workflow.compile({
  checkpointer,
  interruptBefore: ['execute'] // Pause before this node
})
// First run: executes up to 'execute' node
const result1 = await app.invoke(
  { messages: [new HumanMessage('Send summary email')] },
  { configurable: { thread_id: '123' } }
)
// Show pending action to user
console.log('Pending:', result1.pendingAction)
// User approves: write the approval into the checkpointed state,
// then resume by invoking with null input (passing a fresh input
// would start a new run instead of resuming this one)
await app.updateState(
  { configurable: { thread_id: '123' } },
  { approved: true }
)
const result2 = await app.invoke(
  null,
  { configurable: { thread_id: '123' } }
)
console.log('Final:', result2.messages)
Complete Example: Research Agent with Approval
import { StateGraph } from '@langchain/langgraph'
import { ChatOpenAI } from '@langchain/openai'
import { BaseMessage, HumanMessage, AIMessage } from '@langchain/core/messages'
interface ResearchState {
  messages: BaseMessage[]
  query: string
  researchResults?: string
  draft?: string
  approved?: boolean
}
// Node 1: Research
async function researchNode(state: ResearchState) {
  console.log('Researching:', state.query)
  const results = await searchWeb(state.query)
  return {
    researchResults: results,
    messages: [...state.messages, new AIMessage(`Found ${results.length} results`)]
  }
}
// Node 2: Draft
async function draftNode(state: ResearchState) {
  const llm = new ChatOpenAI({ modelName: 'gpt-4-turbo' })
  const draft = await llm.invoke([
    new HumanMessage(`Summarize these research results: ${state.researchResults}`)
  ])
  return {
    draft: draft.content,
    messages: [...state.messages, draft]
  }
}
// Node 3: Review
async function reviewNode(state: ResearchState) {
  console.log('\n=== DRAFT FOR REVIEW ===')
  console.log(state.draft)
  console.log('========================\n')
  const approval = await promptUser('Approve? (yes/no): ')
  return { approved: approval === 'yes' }
}
// Node 4: Send
async function sendNode(state: ResearchState) {
  if (state.approved) {
    await sendEmail(state.draft!)
    return { messages: [...state.messages, new AIMessage('Email sent!')] }
  }
  return { messages: [...state.messages, new AIMessage('Cancelled')] }
}
const workflow = new StateGraph<ResearchState>({
  channels: {
    messages: { value: null }, // one last-value channel per state key
    query: { value: null },
    researchResults: { value: null },
    draft: { value: null },
    approved: { value: null }
  }
})
workflow.addNode('research', researchNode)
workflow.addNode('draft', draftNode)
workflow.addNode('review', reviewNode)
workflow.addNode('send', sendNode)
workflow.addEdge('__start__', 'research')
workflow.addEdge('research', 'draft')
workflow.addEdge('draft', 'review')
workflow.addEdge('review', 'send')
workflow.addEdge('send', '__end__')
const app = workflow.compile()
// Run
await app.invoke({
  messages: [new HumanMessage('Research Tesla Q4 earnings')],
  query: 'Tesla Q4 2024 earnings report'
})
Visualizing Your Graph
LangGraph can generate visual diagrams:
const diagram = app.getGraph().drawMermaid()
console.log(diagram)
Outputs a Mermaid diagram you can paste into tools like Mermaid Live Editor.
Key Takeaways
- LangGraph enables stateful, complex agent workflows
- Graphs consist of nodes (functions), edges (flow), and state (data)
- ReACT pattern (Reason → Act → Observe) is a fundamental agent architecture
- Human-in-the-loop adds critical oversight for sensitive actions
- Conditional edges allow dynamic branching based on state
Exercise: Build a Conditional Workflow
Create an agent that:
- Checks a stock price
- If price > $100, draft a "BUY" alert
- If price < $50, draft a "SELL" alert
- Otherwise, do nothing
- Require human approval before sending any alert
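As a starting point, the branching logic is just a routing function for a conditional edge. The sketch below uses the thresholds from the exercise; the node names (`draftBuyAlert`, `draftSellAlert`) are suggestions, and wiring the function into a graph with an approval node is left to you.

```typescript
// State shape for the exercise (hypothetical)
interface AlertState {
  price: number
  alertType?: 'BUY' | 'SELL'
}

// Routing function for the conditional edge: returns the next node name
function routeAlert(state: AlertState): string {
  if (state.price > 100) return 'draftBuyAlert'
  if (state.price < 50) return 'draftSellAlert'
  return '__end__' // neither threshold hit: do nothing
}

console.log(routeAlert({ price: 150 })) // → 'draftBuyAlert'
```

Both draft nodes should feed into a single review node, mirroring the human-in-the-loop pattern from Section 3.4.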
Next up: Module 4, where we add memory and RAG (Retrieval-Augmented Generation) to make agents truly intelligent.

