Integrating External Tools
External tools extend what prompt chains can do beyond text generation. This lesson covers how to integrate APIs, databases, and other tools into your workflows.
Why Tools in Chains?
Pure LLM chains are limited to:
- Generating text
- Analyzing provided text
- Transforming text formats
With tools, chains can:
- Fetch real-time data
- Execute code
- Query databases
- Call external APIs
- Interact with systems
Tool Integration Architecture
┌────────────────────────────────────────────────────────────────┐
│                           CHAIN STEP                           │
├────────────────────────────────────────────────────────────────┤
│                                                                │
│  Input → [LLM decides tool use] → Tool Call → [Process Result] │
│               │                                   │            │
│               ▼                                   ▼            │
│          ┌─────────┐                       ┌─────────────┐     │
│          │ No tool │                       │ Tool Result │     │
│          │ needed  │                       │ Integration │     │
│          └────┬────┘                       └──────┬──────┘     │
│               │                                   │            │
│               └────────────────┬──────────────────┘            │
│                                ▼                               │
│                             Output                             │
└────────────────────────────────────────────────────────────────┘
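The routing in the diagram boils down to one dispatch step: if the model's response contains a tool call, run the matching tool and return its result for integration; otherwise pass the model's text straight through. A minimal sketch, assuming the LLM client returns a `toolCall` field (name and arguments) when the model chooses a tool, as in the examples later in this lesson:

```javascript
// Dispatch one chain step's decision: run the requested tool,
// or pass the model's text through when no tool was needed.
async function dispatchToolCall(decision, tools) {
  if (!decision.toolCall) {
    return { type: 'text', output: decision.content };
  }
  const tool = tools[decision.toolCall.name];
  if (!tool) {
    throw new Error(`Unknown tool: ${decision.toolCall.name}`);
  }
  const result = await tool.execute(decision.toolCall.arguments);
  return { type: 'tool_result', output: result };
}
```

Given a tools map like the ones below, `dispatchToolCall(decision, tools)` yields either the model's direct answer or the executed tool's result, ready for the integration step.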
Types of Tools
Data Retrieval Tools
const tools = {
  searchWeb: {
    description: "Search the web for current information",
    parameters: { query: "string" },
    execute: async ({ query }) => {
      const results = await webSearch(query);
      return results.map(r => ({ title: r.title, snippet: r.snippet }));
    }
  },
  queryDatabase: {
    description: "Query the customer database",
    parameters: { customerId: "string" },
    execute: async ({ customerId }) => {
      return await db.customers.findById(customerId);
    }
  },
  fetchUrl: {
    description: "Fetch content from a URL",
    parameters: { url: "string" },
    execute: async ({ url }) => {
      const response = await fetch(url);
      return await response.text();
    }
  }
};
Computation Tools
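Computation tools run deterministic logic locally rather than calling an external service, which makes their results exact and reproducible. A minimal sketch, with illustrative tool names:

```javascript
const computationTools = {
  calculateStats: {
    description: "Compute summary statistics for a list of numbers",
    parameters: { values: "number[]" },
    execute: async ({ values }) => {
      const sum = values.reduce((a, b) => a + b, 0);
      const mean = sum / values.length;
      const sorted = [...values].sort((a, b) => a - b);
      const mid = Math.floor(sorted.length / 2);
      const median = sorted.length % 2
        ? sorted[mid]
        : (sorted[mid - 1] + sorted[mid]) / 2;
      return { count: values.length, sum, mean, median };
    }
  },
  convertUnits: {
    description: "Convert a temperature between celsius and fahrenheit",
    parameters: { value: "number", to: "celsius | fahrenheit" },
    execute: async ({ value, to }) =>
      to === 'celsius' ? (value - 32) * 5 / 9 : value * 9 / 5 + 32
  }
};
```

Because these tools never touch the network, they are also the safest place to start when testing a chain end to end.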
Action Tools
const actionTools = {
  sendEmail: {
    description: "Send an email to a recipient",
    parameters: {
      to: "string",
      subject: "string",
      body: "string"
    },
    execute: async ({ to, subject, body }) => {
      await emailService.send({ to, subject, body });
      return { success: true, messageId: generateId() };
    }
  },
  createTicket: {
    description: "Create a support ticket",
    parameters: {
      title: "string",
      description: "string",
      priority: "low | medium | high"
    },
    execute: async (params) => {
      const ticket = await ticketSystem.create(params);
      return { ticketId: ticket.id, status: 'created' };
    }
  }
};
Tool Definition Format
Standard Tool Schema
const toolDefinition = {
  name: "get_weather",
  description: "Get current weather for a location. Use this when the user asks about weather conditions.",
  parameters: {
    type: "object",
    properties: {
      location: {
        type: "string",
        description: "City and state, e.g., 'San Francisco, CA'"
      },
      units: {
        type: "string",
        enum: ["celsius", "fahrenheit"],
        default: "fahrenheit"
      }
    },
    required: ["location"]
  }
};
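Models occasionally emit arguments that don't match the schema, so it's worth validating a call before executing it. A minimal checker covering only the schema features shown above (`required`, `type`, `enum`, `default`) — not a full JSON Schema validator:

```javascript
// Validate tool-call arguments against a tool's parameter schema,
// filling in declared defaults for omitted optional fields.
function validateArgs(schema, args) {
  const errors = [];
  const out = { ...args };
  for (const field of schema.required ?? []) {
    if (out[field] === undefined) errors.push(`missing required field: ${field}`);
  }
  for (const [name, spec] of Object.entries(schema.properties)) {
    if (out[name] === undefined) {
      if (spec.default !== undefined) out[name] = spec.default;
      continue;
    }
    if (spec.type && typeof out[name] !== spec.type) {
      errors.push(`${name}: expected ${spec.type}`);
    }
    if (spec.enum && !spec.enum.includes(out[name])) {
      errors.push(`${name}: must be one of ${spec.enum.join(', ')}`);
    }
  }
  return { valid: errors.length === 0, errors, args: out };
}
```

Rejecting a malformed call with `errors` lets you ask the model to retry with corrected arguments instead of executing garbage.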
Tool with Examples
const toolWithExamples = {
  name: "search_products",
  description: "Search the product catalog",
  parameters: {
    type: "object",
    properties: {
      query: { type: "string" },
      category: { type: "string" },
      maxPrice: { type: "number" }
    }
  },
  examples: [
    {
      input: "Find me a laptop under $1000",
      call: { query: "laptop", maxPrice: 1000 }
    },
    {
      input: "Show running shoes",
      call: { query: "running shoes", category: "footwear" }
    }
  ]
};
Tool Execution Patterns
Sequential Tool Use
async function sequentialToolChain(input) {
  // Step 1: LLM decides to search
  const searchDecision = await llm.chat({
    messages: [{ role: 'user', content: input }],
    tools: [tools.searchWeb]
  });
  // Execute the search
  const searchResults = await tools.searchWeb.execute(
    searchDecision.toolCall.arguments
  );
  // Step 2: LLM processes results
  const analysis = await llm.chat({
    messages: [
      { role: 'user', content: input },
      { role: 'assistant', content: searchDecision.content, toolCall: searchDecision.toolCall },
      { role: 'tool', content: JSON.stringify(searchResults) }
    ]
  });
  return analysis;
}
Parallel Tool Use
async function parallelToolChain(input) {
  // LLM decides multiple tools are needed
  const decision = await llm.chat({
    messages: [{ role: 'user', content: input }],
    tools: [tools.getWeather, tools.getEvents, tools.getNews]
  });
  // Execute all tool calls in parallel — safe because the
  // calls are independent of one another
  const results = await Promise.all(
    decision.toolCalls.map(call =>
      tools[call.name].execute(call.arguments)
    )
  );
  // Combine results into a final response
  const response = await llm.chat({
    messages: [
      { role: 'user', content: input },
      { role: 'assistant', content: decision.content, toolCalls: decision.toolCalls },
      { role: 'tool', content: JSON.stringify(results) }
    ]
  });
  return response;
}
Building Tool-Aware Prompts
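A tool-aware prompt lists each tool's name, description, and parameters so the model can route requests correctly. A sketch that renders a tools map like the one defined earlier into a system prompt (the instruction wording is illustrative):

```javascript
// Render tool names, descriptions, and parameter shapes into a
// system prompt so the model knows what it can call and when.
function buildToolAwarePrompt(tools) {
  const toolList = Object.entries(tools)
    .map(([name, t]) =>
      `- ${name}: ${t.description} (parameters: ${JSON.stringify(t.parameters)})`)
    .join('\n');
  return [
    'You have access to the following tools:',
    toolList,
    'Use a tool only when the request needs live data or an action.',
    'If no tool fits, answer directly from your own knowledge.'
  ].join('\n');
}
```

Generating this text from the same tool definitions you execute against keeps the prompt and the runtime from drifting apart.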
Error Handling for Tools
Tool Execution Errors
async function safeToolExecution(tool, args) {
  try {
    const result = await tool.execute(args);
    return { success: true, result };
  } catch (error) {
    return {
      success: false,
      error: error.message,
      recoverable: isRecoverableError(error)
    };
  }
}
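`isRecoverableError` is left undefined above; how you classify failures depends on your tools, but a common heuristic treats timeouts, rate limits (429), and server errors (5xx) as transient, and everything else — bad arguments, 4xx responses — as permanent. One possible sketch:

```javascript
// Classify a tool error as recoverable (worth retrying or falling
// back gracefully) vs. permanent (retrying won't help).
function isRecoverableError(error) {
  if (error.code === 'ETIMEDOUT' || error.code === 'ECONNRESET') return true;
  if (error.status === 429) return true;                        // rate limited
  if (error.status >= 500 && error.status < 600) return true;   // server error
  return false;                                                 // e.g. 4xx, bad arguments
}
```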
Handling Tool Failures in Chains
async function chainWithToolRecovery(input) {
  const toolResult = await safeToolExecution(tools.searchWeb, { query: input });
  if (!toolResult.success) {
    // Fallback: ask the LLM to proceed without the tool
    return await llm.chat({
      messages: [{
        role: 'user',
        content: `${input}\n\nNote: Search was unavailable. Please respond based on your knowledge.`
      }]
    });
  }
  // Continue with the tool result
  return await llm.chat({
    messages: [
      { role: 'user', content: input },
      { role: 'tool', content: JSON.stringify(toolResult.result) }
    ]
  });
}
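Before giving up on a tool entirely, it often helps to retry transient failures with exponential backoff. A self-contained sketch; the `isRecoverable` predicate is injected so you can plug in whatever classifier your chain already uses (such as `isRecoverableError`):

```javascript
// Retry a tool with exponential backoff, but only for failures
// the caller's predicate deems recoverable; permanent errors
// (and the final attempt) fail fast so the fallback can run.
async function toolWithRetry(tool, args, opts = {}) {
  const { maxAttempts = 3, baseDelayMs = 200, isRecoverable = () => true } = opts;
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return { success: true, result: await tool.execute(args) };
    } catch (error) {
      lastError = error;
      if (!isRecoverable(error) || attempt === maxAttempts) break;
      // Wait 200ms, 400ms, 800ms, ... before the next attempt
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
  return { success: false, error: lastError.message };
}
```

Dropping this in place of the direct `safeToolExecution` call gives the chain a second chance at flaky tools before the no-tool fallback kicks in.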
Exercise: Design a Tool Integration
Design a tool integration for a workflow of your own: define each tool's name, description, and parameter schema; decide which calls must run sequentially and which can safely run in parallel; and specify a fallback for each tool failure.
Key Takeaways
- Tools extend chains beyond text generation
- Define tools with clear descriptions and parameter schemas
- Handle tool execution errors gracefully
- Use sequential tool calls when order matters
- Use parallel tool calls for independent data fetching
- Include tool availability in prompts for proper routing
- Always have fallbacks for tool failures
Next, we'll explore how to incorporate tool results into chain processing.

