Module 9: MCP Best Practices
Patterns for Success
You've learned how to configure, use, and build MCP servers. This module distills that knowledge into actionable best practices. These patterns come from real-world experience and will help you build robust, maintainable MCP integrations.
Configuration Best Practices
1. Use descriptive server names
{
"mcpServers": {
"current-project-code": { ... },
"company-wiki": { ... },
"dev-database": { ... }
}
}
Not:
{
"mcpServers": {
"fs1": { ... },
"server2": { ... }
}
}
Claude references these names, and so will you when debugging.
2. Separate concerns
Instead of one filesystem server with everything:
{
"all-files": {
"args": [".", "/docs", "/data", "/config"]
}
}
Use separate servers:
{
"source-code": { "args": ["./src"] },
"documentation": { "args": ["./docs"] },
"config-files": { "args": ["./config"] }
}
This makes permissions clearer and debugging easier.
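Filled out with full entries, that separation might look like the sketch below. The use of the reference filesystem server via npx is an assumption for illustration; substitute whichever servers you actually run.
{
  "mcpServers": {
    "source-code": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./src"]
    },
    "documentation": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./docs"]
    },
    "config-files": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./config"]
    }
  }
}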
3. Document your configuration
Create a companion file explaining your setup:
# MCP Configuration Notes
## Servers
### source-code
- Purpose: Access to project source files
- Scope: ./src directory only
- Used for: Code review, refactoring assistance
### dev-database
- Purpose: Query development database
- Connection: Local PostgreSQL on port 5432
- Credentials: Uses DATABASE_URL env var
- Note: Read-only user, no access to users table
4. Version your configuration
For projects, commit .mcp.json but gitignore sensitive overrides:
# .gitignore
.mcp.local.json
.env
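If your client supports environment-variable references in .mcp.json (Claude Code documents a ${VAR} expansion syntax), you can commit a file like the sketch below and keep the real value in your shell or the gitignored .env. The server name and command here are hypothetical.
{
  "mcpServers": {
    "dev-database": {
      "command": "node",
      "args": ["./servers/dev-database/dist/index.js"],
      "env": {
        "DATABASE_URL": "${DATABASE_URL}"
      }
    }
  }
}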
Tool Design Best Practices
When building custom servers:
1. Write clear descriptions
Claude uses descriptions to decide when to use tools. Be specific:
// Too vague
{
name: "search",
description: "Searches for stuff"
}
// Clear and useful
{
name: "search_codebase",
description: "Search for files and code patterns in the project. Use this to find where functions are defined, locate imports, or find files by name. Returns file paths and matching content."
}
2. Use self-documenting parameter names
// Clear parameters
{
name: "create_issue",
inputSchema: {
properties: {
title: { description: "Issue title (short, descriptive)" },
body: { description: "Detailed description in markdown" },
labels: { description: "Array of label names to apply" },
assignees: { description: "GitHub usernames to assign" }
}
}
}
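For context, this is roughly where those descriptions and parameter docs live when you register tools with the TypeScript SDK. The server name, version, and choice of required fields below are illustrative assumptions.
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { ListToolsRequestSchema } from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "example-server", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

// Claude sees exactly these names, descriptions, and parameter docs
// when deciding whether and how to call a tool.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "create_issue",
      description: "Create a GitHub issue in the configured repository.",
      inputSchema: {
        type: "object",
        properties: {
          title: { type: "string", description: "Issue title (short, descriptive)" },
          body: { type: "string", description: "Detailed description in markdown" },
          labels: { type: "array", items: { type: "string" }, description: "Label names to apply" },
          assignees: { type: "array", items: { type: "string" }, description: "GitHub usernames to assign" }
        },
        required: ["title"]
      }
    }
  ]
}));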
3. Return structured data
// Structured response
return {
content: [{
type: "text",
text: JSON.stringify({
status: "success",
filesModified: 3,
changes: [
{ file: "src/index.ts", action: "updated" },
{ file: "src/utils.ts", action: "created" },
{ file: "tests/index.test.ts", action: "updated" }
]
}, null, 2)
}]
};
This helps Claude understand results and communicate them to users.
4. Include error context
// Helpful error
return {
content: [{
type: "text",
text: JSON.stringify({
error: "Database connection failed",
details: "Could not connect to PostgreSQL at localhost:5432",
suggestions: [
"Check if PostgreSQL is running",
"Verify DATABASE_URL environment variable",
"Ensure port 5432 is not blocked"
]
})
}],
isError: true
};
Resource Design Best Practices
1. Use meaningful URIs
// Good URI scheme
"config://app/database"
"logs://app/2024-01-15"
"metrics://dashboard/daily"
// Bad (unclear)
"resource://1"
"data://x"
2. Include metadata
{
uri: "logs://app/error",
name: "Application Error Logs",
description: "Last 1000 error log entries from the application",
mimeType: "text/plain"
}
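As a sketch of where that metadata goes, a resources/list handler in the TypeScript SDK might return it like this (assuming a server instance set up as in the earlier modules):
import { ListResourcesRequestSchema } from "@modelcontextprotocol/sdk/types.js";

server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [
    {
      uri: "logs://app/error",
      name: "Application Error Logs",
      description: "Last 1000 error log entries from the application",
      mimeType: "text/plain"
    }
  ]
}));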
3. Handle large resources
// Paginated resource reading
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
const uri = new URL(request.params.uri);
const page = parseInt(uri.searchParams.get("page") || "1");
const limit = 100;
const data = await fetchData(page, limit);
return {
contents: [{
uri: request.params.uri,
mimeType: "application/json",
text: JSON.stringify({
data,
pagination: {
page,
limit,
hasMore: data.length === limit
}
})
}]
};
});
Performance Best Practices
1. Lazy loading
Don't load everything at startup:
// Bad: Load everything upfront
const server = new Server(...);
const allData = await loadEntireDatabase(); // Slow startup
// Good: Load on demand
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const data = await loadDataForRequest(request); // Fast startup
  // ...build and return the tool result from `data`
});
2. Connection pooling
For database servers:
import { Pool } from "pg";
// Create pool once
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
// Reuse for requests
async function executeQuery(sql: string) {
const client = await pool.connect();
try {
return await client.query(sql);
} finally {
client.release();
}
}
3. Caching
const cache = new Map<string, { data: any; expires: number }>();
async function getCachedData(key: string): Promise<any> {
const cached = cache.get(key);
if (cached && cached.expires > Date.now()) {
return cached.data;
}
const data = await fetchFreshData(key);
cache.set(key, {
data,
expires: Date.now() + 5 * 60 * 1000, // 5 minutes
});
return data;
}
4. Limit response sizes
const MAX_RESPONSE_SIZE = 100 * 1024; // 100KB
function truncateIfNeeded(text: string): string {
if (text.length > MAX_RESPONSE_SIZE) {
return text.slice(0, MAX_RESPONSE_SIZE) +
"\n\n[Response truncated due to size limit]";
}
return text;
}
Error Handling Best Practices
1. Graceful degradation
async function handleToolCall(request: CallToolRequest) {
try {
const result = await primaryMethod();
return success(result);
} catch (primaryError) {
try {
const fallback = await fallbackMethod();
return success(fallback, { note: "Used fallback method" });
} catch (fallbackError) {
return error("Both primary and fallback methods failed");
}
}
}
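The success and error helpers above are not part of the MCP SDK; a minimal sketch of what they might look like:
// Hypothetical helpers that wrap results in the MCP tool-result shape.
function success(data: unknown, extra: Record<string, unknown> = {}) {
  return {
    content: [{
      type: "text",
      text: JSON.stringify({ status: "success", data, ...extra }, null, 2)
    }]
  };
}

function error(message: string) {
  return {
    content: [{ type: "text", text: JSON.stringify({ error: message }) }],
    isError: true
  };
}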
2. Retry with backoff
async function fetchWithRetry(url: string, maxRetries = 3): Promise<Response> {
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
return await fetch(url);
} catch (error) {
if (attempt === maxRetries) throw error;
const delay = Math.pow(2, attempt) * 1000;
await new Promise((r) => setTimeout(r, delay));
}
}
throw new Error("Max retries exceeded");
}
3. Timeout long operations
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
return Promise.race([
promise,
new Promise<T>((_, reject) =>
setTimeout(() => reject(new Error("Operation timed out")), ms)
),
]);
}
// Usage
const result = await withTimeout(longOperation(), 30000);
Maintainability Best Practices
1. Modular code structure
src/
├── index.ts # Server entry point
├── tools/
│ ├── index.ts # Tool registration
│ ├── search.ts # Search tool implementation
│ └── create.ts # Create tool implementation
├── resources/
│ ├── index.ts # Resource registration
│ └── config.ts # Config resource implementation
└── utils/
├── database.ts # Database utilities
└── validation.ts # Input validation
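One hedged way to wire that structure together: each tool module exports its definition and handler, and tools/index.ts aggregates them so the entry point can answer tools/list and tools/call without knowing implementation details. The export names below are illustrative.
// tools/index.ts (sketch)
import { searchToolDefinition, handleSearch } from "./search.js";
import { createToolDefinition, handleCreate } from "./create.js";

// Everything the entry point needs: tool metadata for listing,
// and a lookup table of handlers keyed by tool name for dispatch.
export const toolDefinitions = [searchToolDefinition, createToolDefinition];

export const toolHandlers: Record<string, (args: unknown) => Promise<unknown>> = {
  [searchToolDefinition.name]: handleSearch,
  [createToolDefinition.name]: handleCreate
};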
2. TypeScript for type safety
interface ToolArguments {
query: string;
limit?: number;
format?: "json" | "text";
}
function validateArgs(args: unknown): ToolArguments {
  if (!args || typeof args !== "object") {
    throw new Error("Invalid arguments: expected an object");
  }
  // Runtime checks that narrow each field to the ToolArguments types
  const { query, limit, format } = args as Record<string, unknown>;
  if (typeof query !== "string") throw new Error("'query' must be a string");
  if (limit !== undefined && typeof limit !== "number") throw new Error("'limit' must be a number");
  if (format !== undefined && format !== "json" && format !== "text") throw new Error("'format' must be 'json' or 'text'");
  return { query, limit, format };
}
3. Automated testing
// tests/tools.test.ts
describe("search tool", () => {
it("returns matching files", async () => {
const result = await searchTool.execute({ query: "function" });
expect(result.content).toBeDefined();
expect(result.isError).toBeFalsy();
});
it("handles empty results", async () => {
const result = await searchTool.execute({ query: "nonexistent123" });
expect(result.content[0].text).toContain("No results");
});
});
4. Version your servers
const server = new Server(
{
name: "my-server",
version: "1.2.3", // Semantic versioning
},
{ ... }
);
Include a CHANGELOG:
# Changelog
## 1.2.3
- Fixed timeout handling in database queries
- Added retry logic for network errors
## 1.2.2
- Improved error messages for permission issues
Operational Best Practices
1. Health checks
Add a simple health check tool:
{
name: "health_check",
description: "Check if the server is functioning correctly",
inputSchema: { type: "object", properties: {} }
}
// Handler
if (name === "health_check") {
const dbConnected = await testDatabaseConnection();
const apiReachable = await testApiConnection();
return {
content: [{
type: "text",
text: JSON.stringify({
status: dbConnected && apiReachable ? "healthy" : "degraded",
checks: {
database: dbConnected ? "ok" : "failed",
api: apiReachable ? "ok" : "failed"
},
timestamp: new Date().toISOString()
})
}]
};
}
2. Graceful shutdown
process.on("SIGTERM", async () => {
console.error("Received SIGTERM, shutting down...");
await pool.end(); // Close database connections
await server.close(); // Close MCP server
process.exit(0);
});
process.on("SIGINT", async () => {
console.error("Received SIGINT, shutting down...");
await pool.end();
await server.close();
process.exit(0);
});
3. Monitor resource usage
setInterval(() => {
const usage = process.memoryUsage();
console.error("Memory usage:", {
heapUsed: Math.round(usage.heapUsed / 1024 / 1024) + "MB",
rss: Math.round(usage.rss / 1024 / 1024) + "MB",
});
}, 60000); // Every minute
Summary Checklist
Configuration:
- Descriptive server names
- Separate servers for separate concerns
- Configuration documented
- Sensitive data in environment variables
Tool Design:
- Clear, detailed descriptions
- Well-documented parameters
- Structured return data
- Helpful error messages
Performance:
- Lazy loading where possible
- Connection pooling for databases
- Caching for repeated requests
- Response size limits
Error Handling:
- Graceful degradation
- Retry logic with backoff
- Timeouts for long operations
Maintenance:
- Modular code structure
- TypeScript for type safety
- Automated tests
- Semantic versioning
Operations:
- Health check endpoints
- Graceful shutdown handling
- Resource monitoring
Key Takeaways
- Clarity is key - Clear names, descriptions, and documentation
- Separate concerns - Modular servers and code structure
- Handle failures - Errors, timeouts, and retries
- Performance matters - Caching, pooling, lazy loading
- Plan for maintenance - Testing, versioning, logging
- Think operationally - Health checks, monitoring, graceful shutdown
Looking Ahead
Best practices are guidelines, not rules. Apply them where they make sense for your situation. In the final module, we'll explore real-world use cases that bring together everything you've learned.
Next up: Module 10 - Real-World MCP Use Cases

