Module 5: The Interface (Next.js & Vercel AI SDK)
The Frontend
Introduction: From CLI to Production UI
Up until now, we've been building agents that run in the terminal. But real applications need:
- Beautiful, responsive interfaces
- Real-time streaming responses
- Rich, interactive components
- Seamless backend integration
This module teaches you how to build production-grade AI interfaces using Next.js and the Vercel AI SDK's UI hooks.
5.1 Streaming UI
Why Streaming Matters
Compare these user experiences:
Without Streaming:
User: "Explain quantum computing"
[30 seconds of loading spinner]
Agent: [Entire response appears at once]
With Streaming:
User: "Explain quantum computing"
Agent: "Quantum computing is..." [text appears word by word in real-time]
Streaming creates a sense of responsiveness and lets users start reading immediately.
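Under the hood, a streaming UI is just a loop that appends each text delta to the message so far and re-renders. Here is a minimal sketch in plain TypeScript (not the AI SDK's internals; `fakeTokenStream` is a stand-in for the model's token stream):

```typescript
// Stand-in for a model's token stream: yields one word at a time.
async function* fakeTokenStream(text: string): AsyncGenerator<string> {
  for (const word of text.split(' ')) {
    yield word + ' '
  }
}

// Accumulate deltas into the full message, notifying the UI on each one.
// In React, onUpdate would call a state setter, triggering a re-render.
async function accumulate(
  stream: AsyncGenerator<string>,
  onUpdate: (soFar: string) => void
): Promise<string> {
  let soFar = ''
  for await (const delta of stream) {
    soFar += delta
    onUpdate(soFar)
  }
  return soFar.trimEnd()
}

const snapshots: string[] = []
const final = await accumulate(
  fakeTokenStream('Quantum computing is fascinating'),
  s => snapshots.push(s)
)
console.log(snapshots.length, final) // prints: 4 Quantum computing is fascinating
```

Each snapshot is one intermediate render; the user sees the answer grow word by word instead of waiting for the whole thing.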
The useChat Hook
The Vercel AI SDK provides useChat, a React hook that handles:
- Message history
- Streaming responses
- Loading states
- Error handling
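Conceptually, the hook keeps an array of messages and merges each streamed delta into the last assistant message. This hypothetical reducer is a simplified model of that state, not the hook's real implementation:

```typescript
// Hypothetical, simplified model of the message state useChat manages.
type Role = 'user' | 'assistant'

interface ChatMessage {
  id: string
  role: Role
  content: string
}

// Merge a streamed delta into the trailing assistant message,
// creating that message on the first delta.
function applyDelta(messages: ChatMessage[], delta: string): ChatMessage[] {
  const last = messages[messages.length - 1]
  if (last && last.role === 'assistant') {
    return [...messages.slice(0, -1), { ...last, content: last.content + delta }]
  }
  return [...messages, { id: String(messages.length), role: 'assistant', content: delta }]
}

let messages: ChatMessage[] = [{ id: '0', role: 'user', content: 'Hi' }]
for (const delta of ['Hello', ', ', 'world']) {
  messages = applyDelta(messages, delta)
}
console.log(messages[1].content) // prints: Hello, world
```

Because state updates are immutable (new array, new object), React can detect the change and re-render the message list on every delta.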
Basic Next.js setup:
npx create-next-app@latest my-ai-app
cd my-ai-app
npm install ai @ai-sdk/openai
Create API route: app/api/chat/route.ts
import { streamText } from 'ai'
import { openai } from '@ai-sdk/openai'
export const runtime = 'edge'
export async function POST(req: Request) {
const { messages } = await req.json()
const result = streamText({
model: openai('gpt-4-turbo'),
messages
})
return result.toDataStreamResponse()
}
Create chat UI: app/page.tsx
'use client'
import { useChat } from 'ai/react'
export default function ChatPage() {
const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat()
return (
<div className="flex flex-col h-screen max-w-2xl mx-auto p-4">
{/* Messages */}
<div className="flex-1 overflow-y-auto space-y-4 mb-4">
{messages.map(message => (
<div
key={message.id}
className={`p-4 rounded-lg ${
message.role === 'user'
? 'bg-blue-100 ml-auto max-w-[80%]'
: 'bg-gray-100 mr-auto max-w-[80%]'
}`}
>
<p className="font-semibold mb-1">
{message.role === 'user' ? 'You' : 'Agent'}
</p>
<p>{message.content}</p>
</div>
))}
{isLoading && (
<div className="bg-gray-100 p-4 rounded-lg mr-auto max-w-[80%]">
<p className="font-semibold mb-1">Agent</p>
<p className="text-gray-500">Thinking...</p>
</div>
)}
</div>
{/* Input */}
<form onSubmit={handleSubmit} className="flex gap-2">
<input
value={input}
onChange={handleInputChange}
placeholder="Ask me anything..."
className="flex-1 p-3 border rounded-lg"
/>
<button
type="submit"
disabled={isLoading}
className="px-6 py-3 bg-blue-600 text-white rounded-lg disabled:opacity-50"
>
Send
</button>
</form>
</div>
)
}
Run it:
npm run dev
You now have a full-stack, streaming AI chat app!
Showing Tool Usage
Display what tools the agent is using:
// API route: app/api/chat/route.ts
import { streamText, tool } from 'ai'
import { openai } from '@ai-sdk/openai'
import { z } from 'zod'
const tools = {
getWeather: tool({
description: 'Get current weather for a location',
parameters: z.object({
city: z.string()
}),
execute: async ({ city }) => {
// Mock data
return {
city,
temperature: 72,
condition: 'Sunny'
}
}
})
}
export async function POST(req: Request) {
const { messages } = await req.json()
const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools,
maxSteps: 5
})
return result.toDataStreamResponse()
}
// Frontend: app/page.tsx
'use client'
import { useChat } from 'ai/react'
export default function ChatPage() {
const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat()
return (
<div className="flex flex-col h-screen max-w-2xl mx-auto p-4">
<div className="flex-1 overflow-y-auto space-y-4 mb-4">
{messages.map(message => (
<div key={message.id}>
{/* User/Assistant messages */}
<div className={`p-4 rounded-lg ${
message.role === 'user' ? 'bg-blue-100' : 'bg-gray-100'
}`}>
<p className="font-semibold">{message.role === 'user' ? 'You' : 'Agent'}</p>
<p>{message.content}</p>
</div>
{/* Tool calls */}
{message.toolInvocations?.map((tool, index) => (
<div key={index} className="mt-2 p-3 bg-yellow-50 rounded border border-yellow-200">
<p className="text-sm font-mono">
Using tool: <strong>{tool.toolName}</strong>
</p>
<pre className="text-xs mt-1 overflow-auto">
{JSON.stringify(tool.args, null, 2)}
</pre>
{tool.result && (
<pre className="text-xs mt-1 overflow-auto text-green-700">
Result: {JSON.stringify(tool.result, null, 2)}
</pre>
)}
</div>
))}
</div>
))}
</div>
<form onSubmit={handleSubmit} className="flex gap-2">
<input
value={input}
onChange={handleInputChange}
placeholder="Ask me anything..."
className="flex-1 p-3 border rounded-lg"
/>
<button type="submit" disabled={isLoading} className="px-6 py-3 bg-blue-600 text-white rounded-lg">
Send
</button>
</form>
</div>
)
}
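The tool-call rendering above boils down to a tool name plus pretty-printed JSON args and result. Pulled out as a plain function (my own helper, not part of the SDK), that logic is easy to unit-test; the `ToolInvocation` shape mirrors what useChat exposes on `message.toolInvocations`:

```typescript
// Hypothetical helper mirroring the JSX above: format a tool
// invocation (name, args, optional result) as display text.
interface ToolInvocation {
  toolName: string
  args: Record<string, unknown>
  result?: unknown
}

function formatToolInvocation(inv: ToolInvocation): string {
  const lines = [`Using tool: ${inv.toolName}`, JSON.stringify(inv.args, null, 2)]
  if (inv.result !== undefined) {
    lines.push(`Result: ${JSON.stringify(inv.result, null, 2)}`)
  }
  return lines.join('\n')
}

const out = formatToolInvocation({
  toolName: 'getWeather',
  args: { city: 'Tokyo' },
  result: { temperature: 72, condition: 'Sunny' }
})
console.log(out)
```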
5.2 Generative UI
Beyond Text: Rendering Components
What if instead of text, the agent could render:
- Interactive charts
- Formatted tables
- Buttons and forms
- Custom React components
This is Generative UI: the agent generates UI components, not just text.
Using streamUI
// Server Action: app/actions.tsx
// Note: streamUI is built on React Server Components, so it runs in a
// Server Action rather than a POST route handler. Its tools define a
// generate function (an async generator that can yield loading UI),
// not an execute function, and the rendered UI comes back as result.value.
'use server'
import { streamUI } from 'ai/rsc'
import { openai } from '@ai-sdk/openai'
import { z } from 'zod'
import { StockChart } from '@/components/StockChart'
import { fetchStockData } from '@/lib/stocks'
export async function submitMessage(content: string) {
  const result = await streamUI({
    model: openai('gpt-4-turbo'),
    messages: [{ role: 'user', content }],
    text: ({ content }) => <p>{content}</p>,
    tools: {
      showStockChart: {
        description: 'Display a stock price chart',
        parameters: z.object({
          ticker: z.string(),
          period: z.enum(['1D', '1W', '1M', '1Y'])
        }),
        generate: async function* ({ ticker, period }) {
          yield <div className="text-gray-500">Loading chart for {ticker}...</div>
          const data = await fetchStockData(ticker, period)
          return <StockChart ticker={ticker} data={data} period={period} />
        }
      }
    }
  })
  return result.value
}
Frontend: the client calls the action and renders the returned UI node directly (useChat is not involved here, and its messages carry no UI field):
'use client'
import { useState, type ReactNode } from 'react'
import { submitMessage } from './actions'
export default function Page() {
  const [input, setInput] = useState('')
  const [messages, setMessages] = useState<ReactNode[]>([])
  return (
    <div>
      {messages.map((ui, i) => (
        <div key={i}>{ui}</div>
      ))}
      <form
        onSubmit={async e => {
          e.preventDefault()
          const prompt = input
          setInput('')
          setMessages(prev => [...prev, await submitMessage(prompt)])
        }}
      >
        <input
          value={input}
          onChange={e => setInput(e.target.value)}
          placeholder="Ask me anything..."
          className="flex-1 p-3 border rounded-lg"
        />
      </form>
    </div>
  )
}
Result:
When the user asks "Show me Apple's stock chart", instead of text, they get an actual interactive React component rendered inline!
Example: Weather Card Component
// components/WeatherCard.tsx
export function WeatherCard({ city, temperature, condition }: {
city: string
temperature: number
condition: string
}) {
return (
<div className="bg-gradient-to-br from-blue-400 to-blue-600 text-white p-6 rounded-xl shadow-lg max-w-sm">
<h2 className="text-2xl font-bold">{city}</h2>
<div className="flex items-center justify-between mt-4">
<div className="text-5xl font-bold">{temperature}°F</div>
<div className="text-xl">{condition}</div>
</div>
</div>
)
}
// streamUI tool definition (note generate, not execute)
const tools = {
  getWeather: {
    description: 'Get weather and display a card',
    parameters: z.object({
      city: z.string()
    }),
    generate: async function* ({ city }) {
      yield <div>Checking weather...</div>
      const data = await fetchWeather(city)
      return <WeatherCard city={city} temperature={data.temp} condition={data.condition} />
    }
  }
}
5.3 Deployment
Deploying to Vercel
Vercel is the fastest way to deploy Next.js apps.
1. Install Vercel CLI:
npm i -g vercel
2. Deploy:
vercel
That's it. Your AI agent is live in production.
3. Add Environment Variables:
Go to your Vercel dashboard → Settings → Environment Variables:
OPENAI_API_KEY=sk-...
SUPABASE_URL=https://...
SUPABASE_KEY=eyJ...
Handling Timeouts
Serverless functions have time limits (typically 10-60 seconds). For long-running tasks:
Option 1: Edge Runtime
export const runtime = 'edge' // Longer streaming windows than Node serverless, but only Web-standard (no Node.js) APIs
Option 2: Background Jobs with Inngest
npm install inngest
// inngest/functions.ts
import { inngest } from './client'
export const researchAgent = inngest.createFunction(
{ id: 'research-agent' },
{ event: 'research/start' },
async ({ event, step }) => {
const results = await step.run('search-web', async () => {
return await searchWeb(event.data.query)
})
const analysis = await step.run('analyze', async () => {
return await analyzeResults(results)
})
await step.run('send-email', async () => {
return await sendEmail(event.data.email, analysis)
})
}
)
Trigger from API:
// app/api/research/route.ts
import { inngest } from '@/inngest/client'
export async function POST(req: Request) {
const { query, email } = await req.json()
await inngest.send({
name: 'research/start',
data: { query, email }
})
return Response.json({ message: 'Research started' })
}
Option 3: Streaming for Long Tasks
Keep the connection alive with streaming:
export async function POST(req: Request) {
const { query } = await req.json()
const encoder = new TextEncoder()
const stream = new ReadableStream({
async start(controller) {
controller.enqueue(encoder.encode('data: Starting research...\n\n'))
const results = await searchWeb(query)
controller.enqueue(encoder.encode(`data: Found ${results.length} results\n\n`))
const analysis = await analyzeResults(results)
controller.enqueue(encoder.encode('data: Analysis complete\n\n'))
controller.enqueue(encoder.encode(`data: ${JSON.stringify(analysis)}\n\n`))
controller.close()
}
})
return new Response(stream, {
headers: {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive'
}
})
}
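On the client, those frames arrive as a `text/event-stream` body: each event is a `data:` payload terminated by a blank line. A minimal parser sketch of that framing (a real client should use `EventSource` or a maintained SSE parser, which also handle chunk boundaries and multi-line events):

```typescript
// Minimal sketch of parsing the SSE frames the route above emits:
// each event is a "data:" line followed by a blank line.
function parseSSE(raw: string): string[] {
  return raw
    .split('\n\n')                               // one frame per blank-line separator
    .map(frame => frame.trim())
    .filter(frame => frame.startsWith('data:'))  // ignore comments/empty frames
    .map(frame => frame.slice('data:'.length).trim())
}

const events = parseSSE('data: Starting research...\n\ndata: Analysis complete\n\n')
console.log(events) // ['Starting research...', 'Analysis complete']
```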
Complete Production Example
File structure:
my-ai-agent/
├── app/
│   ├── api/
│   │   └── chat/
│   │       └── route.ts
│   ├── page.tsx
│   └── layout.tsx
├── components/
│   ├── ChatInterface.tsx
│   └── StockChart.tsx
├── lib/
│   ├── tools.ts
│   └── supabase.ts
└── package.json
lib/tools.ts:
import { tool } from 'ai'
import { z } from 'zod'
import { searchWeb, scrapeWebsite } from './research'
import { searchDocuments } from './supabase'
export const tools = {
searchKnowledgeBase: tool({
description: 'Search internal company documents',
parameters: z.object({
query: z.string()
}),
execute: async ({ query }) => {
const results = await searchDocuments(query)
return results.map(r => r.content).join('\n\n')
}
}),
researchWeb: tool({
description: 'Research a topic by searching and reading web pages',
parameters: z.object({
query: z.string()
}),
execute: async ({ query }) => {
const results = await searchWeb(query)
const content = await scrapeWebsite(results[0].url)
return { source: results[0].url, content: content.slice(0, 2000) }
}
})
}
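One refinement worth considering: `content.slice(0, 2000)` in researchWeb can cut off mid-word. A small helper (hypothetical, not part of the lesson's code) that truncates at the last word boundary keeps the snippet readable:

```typescript
// Hypothetical refinement of content.slice(0, 2000): cut at the last
// word boundary within the limit so the snippet doesn't end mid-word,
// and mark the truncation with an ellipsis.
function truncateAtWord(text: string, limit: number): string {
  if (text.length <= limit) return text
  const cut = text.slice(0, limit)
  const lastSpace = cut.lastIndexOf(' ')
  return (lastSpace > 0 ? cut.slice(0, lastSpace) : cut) + '…'
}

console.log(truncateAtWord('web page content goes here', 12)) // "web page…"
```

In the tool above you would replace `content.slice(0, 2000)` with `truncateAtWord(content, 2000)`.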
app/api/chat/route.ts:
import { streamText } from 'ai'
import { openai } from '@ai-sdk/openai'
import { tools } from '@/lib/tools'
export const runtime = 'edge'
export async function POST(req: Request) {
const { messages } = await req.json()
const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools,
maxSteps: 5,
system: 'You are a helpful research assistant. Use tools to find information.'
})
return result.toDataStreamResponse()
}
app/page.tsx:
'use client'
import { useChat } from 'ai/react'
export default function Page() {
const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat()
return (
<div className="min-h-screen bg-gray-50">
<div className="max-w-3xl mx-auto p-6">
<h1 className="text-3xl font-bold mb-6">Research Agent</h1>
<div className="bg-white rounded-lg shadow-lg p-4 mb-4 h-[600px] overflow-y-auto">
{messages.map(m => (
<div key={m.id} className={`mb-4 ${m.role === 'user' ? 'text-right' : ''}`}>
<div className={`inline-block p-4 rounded-lg ${
m.role === 'user' ? 'bg-blue-500 text-white' : 'bg-gray-100'
}`}>
{m.content}
</div>
</div>
))}
</div>
<form onSubmit={handleSubmit} className="flex gap-2">
<input
value={input}
onChange={handleInputChange}
className="flex-1 p-4 border rounded-lg"
placeholder="Ask me to research anything..."
/>
<button
type="submit"
disabled={isLoading}
className="px-8 py-4 bg-blue-600 text-white rounded-lg font-semibold disabled:opacity-50"
>
Send
</button>
</form>
</div>
</div>
)
}
Deploy with vercel and you have a production AI research agent!
Key Takeaways
- useChat hook handles streaming, state, and UI updates automatically
- Streaming provides better UX than waiting for complete responses
- Generative UI lets agents render React components instead of just text
- Vercel makes deployment trivial
- Edge runtime plus streaming keeps long-running responses alive past typical serverless timeouts
Exercise: Build a Multi-Modal Agent
Create an agent that can:
- Display stock charts when asked about stocks
- Show weather cards when asked about weather
- Render a table when comparing data
- Stream text responses for general questions
Next up: Module 6, The Capstone Project, where you build a complete Business Analyst Agent.

