How to Build Your First AI Agent in 30 Minutes (Python)

What if you could build a program that doesn't just answer questions — but actually does things for you? An AI agent can search the web, read files, analyze data, and chain multiple actions together to accomplish a goal. And you can build one in about 30 minutes.
In this tutorial, you'll build a fully functional AI agent in Python using LangChain. By the end, you'll have an interactive agent that can search the web, read local files, and hold multi-turn conversations with memory.
No prior AI experience required — just basic Python knowledge. If you need a refresher, check out our Python Basics course first.
What You'll Build
We're building a personal research assistant — a Python AI agent that can:
- Search the web for real-time information
- Read and analyze local files on your computer
- Answer follow-up questions using conversation memory
- Reason through multi-step problems using the ReAct pattern
Here's a taste of what interacting with your finished agent looks like:
You: What's the latest news about Python 3.13?
Agent: I'll search for the latest Python 3.13 news...
[Uses web search tool]
Agent: Python 3.13 was released with several exciting features including...
You: Summarize the changelog file in my project
Agent: I'll read that file for you...
[Uses file reader tool]
Agent: Here's a summary of your changelog...
The agent decides on its own which tool to use, when to use it, and how to combine results. That's what makes it an agent — not just a chatbot.
Not sure what an AI agent actually is? Read our explainer: What Are AI Agents and How Do They Work?
Prerequisites
Before we start, make sure you have:
- Python 3.10 or higher installed (download here)
- An API key from either OpenAI or Anthropic
- A code editor (VS Code, PyCharm, or any editor you prefer)
- Basic Python knowledge — variables, functions, loops, and pip
That's it. No machine learning background needed. No GPU required.
Step 1: Install Dependencies (5 Minutes)
First, create a new project directory and set up a virtual environment:
mkdir my-ai-agent
cd my-ai-agent
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
Now install the packages we need:
pip install langchain langchain-openai langgraph python-dotenv
If you prefer to use Anthropic's Claude instead of OpenAI:
pip install langchain langchain-anthropic langgraph python-dotenv
We're also installing python-dotenv to keep your API key out of your code — a security best practice.
Next, create a .env file in your project root to store your API key:
# For OpenAI
OPENAI_API_KEY=your-api-key-here
# OR for Anthropic
ANTHROPIC_API_KEY=your-api-key-here
Important: Never commit your .env file to version control. Add it to your .gitignore right away.
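From the terminal, assuming you're in the project root, that's a one-liner:

```shell
# Tell git to ignore the .env file (creates .gitignore if it doesn't exist)
echo ".env" >> .gitignore
```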
Let's verify everything works. Create a file called agent.py and add:
from dotenv import load_dotenv
load_dotenv()
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
response = llm.invoke("Say hello!")
print(response.content)
If you're using Anthropic, swap the import and model:
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-sonnet-4-20250514", temperature=0)
Run it:
python agent.py
If you see a greeting, you're good to go. If you get an authentication error, double-check your API key in the .env file.
Step 2: Define Your Tools (10 Minutes)
Tools are what turn a language model into an agent. Without tools, it can only generate text. With tools, it can take actions in the real world.
We'll create two tools: a web search tool and a file reader tool.
Tool 1: Web Search
We'll use DuckDuckGo for web search since it doesn't require an API key:
pip install duckduckgo-search
Now define the search tool:
from langchain.tools import tool
from duckduckgo_search import DDGS
@tool
def web_search(query: str) -> str:
    """Search the web for current information. Use this when you need to find
    up-to-date facts, news, or information that might not be in your training data."""
    with DDGS() as ddgs:
        results = list(ddgs.text(query, max_results=3))
    if not results:
        return "No results found."
    output = ""
    for r in results:
        output += f"**{r['title']}**\n{r['body']}\n{r['href']}\n\n"
    return output
The @tool decorator from LangChain turns any Python function into a tool the agent can use. The docstring is critical — it tells the agent when and how to use this tool.
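To build intuition for what the decorator captures, here's a simplified sketch using only the standard library (LangChain's actual implementation is more involved): the function's name, docstring, and type hints become the tool's metadata.

```python
import inspect

def web_search(query: str) -> str:
    """Search the web for current information."""
    return ""

# Roughly what a @tool-style decorator extracts from the function:
name = web_search.__name__                 # the tool's name
description = inspect.getdoc(web_search)   # the docstring the agent reads
params = {
    p.name: p.annotation.__name__
    for p in inspect.signature(web_search).parameters.values()
}
print(name, params)  # web_search {'query': 'str'}
```

This is why a clear docstring matters so much: it is literally the text the model uses to decide whether the tool fits the task.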
Tool 2: File Reader
This tool lets the agent read any text file on your machine:
import os

@tool
def read_file(file_path: str) -> str:
    """Read the contents of a local file. Use this when the user asks you to
    read, analyze, or summarize a file on their computer. The file_path should
    be a relative or absolute path to a text file."""
    try:
        resolved = os.path.abspath(file_path)
        with open(resolved, "r", encoding="utf-8") as f:
            content = f.read()
        if len(content) > 10000:
            return content[:10000] + "\n\n[... file truncated at 10,000 characters]"
        return content
    except FileNotFoundError:
        return f"Error: File not found at '{file_path}'"
    except Exception as e:
        return f"Error reading file: {e}"
Notice a few things:
- We truncate long files to avoid hitting token limits
- We handle errors gracefully so the agent gets useful feedback instead of crashing
- The docstring clearly explains what the tool does and when to use it
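The truncation step is a reusable pattern, and you can factor it into a small helper (the name `truncate` is our choice here, shown just to illustrate the idea):

```python
def truncate(text: str, limit: int = 10_000) -> str:
    """Cap text length so a huge file doesn't blow past the model's context window."""
    if len(text) > limit:
        return text[:limit] + f"\n\n[... file truncated at {limit:,} characters]"
    return text

short = truncate("abc")                  # unchanged: under the limit
long = truncate("x" * 50, limit=10)      # cut to 10 chars plus a truncation notice
```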
How Agents Choose Tools
You might be wondering: how does the agent know which tool to use?
This is where the magic of large language models comes in. When you give an LLM a list of tools (with their names, descriptions, and parameter schemas), it can reason about which tool fits the current task. If you ask "What's happening in the stock market today?", it recognizes that requires fresh data and calls web_search. If you ask "Read my README file", it calls read_file.
This decision-making process is called tool calling (sometimes called function calling), and it's a built-in capability of modern LLMs like GPT-4o and Claude.
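Under the hood, each tool is serialized and sent to the model as a schema. The exact wire format varies by provider, but OpenAI-style function calling looks roughly like this (an illustrative sketch, not what LangChain emits verbatim):

```python
# What the model "sees" for our search tool: name, description, and parameters.
web_search_schema = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for current information.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
            },
            "required": ["query"],
        },
    },
}
```

The model never runs your Python function; it only emits a structured request like `{"name": "web_search", "arguments": {"query": "..."}}`, and the framework executes the call.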
Step 3: Create the Agent (10 Minutes)
Now we bring everything together. LangGraph, the agent framework in the LangChain ecosystem, provides a high-level function called create_react_agent that implements the ReAct pattern — the most popular approach for building AI agents.
What Is the ReAct Pattern?
ReAct stands for Reasoning + Acting. Here's how it works:
- The agent receives your message
- It reasons about what to do ("I need to search for current information")
- It acts by calling a tool (runs a web search)
- It observes the tool's output
- It reasons again ("I have the search results, now I can answer the question")
- It either acts again (calls another tool) or responds to you
This loop continues until the agent has enough information to give you a final answer. It's the same pattern used by professional AI agent systems in production.
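The loop above can be sketched in a few lines of plain Python. This is a toy illustration, not LangGraph's actual code; `call_model` stands in for an LLM API call:

```python
def react_loop(call_model, tools, messages):
    """Toy Reason-Act-Observe loop. call_model returns either a final
    answer or a list of tool calls to execute."""
    while True:
        reply = call_model(messages)                    # reason
        if not reply.get("tool_calls"):
            return reply["content"]                     # final answer
        for call in reply["tool_calls"]:                # act
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": result})  # observe

# A stub model: first asks for a search, then answers once it sees the result.
def stub_model(messages):
    if any(m["role"] == "tool" for m in messages):
        return {"content": "Done: " + messages[-1]["content"]}
    return {"content": "", "tool_calls": [{"name": "search", "args": {"query": "python"}}]}

answer = react_loop(
    stub_model,
    {"search": lambda query: f"results for {query}"},
    [{"role": "user", "content": "find python"}],
)
print(answer)  # Done: results for python
```

The real framework adds error handling, message formatting, and state management, but the control flow is this same loop.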
Building the Agent
Here's the complete agent setup:
from dotenv import load_dotenv
load_dotenv()

from langchain_openai import ChatOpenAI
from langchain.tools import tool
from langgraph.prebuilt import create_react_agent
from duckduckgo_search import DDGS
import os

# --- LLM ---
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# --- Tools ---
@tool
def web_search(query: str) -> str:
    """Search the web for current information. Use this when you need to find
    up-to-date facts, news, or information that might not be in your training data."""
    with DDGS() as ddgs:
        results = list(ddgs.text(query, max_results=3))
    if not results:
        return "No results found."
    output = ""
    for r in results:
        output += f"**{r['title']}**\n{r['body']}\n{r['href']}\n\n"
    return output

@tool
def read_file(file_path: str) -> str:
    """Read the contents of a local file. Use this when the user asks you to
    read, analyze, or summarize a file on their computer. The file_path should
    be a relative or absolute path to a text file."""
    try:
        resolved = os.path.abspath(file_path)
        with open(resolved, "r", encoding="utf-8") as f:
            content = f.read()
        if len(content) > 10000:
            return content[:10000] + "\n\n[... file truncated at 10,000 characters]"
        return content
    except FileNotFoundError:
        return f"Error: File not found at '{file_path}'"
    except Exception as e:
        return f"Error reading file: {e}"

tools = [web_search, read_file]

# --- System Prompt ---
system_prompt = """You are a helpful research assistant. You can search the web
for current information and read local files when asked.

When answering questions:
- Use the web_search tool for questions about current events, recent news, or
  anything that might have changed after your training data cutoff
- Use the read_file tool when the user asks you to read or analyze a file
- Always cite your sources when using web search results
- Be concise but thorough in your answers
"""

# --- Create the Agent ---
agent = create_react_agent(
    model=llm,
    tools=tools,
    prompt=system_prompt,
)
That's it. In about 50 lines of code, you've created a fully functional AI agent. The create_react_agent function handles all the complex orchestration — the reasoning loop, tool dispatch, error handling, and response formatting.
Note that we're using langgraph.prebuilt.create_react_agent — LangGraph is the standard way to build agents in the LangChain ecosystem. Install it if you haven't already:
pip install langgraph
Step 4: Add Memory (5 Minutes)
Right now, our agent has no memory. If you ask a follow-up question, it won't remember what you talked about before. Let's fix that.
LangGraph agents support memory through a checkpointer that persists conversation state:
from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()

agent = create_react_agent(
    model=llm,
    tools=tools,
    prompt=system_prompt,
    checkpointer=memory,
)
Now the agent remembers everything within a conversation. When you invoke the agent, you pass a thread_id to identify the conversation:
config = {"configurable": {"thread_id": "session-1"}}
response = agent.invoke(
    {"messages": [{"role": "user", "content": "What is LangChain?"}]},
    config=config,
)
Same thread_id = same conversation context. Different thread_id = fresh conversation. This means you could run multiple independent conversations simultaneously — each with its own memory.
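To build intuition for what the checkpointer does, here's a toy stand-in (illustrative only, not MemorySaver's real implementation): it simply keeps a separate message list per thread_id.

```python
class ToyCheckpointer:
    """Illustrative only: conversation history keyed by thread_id."""
    def __init__(self):
        self.threads = {}

    def history(self, thread_id):
        # Return the thread's message list, creating it on first use
        return self.threads.setdefault(thread_id, [])

    def append(self, thread_id, message):
        self.history(thread_id).append(message)

memory = ToyCheckpointer()
memory.append("session-1", {"role": "user", "content": "What is LangChain?"})
memory.append("session-2", {"role": "user", "content": "Hello"})

print(len(memory.history("session-1")))  # 1 (each thread has its own history)
```

The real MemorySaver persists the full agent state, not just messages, but the keying-by-thread idea is the same.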
Why Memory Matters
Without memory, every message is treated as a brand-new conversation. With memory, you can do things like:
You: Search for the top 3 Python web frameworks
Agent: [searches] The top 3 are Django, Flask, and FastAPI...
You: Compare the first two in a table
Agent: [remembers the previous context — knows "first two" means Django and Flask]
This is multi-turn interaction, and it's essential for agents that feel natural to use.
Step 5: Run It (5 Minutes)
Let's create an interactive loop so you can chat with your agent in the terminal. Here's the complete, final version of agent.py:
from dotenv import load_dotenv
load_dotenv()

from langchain_openai import ChatOpenAI
from langchain.tools import tool
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver
from duckduckgo_search import DDGS
import os

# --- LLM ---
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# --- Tools ---
@tool
def web_search(query: str) -> str:
    """Search the web for current information. Use this when you need to find
    up-to-date facts, news, or information that might not be in your training data."""
    with DDGS() as ddgs:
        results = list(ddgs.text(query, max_results=3))
    if not results:
        return "No results found."
    output = ""
    for r in results:
        output += f"**{r['title']}**\n{r['body']}\n{r['href']}\n\n"
    return output

@tool
def read_file(file_path: str) -> str:
    """Read the contents of a local file. Use this when the user asks you to
    read, analyze, or summarize a file on their computer. The file_path should
    be a relative or absolute path to a text file."""
    try:
        resolved = os.path.abspath(file_path)
        with open(resolved, "r", encoding="utf-8") as f:
            content = f.read()
        if len(content) > 10000:
            return content[:10000] + "\n\n[... file truncated at 10,000 characters]"
        return content
    except FileNotFoundError:
        return f"Error: File not found at '{file_path}'"
    except Exception as e:
        return f"Error reading file: {e}"

tools = [web_search, read_file]

# --- System Prompt ---
system_prompt = """You are a helpful research assistant. You can search the web
for current information and read local files when asked.

When answering questions:
- Use the web_search tool for questions about current events, recent news, or
  anything that might have changed after your training data cutoff
- Use the read_file tool when the user asks you to read or analyze a file
- Always cite your sources when using web search results
- Be concise but thorough in your answers
"""

# --- Create Agent with Memory ---
memory = MemorySaver()

agent = create_react_agent(
    model=llm,
    tools=tools,
    prompt=system_prompt,
    checkpointer=memory,
)

# --- Interactive Loop ---
def main():
    print("AI Research Assistant")
    print("Type 'quit' to exit, 'new' to start a fresh conversation.\n")
    config = {"configurable": {"thread_id": "session-1"}}
    session_count = 1
    while True:
        user_input = input("You: ").strip()
        if not user_input:
            continue
        if user_input.lower() == "quit":
            print("Goodbye!")
            break
        if user_input.lower() == "new":
            session_count += 1
            config = {"configurable": {"thread_id": f"session-{session_count}"}}
            print("Started a new conversation.\n")
            continue
        response = agent.invoke(
            {"messages": [{"role": "user", "content": user_input}]},
            config=config,
        )
        # Get the last AI message from the response
        ai_message = response["messages"][-1]
        print(f"\nAgent: {ai_message.content}\n")

if __name__ == "__main__":
    main()
Run your agent:
python agent.py
Example Queries to Try
Here are some things to test:
- Web search: "What are the biggest AI announcements this week?"
- File reading: "Read the file ./agent.py and explain what it does"
- Multi-step reasoning: "Search for the top 3 Python testing frameworks, then tell me which one is best for beginners and why"
- Memory test: Ask a question, then ask a follow-up that references the previous answer
- Combined tools: "Read my requirements.txt file and search for security vulnerabilities in those packages"
If something breaks, that's normal — and it's a great way to learn. Common issues:
- Rate limits: If you're getting rate limit errors, add a small delay between requests or upgrade your API plan
- Search failures: DuckDuckGo occasionally rate-limits automated searches. Wait a moment and try again
- File not found: Make sure you're using the correct relative path from where you're running the script
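For the transient failures (rate limits, flaky search), a small retry wrapper with exponential backoff often helps. This is a generic sketch; the name `with_retry` and the defaults are our choices:

```python
import time

def with_retry(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on any exception."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                             # out of retries: re-raise
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# Example usage with a tool call:
# result = with_retry(lambda: web_search.invoke({"query": "python news"}))
```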
How It All Fits Together
Let's trace through what happens when you ask: "Search for the latest Python release and summarize it."
- Your message gets added to the conversation history
- The LLM receives the full history plus the list of available tools
- The LLM reasons: "The user wants current information about Python releases. I should use web_search."
- The LLM outputs a tool call:
web_search(query="latest Python release 2026")
- LangGraph executes the tool and captures the result
- The result is added to the message history as a tool response
- The LLM receives the updated history and reasons again: "I have the search results. I can now summarize them for the user."
- The LLM generates a final response with the summary
- The response is returned to you
This entire Reason-Act-Observe loop happens automatically. You defined the tools and the system prompt — LangGraph handles the orchestration.
Extending Your Agent
Now that you have a working agent, here are some ideas for making it more powerful:
Add More Tools
The beauty of the tool-based architecture is that adding capabilities is as simple as writing a new function:
@tool
def run_python(code: str) -> str:
    """Execute Python code and return the output. Use this for calculations,
    data analysis, or testing code snippets."""
    # Caution: exec runs arbitrary code with full access to your machine.
    # Only enable this tool for trusted, local experimentation.
    try:
        result = {}
        exec(code, {"__builtins__": __builtins__}, result)
        return str(result) if result else "Code executed successfully (no output)"
    except Exception as e:
        return f"Error: {e}"
Then add it to your tools list: tools = [web_search, read_file, run_python]
Add a Better System Prompt
The system prompt shapes your agent's personality and behavior. You can make it more specialized:
system_prompt = """You are a senior Python developer assistant. You specialize in:
- Code review and debugging
- Finding and explaining documentation
- Suggesting best practices and design patterns
When reviewing code, always check for:
1. Security issues
2. Performance problems
3. Code style and readability
"""
Switch to a Different Model
Want to try a different LLM? It's a one-line change:
# Use Claude instead of GPT
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-sonnet-4-20250514", temperature=0)
# Use a local model via Ollama
from langchain_ollama import ChatOllama
llm = ChatOllama(model="llama3")
The rest of your code stays exactly the same. That's one of the advantages of using LangChain — it abstracts away the differences between providers.
What's Next?
You just built a working AI agent in Python. That's a real accomplishment — most developers haven't built one yet. But there's a lot more to explore.
What we covered today is the foundation. Production-ready agents need:
- Error recovery — what happens when a tool fails mid-task?
- Streaming responses — showing the user what's happening in real time
- Stateful workflows — complex multi-step processes with branching logic
- RAG (Retrieval-Augmented Generation) — grounding agent responses in your own documents and data
- Multi-agent systems — multiple specialized agents collaborating on complex tasks
- Guardrails and safety — preventing agents from taking harmful actions
- Deployment — running agents as web services or API endpoints
Our Agentic AI with Python — LangChain & LangGraph course covers all of this and more. You'll go from the fundamentals we covered today to building a complete customer support agent with stateful workflows, RAG integration, and production deployment.
If you want to explore the Node.js/TypeScript side of agent development, check out our post on Building AI Agents with Node.js and TypeScript.
FAQ
Do I need a paid API key to build AI agents?
You need an API key from OpenAI or Anthropic, and most API calls have a small cost (typically a few cents per conversation). OpenAI offers free trial credits for new accounts, and Anthropic also has a free tier. For this tutorial, you'll spend well under $1 total. You can also use free local models via Ollama — just swap the LLM provider as shown in the "Extending Your Agent" section.
Can I use a different language model instead of GPT-4o?
Absolutely. LangChain supports dozens of LLM providers. You can use Anthropic's Claude, Google's Gemini, Meta's Llama (via Ollama for local execution), Mistral, and many more. The agent code stays the same — you only change the LLM initialization. Claude and GPT-4o are the most capable for agentic tasks as of early 2026, but open-source models are catching up fast.
What's the difference between this tutorial and a production AI agent?
This tutorial gives you a working agent, but production agents need additional layers: persistent memory (database-backed instead of in-memory), streaming responses, error recovery and retry logic, authentication and rate limiting, observability and logging, and safety guardrails. Our Agentic AI with Python course covers all of these topics and walks you through building a production-grade agent from scratch.

