# Creating Tasks
Agents define who does the work. Tasks define what work gets done. A well-defined task gives an agent clear instructions, a concrete expected output, and optionally, context from other tasks. In this lesson, you'll learn how to create effective tasks in CrewAI. See the Task docs for the complete API reference.
## Anatomy of a Task
Every task in CrewAI has these key attributes:
| Attribute | Required | Purpose |
|---|---|---|
| `description` | Yes | Detailed instructions for the agent |
| `expected_output` | Yes | What the completed output should look like |
| `agent` | Yes | The agent responsible for this task |
| `context` | No | List of other tasks whose outputs feed into this one |
| `output_file` | No | Save the result to a file |
| `human_input` | No | Pause for human review before finalizing |
| `async_execution` | No | Run this task in parallel with others |
## Your First Task

```python
from crewai import Agent, Task

researcher = Agent(
    role="Research Analyst",
    goal="Find accurate, current information",
    backstory="Experienced research analyst."
)

research_task = Task(
    description="Research the top 5 Python web frameworks in 2025. "
                "For each framework, identify: name, GitHub stars, "
                "key features, and best use cases.",
    expected_output="A structured comparison of 5 Python web frameworks "
                    "with name, stars, features, and use cases for each.",
    agent=researcher
)
```
## Writing Good Task Descriptions

The `description` is the most important field. It's the prompt that drives the agent's work. Good descriptions are:

**Specific** — state exactly what needs to be done:
```python
# Bad
Task(description="Research AI trends", ...)

# Good
Task(
    description="Research the top 5 AI trends for enterprise adoption in 2025. "
                "Focus on: technology name, current adoption rate, "
                "key vendors, and projected growth. Use only sources "
                "published after January 2025.",
    ...
)
```
**Structured** — tell the agent how to organize its work:
```python
Task(
    description="Analyze the competitor landscape for project management tools. "
                "Structure your analysis as: "
                "1) Market overview (size, growth rate) "
                "2) Top 5 competitors with pricing and key features "
                "3) Gap analysis — underserved needs "
                "4) Opportunities for differentiation",
    ...
)
```
**Bounded** — set clear limits:
```python
Task(
    description="Write a blog post about serverless computing. "
                "Length: 800-1000 words. "
                "Audience: developers with 1-2 years of experience. "
                "Tone: conversational but technically accurate. "
                "Include at least 2 code examples in Python.",
    ...
)
```
## Defining Expected Output

The `expected_output` field tells CrewAI what success looks like. It's used both to guide the agent and to validate the result.
```python
# Vague — agent doesn't know what format to use
Task(
    description="Analyze our customer data",
    expected_output="An analysis of the data",
    ...
)

# Clear — agent knows exactly what to produce
Task(
    description="Analyze our Q4 customer churn data",
    expected_output="A report with: 1) Overall churn rate as a percentage, "
                    "2) Top 3 churn reasons ranked by frequency, "
                    "3) Churn rate by customer segment (enterprise/SMB/individual), "
                    "4) Three actionable recommendations to reduce churn",
    ...
)
```
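Why does `expected_output` matter so much? CrewAI folds it into the prompt the agent receives alongside the description. The exact template is internal to the library, but a simplified mental model (this is a hypothetical helper, not CrewAI's actual code) looks like this:

```python
# Hypothetical sketch of how a task's fields become the agent's prompt.
# NOT CrewAI's internal implementation -- just a mental model showing why
# a vague expected_output produces a vague result.
def build_task_prompt(description: str, expected_output: str) -> str:
    return (
        f"{description}\n\n"
        f"This is the expected criteria for your final answer: {expected_output}"
    )

prompt = build_task_prompt(
    "Analyze our Q4 customer churn data",
    "A report with overall churn rate and top 3 churn reasons",
)
```

Seen this way, `expected_output` is simply more prompt: the sharper it is, the more constrained the agent's answer.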
## Connecting Tasks with Context
One of CrewAI's most powerful features is task context — the output of one task automatically becomes input for the next.
```python
research_task = Task(
    description="Research the latest trends in sustainable packaging",
    expected_output="A summary of 5 key trends with supporting data",
    agent=researcher
)

analysis_task = Task(
    description="Analyze the research findings and identify the top 3 "
                "opportunities for our company to adopt sustainable packaging",
    expected_output="A prioritized list of 3 opportunities with "
                    "estimated cost, timeline, and impact",
    agent=analyst,
    context=[research_task]  # Gets the researcher's output automatically
)

writing_task = Task(
    description="Write an executive summary based on the analysis",
    expected_output="A 500-word executive summary suitable for C-level presentation",
    agent=writer,
    context=[analysis_task]  # Gets the analyst's output
)
```
When `analysis_task` runs, the agent automatically receives the full output of `research_task` as context. No manual passing required.
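Conceptually, `context` just means "give this task read access to those tasks' finished outputs." The toy model below (plain Python, not CrewAI internals; `SimpleTask` is a made-up class) shows the data flow:

```python
from dataclasses import dataclass, field

# Toy model of context chaining -- NOT CrewAI's implementation.
# Each task can read the raw output of the tasks listed in its context.
@dataclass
class SimpleTask:
    description: str
    context: list = field(default_factory=list)
    output: str = ""

    def run(self) -> str:
        # Collect upstream outputs; this is what the agent sees as context.
        upstream = "\n".join(t.output for t in self.context)
        # Stand-in for the agent actually doing the work.
        self.output = f"[result of: {self.description}]"
        return upstream

research = SimpleTask("research trends")
research.run()
analysis = SimpleTask("analyze findings", context=[research])
context_seen = analysis.run()  # receives research's output
```

The real library handles ordering and prompt assembly for you; the point is that nothing magical flows between tasks, only their text outputs.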
## Saving Output to Files

Use `output_file` to save a task's result directly to disk:
```python
report_task = Task(
    description="Write a comprehensive market research report",
    expected_output="A 2000-word report in Markdown format",
    agent=writer,
    output_file="output/market-report.md"
)
```
This is useful for long-form outputs like reports, articles, or data exports.
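If you'd rather control file writing yourself (or want to post-process the result first), the manual equivalent is straightforward. Whether CrewAI creates missing parent directories for you can vary by version, so creating them up front is the safe pattern:

```python
from pathlib import Path

# Manual equivalent of output_file (a sketch, not CrewAI code):
# write the task result to disk, creating parent directories first.
def save_output(result: str, path: str) -> Path:
    target = Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(result, encoding="utf-8")
    return target

saved = save_output("# Market Report\n\nFindings...", "output/market-report.md")
```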
## Adding Human-in-the-Loop

Set `human_input=True` to pause execution and ask for human approval before the task is considered complete:
```python
final_review_task = Task(
    description="Compile the final report for client delivery",
    expected_output="A polished, client-ready report",
    agent=editor,
    human_input=True  # Will pause and ask for your feedback
)
```
When this task finishes, CrewAI will display the output and ask you to approve, reject, or provide feedback. This is invaluable for quality-critical workflows.
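The review flow amounts to a simple gate: show the output, accept it, or collect feedback for revision. A minimal sketch of that idea (not CrewAI's implementation; `review_gate` and its injectable `ask` parameter are made up for illustration):

```python
# Conceptual sketch of a human approval gate -- NOT CrewAI's code.
# `ask` is injectable so the gate can be tested without a real terminal.
def review_gate(output: str, ask=input) -> str:
    print(output)
    answer = ask("Approve this output? (y/n): ").strip().lower()
    if answer == "y":
        return output
    feedback = ask("What should change? ")
    # In CrewAI, feedback goes back to the agent for another pass.
    return f"{output}\n\n[revision requested: {feedback}]"

# Simulate a reviewer who rejects once and leaves feedback.
responses = iter(["n", "shorten the intro"])
result = review_gate("Draft report", ask=lambda _: next(responses))
```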
## Async Execution
For tasks that don't depend on each other, you can run them in parallel:
```python
market_research = Task(
    description="Research market size and growth trends",
    expected_output="Market data summary",
    agent=market_researcher,
    async_execution=True
)

competitor_research = Task(
    description="Research competitor products and pricing",
    expected_output="Competitor comparison table",
    agent=competitor_analyst,
    async_execution=True
)

# This task waits for both async tasks to complete
synthesis = Task(
    description="Combine market and competitor research into a strategy doc",
    expected_output="Strategic recommendation document",
    agent=strategist,
    context=[market_research, competitor_research]
)
```
Both research tasks run simultaneously, and the synthesis task automatically waits for their results.
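The pattern is the same fan-out/fan-in shape you'd write with `asyncio` directly. A standalone sketch (plain `asyncio`, not CrewAI internals; the sleep calls stand in for LLM work):

```python
import asyncio

# Fan-out/fan-in sketch of async_execution -- NOT CrewAI internals.
# Two independent tasks run concurrently; the dependent step awaits both.
async def market_research() -> str:
    await asyncio.sleep(0.01)  # stand-in for slow LLM/tool calls
    return "market data"

async def competitor_research() -> str:
    await asyncio.sleep(0.01)
    return "competitor data"

async def run_pipeline() -> str:
    # gather() runs both coroutines concurrently and waits for both.
    market, competitors = await asyncio.gather(
        market_research(), competitor_research()
    )
    return f"strategy based on: {market} + {competitors}"

result = asyncio.run(run_pipeline())
```

With two parallel branches the wall-clock cost is roughly the slower branch, not the sum, which is exactly the payoff of `async_execution=True`.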
## Task Design Best Practices

**One task, one deliverable.** Each task should produce a single, clear output. If you need a report and a presentation, make them two tasks.

**Description drives quality.** The more specific your description, the better the output. Treat it like a brief to a freelancer.

**Use context chains.** Instead of cramming everything into one task, build a pipeline where each task refines the previous one's work.

**Match complexity to capability.** Don't assign a simple data lookup to an agent powered by GPT-4 — use a simpler model for simple tasks.
## Key Takeaways
- Tasks define the work in CrewAI — the description, expected output, and assigned agent
- Good task descriptions are specific, structured, and bounded
- The `expected_output` field guides the agent toward the right format and content
- Context lets you chain tasks together so one agent's output feeds into the next
- Use `output_file` to save results, `human_input` for quality gates, and `async_execution` for parallel work
- Design tasks around single deliverables and connect them with context chains

