CrewAI is an open-source framework for building multi-agent AI systems. Where a single LLM answers questions, a crew of specialized agents can handle complex multi-step workflows — research, analysis, writing, review — with each agent having a distinct role, tools, and goal.
Architecture
Agent: A language model with a role (e.g., Senior Data Analyst), goal (produce actionable insights), backstory (how it thinks), and tools (web search, file read, code execution).
Task: A unit of work with a description, an expected output, an assigned agent, and optionally context carried over from earlier tasks.
Crew: A team of agents executing tasks. Two modes: Sequential (one after another, output feeds next) and Hierarchical (one agent manages and delegates).
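The sequential mode is easy to picture without any framework: each agent is a step whose output is appended to the context the next step receives. A plain-Python sketch of that data flow (the function names are illustrative, not CrewAI API):

```python
# Plain-Python sketch of CrewAI's sequential process: each step's
# output is appended to the context the next step receives.
def run_sequential(tasks, topic):
    context = [topic]
    for task in tasks:
        result = task(context)   # an "agent" acting on accumulated context
        context.append(result)   # output feeds the next task
    return context[-1]

research = lambda ctx: f"notes on {ctx[0]}"
write    = lambda ctx: f"article from ({ctx[-1]})"
edit     = lambda ctx: f"polished {ctx[-1]}"

final = run_sequential([research, write, edit], "self-hosted AI")
print(final)  # polished article from (notes on self-hosted AI)
```

Hierarchical mode replaces the fixed ordering with a manager agent that decides, at each step, which worker to delegate to.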
Installation
pip install crewai crewai-tools
pip install langchain-ollama  # for Ollama backend
Research + Writing Crew
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerplyWebSearchTool, FileWriterTool
from langchain_ollama import ChatOllama

# Local Llama 3 via Ollama (requires a running Ollama server and a pulled model)
llm = ChatOllama(model="llama3", base_url="http://localhost:11434")

researcher = Agent(
    role="Senior Research Analyst",
    goal="Find the most relevant and recent information on the given topic",
    backstory="You are an experienced research analyst with expertise in finding accurate, up-to-date information from the web.",
    llm=llm,
    tools=[SerplyWebSearchTool()],  # requires a Serply API key in the environment
    verbose=True,
)

writer = Agent(
    role="Tech Content Writer",
    goal="Write a clear, engaging, well-structured article based on research",
    backstory="You are a skilled tech writer who translates complex topics into accessible content.",
    llm=llm,
    tools=[FileWriterTool()],  # the task description tells it to write article.md
    verbose=True,
)
)
editor = Agent(
role="Senior Editor",
goal="Ensure the article is polished, accurate, and publication-ready",
backstory="You are a meticulous editor with 15 years of experience in tech journalism.",
llm=llm, verbose=True
)
task1 = Task(description="Research latest self-hosted AI tools and trends.", agent=researcher, expected_output="List of 5-7 key items with summaries and URLs.")
task2 = Task(description="Write a 600-word article from the research.", agent=writer, context=[task1], expected_output="Markdown article saved to article.md.")
task3 = Task(description="Review and revise the article.", agent=editor, context=[task2], expected_output="Final revised article.")
crew = Crew(agents=[researcher, writer, editor], tasks=[task1, task2, task3], process=Process.sequential, verbose=True)
result = crew.kickoff()
print(result)

Code Review Crew
A linter agent, a security reviewer, and an architecture reviewer work on the same code sequentially, each seeing the previous agents' findings, so the final output is comprehensive feedback rather than three disconnected reports.
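The same sequential pattern drives this crew. A plain-Python sketch of how the three reviewers' findings accumulate (the checks are toy examples, not the CrewAI API):

```python
# Sketch: three reviewers examine the same code; each sees prior findings,
# so later reviewers can build on (rather than repeat) earlier feedback.
def review(code, reviewers):
    findings = []
    for name, check in reviewers:
        findings.append((name, check(code, findings)))
    return findings

reviewers = [
    ("linter",       lambda code, prior: "unused import os" if "import os" in code else "clean"),
    ("security",     lambda code, prior: "eval() is unsafe" if "eval(" in code else "clean"),
    ("architecture", lambda code, prior: f"{len(prior)} earlier findings reviewed; structure ok"),
]

report = review("import os\nprint(eval(input()))", reviewers)
for name, finding in report:
    print(f"{name}: {finding}")
```

In the real crew, each reviewer would be an Agent with its own Task, chained through the context parameter exactly as in the research example above.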
When to Use Multi-Agent
Great for: research pipelines, content pipelines, code review, customer support, data analysis. Not ideal for: simple single-step queries (a lone LLM call is faster and cheaper) or latency-sensitive real-time applications (multiple sequential LLM calls add seconds to minutes per request).
Troubleshooting
Model not supported: configure the LLM explicitly for your backend and pass it to each Agent, e.g. from langchain_ollama import ChatOllama; llm = ChatOllama(model="llama3", base_url="http://localhost:11434")
Slow execution: use smaller models (e.g. phi3) for agents with simple jobs, and set verbose=False to reduce logging overhead
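One way to apply the smaller-models tip is to choose a model per agent role. A sketch of that routing; the role-to-model mapping here is purely illustrative:

```python
# Sketch: route simple, mechanical roles to a small fast model and keep
# the larger model for reasoning-heavy roles. Mapping is illustrative.
MODEL_FOR_ROLE = {
    "formatter": "phi3",      # simple, mechanical work
    "summarizer": "phi3",
    "researcher": "llama3",   # needs stronger reasoning
    "editor": "llama3",
}

def pick_model(role, default="llama3"):
    return MODEL_FOR_ROLE.get(role, default)

print(pick_model("formatter"))  # phi3
print(pick_model("critic"))     # llama3 (fallback for unknown roles)
```

Each agent then gets its own llm argument (e.g. ChatOllama(model=pick_model(role), ...)) instead of sharing one heavyweight model across the whole crew.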