# OpenAgents Framework (2026): Lightweight Multi-Agent Systems Made Simple — Tutorial & Examples

## What is OpenAgents?
OpenAgents is an open-source AI agent framework that emerged in early 2026, designed for building lightweight multi-agent collaboration systems. Unlike LangGraph's complex state graphs and CrewAI's role-playing paradigm, OpenAgents adopts a message-driven architecture, making inter-agent communication more intuitive and flexible.
## Key Features
- 🚀 Lightweight Design: Core library is only 5MB, 40% faster startup than comparable frameworks
- 💬 Message-Driven: Structured message-based agent communication with async conversation support
- 🔌 Plugin System: Built-in tool library + custom tools with hot-swapping
- 📊 Observability: Built-in tracing and logging, no additional configuration needed
- 🌐 Multi-Model Support: Compatible with OpenAI, Anthropic, Ollama, and local models
## Why Choose OpenAgents?
| Feature | OpenAgents | LangGraph | CrewAI | AutoGen |
|---|---|---|---|---|
| Ease of Learning | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐ |
| Runtime Overhead | Low | Medium | Medium | High |
| Multi-Agent Collaboration | Native Support | Manual Orchestration | Fixed Roles | Flexible but Complex |
| Observability | Built-in | Requires LangSmith | Limited | Requires Configuration |
| Community Ecosystem | Rapidly Growing | Mature | Mature | Mature |
## Quick Start

### Installation

```bash
# Basic installation
pip install openagents

# Full feature set (including vector stores and advanced tools)
pip install "openagents[full]"

# Verify installation
python -c "import openagents; print(openagents.__version__)"
```
### Your First Agent

Create a simple Q&A agent:

```python
from openagents import Agent, Runner

# Define the agent
researcher = Agent(
    name="Research Assistant",
    description="Responsible for searching and organizing information",
    instructions="You are a professional research assistant skilled at finding and summarizing technical documentation.",
    tools=["web_search", "file_reader"],
    model="gpt-4o"
)

# Run a single task
result = Runner.run(
    agent=researcher,
    input="Please summarize the core features of the OpenAgents framework"
)
print(result.output)
```
### Multi-Agent Collaboration

OpenAgents' core strength lies in multi-agent collaboration. Here's a content creation workflow:

```python
from openagents import Agent, Runner

# Define specialized agents
planner = Agent(
    name="Planner",
    instructions="Analyze requirements, create content outlines and writing plans",
    handoffs=["writer", "reviewer"]
)

writer = Agent(
    name="Writer",
    instructions="Write high-quality content based on the outline",
    handoffs=["reviewer"]
)

reviewer = Agent(
    name="Reviewer",
    instructions="Review content quality, suggest revisions",
    handoffs=["writer"]  # Can send the draft back for a rewrite
)

# Create the workflow
workflow = Runner(
    agents=[planner, writer, reviewer],
    entry_point="planner"
)

# Execute the task
result = workflow.run(
    input="Write a technical blog post about AI agent frameworks"
)
print(f"Final output: {result.output}")
print(f"Number of iterations: {result.metadata['turns']}")
```
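The handoff loop above can be sketched in plain Python, with no framework at all. In this toy model (all names hypothetical), each "agent" is just a function that returns its output plus the id of the next agent, and the runner passes messages along until an agent hands off to no one:

```python
# Toy message-driven handoff loop: each agent returns (output, next_agent_id).

def planner(msg):
    return f"Outline for: {msg}", "writer"

def writer(msg):
    return f"Draft based on [{msg}]", "reviewer"

def reviewer(msg):
    # Approve immediately in this toy example; a real reviewer
    # could return ("needs work", "writer") to loop back.
    return f"Approved: {msg}", None

agents = {"planner": planner, "writer": writer, "reviewer": reviewer}

def run(entry_point, task, max_turns=10):
    current, msg, turns = entry_point, task, 0
    while current is not None and turns < max_turns:
        msg, current = agents[current](msg)
        turns += 1
    return msg, turns

output, turns = run("planner", "AI agent frameworks post")
print(output)  # Approved: Draft based on [Outline for: AI agent frameworks post]
print(turns)   # 3
```

The `max_turns` guard mirrors the framework's iteration limit: without it, a reviewer that always sends work back to the writer would loop forever.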
## Advanced Features

### Tool Definition

OpenAgents supports flexible tool definition:

```python
from openagents import Agent, tool

@tool
def calculate_readability(text: str) -> dict:
    """Calculate a text readability score."""
    words = len(text.split())
    sentences = text.count('.') + text.count('!') + text.count('?')
    if sentences == 0:
        return {"score": 0, "level": "N/A"}
    avg_sentence_length = words / sentences
    # Simplified Flesch readability formula (sentence-length term only)
    score = 206.835 - 1.015 * avg_sentence_length
    return {
        "score": round(score, 2),
        "level": "Easy" if score > 70 else "Medium" if score > 50 else "Difficult"
    }

# Use the tool in an agent
editor = Agent(
    name="Editor",
    instructions="Optimize text quality and readability",
    tools=[calculate_readability]
)
```
### State Management

Agents in a workflow can share typed state:

```python
from openagents import Agent, State

# Define shared state
class ArticleState(State):
    topic: str
    outline: list = []
    draft: str = ""
    revisions: int = 0
    feedback: list = []

# Agents can read and write state
writer = Agent(
    name="Writer",
    state_schema=ArticleState,
    instructions="""
    1. Read the current state.topic and state.outline
    2. Write the draft and update state.draft
    3. If feedback is received, increment state.revisions
    """
)
```
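The shared-state pattern itself is framework-independent. A plain-Python sketch using a dataclass in place of `State` (the `ArticleState` semantics here are assumed, not taken from the framework's docs) looks like this; note that plain dataclasses need `field(default_factory=list)` for mutable defaults, whereas schema base classes in the style of `State` typically handle `= []` for you:

```python
# Dataclass stand-in for the shared ArticleState shown above.
from dataclasses import dataclass, field

@dataclass
class ArticleState:
    topic: str
    outline: list = field(default_factory=list)
    draft: str = ""
    revisions: int = 0
    feedback: list = field(default_factory=list)

state = ArticleState(topic="AI agent frameworks")

# A planner pass fills the outline; the writer turns it into a draft.
state.outline = ["Intro", "Comparison", "Conclusion"]
state.draft = f"# {state.topic}\n\n" + "\n".join(state.outline)

# A reviewer pass appends feedback; the writer increments revisions on rewrite.
state.feedback.append("Tighten the intro")
state.revisions += 1
print(state.revisions)  # 1
```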
### Streaming Output

```python
from openagents import Agent, Runner

agent = Agent(name="Assistant", instructions="Explain complex concepts step by step")

# Stream the response chunk by chunk
for chunk in Runner.run_stream(
    agent=agent,
    input="Explain the basic principles of quantum computing"
):
    print(chunk.delta, end="", flush=True)
```
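A common follow-up is accumulating the streamed deltas into the full response while still printing them live. The pattern can be exercised with a fake generator standing in for `Runner.run_stream` (the `Chunk` class here is a stand-in, not the framework's type):

```python
# Simulate a streaming response and accumulate deltas into the full text.
from dataclasses import dataclass

@dataclass
class Chunk:
    delta: str

def fake_stream(text, size=8):
    """Yield the text in small chunks, like a streaming model response."""
    for i in range(0, len(text), size):
        yield Chunk(delta=text[i:i + size])

parts = []
for chunk in fake_stream("Qubits exploit superposition and entanglement."):
    print(chunk.delta, end="", flush=True)  # live display
    parts.append(chunk.delta)               # keep for later
print()

response = "".join(parts)
```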
## Practical Example: Automated Technical Blog Generator

Here's a complete multi-agent content generation system:

```python
from typing import List

from openagents import Agent, Runner, tool

# Tool: get trending topics
@tool
def get_trending_topics(category: str) -> List[str]:
    """Get popular topics for a given category."""
    # A real implementation could pull from RSS feeds, GitHub Trending, etc.
    return ["AI Agent Frameworks", "RAG Optimization", "LLM Security Testing"]

# Tool: SEO analysis
@tool
def analyze_seo(content: str) -> dict:
    """Analyze how SEO-friendly the content is."""
    return {
        "keyword_density": 2.3,
        "readability_score": 68,
        "suggestions": ["Add H2 headings", "Add internal links"]
    }

# Define the agent team
topic_researcher = Agent(
    name="Topic Researcher",
    instructions="Discover and analyze trending technical topics",
    tools=[get_trending_topics]
)

outline_creator = Agent(
    name="Outline Planner",
    instructions="Create detailed writing outlines based on topics",
    handoffs=["content_writer"]
)

content_writer = Agent(
    name="Content Writer",
    instructions="Write technical articles of 1500+ words",
    handoffs=["seo_optimizer"]
)

seo_optimizer = Agent(
    name="SEO Optimizer",
    instructions="Optimize article SEO and provide revision suggestions",
    tools=[analyze_seo],
    handoffs=["content_writer"]  # Can send the article back for another pass
)

final_reviewer = Agent(
    name="Final Editor",
    instructions="Perform a final quality check and approve for publication",
)

# Build the workflow
blog_workflow = Runner(
    agents=[
        topic_researcher,
        outline_creator,
        content_writer,
        seo_optimizer,
        final_reviewer
    ],
    entry_point="topic_researcher"
)

# Execute
result = blog_workflow.run(
    input="Generate an AI technology blog post",
    max_turns=10  # Limit the maximum number of iterations
)
print(f"✅ Complete! {result.metadata['turns']} rounds of collaboration")
print(f"📄 Final word count: {len(result.output.split())}")
```
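The `analyze_seo` tool above returns canned numbers; the tutorial doesn't show how the real metrics are computed. As one illustration, keyword density (the percentage of words that exactly match a keyword) can be sketched in a few lines of plain Python, with the function name and exact-match definition being my own choices rather than the framework's:

```python
# Sketch of a keyword-density metric like the one analyze_seo reports.
import re

def keyword_density(content: str, keyword: str) -> float:
    """Percentage of words in `content` that exactly match `keyword`."""
    words = re.findall(r"[a-z0-9]+", content.lower())
    hits = sum(1 for w in words if w == keyword.lower())
    return round(100 * hits / max(len(words), 1), 2)

text = "Agents collaborate. Each agent sends messages to another agent."
print(keyword_density(text, "agent"))  # 2 of 9 words -> 22.22
```

A production version would usually stem or lemmatize first (so "agents" counts toward "agent") and flag densities outside a target band, e.g. 1-3%.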
## Comparison with Competing Frameworks

### OpenAgents vs LangGraph

**LangGraph Advantages:**

- Deep integration with the LangChain ecosystem
- Fine-grained control over complex state graphs
- Comprehensive enterprise-level support

**OpenAgents Advantages:**

- Simpler API, lower learning curve
- Message-driven model maps more naturally onto agent collaboration
- Built-in observability, no LangSmith required

### OpenAgents vs CrewAI

**CrewAI Advantages:**

- Mature role-playing paradigm
- Rich set of predefined role templates
- Extensive community examples

**OpenAgents Advantages:**

- More flexible agent communication patterns
- Supports dynamic handoffs (CrewAI requires predefined flows)
- Better performance (no redundant abstraction layers)

### OpenAgents vs AutoGen

**AutoGen Advantages:**

- Microsoft-backed, high stability
- Supports a code execution sandbox
- Multi-language agent support

**OpenAgents Advantages:**

- Simpler configuration (AutoGen requires significant boilerplate code)
- Better debugging experience
- More approachable documentation
## Production Deployment

### Docker Deployment

```dockerfile
FROM python:3.11-slim
WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt "openagents[full]"

COPY . .

# Pass OPENAI_API_KEY at runtime (e.g. docker run -e OPENAI_API_KEY=...)
# rather than baking it into the image.
ENV OPENAGENTS_LOG_LEVEL=info

CMD ["python", "main.py"]
```
### Monitoring and Logging

```python
import openagents
from openagents.tracing import TracingConfig

# Configure tracing and structured logging
openagents.configure(
    tracing=TracingConfig(
        enabled=True,
        export_to="otel",  # Export to OpenTelemetry
        sample_rate=1.0
    ),
    logging={
        "level": "INFO",
        "format": "json"
    }
)
```
## FAQ

**Q: Which models does OpenAgents support?**

A: All major model providers:

- OpenAI (GPT-4o, GPT-4 Turbo)
- Anthropic (Claude 3.5/3)
- Google (Gemini Pro)
- Ollama (local models)
- Custom API endpoints
**Q: How do I handle long contexts?**

A: OpenAgents has built-in context compression:

```python
agent = Agent(
    name="Assistant",
    context_window=128000,        # Set the context window size
    truncate_strategy="summary"   # Auto-summarize older messages when exceeded
)
```
**Q: Can it be used offline?**

A: Yes, when paired with Ollama or other local models:

```python
agent = Agent(
    name="Local Assistant",
    model="ollama/llama3.1:8b",
    base_url="http://localhost:11434"
)
```
## Summary
As an emerging AI agent framework in 2026, OpenAgents strikes a good balance between simplicity and flexibility. It's particularly recommended for:
**✅ Recommended Scenarios:**

- Rapid prototyping
- Small to medium multi-agent systems
- Projects requiring flexible agent communication
- Teams looking to reduce learning overhead

**❌ Not Recommended For:**

- Projects needing deep LangChain ecosystem integration
- Highly complex state-graph control
- Enterprise-level SLA support requirements
## Resource Links

**Next Steps:** Try building your first multi-agent workflow with OpenAgents and experience the power of message-driven architecture!