# Developing with CrewAI

CrewAI is a Python framework for building collaborative multi-agent systems, well suited to scenarios where several specialized roles must work together to complete a complex task. CloudBase provides the cloudbase-agent-crewai adapter, which lets CrewAI Agents connect seamlessly to the AG-UI protocol.
## Prerequisites

- Python 3.9+
- An active CloudBase environment
- A configured LLM (see the LLM Configuration Guide)
- An API key (see Get it here)

## Install Dependencies

```shell
pip install cloudbase-agent-crewai cloudbase-agent-server crewai crewai-tools
```
## Quick Start

### Basic Example

```python
from cloudbase_agent.crewai import CrewAIAgent
from cloudbase_agent.server import create_agent_server
from crewai import Agent, Task, Crew

# Create Agents
researcher = Agent(
    role="Researcher",
    goal="Conduct in-depth research on given topics and provide detailed analysis",
    backstory="You are an experienced researcher skilled at gathering and analyzing information.",
    verbose=True,
)

writer = Agent(
    role="Writer",
    goal="Transform research findings into easy-to-understand articles",
    backstory="You are a professional writer skilled at converting complex information into accessible content.",
    verbose=True,
)

# Create Tasks
research_task = Task(
    description="Research the latest developments and trends in {topic}",
    expected_output="A detailed research report",
    agent=researcher,
)

writing_task = Task(
    description="Write a popular science article based on the research report",
    expected_output="An article of about 500 words",
    agent=writer,
)

# Create Crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True,
)

# Create CloudBase Agent
agent = CrewAIAgent(
    name="ResearchCrew",
    description="Research and writing team",
    crew=crew,
)

# Create the HTTP service
app = create_agent_server(agent)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=3000)
```
### Agent with Tools

```python
from cloudbase_agent.crewai import CrewAIAgent
from cloudbase_agent.server import create_agent_server
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool, WebsiteSearchTool

# Create tools
search_tool = SerperDevTool()
website_tool = WebsiteSearchTool()

# Create an Agent with tools
researcher = Agent(
    role="Web Researcher",
    goal="Use search tools to gather the latest information",
    backstory="You are a researcher proficient in web searching.",
    tools=[search_tool, website_tool],
    verbose=True,
)

analyst = Agent(
    role="Data Analyst",
    goal="Analyze collected information and extract key insights",
    backstory="You are a data analysis expert skilled at extracting value from large amounts of information.",
    verbose=True,
)

# Create Tasks
search_task = Task(
    description="Search for the latest news and articles about {topic}",
    expected_output="A list of collected relevant information",
    agent=researcher,
)

analysis_task = Task(
    description="Analyze search results and extract key information and trends",
    expected_output="Analysis report",
    agent=analyst,
)

# Create Crew
crew = Crew(
    agents=[researcher, analyst],
    tasks=[search_task, analysis_task],
    verbose=True,
)

agent = CrewAIAgent(
    name="ResearchAnalysisCrew",
    description="Research and analysis team",
    crew=crew,
)

app = create_agent_server(agent)
```
## Core Concepts

### Agent

Agents are the core roles in CrewAI; each has specific responsibilities:

```python
from crewai import Agent

agent = Agent(
    role="Role name",              # the role this Agent plays
    goal="Goal description",       # the objective to achieve
    backstory="Background story",  # background that shapes the Agent's behavior and style
    tools=[],                      # list of available tools
    llm=None,                      # custom LLM (optional)
    verbose=True,                  # whether to output detailed logs
    allow_delegation=True,         # whether this Agent may delegate tasks to other Agents
    max_iter=15,                   # maximum iterations
)
```
### Task

Tasks define the specific work an Agent needs to complete:

```python
from crewai import Task

task = Task(
    description="Task description; supports {variable} placeholders",
    expected_output="Expected output format",
    agent=agent,            # the Agent responsible for execution
    context=[other_task],   # tasks this task depends on (optional)
    tools=[],               # task-specific tools (optional)
    async_execution=False,  # whether to execute asynchronously
)
```
### Crew

A Crew combines Agents and Tasks and defines how they collaborate:

```python
from crewai import Crew, Process

crew = Crew(
    agents=[agent1, agent2],
    tasks=[task1, task2],
    process=Process.sequential,  # execution mode: sequential or hierarchical
    verbose=True,
    manager_llm=None,            # manager LLM, used in hierarchical mode
)
```
## Advanced Usage

### Hierarchical Execution Mode

```python
from crewai import Crew, Process
from langchain_openai import ChatOpenAI
import os

# Create the manager LLM using CloudBase's built-in endpoint
manager_llm = ChatOpenAI(
    model="hunyuan-t1-latest",
    api_key=os.environ.get("TCB_API_KEY"),
    base_url=f"https://{os.environ.get('TCB_ENV_ID')}.api.tcloudbasegateway.com/v1/ai/hunyuan/v1",
)

# Create a hierarchical Crew
crew = Crew(
    agents=[researcher, writer, analyst],
    tasks=[research_task, writing_task, analysis_task],
    process=Process.hierarchical,
    manager_llm=manager_llm,
    verbose=True,
)
```
### Custom LLM

CloudBase has built-in support for multiple LLMs, all reachable through a unified HTTP endpoint:

```python
from crewai import Agent
from langchain_openai import ChatOpenAI
import os

env_id = os.environ.get("TCB_ENV_ID")
api_key = os.environ.get("TCB_API_KEY")

# Use a Tencent Hunyuan model
hunyuan_llm = ChatOpenAI(
    model="hunyuan-turbos-latest",  # options: hunyuan-turbos-latest, hunyuan-t1-latest, hunyuan-2.0-thinking-20251109, hunyuan-2.0-instruct-20251111
    api_key=api_key,
    base_url=f"https://{env_id}.api.tcloudbasegateway.com/v1/ai/hunyuan/v1",
)

# Use a DeepSeek model
deepseek_llm = ChatOpenAI(
    model="deepseek-r1-0528",  # options: deepseek-r1-0528, deepseek-v3-0324, deepseek-v3.2
    api_key=api_key,
    base_url=f"https://{env_id}.api.tcloudbasegateway.com/v1/ai/deepseek/v1",
)

agent = Agent(
    role="Assistant",
    goal="Help users complete tasks",
    backstory="You are an intelligent assistant.",
    llm=hunyuan_llm,
)
```

If you need to use an external model API, configure it yourself:

```python
from langchain_openai import ChatOpenAI
import os

# OpenAI (requires your own API key)
openai_llm = ChatOpenAI(
    model="gpt-4",
    api_key=os.environ.get("OPENAI_API_KEY"),
)
```
### Task Dependencies

```python
# Task 2 depends on Task 1's result
task1 = Task(
    description="Collect data",
    expected_output="Data list",
    agent=collector,
)

task2 = Task(
    description="Analyze data",
    expected_output="Analysis report",
    agent=analyst,
    context=[task1],  # task1's output is passed to task2 as context
)
```
### Callback Handling

```python
from cloudbase_agent.crewai import CrewAIAgent, CopilotKitState

def on_step_start(step_name: str, state: CopilotKitState):
    print(f"Starting step: {step_name}")

def on_step_end(step_name: str, result: str, state: CopilotKitState):
    print(f"Step completed: {step_name}, Result: {result}")

agent = CrewAIAgent(
    name="CallbackCrew",
    description="Team with callbacks",
    crew=crew,
    on_step_start=on_step_start,
    on_step_end=on_step_end,
)
```
### Streaming Output

CrewAI Agents support streaming output by default through `copilotkit_stream`:

```python
from cloudbase_agent.crewai import CrewAIAgent, copilotkit_stream

agent = CrewAIAgent(
    name="StreamCrew",
    description="Streaming output team",
    crew=crew,
)

# Streaming is handled automatically; clients receive execution progress in real time
```
## Common Tools

CrewAI provides a rich set of built-in tools:

```python
from crewai_tools import (
    SerperDevTool,           # Google search (via Serper)
    WebsiteSearchTool,       # website content search
    FileReadTool,            # file reading
    DirectoryReadTool,       # directory listing
    CodeInterpreterTool,     # code execution
    PDFSearchTool,           # PDF search
    YoutubeVideoSearchTool,  # YouTube video search
)
```
### Custom Tools

```python
from crewai_tools import BaseTool
from pydantic import BaseModel, Field

class WeatherInput(BaseModel):
    city: str = Field(description="City name")

class WeatherTool(BaseTool):
    name: str = "get_weather"
    description: str = "Get weather information for a specified city"
    args_schema: type[BaseModel] = WeatherInput

    def _run(self, city: str) -> str:
        # In a real project, call a weather API here
        return f"{city}: 25°C, Sunny"

weather_tool = WeatherTool()
```
## Best Practices

### 1. Role Design

- Give each Agent clear responsibility boundaries
- Use a detailed backstory to guide the Agent's behavior and style
- Avoid creating Agents with overlapping responsibilities

### 2. Task Decomposition

- Break complex tasks into multiple smaller tasks
- Set reasonable task dependencies
- Use `context` to pass information between tasks

### 3. Resource Management

```python
# Set a maximum iteration count to avoid infinite loops
agent = Agent(
    role="Assistant",
    goal="Complete tasks",
    backstory="You are an assistant.",
    max_iter=10,  # at most 10 iterations
)
```
### 4. Error Handling

```python
from cloudbase_agent.crewai import CrewAIAgent

def on_error(error: Exception, state):
    print(f"Execution error: {error}")
    return {"error": str(error)}

agent = CrewAIAgent(
    name="ErrorHandlingCrew",
    description="Team with error handling",
    crew=crew,
    on_error=on_error,
)
```
## Deployment

### Cloud Function Deployment

Since CrewAI is a Python framework, CloudRun deployment is recommended. If you need to use Cloud Functions, use the Python runtime.

### CloudRun Deployment

Refer to the CloudRun Deployment guide.

Dockerfile example:

```dockerfile
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt ./
RUN pip install -r requirements.txt --no-cache-dir

COPY . .

ENV PORT=3000
EXPOSE 3000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "3000"]
```
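Before deploying, you can verify the image locally. This sketch assumes Docker is installed, that your entry file is `main.py` exposing `app` (as in the Quick Start example), and uses placeholder values for the environment variables:

```shell
# Build the image and run it on port 3000;
# TCB_ENV_ID / TCB_API_KEY are the placeholders used throughout this guide
docker build -t crewai-agent .
docker run --rm -p 3000:3000 \
  -e TCB_ENV_ID=your-env-id \
  -e TCB_API_KEY=your-api-key \
  crewai-agent
```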
`requirements.txt`:

```text
cloudbase-agent-crewai
cloudbase-agent-server
crewai
crewai-tools
uvicorn
```