How to Build a Powerful Multi-Agent Pipeline Using CAMEL for Scheduling, Web-Enhanced Thinking, Decoding, and Persistent Memory

In this tutorial, we build a workflow for advanced multi-agent research using the CAMEL framework. We assemble an integrated team of agents, Planner, Researcher, Writer, Critic, and Finalizer, who collectively transform a high-level topic into a polished, evidence-based brief. We securely integrate the OpenAI API, orchestrate agent interactions programmatically, and add lightweight persistent memory to store information across runs. By organizing the system with clear roles, JSON-based contracts, and iterative refinement, we show how CAMEL can be used to build reliable, controllable agent pipelines. Check out the full code here.
!pip -q install "camel-ai[all]" "python-dotenv" "rich"
import os
import json
import time
from typing import Dict, Any
from rich import print as rprint
def load_openai_key() -> str:
    key = None
    try:
        from google.colab import userdata
        key = userdata.get("OPENAI_API_KEY")
    except Exception:
        key = None
    if not key:
        import getpass
        key = getpass.getpass("Enter OPENAI_API_KEY (hidden): ").strip()
    if not key:
        raise ValueError("OPENAI_API_KEY is required.")
    return key
os.environ["OPENAI_API_KEY"] = load_openai_key()
We set up the environment and securely load the OpenAI API key, using Colab secrets when available or a hidden interactive prompt otherwise. We install the dependencies and configure authentication so the workflow runs without exposing the key in plaintext.
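The same fallback pattern works outside Colab as well. The sketch below, a minimal standalone variant of the loader above (the function name `load_api_key` is our own), checks the environment first and only prompts when the key is missing:

```python
import os
import getpass

def load_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Return the API key from the environment, or prompt for it once."""
    key = os.environ.get(var_name)
    if not key:
        # getpass hides the typed key from notebook or terminal output
        key = getpass.getpass(f"Enter {var_name} (hidden): ").strip()
    if not key:
        raise ValueError(f"{var_name} is required.")
    os.environ[var_name] = key  # make it visible to downstream libraries
    return key
```

Checking the environment first lets the same script run unattended in CI, where an interactive prompt would hang.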
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType
from camel.agents import ChatAgent
from camel.toolkits import SearchToolkit
MODEL_CFG = {"temperature": 0.2}
model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O,
    model_config_dict=MODEL_CFG,
)
We initialize the CAMEL model configuration and create a shared language model through the ModelFactory abstraction. Standardizing the model settings across agents keeps reasoning consistent and reproducible throughout the pipeline.
MEM_PATH = "camel_memory.json"
def mem_load() -> Dict[str, Any]:
    if not os.path.exists(MEM_PATH):
        return {"runs": []}
    with open(MEM_PATH, "r", encoding="utf-8") as f:
        return json.load(f)

def mem_save(mem: Dict[str, Any]) -> None:
    with open(MEM_PATH, "w", encoding="utf-8") as f:
        json.dump(mem, f, ensure_ascii=False, indent=2)

def mem_add_run(topic: str, artifacts: Dict[str, str]) -> None:
    mem = mem_load()
    mem["runs"].append({"ts": int(time.time()), "topic": topic, "artifacts": artifacts})
    mem_save(mem)

def mem_last_summaries(n: int = 3) -> str:
    mem = mem_load()
    runs = mem.get("runs", [])[-n:]
    if not runs:
        return "No past runs."
    return "\n".join([f"{i+1}. topic={r['topic']} | ts={r['ts']}" for i, r in enumerate(runs)])
We add a lightweight persistent memory layer backed by a JSON file. We store the artifacts from each run and retrieve snapshots of previous runs, which gives the pipeline continuity and a history of past work across sessions.
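To see the memory layer in isolation, the self-contained sketch below reproduces the load/save/append cycle against a temporary file (the demo path and topic are ours, not part of the tutorial's pipeline):

```python
import json
import os
import tempfile
import time

# Write to a throwaway path so the demo does not touch the real memory file
MEM_PATH = os.path.join(tempfile.gettempdir(), "camel_memory_demo.json")

def mem_load() -> dict:
    """Return the stored memory, or an empty structure on first run."""
    if not os.path.exists(MEM_PATH):
        return {"runs": []}
    with open(MEM_PATH, "r", encoding="utf-8") as f:
        return json.load(f)

def mem_add_run(topic: str, artifacts: dict) -> None:
    """Append one run's artifacts and persist the whole structure."""
    mem = mem_load()
    mem["runs"].append({"ts": int(time.time()), "topic": topic, "artifacts": artifacts})
    with open(MEM_PATH, "w", encoding="utf-8") as f:
        json.dump(mem, f, ensure_ascii=False, indent=2)

mem_add_run("demo topic", {"final_md": "# Brief"})
print(mem_load()["runs"][-1]["topic"])  # the run survives a fresh load
```

Because every write goes through `json.dump` on the full structure, the file stays human-readable and can be inspected or edited between runs.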
def make_agent(role: str, goal: str, extra_rules: str = "") -> ChatAgent:
    system = (
        f"You are {role}.\n"
        f"Goal: {goal}\n"
        f"{extra_rules}\n"
        "Output must be crisp, structured, and directly usable by the next agent."
    )
    return ChatAgent(model=model, system_message=system)
planner = make_agent(
    "Planner",
    "Create a compact plan and research questions with acceptance criteria.",
    "Return JSON with keys: plan, questions, acceptance_criteria."
)
researcher = make_agent(
    "Researcher",
    "Answer questions using web search results.",
    "Return JSON with keys: findings, sources, open_questions."
)
writer = make_agent(
    "Writer",
    "Draft a structured research brief.",
    "Return Markdown only."
)
critic = make_agent(
    "Critic",
    "Identify weaknesses and suggest fixes.",
    "Return JSON with keys: issues, fixes, rewrite_instructions."
)
finalizer = make_agent(
    "Finalizer",
    "Produce the final improved brief.",
    "Return Markdown only."
)
search_tool = SearchToolkit().search_duckduckgo
researcher = ChatAgent(
    model=model,
    system_message=researcher.system_message,
    tools=[search_tool],
)
We define the key agent roles and their responsibilities within the workflow. We create specialized agents with clear goals and output contracts, and we equip the Researcher with a web search tool so it can find evidence-based answers.
def step_json(agent: ChatAgent, prompt: str) -> Dict[str, Any]:
    res = agent.step(prompt)
    txt = res.msgs[0].content.strip()
    try:
        return json.loads(txt)
    except Exception:
        return {"raw": txt}

def step_text(agent: ChatAgent, prompt: str) -> str:
    res = agent.step(prompt)
    return res.msgs[0].content
We wrap agent interaction in two helpers that request either structured JSON or free-text output. Centralizing the parsing and the fallback logic in one place simplifies orchestration and makes the pipeline more robust to formatting drift.
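Models sometimes wrap JSON in a Markdown code fence, which strict `json.loads` rejects. The sketch below, a slightly more forgiving variant of the same fallback pattern (the helper name `parse_json_or_raw` is ours), strips a fence before parsing and otherwise degrades to the `{"raw": ...}` shape used above:

```python
import json

def parse_json_or_raw(txt: str) -> dict:
    """Try strict JSON; strip a Markdown code fence if present; else wrap raw text."""
    txt = txt.strip()
    if txt.startswith("```"):
        # drop the opening fence line (e.g. ```json) and the closing ``` line
        lines = txt.splitlines()
        txt = "\n".join(lines[1:-1]) if len(lines) > 2 else ""
    try:
        return json.loads(txt)
    except (json.JSONDecodeError, ValueError):
        return {"raw": txt}

print(parse_json_or_raw('{"plan": "x"}'))           # parsed dict
print(parse_json_or_raw('```json\n{"a": 1}\n```'))  # fence stripped, then parsed
print(parse_json_or_raw("not json"))                # falls back to {"raw": ...}
```

Returning a dict in every case means downstream agents always receive the same type, so the pipeline never crashes on a formatting slip.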
def run_workflow(topic: str) -> Dict[str, str]:
    rprint(mem_last_summaries(3))
    plan = step_json(
        planner,
        f"Topic: {topic}\nCreate a tight plan and research questions."
    )
    research = step_json(
        researcher,
        f"Research the topic using web search.\n{json.dumps(plan)}"
    )
    draft = step_text(
        writer,
        f"Write a research brief using:\n{json.dumps(research)}"
    )
    critique = step_json(
        critic,
        f"Critique the draft:\n{draft}"
    )
    final = step_text(
        finalizer,
        f"Rewrite using critique:\n{json.dumps(critique)}\nDraft:\n{draft}"
    )
    artifacts = {
        "plan_json": json.dumps(plan, indent=2),
        "research_json": json.dumps(research, indent=2),
        "draft_md": draft,
        "critique_json": json.dumps(critique, indent=2),
        "final_md": final,
    }
    mem_add_run(topic, artifacts)
    return artifacts
TOPIC = "Agentic multi-agent research workflow with quality control"
artifacts = run_workflow(TOPIC)
print(artifacts["final_md"])
We orchestrate a complete multi-agent workflow from planning to finalization. We pass artifacts sequentially between agents, apply critique-driven refinement, persist the results to memory, and produce a finished research brief that is ready for downstream use.
In conclusion, we implemented a practical CAMEL-based multi-agent system that demonstrates a realistic research workflow. We have shown how clearly defined agent roles, tool-augmented reasoning, and critique-driven refinement lead to higher-quality results while reducing factual errors and structural weaknesses. By persisting artifacts for reuse across runs, we have also built a foundation for scalability. This approach lets us move beyond single interactions toward robust agent systems that can be configured for research, analysis, reporting, and decision support at scale.
Michal Sutter is a data science expert with a Master of Science in Data Science from the University of Padova. With a strong foundation in statistical analysis, machine learning, and data engineering, Michal excels at turning complex data sets into actionable insights.



