
A Coding Guide to Designing and Orchestrating Advanced ReAct-Based Multi-Agent Workflows with AgentScope and OpenAI

In this tutorial, we build an advanced multi-agent incident response system using AgentScope. We organize multiple ReAct agents, each with a clearly defined role such as routing, triage, analysis, documentation, and review, and connect them through structured routing and a shared messaging hub. By combining OpenAI models, a lightweight toolkit, and a small internal runbook, we show how real-world agent workflows can be designed in pure Python without heavy infrastructure or brittle glue code. Check out the FULL CODES here.

!pip -q install "agentscope>=0.1.5" pydantic nest_asyncio


import os, json, re
from getpass import getpass
from typing import Literal
from pydantic import BaseModel, Field
import nest_asyncio
nest_asyncio.apply()


from agentscope.agent import ReActAgent
from agentscope.message import Msg, TextBlock
from agentscope.model import OpenAIChatModel
from agentscope.formatter import OpenAIChatFormatter
from agentscope.memory import InMemoryMemory
from agentscope.tool import Toolkit, ToolResponse, execute_python_code
from agentscope.pipeline import MsgHub, sequential_pipeline


if not os.environ.get("OPENAI_API_KEY"):
   os.environ["OPENAI_API_KEY"] = getpass("Enter OPENAI_API_KEY (hidden): ")


OPENAI_MODEL = os.environ.get("OPENAI_MODEL", "gpt-4o-mini")

We set up the execution environment and install the dependencies needed for the tutorial to run reliably on Google Colab. We securely load the OpenAI API key and import the core AgentScope components that will be shared across all agents. Check out the FULL CODES here.
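If you want to experiment with a different model, you can override the OPENAI_MODEL environment variable before the agents below are constructed. This is an optional sketch; "gpt-4o" is only an illustrative value, and any OpenAI chat-completions model name works here.

# Optional: point the tutorial at a different OpenAI chat model.
# "gpt-4o" is an illustrative choice, not a requirement of the tutorial.
os.environ["OPENAI_MODEL"] = "gpt-4o"
OPENAI_MODEL = os.environ["OPENAI_MODEL"]
print("Using model:", OPENAI_MODEL)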

RUNBOOK = [
   {"id": "P0", "title": "Severity Policy", "text": "P0 critical outage, P1 major degradation, P2 minor issue"},
   {"id": "IR1", "title": "Incident Triage Checklist", "text": "Assess blast radius, timeline, deployments, errors, mitigation"},
   {"id": "SEC7", "title": "Phishing Escalation", "text": "Disable account, reset sessions, block sender, preserve evidence"},
]


def _score(q, d):
   # Simple keyword-overlap score: the fraction of document words that also appear in the query.
   q = set(re.findall(r"[a-z0-9]+", q.lower()))
   d = re.findall(r"[a-z0-9]+", d.lower())
   return sum(1 for w in d if w in q) / max(1, len(d))


async def search_runbook(query: str, top_k: int = 2) -> ToolResponse:
   # Rank runbook entries by keyword overlap with the query and return the top matches.
   ranked = sorted(RUNBOOK, key=lambda r: _score(query, r["title"] + " " + r["text"]), reverse=True)[: max(1, int(top_k))]
   text = "\n\n".join(f"[{r['id']}] {r['title']}\n{r['text']}" for r in ranked)
   return ToolResponse(content=[TextBlock(type="text", text=text)])


toolkit = Toolkit()
toolkit.register_tool_function(search_runbook)
toolkit.register_tool_function(execute_python_code)

We define a lightweight internal runbook and implement a keyword-overlap search tool on top of it. We register this function along with a Python execution tool, so agents can retrieve policy information or aggregate results dynamically. This shows how we equip agents with capabilities beyond pure language reasoning. Check out the FULL CODES here.
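As an optional sanity check of the keyword-overlap scorer (a quick sketch that assumes the definitions above are already in scope), we can score each runbook entry against a phishing-related query; the SEC7 entry should receive the highest score.

# Score every runbook entry against a sample query (illustrative only).
# The phishing entry (SEC7) should rank highest for this query.
for r in RUNBOOK:
    print(r["id"], round(_score("phishing email escalation", r["title"] + " " + r["text"]), 3))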

def make_model():
   return OpenAIChatModel(
       model_name=OPENAI_MODEL,
       api_key=os.environ["OPENAI_API_KEY"],
       generate_kwargs={"temperature": 0.2},
   )


class Route(BaseModel):
   lane: Literal["triage", "analysis", "report", "unknown"] = Field(...)
   goal: str = Field(...)


router = ReActAgent(
   name="Router",
   sys_prompt="Route the request to triage, analysis, or report and output structured JSON only.",
   model=make_model(),
   formatter=OpenAIChatFormatter(),
   memory=InMemoryMemory(),
)


triager = ReActAgent(
   name="Triager",
   sys_prompt="Classify severity and immediate actions using runbook search when useful.",
   model=make_model(),
   formatter=OpenAIChatFormatter(),
   memory=InMemoryMemory(),
   toolkit=toolkit,
)


analyst = ReActAgent(
   name="Analyst",
   sys_prompt="Analyze logs and compute summaries using python tool when helpful.",
   model=make_model(),
   formatter=OpenAIChatFormatter(),
   memory=InMemoryMemory(),
   toolkit=toolkit,
)


writer = ReActAgent(
   name="Writer",
   sys_prompt="Write a concise incident report with clear structure.",
   model=make_model(),
   formatter=OpenAIChatFormatter(),
   memory=InMemoryMemory(),
)


reviewer = ReActAgent(
   name="Reviewer",
   sys_prompt="Critique and improve the report with concrete fixes.",
   model=make_model(),
   formatter=OpenAIChatFormatter(),
   memory=InMemoryMemory(),
)

We create several specialized ReAct agents and a router that produces a structured decision about how each user request should be handled. We give the triage, analysis, documentation, and review agents clear responsibilities, ensuring separation of concerns. Check out the FULL CODES here.
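Before wiring everything into a pipeline, it helps to call the Router on its own and inspect the structured fields it returns. The snippet below is an illustrative sketch that reuses the router and the Route schema defined above; the example request is made up, and the exact metadata contents will vary from run to run.

# Ask the Router to classify a sample request and return structured output.
# The metadata should expose the "lane" and "goal" fields from the Route schema.
route_probe = await router(
    Msg("user", "Write a post-incident report for last night's checkout outage.", "user"),
    structured_model=Route,
)
print(route_probe.metadata)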

LOGS = """timestamp,service,status,latency_ms,error
2025-12-18T12:00:00Z,checkout,200,180,false
2025-12-18T12:00:05Z,checkout,500,900,true
2025-12-18T12:00:10Z,auth,200,120,false
2025-12-18T12:00:12Z,checkout,502,1100,true
2025-12-18T12:00:20Z,search,200,140,false
2025-12-18T12:00:25Z,checkout,500,950,true
"""


def msg_text(m: Msg) -> str:
   # Flatten a Msg's text content into a single plain string.
   blocks = m.get_content_blocks("text")
   if blocks is None:
       return ""
   if isinstance(blocks, str):
       return blocks
   if isinstance(blocks, list):
       return "\n".join(str(x) for x in blocks)
   return str(blocks)

We provide sample log data and a helper function that normalizes an agent's output to plain text. This ensures downstream agents can safely reuse and refine previous responses without format issues, keeping communication between agents robust and predictable. Check out the FULL CODES here.
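As a small illustration (the message content here is made up), msg_text flattens whatever a Msg carries into a single string so later prompts can embed it directly; the exact formatting of the output depends on how AgentScope represents content blocks internally.

# Build a throwaway message and flatten it to plain text (illustrative only).
demo_msg = Msg("Writer", "Draft: checkout returned repeated 5xx errors at 12:00 UTC.", "assistant")
print(msg_text(demo_msg))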

async def run_demo(user_request: str):
   # Ask the Router for a structured routing decision (lane + goal).
   route_msg = await router(Msg("user", user_request, "user"), structured_model=Route)
   lane = (route_msg.metadata or {}).get("lane", "unknown")

   if lane == "triage":
       first = await triager(Msg("user", user_request, "user"))
   elif lane == "analysis":
       first = await analyst(Msg("user", user_request + "\n\nLogs:\n" + LOGS, "user"))
   elif lane == "report":
       draft = await writer(Msg("user", user_request, "user"))
       first = await reviewer(Msg("user", "Review and improve:\n\n" + msg_text(draft), "user"))
   else:
       first = Msg("system", "Could not route request.", "system")

   # Let the agents see each other's messages and refine the answer in turn.
   async with MsgHub(
       participants=[triager, analyst, writer, reviewer],
       announcement=Msg("Host", "Refine the final answer collaboratively.", "assistant"),
   ):
       await sequential_pipeline([triager, analyst, writer, reviewer])

   return {"route": route_msg.metadata, "initial_output": msg_text(first)}


result = await run_demo(
   "We see repeated 5xx errors in checkout. Classify severity, analyze logs, and produce an incident report."
)
print(json.dumps(result, indent=2))

We orchestrate the full workflow by routing the incoming request, dispatching it to the appropriate agent, and then running a collaborative refinement loop through a message hub. We coordinate the agents in sequence to polish the final output before returning it to the user, combining all of the preceding components into a unified, end-to-end agent pipeline.
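The same entry point handles other request types as well. For example, a request phrased purely as a documentation task should be routed to the report lane, where the Writer drafts and the Reviewer critiques. The call below is an illustrative extra run that assumes the definitions above; the request text is made up.

# A second illustrative run that should exercise the report lane.
report_result = await run_demo(
    "Write a customer-facing incident report for yesterday's checkout degradation."
)
print(report_result["route"])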

In conclusion, we have shown how AgentScope enables us to design robust, composable, and collaborative agent systems that go beyond a single monolithic agent. We routed tasks dynamically, invoked tools only when needed, and refined results through multi-agent communication, all within a clean and reproducible Colab setup. This pattern shows how we can scale from simple agent experiments to production-style reasoning pipelines while maintaining clarity, control, and extensibility in our agentic AI applications.




