Tracing OpenAI Agent Responses Using MLflow

MLflow is an open-source platform for managing and tracking machine learning experiments. When used with the OpenAI Agents SDK, MLflow automatically:
- Logs all agent interactions and API calls
- Captures tool usage, input/output messages, and intermediate decisions
- Tracks runs for debugging, analysis, and reproducibility
This is especially useful when building multi-agent systems where different agents collaborate or call different functions.
In this tutorial, we will walk through two key examples: a simple handoff between agents, and the use of agent guardrails, all while tracing their behavior with MLflow.
Setting up the dependencies

Installing libraries

pip install openai-agents mlflow pydantic python-dotenv
OpenAI API key

To get an OpenAI API key, visit the OpenAI platform and generate a new key. If you are a new user, you may need to add billing details and make a minimum payment of $5 to activate API access.

Once the key is created, create a .env file and add the following:
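The variable name below is what `load_dotenv()` in the scripts expects; the value is a placeholder for your own key:

```
OPENAI_API_KEY=<your_api_key_here>
```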
Multi-agent demo (multi_agent_demo.py)
In this script (multi_agent_demo.py), we create a simple multi-agent workflow using the OpenAI Agents SDK, designed to route user questions to either a coding or a cooking agent. Calling mlflow.openai.autolog() enables automatic tracing and logs every agent interaction with the OpenAI API, including inputs, outputs, and agent handoffs, making it easy to monitor and debug the system. MLflow is configured to use a local file-based tracking URI (./mlruns) and records all runs under the experiment name "Agent-Coding-Cooking".
```python
import mlflow, asyncio
import os

from agents import Agent, Runner
from dotenv import load_dotenv

load_dotenv()

mlflow.openai.autolog()  # Auto-trace every OpenAI call
mlflow.set_tracking_uri("./mlruns")
mlflow.set_experiment("Agent-Coding-Cooking")

coding_agent = Agent(name="Coding agent",
                     instructions="You only answer coding questions.")

cooking_agent = Agent(name="Cooking agent",
                      instructions="You only answer cooking questions.")

triage_agent = Agent(
    name="Triage agent",
    instructions="If the request is about code, handoff to coding_agent; "
                 "if about cooking, handoff to cooking_agent.",
    handoffs=[coding_agent, cooking_agent],
)

async def main():
    res = await Runner.run(triage_agent,
                           input="How do I boil pasta al dente?")
    print(res.final_output)

if __name__ == "__main__":
    asyncio.run(main())
```
MLflow UI

To open the MLflow UI and inspect all agent interactions, run the following command in a new terminal:
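The standard MLflow CLI command to launch the tracking UI is:

```shell
mlflow ui
```

Run it from the directory containing ./mlruns so the UI picks up the local tracking store; pass --port to serve on a different port.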
This starts a local MLflow tracking server and prints the URL and port where the UI is available, typically http://127.0.0.1:5000.
In the Traces tab we can inspect the entire flow: from the initial user input, to the triage agent handing the request off to the appropriate specialist, and finally the answer produced by that agent. This end-to-end trace gives valuable insight into decisions, errors, and outputs, helping you debug and improve your workflow.
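For intuition, the handoff decision the triage agent makes can be sketched framework-free. The keyword sets and the `triage` function below are hypothetical simplifications of the semantic routing the LLM performs from its instructions:

```python
# Framework-free sketch of triage/handoff: route a request to a specialist
# agent based on its topic. The keyword sets are hypothetical stand-ins for
# the LLM's semantic understanding of the request.
CODING_KEYWORDS = {"python", "bug", "function", "compile", "code"}
COOKING_KEYWORDS = {"boil", "pasta", "recipe", "bake", "oven"}

def triage(request: str) -> str:
    """Return the name of the agent that should handle the request."""
    words = set(request.lower().replace("?", "").split())
    if words & CODING_KEYWORDS:
        return "Coding agent"
    if words & COOKING_KEYWORDS:
        return "Cooking agent"
    return "Triage agent"  # no handoff: answer directly

print(triage("How do I boil pasta al dente?"))  # -> Cooking agent
```

In the real SDK the routing is done by the model itself, and the handoff appears in the MLflow trace as a span from the triage agent to the chosen specialist.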
Tracing guardrails (guardrails.py)
In this example, we build a customer support agent protected by an input guardrail using the OpenAI Agents SDK, traced with MLflow. The agent is designed to help users with general questions but is prohibited from answering medical-related ones. A dedicated guardrail agent inspects incoming inputs, and if a medical question is detected, it blocks the request. MLflow captures the entire flow, including guardrail activation, the guardrail's reasoning, and the agent's response, providing full traceability and insight into the safety logic.
```python
import mlflow, asyncio

from pydantic import BaseModel
from agents import (
    Agent, Runner,
    GuardrailFunctionOutput, InputGuardrailTripwireTriggered,
    input_guardrail, RunContextWrapper)
from dotenv import load_dotenv

load_dotenv()

mlflow.openai.autolog()
mlflow.set_tracking_uri("./mlruns")
mlflow.set_experiment("Agent-Guardrails")

class MedicalSymptoms(BaseModel):
    medical_symptoms: bool
    reasoning: str

guardrail_agent = Agent(
    name="Guardrail check",
    instructions="Check if the user is asking you for medical symptoms.",
    output_type=MedicalSymptoms,
)

@input_guardrail
async def medical_guardrail(
    ctx: RunContextWrapper[None], agent: Agent, input
) -> GuardrailFunctionOutput:
    result = await Runner.run(guardrail_agent, input, context=ctx.context)
    return GuardrailFunctionOutput(
        output_info=result.final_output,
        tripwire_triggered=result.final_output.medical_symptoms,
    )

agent = Agent(
    name="Customer support agent",
    instructions="You are a customer support agent. You help customers with their questions.",
    input_guardrails=[medical_guardrail],
)

async def main():
    try:
        await Runner.run(agent, "Should I take aspirin if I'm having a headache?")
        print("Guardrail didn't trip - this is unexpected")
    except InputGuardrailTripwireTriggered:
        print("Medical guardrail tripped")

if __name__ == "__main__":
    asyncio.run(main())
```
This script defines a customer support agent protected by a guardrail that detects medical-related questions. A separate guardrail_agent evaluates whether the user's input contains a request for medical advice. If it does, the guardrail trips and prevents the main agent from responding. The whole process, including the guardrail's outcome, is automatically logged and traced with MLflow.
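The tripwire mechanism itself can be illustrated without the SDK. The `MedicalTripwireTriggered` exception and the keyword check below are hypothetical simplifications of what `InputGuardrailTripwireTriggered` and the guardrail agent's LLM classification do:

```python
class MedicalTripwireTriggered(Exception):
    """Hypothetical stand-in for InputGuardrailTripwireTriggered."""

def looks_medical(user_input: str) -> bool:
    # Simplified stand-in for the guardrail agent's LLM-based classification.
    medical_terms = {"aspirin", "headache", "symptom", "dosage", "fever"}
    return any(term in user_input.lower() for term in medical_terms)

def run_with_guardrail(user_input: str) -> str:
    # The guardrail runs before the main agent; if it trips, the main
    # agent never sees the input.
    if looks_medical(user_input):
        raise MedicalTripwireTriggered(user_input)
    return "Support agent answer for: " + user_input

try:
    run_with_guardrail("Should I take aspirin if I'm having a headache?")
except MedicalTripwireTriggered:
    print("Medical guardrail tripped")  # -> Medical guardrail tripped
```

The design point is the same as in the SDK version: blocking happens by raising an exception before the main agent runs, which is why the caller wraps `Runner.run` in a try/except.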
MLflow UI

To open the MLflow UI and inspect the guardrail traces, run the following command in a new terminal:
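As before, the tracking UI is launched with the standard MLflow CLI command, run from the directory containing ./mlruns:

```shell
mlflow ui
```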
In this example, we asked the agent, "Should I take aspirin if I'm having a headache?", which triggered the guardrail. In the MLflow UI, we can clearly see that the input was flagged, along with the reasoning the guardrail agent gave for blocking the request.

I am a Civil Engineering graduate (2022) from Jamia Millia Islamia, New Delhi, and I have a keen interest in data science, especially neural networks and their application in various areas.



