A Step-by-Step Guide to Building an Iterative AI Workflow Agent Using LangGraph and Gemini

In this tutorial, we show how to build a multi-step, intelligent query-handling agent using LangGraph and Gemini 1.5 Flash. The core idea is to structure AI reasoning as a stateful workflow, where an incoming query is passed through a series of purposeful nodes: routing, analysis, research, response generation, and validation. Each node operates as a functional block with a well-defined role, making the agent not just reactive but analytical. Using LangGraph's StateGraph, we orchestrate these nodes to create a looping system that can analyze and improve its output until the response is validated as complete or a maximum iteration threshold is reached.
!pip install langgraph langchain-google-genai python-dotenv
First, the command !pip install langgraph langchain-google-genai python-dotenv installs the three Python packages essential to building this agent. langgraph enables graph-based orchestration of AI agents, langchain-google-genai provides integration with Google's Gemini models, and python-dotenv allows API keys to be loaded safely from .env files.
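To illustrate what python-dotenv does, here is a minimal stdlib-only sketch of its core behavior: load_dotenv() reads KEY=VALUE pairs from a .env file into the process environment. The load_env_file helper and the DEMO_GOOGLE_API_KEY variable below are hypothetical stand-ins for demonstration, not part of the tutorial's code; the real library also handles quoting, comments, and interpolation.

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Simplified stand-in for dotenv.load_dotenv: parse KEY=VALUE lines."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks and comments; split on the first '=' only.
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                # Like load_dotenv's default, do not override existing variables.
                os.environ.setdefault(key.strip(), value.strip())

# Demo: write a throwaway .env file, then load it.
with open(".env", "w") as f:
    f.write("DEMO_GOOGLE_API_KEY=demo-key-123\n")
load_env_file()
print(os.environ["DEMO_GOOGLE_API_KEY"])  # -> demo-key-123
```

In practice you would simply call load_dotenv() from the real package and keep your actual key out of source control.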
import os
from typing import Dict, Any, List
from dataclasses import dataclass
from langgraph.graph import Graph, StateGraph, END
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.schema import HumanMessage, SystemMessage
import json
os.environ["GOOGLE_API_KEY"] = "Use Your API Key Here"
We import the essential modules and libraries for building the agent, including ChatGoogleGenerativeAI for communicating with Gemini models and StateGraph for managing the workflow state. The line os.environ["GOOGLE_API_KEY"] = "Use Your API Key Here" sets the API key as an environment variable, allowing the Gemini model to authenticate and generate responses.
@dataclass
class AgentState:
    """State shared across all nodes in the graph"""
    query: str = ""
    context: str = ""
    analysis: str = ""
    response: str = ""
    next_action: str = ""
    iteration: int = 0
    max_iterations: int = 3
This AgentState dataclass defines the shared state that persists across different nodes in a LangGraph workflow. It tracks key fields, including the user's query, retrieved context, any analysis performed, the generated response, and the recommended next action. It also includes an iteration counter and a max_iterations limit to control how many times the workflow can loop, enabling iterative reasoning or decision-making by the agent.
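To make the update semantics concrete: in LangGraph, each node returns a partial dict of fields to change, and the framework merges that dict into the state while leaving other fields untouched. A rough stdlib-only sketch of that merge step, using dataclasses.replace as a stand-in for LangGraph's internal update logic:

```python
from dataclasses import dataclass, replace

@dataclass
class AgentState:
    query: str = ""
    context: str = ""
    analysis: str = ""
    response: str = ""
    next_action: str = ""
    iteration: int = 0
    max_iterations: int = 3

state = AgentState(query="Explain quantum computing")

# A node's return value is a partial update, not a full state.
node_update = {"context": "factual question", "iteration": state.iteration + 1}
state = replace(state, **node_update)  # merge update, keep untouched fields

print(state.iteration)    # -> 1
print(state.query)        # unchanged: "Explain quantum computing"
```

This is why the router node below can return only "context" and "iteration" without wiping out the query.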
class GraphAIAgent:
    def __init__(self, api_key: str = None):
        if api_key:
            os.environ["GOOGLE_API_KEY"] = api_key
        self.llm = ChatGoogleGenerativeAI(
            model="gemini-1.5-flash",
            temperature=0.7,
            convert_system_message_to_human=True
        )
        self.analyzer = ChatGoogleGenerativeAI(
            model="gemini-1.5-flash",
            temperature=0.3,
            convert_system_message_to_human=True
        )
        self.graph = self._build_graph()

    def _build_graph(self) -> StateGraph:
        """Build the LangGraph workflow"""
        workflow = StateGraph(AgentState)
        workflow.add_node("router", self._router_node)
        workflow.add_node("analyzer", self._analyzer_node)
        workflow.add_node("researcher", self._researcher_node)
        workflow.add_node("responder", self._responder_node)
        workflow.add_node("validator", self._validator_node)
        workflow.set_entry_point("router")
        workflow.add_edge("router", "analyzer")
        workflow.add_conditional_edges(
            "analyzer",
            self._decide_next_step,
            {
                "research": "researcher",
                "respond": "responder"
            }
        )
        workflow.add_edge("researcher", "responder")
        workflow.add_edge("responder", "validator")
        workflow.add_conditional_edges(
            "validator",
            self._should_continue,
            {
                "continue": "analyzer",
                "end": END
            }
        )
        return workflow.compile()

    def _router_node(self, state: AgentState) -> Dict[str, Any]:
        """Route and categorize the incoming query"""
        system_msg = """You are a query router. Analyze the user's query and provide context.
        Determine if this is a factual question, creative request, problem-solving task, or analysis."""
        messages = [
            SystemMessage(content=system_msg),
            HumanMessage(content=f"Query: {state.query}")
        ]
        response = self.llm.invoke(messages)
        return {
            "context": response.content,
            "iteration": state.iteration + 1
        }

    def _analyzer_node(self, state: AgentState) -> Dict[str, Any]:
        """Analyze the query and determine the approach"""
        system_msg = """Analyze the query and context. Determine if additional research is needed
        or if you can provide a direct response. Be thorough in your analysis."""
        messages = [
            SystemMessage(content=system_msg),
            HumanMessage(content=f"""
            Query: {state.query}
            Context: {state.context}
            Previous Analysis: {state.analysis}
            """)
        ]
        response = self.analyzer.invoke(messages)
        analysis = response.content
        if "research" in analysis.lower() or "more information" in analysis.lower():
            next_action = "research"
        else:
            next_action = "respond"
        return {
            "analysis": analysis,
            "next_action": next_action
        }

    def _researcher_node(self, state: AgentState) -> Dict[str, Any]:
        """Conduct additional research or information gathering"""
        system_msg = """You are a research assistant. Based on the analysis, gather relevant
        information and insights to help answer the query comprehensively."""
        messages = [
            SystemMessage(content=system_msg),
            HumanMessage(content=f"""
            Query: {state.query}
            Analysis: {state.analysis}
            Research focus: Provide detailed information relevant to the query.
            """)
        ]
        response = self.llm.invoke(messages)
        updated_context = f"{state.context}\n\nResearch: {response.content}"
        return {"context": updated_context}

    def _responder_node(self, state: AgentState) -> Dict[str, Any]:
        """Generate the final response"""
        system_msg = """You are a helpful AI assistant. Provide a comprehensive, accurate,
        and well-structured response based on the analysis and context provided."""
        messages = [
            SystemMessage(content=system_msg),
            HumanMessage(content=f"""
            Query: {state.query}
            Context: {state.context}
            Analysis: {state.analysis}
            Provide a complete and helpful response.
            """)
        ]
        response = self.llm.invoke(messages)
        return {"response": response.content}

    def _validator_node(self, state: AgentState) -> Dict[str, Any]:
        """Validate the response quality and completeness"""
        system_msg = """Evaluate if the response adequately answers the query.
        Return 'COMPLETE' if satisfactory, or 'NEEDS_IMPROVEMENT' if more work is needed."""
        messages = [
            SystemMessage(content=system_msg),
            HumanMessage(content=f"""
            Original Query: {state.query}
            Response: {state.response}
            Is this response complete and satisfactory?
            """)
        ]
        response = self.analyzer.invoke(messages)
        validation = response.content
        return {"context": f"{state.context}\n\nValidation: {validation}"}

    def _decide_next_step(self, state: AgentState) -> str:
        """Decide whether to research or respond directly"""
        return state.next_action

    def _should_continue(self, state: AgentState) -> str:
        """Decide whether to continue iterating or end"""
        if state.iteration >= state.max_iterations:
            return "end"
        if "COMPLETE" in state.context:
            return "end"
        if "NEEDS_IMPROVEMENT" in state.context:
            return "continue"
        return "end"

    def run(self, query: str) -> str:
        """Run the agent with a query"""
        initial_state = AgentState(query=query)
        result = self.graph.invoke(initial_state)
        return result["response"]
The GraphAIAgent class defines a LangGraph-based workflow that uses Gemini models to analyze, research, respond to, and validate answers to user queries. It wires together purpose-built nodes, a router, an analyzer, a researcher, a responder, and a validator, so the agent can reason through complex tasks iteratively and refine its answers under controlled looping.
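The looping behavior hinges on the two routing functions: _decide_next_step sends the flow either to the researcher or straight to the responder, while _should_continue ends the loop once the validator marks the response COMPLETE or the iteration budget is spent. The stopping logic can be exercised in isolation with a stdlib-only sketch that mirrors the conditions in _should_continue:

```python
def should_continue(context: str, iteration: int, max_iterations: int = 3) -> str:
    """Mirror of _should_continue: stop on budget exhaustion or COMPLETE,
    loop back to the analyzer only on NEEDS_IMPROVEMENT."""
    if iteration >= max_iterations:
        return "end"
    if "COMPLETE" in context:
        return "end"
    if "NEEDS_IMPROVEMENT" in context:
        return "continue"
    return "end"  # default: no signal, stop

print(should_continue("Validation: COMPLETE", iteration=1))           # -> end
print(should_continue("Validation: NEEDS_IMPROVEMENT", iteration=1))  # -> continue
print(should_continue("Validation: NEEDS_IMPROVEMENT", iteration=3))  # -> end
```

Note that because the budget check comes first, the agent always terminates even if the validator keeps requesting improvements.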
def main():
    agent = GraphAIAgent("Use Your API Key Here")
    test_queries = [
        "Explain quantum computing and its applications",
        "What are the best practices for machine learning model deployment?",
        "Create a story about a robot learning to paint"
    ]
    print("🤖 Graph AI Agent with LangGraph and Gemini")
    print("=" * 50)
    for i, query in enumerate(test_queries, 1):
        print(f"\n📝 Query {i}: {query}")
        print("-" * 30)
        try:
            response = agent.run(query)
            print(f"🎯 Response: {response}")
        except Exception as e:
            print(f"❌ Error: {str(e)}")
        print("\n" + "=" * 50)

if __name__ == "__main__":
    main()
Finally, the main() function initializes a GraphAIAgent with a Gemini API key and runs it on a set of test queries spanning technical, strategic, and creative tasks. It prints each query alongside the AI-generated response, demonstrating how the LangGraph-driven agent handles diverse query types with Gemini's language generation.
In conclusion, by combining LangGraph's structured state machine with the generative power of Gemini, this workflow illustrates a practical pattern for agentic AI, one that incorporates cycles of analysis, research, and validation. The tutorial provides a solid and extensible template for building agents that can handle a variety of tasks, from answering complex questions to generating creative content.



