A Full Code Implementation to Design a Graph-Based Agent with Planning, Retrieval, Tool Integration, and Self-Critique

In this tutorial, we build an advanced graph-based agent, GraphAgent, on top of the Gemini 1.5 Flash model. The agent is organized as a directed graph of nodes, each responsible for a specific job: the planner, the router, the researcher, the math solver, the writer, and the critic. It wraps Gemini in a thin helper that requests structured JSON outputs, and wires the nodes into a pipeline so that planning, retrieval, computation, and verification all operate over the same shared state. Check out the Full Codes here.
import os, json, time, ast, math, getpass
from dataclasses import dataclass, field
from typing import Dict, List, Callable, Any

import google.generativeai as genai

try:
    import networkx as nx
except ImportError:
    nx = None
We begin by importing the core Python libraries we need for data handling, timing, and safe expression evaluation, along with dataclasses and typing helpers to structure our agent state. We load the google.generativeai client to access Gemini and, optionally, networkx for graph visualization.
def make_model(api_key: str, model_name: str = "gemini-1.5-flash"):
    genai.configure(api_key=api_key)
    return genai.GenerativeModel(model_name, system_instruction=(
        "You are GraphAgent, a principled planner-executor. "
        "Prefer structured, concise outputs; use provided tools when asked."
    ))

def call_llm(model, prompt: str, temperature=0.2) -> str:
    r = model.generate_content(prompt, generation_config={"temperature": temperature})
    return (r.text or "").strip()
This defines make_model, which configures Gemini with a planner-executor system instruction and returns the model, and call_llm, a helper that calls the model and strips its text output while controlling the temperature. We use these wrappers so the agent receives structured, concise outputs consistently.
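Because call_llm only depends on an object exposing generate_content whose response has a .text attribute, the wrapper can be exercised offline. The sketch below is not part of the original tutorial; FakeModel and FakeResponse are hypothetical stand-ins for the Gemini client, and the wrapper is repeated so the snippet is self-contained.

```python
class FakeResponse:
    """Hypothetical stand-in for a Gemini response object."""
    def __init__(self, text):
        self.text = text

class FakeModel:
    """Hypothetical stand-in for genai.GenerativeModel."""
    def generate_content(self, prompt, generation_config=None):
        # Return a canned structured reply with stray whitespace.
        return FakeResponse('  {"answer": "42"}  ')

def call_llm(model, prompt: str, temperature=0.2) -> str:
    # Same wrapper as above: call the model and strip the text output.
    r = model.generate_content(prompt, generation_config={"temperature": temperature})
    return (r.text or "").strip()

print(call_llm(FakeModel(), "What is 6*7?"))  # prints {"answer": "42"}
```

This also shows why the agent's other functions take the model as a parameter: any object with the same interface can be swapped in for testing.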
def safe_eval_math(expr: str) -> str:
    node = ast.parse(expr, mode="eval")
    # Allowlist of AST nodes for pure arithmetic; anything else is rejected.
    # Note: ast.AST must NOT be in this tuple, or every node would pass the check.
    allowed = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Num, ast.Constant,
               ast.Add, ast.Sub, ast.Mult, ast.Div, ast.Pow, ast.Mod,
               ast.USub, ast.UAdd, ast.FloorDiv)
    def check(n):
        if not isinstance(n, allowed):
            raise ValueError("Unsafe expression")
        for c in ast.iter_child_nodes(n):
            check(c)
    check(node)
    return str(eval(compile(node, "<expr>", "eval")))
This implements the first of the agent's two core tools: a safe math evaluator that walks the AST of an arithmetic expression and rejects anything outside a small allowlist before evaluating it. Its companion, a simple snippet search (search_docs, called by the research node) over a small in-memory corpus, gives the agent reliable computation and retrieval abilities without any external dependency.
@dataclass
class State:
    task: str
    plan: str = ""
    scratch: List[str] = field(default_factory=list)
    evidence: List[str] = field(default_factory=list)
    result: str = ""
    step: int = 0
    done: bool = False

def node_plan(state: State, model) -> str:
    prompt = f"""Plan step-by-step to solve the user task.
Task: {state.task}
Return JSON: {{"subtasks": ["..."], "tools": {{"search": true/false, "math": true/false}}, "success_criteria": ["..."]}}"""
    js = call_llm(model, prompt)
    try:
        plan = json.loads(js[js.find("{"): js.rfind("}")+1])
    except Exception:
        plan = {"subtasks": ["Research", "Synthesize"], "tools": {"search": True, "math": False}, "success_criteria": ["clear answer"]}
    state.plan = json.dumps(plan, indent=2)
    state.scratch.append("PLAN:\n" + state.plan)
    return "route"
def node_route(state: State, model) -> str:
    prompt = f"""You are a router. Decide next node.
Context scratch:\n{chr(10).join(state.scratch[-5:])}
If math needed -> 'math', if research needed -> 'research', if ready -> 'write'.
Return one token from [research, math, write]. Task: {state.task}"""
    choice = call_llm(model, prompt).lower()
    if "math" in choice and any(ch.isdigit() for ch in state.task):
        return "math"
    if "research" in choice or not state.evidence:
        return "research"
    return "write"
def node_research(state: State, model) -> str:
    prompt = f"""Generate 3 focused search queries for:
Task: {state.task}
Return as JSON list of strings."""
    qjson = call_llm(model, prompt)
    try:
        queries = json.loads(qjson[qjson.find("["): qjson.rfind("]")+1])[:3]
    except Exception:
        queries = [state.task, "background "+state.task, "pros cons "+state.task]
    hits = []
    for q in queries:
        hits.extend(search_docs(q, k=2))
    state.evidence.extend(list(dict.fromkeys(hits)))
    state.scratch.append("EVIDENCE:\n- " + "\n- ".join(hits))
    return "route"
def node_math(state: State, model) -> str:
    prompt = "Extract a single arithmetic expression from this task:\n" + state.task
    expr = call_llm(model, prompt)
    expr = "".join(ch for ch in expr if ch in "0123456789+-*/().%^ ")
    try:
        val = safe_eval_math(expr)
        state.scratch.append(f"MATH: {expr} = {val}")
    except Exception as e:
        state.scratch.append(f"MATH-ERROR: {expr} ({e})")
    return "route"
def node_write(state: State, model) -> str:
    prompt = f"""Write the final answer.
Task: {state.task}
Use the evidence and any math results below, cite inline like [1],[2].
Evidence:\n{chr(10).join(f'[{i+1}] '+e for i,e in enumerate(state.evidence))}
Notes:\n{chr(10).join(state.scratch[-5:])}
Return a concise, structured answer."""
    draft = call_llm(model, prompt, temperature=0.3)
    state.result = draft
    state.scratch.append("DRAFT:\n" + draft)
    return "critic"
def node_critic(state: State, model) -> str:
    prompt = f"""Critique and improve the answer for factuality, missing steps, and clarity.
If fix needed, return improved answer. Else return 'OK'.
Answer:\n{state.result}\nCriteria:\n{state.plan}"""
    crit = call_llm(model, prompt)
    if crit.strip().upper() != "OK" and len(crit) > 30:
        state.result = crit.strip()
        state.scratch.append("REVISED")
    state.done = True
    return "end"
NODES: Dict[str, Callable[[State, Any], str]] = {
    "plan": node_plan, "route": node_route, "research": node_research,
    "math": node_math, "write": node_write, "critic": node_critic
}

def run_graph(task: str, api_key: str) -> State:
    model = make_model(api_key)
    state = State(task=task)
    cur = "plan"
    max_steps = 12
    while not state.done and state.step < max_steps:
        state.step += 1
        nxt = NODES[cur](state, model)
        if nxt == "end":
            break
        cur = nxt
    return state

def ascii_graph():
    return """
START -> plan -> route -> (research <-> route) & (math <-> route) -> write -> critic -> END
"""
This defines the typed State dataclass that carries the task, plan, evidence, scratch notes, and control flags as the graph executes. Each node function (planner, router, researcher, math solver, writer, critic) mutates the state and returns the label of the next node. run_graph registers the nodes in a dictionary and iterates until the agent is done or a step limit is hit, while ascii_graph() renders the control flow we follow as we loop between research/math and finish with the critic.
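The label-driven dispatch that run_graph uses can be seen in isolation. Below is a minimal, self-contained sketch (stub nodes, no LLM; all names here are illustrative, not from the tutorial) of the same pattern: each node mutates shared state and returns the next node's label, and the loop stops on "end" or a step cap.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class MiniState:
    trace: List[str] = field(default_factory=list)
    done: bool = False

# Each stub node records itself in the state and names its successor.
def n_plan(s):   s.trace.append("plan");   return "work"
def n_work(s):   s.trace.append("work");   return "finish"
def n_finish(s): s.trace.append("finish"); s.done = True; return "end"

MINI_NODES: Dict[str, Callable] = {"plan": n_plan, "work": n_work, "finish": n_finish}

def run_mini(max_steps: int = 10) -> MiniState:
    s, cur = MiniState(), "plan"
    while not s.done and max_steps > 0:
        max_steps -= 1
        nxt = MINI_NODES[cur](s)
        if nxt == "end":
            break
        cur = nxt
    return s

print(run_mini().trace)  # → ['plan', 'work', 'finish']
```

The step cap is the safety net: even if a misbehaving node keeps returning "route"-style labels forever, the loop terminates.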
if __name__ == "__main__":
    key = os.getenv("GEMINI_API_KEY") or getpass.getpass("🔐 Enter GEMINI_API_KEY: ")
    task = input("📝 Enter your task: ").strip() or "Compare solar vs wind for reliability; compute 5*7."
    t0 = time.time()
    state = run_graph(task, key)
    dt = time.time() - t0
    print("\n=== GRAPH ===", ascii_graph())
    print(f"\n✅ Result in {dt:.2f}s:\n{state.result}\n")
    print("---- Evidence ----")
    print("\n".join(state.evidence))
    print("\n---- Scratch (last 5) ----")
    print("\n".join(state.scratch[-5:]))
This defines the program's entry point: we read the Gemini API key securely, take a task as input, and execute the graph with run_graph. We time the run, print the ASCII graph of the workflow, show the final result, and dump the collected evidence along with the last few scratch notes.
In conclusion, we have shown how a graph-based agent enables deterministic, inspectable orchestration around a probabilistic LLM. We saw how the planner feeds the router, how the router alternates between research and math, and how the critic provides a final quality pass. Gemini serves as the central reasoning engine, while the graph and its nodes provide structure, safety checks, and state management. We end with a fully functional agent that demonstrates the benefit of combining graph orchestration with a modern LLM, and that can be extended with custom tools, persistent memory, or additional nodes.
Asif Razzaq is the CEO of Marktechpost Media Inc. As an entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent endeavor is the launch of the AI media platform Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform attracts over two million monthly views, illustrating its popularity among readers.



