
Creating an AI Agent with LangGraph: Adding Persistence and Streaming (Step-by-Step Guide)

In our previous tutorial, we created an AI agent that can answer questions by searching the web. However, when building agents for long-running tasks, two critical concepts come into play: persistence and streaming. Persistence allows you to save an agent's state at any given point, so you can resume from that state in future interactions. This is essential for long-running applications. Streaming, on the other hand, lets you emit real-time signals about what the agent is doing at each moment, providing transparency and control over its actions. In this tutorial, we will enhance our agent by adding these powerful features.

Setting up an agent

Let us first rebuild our agent. We will load the environment variables, install and import the required libraries, set up the Tavily search tool, define the agent's state, and finally build the agent.

pip install langgraph==0.2.53 langgraph-checkpoint==2.0.6 langgraph-sdk==0.1.36 langchain-groq langchain-community langgraph-checkpoint-sqlite==2.0.1
import os
os.environ['TAVILY_API_KEY'] = ""
os.environ['GROQ_API_KEY'] = ""

from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
import operator
from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, ToolMessage
from langchain_groq import ChatGroq
from langchain_community.tools.tavily_search import TavilySearchResults

tool = TavilySearchResults(max_results=2)

class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], operator.add]

class Agent:
    def __init__(self, model, tools, system=""):
        self.system = system
        # Build the graph: "llm" decides what to do, "action" executes tool
        # calls, and the conditional edge loops until no tool call remains.
        graph = StateGraph(AgentState)
        graph.add_node("llm", self.call_openai)
        graph.add_node("action", self.take_action)
        graph.add_conditional_edges("llm", self.exists_action, {True: "action", False: END})
        graph.add_edge("action", "llm")
        graph.set_entry_point("llm")
        self.graph = graph.compile()
        self.tools = {t.name: t for t in tools}
        self.model = model.bind_tools(tools)

    def call_openai(self, state: AgentState):
        # Prepend the system prompt (if any) and invoke the chat model.
        messages = state['messages']
        if self.system:
            messages = [SystemMessage(content=self.system)] + messages
        message = self.model.invoke(messages)
        return {'messages': [message]}

    def exists_action(self, state: AgentState):
        # Route to "action" only if the last AI message requested tool calls.
        result = state['messages'][-1]
        return len(result.tool_calls) > 0

    def take_action(self, state: AgentState):
        # Execute each requested tool call and wrap the results as ToolMessages.
        tool_calls = state['messages'][-1].tool_calls
        results = []
        for t in tool_calls:
            print(f"Calling: {t}")
            result = self.tools[t['name']].invoke(t['args'])
            results.append(ToolMessage(tool_call_id=t['id'], name=t['name'], content=str(result)))
        print("Back to the model!")
        return {'messages': results}
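Before wiring in persistence, a quick smoke test of this bare agent can be run as below. This is a minimal sketch of our own: it assumes the API keys above are filled in and reuses the Groq model we configure later in this guide.

# Quick smoke test of the bare agent (no persistence yet).
model = ChatGroq(model="Llama-3.3-70b-Specdec")
bot = Agent(model, [tool], system="You are a helpful research assistant.")
result = bot.graph.invoke({"messages": [HumanMessage(content="What is the capital of France?")]})
print(result['messages'][-1].content)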

Adding persistence

For persistence, we will use LangGraph's checkpointing feature. A checkpointer saves the agent's state after each node. In this tutorial, we will use SqliteSaver, a simple checkpointer backed by SQLite, a built-in database. While we will use a simple local SQLite database here, you can easily connect to an external database or use other checkpointers such as Redis or Postgres for more robust persistence.

from langgraph.checkpoint.sqlite import SqliteSaver
import sqlite3
sqlite_conn = sqlite3.connect("checkpoints.sqlite",check_same_thread=False)
memory = SqliteSaver(sqlite_conn)
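
The checkpoints above persist in the checkpoints.sqlite file across runs. If you don't need that, a small variation (our own, not required for this guide) is to point the same saver at an ephemeral in-memory database:

# Variation: in-memory checkpoints, lost when the process exits.
sqlite_conn = sqlite3.connect(":memory:", check_same_thread=False)
memory = SqliteSaver(sqlite_conn)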

Next, we will modify our Agent class to accept a checkpointer:

class Agent:
    def __init__(self, model, tools, checkpointer, system=""):
        # Everything else remains the same as before
        self.graph = graph.compile(checkpointer=checkpointer)
    # Everything else after this remains the same
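
For completeness, the full updated constructor looks like this (identical to before, except for the new checkpointer parameter and the compile call):

class Agent:
    def __init__(self, model, tools, checkpointer, system=""):
        self.system = system
        graph = StateGraph(AgentState)
        graph.add_node("llm", self.call_openai)
        graph.add_node("action", self.take_action)
        graph.add_conditional_edges("llm", self.exists_action, {True: "action", False: END})
        graph.add_edge("action", "llm")
        graph.set_entry_point("llm")
        # The only change: hand the checkpointer to compile() so the state
        # is saved after every node.
        self.graph = graph.compile(checkpointer=checkpointer)
        self.tools = {t.name: t for t in tools}
        self.model = model.bind_tools(tools)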

Now, we can build our agent with persistence enabled:

prompt = """You are a smart research assistant. Use the search engine to look up information. 
You are allowed to make multiple calls (either together or in sequence). 
Only look up information when you are sure of what you want. 
If you need to look up some information before asking a follow-up question, you are allowed to do that!
"""
model = ChatGroq(model="Llama-3.3-70b-Specdec")
bot = Agent(model, [tool], system=prompt, checkpointer=memory)

Adding streaming

Streaming is essential for observing the agent in real time. There are two kinds of streaming to focus on:

1. Streaming messages: emitting the intermediate messages, such as the AI message that decides to call a tool and the tool message carrying its results.

2. Streaming tokens: emitting the individual tokens of the LLM's response as they are generated.

Let's start with streaming messages. We will create a human message and use the stream method to observe the agent's actions in real time.

messages = [HumanMessage(content="What is the weather in Texas?")]
thread = {"configurable": {"thread_id": "1"}}
for event in bot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v['messages'])

Final output: The current weather in Texas … a temperature of 19.4°C (66.9°F) and a wind speed of 6.8 kph …

If you run this, you will see the results stream in: first the AI message in which the agent decides to call the search tool, then the tool message with the search results, and finally the AI message answering the question.

Understanding thread IDs

The thread_id is an important part of the thread configuration. It lets the agent maintain separate conversations with different users or sessions. By assigning a unique thread_id to each conversation, the agent can handle many interactions concurrently without mixing them up.

For example, let's continue the conversation by asking, "What about in LA?" using the same thread_id:

messages = [HumanMessage(content="What about in LA?")]
thread = {"configurable": {"thread_id": "1"}}
for event in bot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)

Final output: The current weather in Los Angeles is 17.2°C (63.0°F) with a wind speed of 2.2 mph …

The agent infers that we are still asking about the weather, thanks to persistence. To confirm, let's ask, "Which one is warmer?":

messages = [HumanMessage(content="Which one is warmer?")]
thread = {"configurable": {"thread_id": "1"}}
for event in bot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)

Final output: Texas is warmer than Los Angeles. The current temperature in Texas is 19.4°C (66.9°F), while the current temperature in Los Angeles is 17.2°C (63.0°F).

The agent correctly compares the weather in Texas and LA. To check whether persistence keeps conversations separate, let's ask the same question with a different thread_id:

messages = [HumanMessage(content="Which one is warmer?")]
thread = {"configurable": {"thread_id": "2"}}
for event in bot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)

Final output: I need more details to answer that question. Could you provide more context or specify which two things you are comparing?

This time, the agent is confused because it has no access to the previous conversation's history.
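
As a side note, you can inspect exactly what has been checkpointed for a thread. Once a checkpointer is attached, the compiled graph exposes a get_state method; here is a minimal sketch (the printout format is our own):

# Inspect the latest checkpoint saved for thread "1".
snapshot = bot.graph.get_state({"configurable": {"thread_id": "1"}})
print(f"{len(snapshot.values['messages'])} messages checkpointed so far")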

Streaming tokens

To stream tokens, we will use the astream_events method, which is asynchronous. This also requires switching to an async checkpointer, AsyncSqliteSaver.

from langgraph.checkpoint.sqlite.aio import AsyncSqliteSaver

async with AsyncSqliteSaver.from_conn_string(":memory:") as checkpointer:
    abot = Agent(model, [tool], system=prompt, checkpointer=checkpointer)
    messages = [HumanMessage(content="What is the weather in SF?")]
    thread = {"configurable": {"thread_id": "4"}}
    async for event in abot.graph.astream_events({"messages": messages}, thread, version="v1"):
        kind = event["event"]
        if kind == "on_chat_model_stream":
            content = event["data"]["chunk"].content
            if content:
                # Empty content means the model is asking for a tool
                # to be invoked, so we only print non-empty content.
                print(content, end="|")

This streams the tokens in real time, giving you a live view of the agent's thinking process.
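
The on_chat_model_stream event is only one of the event kinds that astream_events emits. As a sketch under the same v1 events schema used above, extra branches inside that loop could also surface tool activity alongside the streamed tokens:

# Hypothetical extra branches for the same `async for event ...` loop:
if kind == "on_tool_start":
    print(f"\n[tool start] {event['name']} input={event['data'].get('input')}")
elif kind == "on_tool_end":
    print(f"\n[tool end] {event['name']}")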

Conclusion

By adding persistence and streaming, we have significantly upgraded our agent's capabilities. Persistence lets the agent maintain conversation context across interactions, while streaming provides real-time insight into its actions. These features are essential for building production-grade applications, especially ones involving multiple users or human-in-the-loop interactions.

In the next tutorial, we will dive into human-in-the-loop interactions, where persistence plays a crucial role in enabling smooth collaboration between humans and AI agents. Stay tuned!

