
LangGraph Tutorial: A Step-by-Step Guide to Building a Text Analysis Pipeline


Introduction to LangGraph

LangGraph is a powerful framework by LangChain designed for creating stateful, multi-actor applications with LLMs. It provides the structure and tools needed to build sophisticated AI agents in a graph-based manner.

Think of LangGraph as an architect's drafting table: it gives us the tools to design how our agent will think and act. Just as an architect draws blueprints showing how different rooms connect and how people will move through a building, LangGraph lets us design how different capabilities will connect and how information will flow through our agent.

Key features:

  • State Management: maintain persistent state across interactions
  • Flexible Routing: define complex flows between components
  • Persistence: save and resume workflows
  • Visualization: see and understand your agent's structure

In this tutorial, we'll demonstrate LangGraph by building a multi-step text analysis pipeline that processes text through three stages:

  1. Text Classification: categorize the input text into predefined categories
  2. Entity Extraction: identify key entities in the text
  3. Text Summarization: produce a concise summary of the input text

This pipeline shows how LangGraph can be used to build modular, extensible workflows for natural language processing tasks.

Setting Up Our Environment

Before diving into the code, let's set up our development environment.

Installation

# Install required packages
!pip install langgraph langchain langchain-openai python-dotenv

Setting Up API Keys

We'll need an OpenAI API key to use their models. If you haven't already, you can obtain one from OpenAI.

Check out the full code here.

import os
from dotenv import load_dotenv

# Load environment variables from .env file (create this with your API key)
load_dotenv()

# Set OpenAI API key
os.environ["OPENAI_API_KEY"] = os.getenv('OPENAI_API_KEY')
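
As an optional safeguard (not part of the original tutorial), a small helper can fail fast with a clear error when the key is missing, instead of failing later with a confusing API error; `require_env` and `DEMO_API_KEY` below are illustrative names:

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, failing fast if it is missing."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing environment variable {name!r}; add it to your .env file.")
    return value

# Demonstration with a hypothetical variable name:
os.environ["DEMO_API_KEY"] = "sk-demo"
assert require_env("DEMO_API_KEY") == "sk-demo"
```

You could call `require_env("OPENAI_API_KEY")` right after `load_dotenv()` to surface configuration problems immediately.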

Verifying Our Setup

Let's make sure our environment is working correctly by running a simple test with the OpenAI model:

from langchain_openai import ChatOpenAI

# Initialize the ChatOpenAI instance
llm = ChatOpenAI(model="gpt-4o-mini")

# Test the setup
response = llm.invoke("Hello! Are you working?")
print(response.content)

Building Our Text Analysis Pipeline

Now let's import the packages required for our LangGraph text analysis pipeline:

import os
from typing import TypedDict, List, Annotated
from langgraph.graph import StateGraph, END
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain.schema import HumanMessage
from langchain_core.runnables.graph import MermaidDrawMethod
from IPython.display import display, Image

Designing Our Agent's Memory

Just as human intelligence requires memory, our agent needs a way to keep track of information. We create this using a TypedDict to define our state structure:

class State(TypedDict):
    text: str
    classification: str
    entities: List[str]
    summary: str

# Initialize our language model with temperature=0 for more deterministic outputs
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
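
Since `State` is a `TypedDict`, it behaves as a plain dictionary at runtime; the annotations exist only for type checkers. A quick sketch of what a fully populated state looks like (the values here are illustrative):

```python
from typing import TypedDict, List

class State(TypedDict):
    text: str
    classification: str
    entities: List[str]
    summary: str

# At runtime a TypedDict is an ordinary dict; the annotations guide type checkers.
example: State = {
    "text": "LangChain announced LangGraph.",
    "classification": "News",
    "entities": ["LangChain", "LangGraph"],
    "summary": "LangChain released LangGraph.",
}
assert isinstance(example, dict)
```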

Creating Our Agent's Core Capabilities

Now we'll create the actual skills our agent will use. Each capability is implemented as a function that performs a specific type of analysis.

1. Classification Node

def classification_node(state: State):
    '''Classify the text into one of the categories: News, Blog, Research, or Other'''
    prompt = PromptTemplate(
        input_variables=["text"],
        template="Classify the following text into one of the categories: News, Blog, Research, or Other.\n\nText: {text}\n\nCategory:"
    )
    message = HumanMessage(content=prompt.format(text=state["text"]))
    classification = llm.invoke([message]).content.strip()
    return {"classification": classification}

2. Entity Extraction Node

def entity_extraction_node(state: State):
    '''Extract all the entities (Person, Organization, Location) from the text'''
    prompt = PromptTemplate(
        input_variables=["text"],
        template="Extract all the entities (Person, Organization, Location) from the following text. Provide the result as a comma-separated list.\n\nText: {text}\n\nEntities:"
    )
    message = HumanMessage(content=prompt.format(text=state["text"]))
    entities = llm.invoke([message]).content.strip().split(", ")
    return {"entities": entities}
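
One caveat: splitting on `", "` is brittle if the model returns newline-separated items or trailing punctuation. As a sketch (not part of the original tutorial), a slightly more defensive parser could look like this; `parse_entity_list` is a hypothetical helper:

```python
import re

def parse_entity_list(raw: str) -> list:
    """Split a model's comma- or newline-separated entity list into clean strings."""
    parts = re.split(r"[,\n]", raw)
    # Drop empty fragments, surrounding whitespace, and trailing periods.
    return [p.strip().strip(".") for p in parts if p.strip()]

assert parse_entity_list("OpenAI, GPT-4,GPT-3") == ["OpenAI", "GPT-4", "GPT-3"]
```

You could swap this in for `.split(", ")` inside `entity_extraction_node` if you see malformed entity lists.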

3. Summarization Node

def summarization_node(state: State):
    '''Summarize the text in one short sentence'''
    prompt = PromptTemplate(
        input_variables=["text"],
        template="Summarize the following text in one short sentence.\n\nText: {text}\n\nSummary:"
    )
    message = HumanMessage(content=prompt.format(text=state["text"]))
    summary = llm.invoke([message]).content.strip()
    return {"summary": summary}

Bringing It All Together

Now comes the most exciting part: connecting these capabilities into a coordinated system using LangGraph:


# Create our StateGraph
workflow = StateGraph(State)

# Add nodes to the graph
workflow.add_node("classification_node", classification_node)
workflow.add_node("entity_extraction", entity_extraction_node)
workflow.add_node("summarization", summarization_node)

# Add edges to the graph
workflow.set_entry_point("classification_node")  # Set the entry point of the graph
workflow.add_edge("classification_node", "entity_extraction")
workflow.add_edge("entity_extraction", "summarization")
workflow.add_edge("summarization", END)

# Compile the graph
app = workflow.compile()

Workflow structure: our pipeline follows this path:
classification_node → entity_extraction → summarization → END
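
Conceptually, invoking this compiled linear graph is like calling each node in order and merging its partial return value into the shared state. A pure-Python sketch with stub nodes (no LLM calls; all names here are illustrative):

```python
# Stub nodes standing in for the real LLM-backed nodes: each receives the
# current state and returns only the keys it wants to update.
def classify_stub(state):
    return {"classification": "News"}

def extract_stub(state):
    return {"entities": ["OpenAI"]}

def summarize_stub(state):
    return {"summary": state["text"][:20]}

def run_pipeline(state, nodes):
    """Run nodes in order, merging each partial update into the shared state."""
    for node in nodes:
        state = {**state, **node(state)}
    return state

result = run_pipeline({"text": "OpenAI announced GPT-4."},
                      [classify_stub, extract_stub, summarize_stub])
assert result["classification"] == "News"
```

This merge-partial-updates behavior is why each node function only needs to return the fields it produced.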

Testing Our Agent

Now that we've built our agent, let's see how it handles a real-world text example:


sample_text = """
OpenAI has announced the GPT-4 model, which is a large multimodal model that exhibits human-level performance on various professional benchmarks. It is developed to improve the alignment and safety of AI systems. Additionally, the model is designed to be more efficient and scalable than its predecessor, GPT-3. The GPT-4 model is expected to be released in the coming months and will be available to the public for research and development purposes.
"""

state_input = {"text": sample_text}
result = app.invoke(state_input)

print("Classification:", result["classification"])
print("\nEntities:", result["entities"])
print("\nSummary:", result["summary"])
Classification: News

Entities: ['OpenAI', 'GPT-4', 'GPT-3']

Summary: OpenAI's upcoming GPT-4 model is a multimodal AI that aims for human-level performance and improved safety, efficiency, and scalability compared to GPT-3.

Understanding the Power of Coordinated Processing

What makes this result powerful isn't just the individual outputs; it's how each step builds on the others to create a complete understanding of the text:

  • The classification provides context that frames how we interpret the rest of the text
  • The entity extraction identifies the important names and concepts
  • The summarization distills the essence of the document

This mirrors human reading comprehension: we naturally determine what kind of document we're reading, note important names and ideas, and form a mental summary, all while holding these different levels of understanding together.

Try It with Your Own Text

Now let's run our pipeline on another text sample:


# Replace this with your own text to analyze
your_text = """
The recent advancements in quantum computing have opened new possibilities for cryptography and data security. Researchers at MIT and Google have demonstrated quantum algorithms that could potentially break current encryption methods. However, they are also developing new quantum-resistant encryption techniques to protect data in the future.
"""

# Process the text through our pipeline
your_result = app.invoke({"text": your_text})

print("Classification:", your_result["classification"])
print("\nEntities:", your_result["entities"])
print("\nSummary:", your_result["summary"])

Classification: Research

Entities: ['MIT', 'Google']

Summary: Recent advancements in quantum computing may threaten current encryption methods while also prompting the development of new quantum-resistant techniques.

Adding More Capabilities (Advanced)

One of LangGraph's strengths is how easily we can extend our agent with new capabilities. Let's add a sentiment analysis node to our pipeline:


# First, let's update our State to include sentiment
class EnhancedState(TypedDict):
    text: str
    classification: str
    entities: List[str]
    summary: str
    sentiment: str

# Create our sentiment analysis node
def sentiment_node(state: EnhancedState):
    '''Analyze the sentiment of the text: Positive, Negative, or Neutral'''
    prompt = PromptTemplate(
        input_variables=["text"],
        template="Analyze the sentiment of the following text. Is it Positive, Negative, or Neutral?\n\nText: {text}\n\nSentiment:"
    )
    message = HumanMessage(content=prompt.format(text=state["text"]))
    sentiment = llm.invoke([message]).content.strip()
    return {"sentiment": sentiment}

# Create a new workflow with the enhanced state
enhanced_workflow = StateGraph(EnhancedState)

# Add the existing nodes
enhanced_workflow.add_node("classification_node", classification_node)
enhanced_workflow.add_node("entity_extraction", entity_extraction_node)
enhanced_workflow.add_node("summarization", summarization_node)

# Add our new sentiment node
enhanced_workflow.add_node("sentiment_analysis", sentiment_node)

# Create a more complex workflow with branches
enhanced_workflow.set_entry_point("classification_node")
enhanced_workflow.add_edge("classification_node", "entity_extraction")
enhanced_workflow.add_edge("entity_extraction", "summarization")
enhanced_workflow.add_edge("summarization", "sentiment_analysis")
enhanced_workflow.add_edge("sentiment_analysis", END)

# Compile the enhanced graph
enhanced_app = enhanced_workflow.compile()

Testing the Enhanced Agent

# Try the enhanced pipeline with the same text
enhanced_result = enhanced_app.invoke({"text": sample_text})

print("Classification:", enhanced_result["classification"])
print("\nEntities:", enhanced_result["entities"])
print("\nSummary:", enhanced_result["summary"])
print("\nSentiment:", enhanced_result["sentiment"])
Classification: News

Entities: ['OpenAI', 'GPT-4', 'GPT-3']

Summary: OpenAI's upcoming GPT-4 model is a multimodal AI that aims for human-level performance and improved safety, efficiency, and scalability compared to GPT-3.

Sentiment: The sentiment of the text is Positive. It highlights the advancements and improvements of the GPT-4 model, emphasizing its human-level performance, efficiency, scalability, and the positive implications for AI alignment and safety. The anticipation of its release for public use further contributes to the positive tone.

Adding Conditional Edges (Advanced Logic)

Why Conditional Edges?

So far, our graph has followed a fixed path: classification_node → entity_extraction → summarization → sentiment_analysis.

But in real-world apps, we often want to run certain steps only when needed. For example:

  • Only run entity extraction if the text is a news article or research paper
  • Skip summarization if the text is very short
  • Route blog posts to custom processing

LangGraph makes this easy with conditional edges: logic gates that route execution based on the current state.


Creating a Routing Function

# Route after classification
def route_after_classification(state: EnhancedState) -> bool:
    category = state["classification"].lower()  # "news", "blog", "research", or "other"
    return category in ["news", "research"]
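
Because the routing function only reads the state dictionary, it can be unit-tested without any LLM calls; a quick sketch with hand-built stub states:

```python
# The routing function only inspects the state, so hand-built dicts suffice
# for testing it in isolation (no LLM needed).
def route_after_classification(state) -> bool:
    category = state["classification"].lower()
    return category in ["news", "research"]

assert route_after_classification({"classification": "News"}) is True
assert route_after_classification({"classification": "Blog"}) is False
```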

Defining the Conditional Graph

from langgraph.graph import StateGraph, END

conditional_workflow = StateGraph(EnhancedState)

# Add nodes
conditional_workflow.add_node("classification_node", classification_node)
conditional_workflow.add_node("entity_extraction", entity_extraction_node)
conditional_workflow.add_node("summarization", summarization_node)
conditional_workflow.add_node("sentiment_analysis", sentiment_node)

# Set entry point
conditional_workflow.set_entry_point("classification_node")

# Add conditional edge
conditional_workflow.add_conditional_edges("classification_node", route_after_classification, path_map={
    True: "entity_extraction",
    False: "summarization"
})

# Add remaining static edges
conditional_workflow.add_edge("entity_extraction", "summarization")
conditional_workflow.add_edge("summarization", "sentiment_analysis")
conditional_workflow.add_edge("sentiment_analysis", END)

# Compile
conditional_app = conditional_workflow.compile()
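
The effect of `add_conditional_edges` can be pictured as a dictionary lookup: the router's return value selects the next node from `path_map`. A pure-Python sketch of that dispatch (illustrative only, not LangGraph internals):

```python
# Sketch of conditional-edge dispatch: the router's return value is looked up
# in path_map to choose the next node name.
def router(state) -> bool:
    return state["classification"].lower() in ["news", "research"]

path_map = {True: "entity_extraction", False: "summarization"}

def next_node(state):
    return path_map[router(state)]

assert next_node({"classification": "Research"}) == "entity_extraction"
assert next_node({"classification": "Blog"}) == "summarization"
```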

Testing the Conditional Pipeline

test_text = """
OpenAI released the GPT-4 model with enhanced performance on academic and professional tasks. It's seen as a major breakthrough in alignment and reasoning capabilities.
"""

result = conditional_app.invoke({"text": test_text})

print("Classification:", result["classification"])
print("Entities:", result.get("entities", "Skipped"))
print("Summary:", result["summary"])
print("Sentiment:", result["sentiment"])
Classification: News
Entities: ['OpenAI', 'GPT-4']
Summary: OpenAI's GPT-4 model significantly improves performance in academic and professional tasks, marking a breakthrough in alignment and reasoning.
Sentiment: The sentiment of the text is Positive. It highlights the release of the GPT-4 model as a significant advancement, emphasizing its enhanced performance and breakthrough capabilities.


Now let's try it with a blog post:

blog_text = """
Here's what I learned from a week of meditating in silence. No phones, no talking—just me, my breath, and some deep realizations.
"""

result = conditional_app.invoke({"text": blog_text})

print("Classification:", result["classification"])
print("Entities:", result.get("entities", "Skipped (not applicable)"))
print("Summary:", result["summary"])
print("Sentiment:", result["sentiment"])
Classification: Blog
Entities: Skipped (not applicable)
Summary: A week of silent meditation led to profound personal insights.
Sentiment: The sentiment of the text is Positive. The mention of "deep realizations" and the overall reflective nature of the experience suggests a beneficial and enlightening outcome from the meditation practice.

With conditional edges, our agent can now:

  • Make decisions based on context
  • Skip unnecessary steps
  • Run faster and at lower cost
  • Behave more intelligently

Conclusion

In this tutorial, we have:

  1. Explored LangGraph's concepts and its graph-based approach
  2. Built a pipeline that processes text through classification, entity extraction, and summarization
  3. Enhanced our pipeline with additional capabilities
  4. Introduced conditional edges for state-based flow control
  5. Visualized our workflow structure
  6. Tested our agent on real-world text examples

LangGraph provides a powerful framework for building AI agents by modeling them as graphs of capabilities. This approach makes it easy to design, modify, and extend complex AI systems.

Next Steps

  • Add more nodes to extend your agent's capabilities
  • Experiment with different LLMs and parameters
  • Explore LangGraph's persistence features for ongoing conversations

All credit for this tutorial goes to the researchers of this project.



Nir Diamant is an AI researcher and GenAI specialist with extensive experience in AI research and algorithms. His open-source projects have received millions of views and tens of thousands of stars on GitHub, making him a leading voice in AI.

Through his work on GitHub and the DiamantAI newsletter, Nir has helped millions of readers improve their AI skills with practical guides and tutorials.
