
A Step-by-Step Coding Implementation for Building Modular AI Workflows Using Anthropic's Claude Sonnet 3.7 API and LangGraph

In this tutorial, we provide a practical guide to using LangGraph, a graph-based AI orchestration framework, integrated seamlessly with Anthropic's Claude API. Through detailed, executable code designed for Google Colab, you will learn how to build and visualize AI workflows as interconnected nodes that perform distinct tasks, such as answering questions concisely, critically analyzing those answers, and automatically assembling technical content. The implementation highlights how to construct modular, LangGraph-style pipelines that can handle complex sequences of natural-language tasks, from basic question answering to advanced content-generation workflows.

from getpass import getpass
import os


anthropic_key = getpass("Enter your Anthropic API key: ")


os.environ["ANTHROPIC_API_KEY"] = anthropic_key


print("Key set:", "ANTHROPIC_API_KEY" in os.environ)

We prompt the user to enter their Anthropic API key using Python's getpass module, ensuring that the sensitive value is never displayed on screen. The key is then stored as an environment variable (ANTHROPIC_API_KEY), and we print a quick confirmation that it has been set.
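
As an alternative to typing the key on every run, it can also be read from Colab's Secrets panel; the class defined below already falls back to this mechanism. A minimal sketch, assuming a secret named ANTHROPIC_API_KEY has been added via the key icon in the Colab sidebar:

# Alternative (sketch): load the key from Colab's Secrets panel instead of prompting.
# Assumes a secret named ANTHROPIC_API_KEY was added via the key icon in the sidebar.
from google.colab import userdata

try:
    os.environ["ANTHROPIC_API_KEY"] = userdata.get("ANTHROPIC_API_KEY")
    print("Key loaded from Colab secrets")
except Exception:
    print("Secret not found; use the getpass prompt above instead")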

import os
import json
import requests
from typing import Dict, List, Any, Callable, Optional, Union
from dataclasses import dataclass, field
import networkx as nx
import matplotlib.pyplot as plt
from IPython.display import display, HTML, clear_output

We import the essential libraries for building and visualizing structured AI workflows: data-handling modules (json, dataclasses), graph construction and plotting (networkx, matplotlib), an interactive display utility (IPython.display), and type annotations (typing) for clarity.

try:
    import anthropic
except ImportError:
    print("Installing anthropic package...")
    !pip install -q anthropic
    import anthropic


from anthropic import Anthropic

We ensure that the anthropic Python package is available. We try to import the module and, if it is missing, install it automatically with pip in the Google Colab environment. After installation, we import the Anthropic client class, which is essential for interacting with Claude models through Anthropic's API.
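
Before wiring up the graph, a quick sanity check of the client can be useful. The sketch below reuses the same claude-3-7-sonnet-20250219 model and messages.create call that the workflow relies on later, simply to confirm that the key and client work:

# Optional sanity check (sketch): one short request to verify the key and client.
client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
response = client.messages.create(
    model="claude-3-7-sonnet-20250219",   # same model used by the workflow below
    max_tokens=50,
    messages=[{"role": "user", "content": "Reply with the single word: ready"}],
)
print(response.content[0].text)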

@dataclass
class NodeConfig:
    name: str
    function: Callable
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    config: Dict[str, Any] = field(default_factory=dict)

The NodeConfig dataclass describes the configuration of each node in the LangGraph workflow. Each node has a name, a callable function, optional lists of inputs and outputs, and an optional config dictionary for storing additional parameters. This setup allows flexible, reusable node definitions for graph-based AI workflows.
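
As a quick illustration, with hypothetical node and state-key names, a NodeConfig for a simple text-cleaning step could be declared directly like this:

# Hypothetical example: a NodeConfig describing a simple text-cleaning step.
def clean_text(state, **kwargs):
    # Read the raw_text value from the shared state and strip surrounding whitespace.
    return state.get("raw_text", "").strip()

cleaner = NodeConfig(
    name="text_cleaner",
    function=clean_text,
    inputs=["raw_text"],       # state keys this node consumes
    outputs=["clean_text"],    # state key its return value is stored under
)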

class LangGraph:
    def __init__(self, api_key: Optional[str] = None):
        self.api_key = api_key or os.environ.get("ANTHROPIC_API_KEY")
        if not self.api_key:
            from google.colab import userdata
            try:
                self.api_key = userdata.get('ANTHROPIC_API_KEY')
                if not self.api_key:
                    raise ValueError("No API key found")
            except:
                print("No Anthropic API key found in environment variables or Colab secrets.")
                self.api_key = input("Please enter your Anthropic API key: ")
                if not self.api_key:
                    raise ValueError("Please provide an Anthropic API key")
       
        self.client = Anthropic(api_key=self.api_key)
        self.graph = nx.DiGraph()
        self.nodes = {}
        self.state = {}
   
    def add_node(self, node_config: NodeConfig):
        self.nodes[node_config.name] = node_config
        self.graph.add_node(node_config.name)
        for input_node in node_config.inputs:
            if input_node in self.nodes:
                self.graph.add_edge(input_node, node_config.name)
        return self
   
    def claude_node(self, name: str, prompt_template: str, model: str = "claude-3-7-sonnet-20250219",
                   inputs: List[str] = None, outputs: List[str] = None, system_prompt: str = None):
        """Convenience method to create a Claude API node"""
        inputs = inputs or []
        outputs = outputs or [name + "_response"]
       
        def claude_fn(state, **kwargs):
            prompt = prompt_template
            for k, v in state.items():
                if isinstance(v, str):
                    prompt = prompt.replace(f"{{{k}}}", v)
           
            message_params = {
                "model": model,
                "max_tokens": 1000,
                "messages": [{"role": "user", "content": prompt}]
            }
           
            if system_prompt:
                message_params["system"] = system_prompt
               
            response = self.client.messages.create(**message_params)
            return response.content[0].text
       
        node_config = NodeConfig(
            name=name,
            function=claude_fn,
            inputs=inputs,
            outputs=outputs,
            config={"model": model, "prompt_template": prompt_template}
        )
        return self.add_node(node_config)
   
    def transform_node(self, name: str, transform_fn: Callable,
                      inputs: List[str] = None, outputs: List[str] = None):
        """Add a data transformation node"""
        inputs = inputs or []
        outputs = outputs or [name + "_output"]
       
        node_config = NodeConfig(
            name=name,
            function=transform_fn,
            inputs=inputs,
            outputs=outputs
        )
        return self.add_node(node_config)
   
    def visualize(self):
        """Visualize the graph"""
        plt.figure(figsize=(10, 6))
        pos = nx.spring_layout(self.graph)
        nx.draw(self.graph, pos, with_labels=True, node_color="lightblue",
                node_size=1500, arrowsize=20, font_size=10)
        plt.title("LangGraph Flow")
        plt.tight_layout()
        plt.show()
       
        print("nGraph Structure:")
        for node in self.graph.nodes():
            successors = list(self.graph.successors(node))
            if successors:
                print(f"  {node} → {', '.join(successors)}")
            else:
                print(f"  {node} (endpoint)")
        print()
   
    def _get_execution_order(self):
        """Determine execution order based on dependencies"""
        try:
            return list(nx.topological_sort(self.graph))
        except nx.NetworkXUnfeasible:
            raise ValueError("Graph contains a cycle")
   
    def execute(self, initial_state: Dict[str, Any] = None):
        """Execute the graph in topological order"""
        self.state = initial_state or {}
        execution_order = self._get_execution_order()
       
        print("Executing LangGraph flow:")
       
        for node_name in execution_order:
            print(f"- Running node: {node_name}")
            node = self.nodes[node_name]
            inputs = {k: self.state.get(k) for k in node.inputs if k in self.state}
           
            result = node.function(self.state, **inputs)
           
            if len(node.outputs) == 1:
                self.state[node.outputs[0]] = result
            elif isinstance(result, (list, tuple)) and len(result) == len(node.outputs):
                for i, output_name in enumerate(node.outputs):
                    self.state[output_name] = result[i]
       
        print("Execution completed!")
        return self.state


def run_example(question="What are the key benefits of using a graph-based architecture for AI workflows?"):
    """Run an example LangGraph flow with a predefined question"""
    print(f"Running example with question: '{question}'")
   
    graph = LangGraph()
   
    def question_provider(state, **kwargs):
        return question
   
    graph.transform_node(
        name="question_provider",
        transform_fn=question_provider,
        outputs=["user_question"]
    )
   
    graph.claude_node(
        name="question_answerer",
        prompt_template="Answer this question clearly and concisely: {user_question}",
        inputs=["user_question"],
        outputs=["answer"],
        system_prompt="You are a helpful AI assistant."
    )
   
    graph.claude_node(
        name="answer_analyzer",
        prompt_template="Analyze if this answer addresses the question well: Question: {user_question}nAnswer: {answer}",
        inputs=["user_question", "answer"],
        outputs=["analysis"],
        system_prompt="You are a critical evaluator. Be brief but thorough."
    )
   
    graph.visualize()
   
    result = graph.execute()
   
    print("n" + "="*50)
    print("EXECUTION RESULTS:")
    print("="*50)
    print(f"n🔍 QUESTION:n{result.get('user_question')}n")
    print(f"📝 ANSWER:n{result.get('answer')}n")
    print(f"✅ ANALYSIS:n{result.get('analysis')}")
    print("="*50 + "n")
   
    return graph

The LangGraph class implements a lightweight framework for constructing and executing graph-based AI workflows built around Anthropic's Claude. It lets users define modular nodes, either Claude-powered prompts or custom transformation functions, connect them through declared inputs and outputs, visualize the entire pipeline, and execute it in dependency order. The run_example function demonstrates this with a simple assistant flow: one node supplies a question, a Claude node answers it, and a second Claude node evaluates how well the answer addresses the question.
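
One detail worth highlighting is that execute() also supports nodes with multiple outputs: when a function returns a list or tuple whose length matches node.outputs, each element is stored under the corresponding state key. A minimal sketch with hypothetical node and key names:

# Sketch: a transform node returning two values, mapped onto two declared output keys.
def split_question(state, **kwargs):
    text = state.get("user_question", "")
    return text.lower(), len(text.split())   # (normalized text, word count)

demo = LangGraph()
demo.transform_node(
    name="question_splitter",
    transform_fn=split_question,
    inputs=["user_question"],
    outputs=["normalized_question", "word_count"],
)
state = demo.execute({"user_question": "What is a DAG?"})
print(state["normalized_question"], state["word_count"])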

def run_advanced_example():
    """Run a more advanced example with multiple nodes for content generation"""
    graph = LangGraph()
   
    def topic_selector(state, **kwargs):
        return "Graph-based AI systems"
   
    graph.transform_node(
        name="topic_selector",
        transform_fn=topic_selector,
        outputs=["topic"]
    )
   
    graph.claude_node(
        name="outline_generator",
        prompt_template="Create a brief outline for a technical blog post about {topic}. Include 3-4 main sections only.",
        inputs=["topic"],
        outputs=["outline"],
        system_prompt="You are a technical writer specializing in AI technologies."
    )
   
    graph.claude_node(
        name="intro_writer",
        prompt_template="Write an engaging introduction for a blog post with this outline: {outline}nTopic: {topic}",
        inputs=["topic", "outline"],
        outputs=["introduction"],
        system_prompt="You are a technical writer. Write in a clear, engaging style."
    )
   
    graph.claude_node(
        name="conclusion_writer",
        prompt_template="Write a conclusion for a blog post with this outline: {outline}nTopic: {topic}",
        inputs=["topic", "outline"],
        outputs=["conclusion"],
        system_prompt="You are a technical writer. Summarize key points and include a forward-looking statement."
    )
   
    def assembler(state, introduction, outline, conclusion, **kwargs):
        return f"# {state['topic']}nn{introduction}nn## Outlinen{outline}nn## Conclusionn{conclusion}"
   
    graph.transform_node(
        name="content_assembler",
        transform_fn=assembler,
        inputs=["topic", "introduction", "outline", "conclusion"],
        outputs=["final_content"]
    )
   
    graph.visualize()
    result = graph.execute()
   
    print("n" + "="*50)
    print("BLOG POST GENERATED:")
    print("="*50 + "n")
    print(result.get("final_content"))
    print("n" + "="*50)
   
    return graph

The run_advanced_example function demonstrates a more sophisticated use of LangGraph by orchestrating multiple Claude-powered nodes to generate a complete blog post. It starts by selecting a topic, then generates an outline, an introduction, and a conclusion, all through structured Claude prompts. Finally, a transformation node assembles the pieces into a formatted blog post. This example shows how LangGraph can automate complex, multi-step content-generation tasks by chaining connected nodes into a clear, executable flow.
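
Extending the pipeline is simply a matter of registering additional nodes before graph.execute() is called. For example, a hypothetical summarization node added inside run_advanced_example() could look like this sketch:

# Hypothetical extension: summarize the assembled post in one paragraph.
# Register this inside run_advanced_example(), after content_assembler and before graph.execute().
graph.claude_node(
    name="post_summarizer",
    prompt_template="Summarize this blog post in one short paragraph:\n{final_content}",
    inputs=["final_content"],
    outputs=["post_summary"],
    system_prompt="You are a concise technical editor."
)

Registering the node after content_assembler keeps it later in the execution order, so final_content is already present in the shared state when the prompt template is filled in.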

print("1. Running simple question-answering example")
question = "What are the three main advantages of using graph-based AI architectures?"
simple_graph = run_example(question)


print("n2. Running advanced blog post creation example")
advanced_graph = run_advanced_example()

Finally, we execute both LangGraph workflows. First, we run the simple question-answering example by passing a predefined question to the run_example() function. Then we launch the advanced blog-post generation workflow with run_advanced_example(). Together, these calls demonstrate the practical range of LangGraph, from basic prompt-driven interaction to multi-step content generation powered by Anthropic's Claude API.

In conclusion, by combining LangGraph with Anthropic's Claude API, we have shown how straightforward it is to design modular AI workflows that pair powerful language models with graph-based pipelines. Visualizing the task flow and separating responsibilities across nodes, such as question answering, analytical evaluation, content outlining, and assembly, gives developers practical experience building maintainable, scalable AI systems. LangGraph's explicit node dependencies and Claude's language capabilities together provide an efficient way to orchestrate AI processes, especially for rapid prototyping in environments such as Google Colab.

