A Step-by-Step Coding Guide to Building a Multi-Tool AI Agent with LangGraph and Claude for Dynamic Agent Creation

In this comprehensive tutorial, we guide users through creating a powerful multi-tool AI agent using LangGraph and Claude, equipped with capabilities spanning mathematical computation, web search, weather lookups, text analysis, and real-time information retrieval. It begins by simplifying dependency installation to ensure an effortless setup, even for beginners. Users are then introduced to the structured implementation of specialized tools, such as a safe calculator, an efficient web search utility, a mock weather data provider, and a detailed text analyzer. The tutorial also walks through the integration of these tools into a cohesive agent built with LangGraph, illustrating practical usage through interactive examples so users can deploy capable AI agents immediately.
import subprocess
import sys

def install_packages():
    packages = [
        "langgraph",
        "langchain",
        "langchain-anthropic",
        "langchain-community",
        "requests",
        "python-dotenv",
        "duckduckgo-search"
    ]
    for package in packages:
        try:
            subprocess.check_call([sys.executable, "-m", "pip", "install", package, "-q"])
            print(f"✓ Installed {package}")
        except subprocess.CalledProcessError:
            print(f"✗ Failed to install {package}")

print("Installing required packages...")
install_packages()
print("Installation complete!\n")
We begin by installing the essential Python packages required to build a LangGraph-based agent. The helper function quietly pip-installs and verifies each package, ranging from the LangGraph and LangChain libraries to search and environment utilities, and reports whether each install succeeded. This automates environment preparation, making the notebook reproducible and beginner-friendly.
import os
import json
import math
import requests
from typing import Dict, List, Any, Annotated, TypedDict
from datetime import datetime
import operator
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, ToolMessage
from langchain_core.tools import tool
from langchain_anthropic import ChatAnthropic
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver
from duckduckgo_search import DDGS
We import all the libraries and modules required to build a multi-tool AI agent. These include standard Python libraries such as os, json, math, and datetime for common operations, plus external libraries such as requests for HTTP calls and duckduckgo_search for web search. The LangChain and LangGraph ecosystems contribute message types, the tool decorator, state-graph construction, and checkpointing utilities, while ChatAnthropic enables integration with Claude models. Together these imports form the foundation for defining tools, the agent's workflow, and its interactions.
os.environ["ANTHROPIC_API_KEY"] = "Use Your API Key Here"
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")
We set and retrieve the Anthropic API key needed to authenticate with Claude models. The os.environ line stores your API key (replace the placeholder with a real key), while os.getenv reads it back for later use when the model is created. This approach makes the key available throughout the script without hard-coding it in multiple places.
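Since python-dotenv is among the installed packages, a safer pattern than hard-coding the key is to keep it in a .env file and load it at startup. The snippet below is a minimal stdlib sketch of what python-dotenv's load_dotenv() does; the throwaway .env file and the load_env_file helper are illustrative assumptions, not part of the tutorial's code.

```python
import os
import tempfile

# Minimal stdlib sketch of what python-dotenv's load_dotenv() does:
# read KEY=VALUE lines and put them into os.environ without overwriting.
def load_env_file(path):
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())

# Demo with a throwaway .env file; in real use you would simply call
# load_dotenv() from the python-dotenv package installed earlier.
os.environ.pop("ANTHROPIC_API_KEY", None)  # clean slate for the demo
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("ANTHROPIC_API_KEY=sk-ant-demo\n")
load_env_file(f.name)
print(os.getenv("ANTHROPIC_API_KEY"))  # sk-ant-demo
```

Keeping the key out of the script also keeps it out of version control.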
from typing import TypedDict

class AgentState(TypedDict):
    messages: Annotated[List[BaseMessage], operator.add]

@tool
def calculator(expression: str) -> str:
    """
    Perform mathematical calculations. Supports basic arithmetic, trigonometry, and more.

    Args:
        expression: Mathematical expression as a string (e.g., "2 + 3 * 4", "sin(3.14159/2)")

    Returns:
        Result of the calculation as a string
    """
    try:
        allowed_names = {
            'abs': abs, 'round': round, 'min': min, 'max': max,
            'sum': sum, 'pow': pow, 'sqrt': math.sqrt,
            'sin': math.sin, 'cos': math.cos, 'tan': math.tan,
            'log': math.log, 'log10': math.log10, 'exp': math.exp,
            'pi': math.pi, 'e': math.e
        }
        expression = expression.replace('^', '**')
        result = eval(expression, {"__builtins__": {}}, allowed_names)
        return f"Result: {result}"
    except Exception as e:
        return f"Error in calculation: {str(e)}"
This defines the agent's internal state and implements the calculator tool. The AgentState class uses TypedDict to schematize the agent's memory: a list of messages that grows, via operator.add, as the conversation proceeds. The calculator function, decorated with @tool to register it as an agent-callable tool, safely evaluates mathematical expressions. It restricts eval to a whitelist of names drawn from the math module and normalizes syntax such as ^ into Python's ** exponentiation operator. This lets the tool handle both simple arithmetic and advanced functions such as trigonometry or logarithms while blocking execution of unsafe code.
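The restricted-eval pattern at the heart of the calculator can be exercised on its own. This is a standalone sketch, independent of the @tool wrapper, with a smaller whitelist chosen just for illustration:

```python
import math

# Only whitelisted names are visible to eval; builtins are disabled,
# so expressions like "__import__('os')" fail instead of executing.
ALLOWED = {"sqrt": math.sqrt, "sin": math.sin, "pi": math.pi, "abs": abs}

def safe_eval(expression: str):
    expression = expression.replace("^", "**")  # caret -> Python exponentiation
    return eval(expression, {"__builtins__": {}}, ALLOWED)

print(safe_eval("sqrt(144) + 2^3"))  # 20.0
print(safe_eval("sin(pi/2)"))        # 1.0
try:
    safe_eval("__import__('os').system('ls')")
except Exception as e:
    print("blocked:", type(e).__name__)  # blocked: NameError
```

Passing an empty `__builtins__` dict is what actually closes the door: without it, eval silently inherits the full builtin namespace.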
@tool
def web_search(query: str, num_results: int = 3) -> str:
    """
    Search the web for information using DuckDuckGo.

    Args:
        query: Search query string
        num_results: Number of results to return (default: 3, max: 10)

    Returns:
        Search results as formatted string
    """
    try:
        num_results = min(max(num_results, 1), 10)
        with DDGS() as ddgs:
            results = list(ddgs.text(query, max_results=num_results))
        if not results:
            return f"No search results found for: {query}"
        formatted_results = f"Search results for '{query}':\n\n"
        for i, result in enumerate(results, 1):
            formatted_results += f"{i}. **{result['title']}**\n"
            formatted_results += f"   {result['body']}\n"
            formatted_results += f"   Source: {result['href']}\n\n"
        return formatted_results
    except Exception as e:
        return f"Error performing web search: {str(e)}"
This defines the web_search tool, which lets the agent retrieve real-time information from the internet using the duckduckgo_search package. The tool accepts a search query and a num_results parameter, clamping the number of returned results to between 1 and 10. It formats the results into a numbered, readable listing with title, snippet, and source URL. If no results are available or an error occurs, it returns an informative message instead of raising. This equips the agent with real-time search capability, improving both its responsiveness and usefulness.
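The clamping and formatting logic can be checked without any network access. The mock result dictionary below is an assumption shaped like the entries duckduckgo_search returns (title, body, href keys):

```python
# Standalone sketch (no network): the clamp used for num_results, plus the
# same formatting web_search applies, driven by a mock result dictionary.
def clamp(n: int, lo: int = 1, hi: int = 10) -> int:
    return min(max(n, lo), hi)

mock_results = [
    {"title": "LangGraph docs", "body": "Build stateful agents.", "href": "https://example.com"},
]
lines = [f"{i}. **{r['title']}**\n   {r['body']}\n   Source: {r['href']}"
         for i, r in enumerate(mock_results, 1)]

print(clamp(0), clamp(25), clamp(3))  # 1 10 3
print("\n".join(lines))
```

Separating formatting from fetching like this also makes the tool easy to unit-test.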
@tool
def weather_info(city: str) -> str:
    """
    Get current weather information for a city using OpenWeatherMap API.
    Note: This is a mock implementation for demo purposes.

    Args:
        city: Name of the city

    Returns:
        Weather information as a string
    """
    mock_weather = {
        "new york": {"temp": 22, "condition": "Partly Cloudy", "humidity": 65},
        "london": {"temp": 15, "condition": "Rainy", "humidity": 80},
        "tokyo": {"temp": 28, "condition": "Sunny", "humidity": 70},
        "paris": {"temp": 18, "condition": "Overcast", "humidity": 75}
    }
    city_lower = city.lower()
    if city_lower in mock_weather:
        weather = mock_weather[city_lower]
        return (f"Weather in {city}:\n"
                f"Temperature: {weather['temp']}°C\n"
                f"Condition: {weather['condition']}\n"
                f"Humidity: {weather['humidity']}%")
    else:
        return f"Weather data not available for {city}. (This is a demo with limited cities: New York, London, Tokyo, Paris)"
This defines the weather_info tool, which simulates returning current weather data for a given city. Rather than calling a live weather API, it uses a predefined dictionary of mock data for major cities such as New York, London, Tokyo, and Paris. The function lowercases the city name and checks for it in the mock database, returning temperature, condition, and humidity when a match is found; otherwise it informs the user that weather data is unavailable. The tool works as a placeholder and can later be upgraded to fetch live data from a real weather API.
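As a hedged sketch of that upgrade path: OpenWeatherMap exposes a current-weather endpoint that takes the city name, an API key, and a units parameter. The helper name and placeholder key below are illustrative assumptions; you would pass the returned params to requests.get(OWM_URL, params=...) and handle errors yourself:

```python
# Hypothetical helper for upgrading weather_info to live data via
# OpenWeatherMap's current-weather endpoint (key below is a placeholder).
OWM_URL = "https://api.openweathermap.org/data/2.5/weather"

def build_weather_params(city: str, api_key: str) -> dict:
    return {"q": city, "appid": api_key, "units": "metric"}  # metric -> °C

params = build_weather_params("Tokyo", "your-openweathermap-key")
print(params["q"], params["units"])  # Tokyo metric
```

Keeping parameter construction in its own function lets you test the request shape without hitting the network.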
import re

@tool
def text_analyzer(text: str) -> str:
    """
    Analyze text and provide statistics like word count, character count, etc.

    Args:
        text: Text to analyze

    Returns:
        Text analysis results
    """
    if not text.strip():
        return "Please provide text to analyze."
    words = text.split()
    # Split once on sentence-ending punctuation rather than three separate splits
    sentences = [s.strip() for s in re.split(r'[.!?]+', text) if s.strip()]
    analysis = f"Text Analysis Results:\n"
    analysis += f"• Characters (with spaces): {len(text)}\n"
    analysis += f"• Characters (without spaces): {len(text.replace(' ', ''))}\n"
    analysis += f"• Words: {len(words)}\n"
    analysis += f"• Sentences: {len(sentences)}\n"
    analysis += f"• Average words per sentence: {len(words) / max(len(sentences), 1):.1f}\n"
    analysis += f"• Most common word: {max(set(words), key=words.count) if words else 'N/A'}"
    return analysis
The text_analyzer tool provides basic statistical analysis of a piece of text. It computes metrics such as character counts (with and without spaces), word count, sentence count, and average words per sentence, and identifies the most frequently occurring word. It handles empty input gracefully by prompting the user to supply actual text. Using simple string operations together with Python's set and max built-ins, it produces readable insights, making it a handy capability for language analysis or content-quality checks in the agent's toolkit.
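The most-common-word line can also be computed with collections.Counter, which tallies in a single pass rather than re-counting for every word as max(set(words), key=words.count) does. This is an optional refactoring sketch, not part of the tutorial's code:

```python
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the end"
words = text.split()

# Counter tallies all words in one O(n) pass; most_common(1) returns
# a list with the single (word, count) pair that occurred most often.
most_common_word, count = Counter(words).most_common(1)[0]
print(most_common_word, count)  # the 3
```

For short inputs the difference is negligible, but Counter scales linearly where the max/count approach is quadratic.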
@tool
def current_time() -> str:
    """
    Get the current date and time.

    Returns:
        Current date and time as a formatted string
    """
    now = datetime.now()
    return f"Current date and time: {now.strftime('%Y-%m-%d %H:%M:%S')}"
The current_time tool provides a straightforward way to fetch the system's current date and time in a human-readable format. Using Python's datetime module, it captures the current moment and formats it as YYYY-MM-DD HH:MM:SS. This is especially useful for time-stamping the agent's actions or answering user questions about the current date and time.
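The strftime format string can be verified in isolation with a fixed datetime, which makes the output deterministic; the date chosen below is just an example value:

```python
from datetime import datetime

# A fixed datetime makes the format string's behavior deterministic:
# %Y=year, %m=zero-padded month, %d=day, %H:%M:%S=24-hour time.
moment = datetime(2024, 5, 17, 9, 30, 5)
print(moment.strftime('%Y-%m-%d %H:%M:%S'))  # 2024-05-17 09:30:05
```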
tools = [calculator, web_search, weather_info, text_analyzer, current_time]

def create_llm():
    if ANTHROPIC_API_KEY:
        return ChatAnthropic(
            model="claude-3-haiku-20240307",
            temperature=0.1,
            max_tokens=1024
        )
    else:
        class MockLLM:
            def invoke(self, messages):
                last_message = messages[-1].content if messages else ""
                if any(word in last_message.lower() for word in ['calculate', 'math', '+', '-', '*', '/', 'sqrt', 'sin', 'cos']):
                    import re
                    numbers = re.findall(r'[\d+\-*/.()\s\w]+', last_message)
                    expr = numbers[0] if numbers else "2+2"
                    return AIMessage(content="I'll help you with that calculation.",
                                     tool_calls=[{"name": "calculator", "args": {"expression": expr.strip()}, "id": "calc1"}])
                elif any(word in last_message.lower() for word in ['search', 'find', 'look up', 'information about']):
                    query = last_message.replace('search for', '').replace('find', '').replace('look up', '').strip()
                    if not query or len(query) < 3:
                        query = "python programming"
                    return AIMessage(content="I'll search for that information.",
                                     tool_calls=[{"name": "web_search", "args": {"query": query}, "id": "search1"}])
                elif any(word in last_message.lower() for word in ['weather', 'temperature']):
                    city = "New York"
                    words = last_message.lower().split()
                    for i, word in enumerate(words):
                        if word == 'in' and i + 1 < len(words):
                            city = words[i + 1].title()
                            break
                    return AIMessage(content="I'll get the weather information.",
                                     tool_calls=[{"name": "weather_info", "args": {"city": city}, "id": "weather1"}])
                elif any(word in last_message.lower() for word in ['time', 'date']):
                    return AIMessage(content="I'll get the current time.",
                                     tool_calls=[{"name": "current_time", "args": {}, "id": "time1"}])
                elif any(word in last_message.lower() for word in ['analyze', 'analysis']):
                    text = last_message.replace('analyze this text:', '').replace('analyze', '').strip()
                    if not text:
                        text = "Sample text for analysis"
                    return AIMessage(content="I'll analyze that text for you.",
                                     tool_calls=[{"name": "text_analyzer", "args": {"text": text}, "id": "analyze1"}])
                else:
                    return AIMessage(content="Hello! I'm a multi-tool agent powered by Claude. I can help with:\n• Mathematical calculations\n• Web searches\n• Weather information\n• Text analysis\n• Current time/date\n\nWhat would you like me to help you with?")

            def bind_tools(self, tools):
                return self

        print("⚠️ Note: Using mock LLM for demo. Add your ANTHROPIC_API_KEY for full functionality.")
        return MockLLM()

llm = create_llm()
llm_with_tools = llm.bind_tools(tools)
We initialize the language model that powers the agent. If a valid Anthropic API key is available, create_llm returns a Claude 3 Haiku model configured with low temperature for consistent responses. Without an API key, a MockLLM class imitates basic routing logic via keyword matching, allowing the agent to function in a limited demo mode. The bind_tools call attaches the defined tools to the model so it can emit tool calls as needed.
def agent_node(state: AgentState) -> Dict[str, Any]:
    """Main agent node that processes messages and decides on tool usage."""
    messages = state["messages"]
    response = llm_with_tools.invoke(messages)
    return {"messages": [response]}

def should_continue(state: AgentState) -> str:
    """Determine whether to continue with tool calls or end."""
    last_message = state["messages"][-1]
    if hasattr(last_message, 'tool_calls') and last_message.tool_calls:
        return "tools"
    return END
These two functions form the agent's decision-making core. agent_node processes the incoming messages, invokes the tool-bound language model, and returns the model's response to be appended to the state. should_continue then checks whether that response includes tool calls: if so, it routes control to the tools node; otherwise it routes the flow to END. Together they give the agent a stable, dynamic control loop.
def create_agent_graph():
    tool_node = ToolNode(tools)
    workflow = StateGraph(AgentState)
    workflow.add_node("agent", agent_node)
    workflow.add_node("tools", tool_node)
    workflow.add_edge(START, "agent")
    workflow.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
    workflow.add_edge("tools", "agent")
    memory = MemorySaver()
    app = workflow.compile(checkpointer=memory)
    return app

print("Creating LangGraph Multi-Tool Agent...")
agent = create_agent_graph()
print("✓ Agent created successfully!\n")
Here we assemble the LangGraph workflow that defines the agent's structure. A ToolNode manages tool execution, and a StateGraph wires the flow between agent decisions and tool calls. Nodes and edges encode the transitions: execution starts at the agent, branches to the tools node when tool calls are present, and loops back to the agent afterward. A MemorySaver checkpointer is attached at compile time to persist conversation state across turns. Compiling the graph yields a runnable app, giving us a structured, memory-enabled agent ready for use.
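The agent → tools → agent cycle this graph encodes can be sketched in plain Python to see the control flow in isolation; run_loop, toy_agent, and toy_tool below are hypothetical stand-ins for the compiled graph and its nodes, not LangGraph APIs:

```python
# Plain-Python sketch of the control flow the graph compiles: run the agent,
# route to tools if the reply requests them, then loop back to the agent.
def run_loop(agent_step, tool_step, max_iters=5):
    state = {"messages": []}
    for _ in range(max_iters):
        reply = agent_step(state)                   # "agent" node
        state["messages"].append(reply)
        if not reply.get("tool_calls"):             # should_continue -> END
            return state
        state["messages"].append(tool_step(reply))  # "tools" node, then loop

# Toy nodes: the agent asks for one tool call, then answers and finishes.
def toy_agent(state):
    if any(m.get("role") == "tool" for m in state["messages"]):
        return {"role": "ai", "content": "Result: 4", "tool_calls": []}
    return {"role": "ai", "content": "", "tool_calls": [{"name": "calculator"}]}

def toy_tool(reply):
    return {"role": "tool", "content": "4"}

final = run_loop(toy_agent, toy_tool)
print(final["messages"][-1]["content"])  # Result: 4
```

LangGraph adds checkpointing, streaming, and typed state on top, but the underlying loop is exactly this shape.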
def test_agent():
    """Test the agent with various queries."""
    config = {"configurable": {"thread_id": "test-thread"}}
    test_queries = [
        "What's 15 * 7 + 23?",
        "Search for information about Python programming",
        "What's the weather like in Tokyo?",
        "What time is it?",
        "Analyze this text: 'LangGraph is an amazing framework for building AI agents.'"
    ]
    print("🧪 Testing the agent with sample queries...\n")
    for i, query in enumerate(test_queries, 1):
        print(f"Query {i}: {query}")
        print("-" * 50)
        try:
            response = agent.invoke(
                {"messages": [HumanMessage(content=query)]},
                config=config
            )
            last_message = response["messages"][-1]
            print(f"Response: {last_message.content}\n")
        except Exception as e:
            print(f"Error: {str(e)}\n")
The test_agent function is a simple harness that confirms the LangGraph agent responds correctly to different use cases. It runs predefined queries covering math, web search, weather, time, and text analysis, and prints the agent's answer to each. Using a consistent thread_id configuration, it invokes the agent once per question and displays the results cleanly, helping developers verify tool integration and conversational reasoning before moving on to interactive or production use.
def chat_with_agent():
    """Interactive chat function."""
    config = {"configurable": {"thread_id": "interactive-thread"}}
    print("🤖 Multi-Tool Agent Chat")
    print("Available tools: Calculator, Web Search, Weather Info, Text Analyzer, Current Time")
    print("Type 'quit' to exit, 'help' for available commands\n")
    while True:
        try:
            user_input = input("You: ").strip()
            if user_input.lower() in ['quit', 'exit', 'q']:
                print("Goodbye!")
                break
            elif user_input.lower() == 'help':
                print("\nAvailable commands:")
                print("• Calculator: 'Calculate 15 * 7 + 23' or 'What's sin(pi/2)?'")
                print("• Web Search: 'Search for Python tutorials' or 'Find information about AI'")
                print("• Weather: 'Weather in Tokyo' or 'What's the temperature in London?'")
                print("• Text Analysis: 'Analyze this text: [your text]'")
                print("• Current Time: 'What time is it?' or 'Current date'")
                print("• quit: Exit the chat\n")
                continue
            elif not user_input:
                continue
            response = agent.invoke(
                {"messages": [HumanMessage(content=user_input)]},
                config=config
            )
            last_message = response["messages"][-1]
            print(f"Agent: {last_message.content}\n")
        except KeyboardInterrupt:
            print("\nGoodbye!")
            break
        except Exception as e:
            print(f"Error: {str(e)}\n")
The chat_with_agent function provides an interactive command-line interface for real-time conversations with the LangGraph multi-tool agent. It supports natural-language queries and recognizes commands such as 'help' for usage guidance and 'quit' to exit. Each user input is passed through the agent, which decides whether to invoke tools and composes a relevant response. The loop makes the agent feel like a flexible assistant and demonstrates how it handles varied questions, from math and web searches to weather and time lookups.
if __name__ == "__main__":
    test_agent()
    print("=" * 60)
    print("🎉 LangGraph Multi-Tool Agent is ready!")
    print("=" * 60)
    chat_with_agent()

def quick_demo():
    """Quick demonstration of agent capabilities."""
    config = {"configurable": {"thread_id": "demo"}}
    demos = [
        ("Math", "Calculate the square root of 144 plus 5 times 3"),
        ("Search", "Find recent news about artificial intelligence"),
        ("Time", "What's the current date and time?")
    ]
    print("🚀 Quick Demo of Agent Capabilities\n")
    for category, query in demos:
        print(f"[{category}] Query: {query}")
        try:
            response = agent.invoke(
                {"messages": [HumanMessage(content=query)]},
                config=config
            )
            print(f"Response: {response['messages'][-1].content}\n")
        except Exception as e:
            print(f"Error: {str(e)}\n")

print("\n" + "=" * 60)
print("🔧 Usage Instructions:")
print("1. Add your ANTHROPIC_API_KEY to use Claude model")
print("   os.environ['ANTHROPIC_API_KEY'] = 'your-anthropic-api-key'")
print("2. Run quick_demo() for a quick demonstration")
print("3. Run chat_with_agent() for interactive chat")
print("4. The agent supports: calculations, web search, weather, text analysis, and time")
print("5. Example: 'Calculate 15*7+23' or 'Search for Python tutorials'")
print("=" * 60)
Finally, we orchestrate execution of the multi-tool agent. When the script runs directly, it first calls test_agent() to validate behavior against sample queries, then launches chat_with_agent() for interactive conversation. The quick_demo() function offers a compact showcase of math, search, and time queries. Clear usage instructions are printed at the end, covering API-key setup, the demo helpers, and the agent's capabilities, giving users a smooth onboarding path to explore and extend the agent.
In conclusion, this hands-on tutorial offers practical insight into building an effective multi-tool AI agent with LangGraph and Claude. Through concrete tool definitions and a clear graph structure, it shows how to combine diverse capabilities into a single cohesive, functional system. The agent's versatility, from performing complex calculations to retrieving live information, demonstrates the flexibility of modern AI development frameworks. The inclusion of both a test harness and an interactive chat interface further streamlines development and enables rapid adoption in different contexts. With this foundation, developers can confidently extend and customize their own AI agents.
Check out the full notebook on GitHub. All credit for this research goes to the researchers of this project.
