Meet the LangChain deepagents library, and walk through a practical example to see how deep agents actually work

While basic large language model (LLM) agents that call external tools are easy to create, they often struggle with long and complex tasks because they lack the ability to plan ahead and manage their work over time. They can be considered "shallow" in their execution.
The deepagents library is designed to overcome this limitation by providing a standard architecture inspired by advanced applications such as deep research and deep coding agents.

This architecture gives agents greater depth by combining four key features:
- Planning tool: lets an agent break a complex task into manageable steps before execution.
- Sub-agents: enables a main agent to delegate specialized parts of the work to smaller, focused agents.
- File system access: provides persistent memory for saving task progress, notes, and final results, allowing the agent to pick up where it left off.
- Detailed system prompt: gives the agent clear instructions and context for pursuing its long-term goals.
By combining these features, deepagents makes it easy for developers to build powerful, general-purpose agents that can plan, manage state, and optimize performance.
In this article, we will walk through a working example to see how deepagents actually works.
The main capabilities of deepagents
1. Planning and task management: deepagents comes with a built-in planning tool that helps agents break down large tasks into smaller, manageable steps. Agents can track their progress and adjust the plan as they learn new information.
2. Context management: Using file tools such as ls, read_file, write_file, and edit_file, agents can store information outside of their temporary context window. This prevents context overflow and allows them to handle large or detailed tasks smoothly.
3. Sub-agent creation: A built-in task tool allows an agent to spawn smaller, more focused agents. These sub-agents work on specific parts of the problem without cluttering the main agent's context.
4. Long-term memory: With support for LangGraph's store, agents can persist information across sessions. This means they can recall past work, continue earlier conversations, and build on previous progress.
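To make the planning and file-system ideas above concrete, here is a purely illustrative sketch of the kind of working state a deep agent tracks between steps. The to-do entries, file names, and state keys below are invented for illustration and do not reflect deepagents' actual internal schema:

```python
# Hypothetical sketch of a deep agent's working state (assumed structure,
# not the library's documented API).
state = {
    "todos": [  # output of the planning tool
        {"task": "save question to question.txt", "status": "done"},
        {"task": "research EU AI Act updates", "status": "in_progress"},
        {"task": "write final_report.md", "status": "pending"},
    ],
    "files": {  # virtual file system acting as persistent memory
        "question.txt": "What are the latest updates on the EU AI Act?",
    },
}

def pending_tasks(state):
    """List the tasks the agent still has to work on."""
    return [t["task"] for t in state["todos"] if t["status"] != "done"]

print(pending_tasks(state))
```

Because both the plan and the files live in explicit state rather than in the model's context window, the agent can resume or revise its work at any point.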


Installing dependencies
!pip install deepagents tavily-python langchain-google-genai langchain-openai
Setting up API keys
In this tutorial, we will use an OpenAI API key to power our deep agent. For reference, we will also show how to use a Gemini model.
You are free to choose any model provider you like – OpenAI, Gemini, Anthropic, or others – as deepagents works with any chat model supported by LangChain.
import os
from getpass import getpass
os.environ['TAVILY_API_KEY'] = getpass('Enter Tavily API Key: ')
os.environ['OPENAI_API_KEY'] = getpass('Enter OpenAI API Key: ')
os.environ['GOOGLE_API_KEY'] = getpass('Enter Google API Key: ')
Importing required libraries
import os
from typing import Literal
from tavily import TavilyClient
from deepagents import create_deep_agent
tavily_client = TavilyClient()
Tools
Like standard tool-using agents, a deep agent can also be equipped with a set of tools to help it perform tasks.
In this example, we will give our agent access to a Tavily-powered web search tool, which it can use to gather real-time information from the web.
from typing import Literal
from langchain.chat_models import init_chat_model
from deepagents import create_deep_agent
def internet_search(
    query: str,
    max_results: int = 5,
    topic: Literal["general", "news", "finance"] = "general",
    include_raw_content: bool = False,
):
    """Run a web search"""
    search_docs = tavily_client.search(
        query,
        max_results=max_results,
        include_raw_content=include_raw_content,
        topic=topic,
    )
    return search_docs
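Since internet_search simply returns Tavily's raw response dictionary, it helps to know what that payload looks like. The sketch below uses a stubbed response (the results list of title/url/content entries follows Tavily's documented response shape; the sample data itself is made up) to show how the hits might be condensed for the agent:

```python
# Offline sketch: summarize a Tavily-style search response without a live
# API call. The sample data below is invented for illustration.
sample_response = {
    "query": "EU AI Act latest updates",
    "results": [
        {"title": "EU AI Act enters into force", "url": "https://example.com/a",
         "content": "The EU AI Act entered into force in August 2024..."},
        {"title": "Global impact of the EU AI Act", "url": "https://example.com/b",
         "content": "Other jurisdictions are watching the EU's approach..."},
    ],
}

def summarize_results(response, max_chars=80):
    """Turn a Tavily search response into short '[Title](URL): snippet' lines."""
    lines = []
    for r in response.get("results", []):
        snippet = r["content"][:max_chars]
        lines.append(f"[{r['title']}]({r['url']}): {snippet}")
    return lines

for line in summarize_results(sample_response):
    print(line)
```

In the actual agent run, this condensing is left to the model itself; the tool hands back the full response so the agent can decide what to keep.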
Sub-agents
Sub-agents are one of the most powerful features of deep agents. They allow the main agent to delegate certain parts of a complex task to smaller, specialized agents – each with its own focus, tools, and instructions. This helps keep the main agent's context clean and organized while still allowing for deep, focused work on each part of the task.
In our example, we define two sub-agents:
- policy-research-agent – a specialist researcher that conducts in-depth analysis of policies, regulations, and ethical frameworks around the world. It uses the internet_search tool to gather real-time information and produce a structured, professional report.
- policy-critique-agent – an editing agent responsible for reviewing the generated report for accuracy, completeness, and tone. It ensures that the research is factual, balanced, and aligned with regional legal frameworks.
Together, these sub-agents enable the deep agent to conduct research, analysis, and quality review in a structured, flexible workflow.
sub_research_prompt = """
You are a specialized AI policy researcher.
Conduct in-depth research on government policies, global regulations, and ethical frameworks related to artificial intelligence.
Your answer should:
- Provide key updates and trends
- Include relevant sources and laws (e.g., EU AI Act, U.S. Executive Orders)
- Compare global approaches when relevant
- Be written in clear, professional language
Only your FINAL message will be passed back to the main agent.
"""
research_sub_agent = {
    "name": "policy-research-agent",
    "description": "Used to research specific AI policy and regulation questions in depth.",
    "system_prompt": sub_research_prompt,
    "tools": [internet_search],
}
sub_critique_prompt = """
You are a policy editor reviewing a report on AI governance.
Check the report at `final_report.md` and the question at `question.txt`.
Focus on:
- Accuracy and completeness of legal information
- Proper citation of policy documents
- Balanced analysis of regional differences
- Clarity and neutrality of tone
Provide constructive feedback, but do NOT modify the report directly.
"""
critique_sub_agent = {
    "name": "policy-critique-agent",
    "description": "Critiques AI policy research reports for completeness, clarity, and accuracy.",
    "system_prompt": sub_critique_prompt,
}
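Because sub-agents are plain dictionaries, you can sanity-check them yourself before handing them to the agent factory. A minimal sketch (the required keys mirror the configs above; deepagents may accept additional optional fields):

```python
def validate_subagent(config):
    """Check that a sub-agent config has the keys used in this tutorial."""
    required = {"name", "description", "system_prompt"}
    missing = required - config.keys()
    if missing:
        raise ValueError(f"sub-agent config missing keys: {sorted(missing)}")
    return config["name"]

# Example with a minimal config mirroring the shape used above:
demo = {
    "name": "policy-research-agent",
    "description": "Researches AI policy questions in depth.",
    "system_prompt": "You are a specialized AI policy researcher.",
}
print(validate_subagent(demo))  # policy-research-agent
```

A check like this fails fast with a readable error instead of surfacing a confusing failure deep inside an agent run.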
System prompt
Deep agents include a built-in system prompt that acts as their core operating instructions. This default prompt is inspired by the system prompt used in Claude Code and is designed to be general purpose, providing guidance on how to use the built-in planning tool, file operations, and sub-agents.
However, while the default system prompt enables deep agents to work out of the box, it is highly recommended to define a custom system prompt optimized for your specific use case. Prompt design plays an important role in shaping the agent's reasoning, structure, and overall performance.
In our example, we define a custom prompt that lays out the workflow step by step – saving the question, using the research sub-agent to analyze it, writing the report, and optionally asking the critique sub-agent for a review. It also enforces best practices such as Markdown formatting, citation style, and a professional tone to ensure that the final report meets a high standard for policy briefings.
policy_research_instructions = """
You are an expert AI policy researcher and analyst.
Your job is to investigate questions related to global AI regulation, ethics, and governance frameworks.
1️⃣ Save the user's question to `question.txt`
2️⃣ Use the `policy-research-agent` to perform in-depth research
3️⃣ Write a detailed report to `final_report.md`
4️⃣ Optionally, ask the `policy-critique-agent` to critique your draft
5️⃣ Revise if necessary, then output the final, comprehensive report
When writing the final report:
- Use Markdown with clear sections (## for each)
- Include citations in [Title](URL) format
- Add a ### Sources section at the end
- Write in professional, neutral tone suitable for policy briefings
"""
The main agent
Here we create our deep agent using the create_deep_agent() function. We initialize the model with OpenAI's GPT-4o, but as indicated in the commented line, you can easily switch to Google's Gemini 2.5 Flash model if you prefer. The agent is configured with the internet_search tool, our custom policy_research_instructions system prompt, and the two sub-agents defined above.
By default, deepagents uses Claude Sonnet 4.5 as its model if nothing is explicitly specified, but the library offers full flexibility to integrate OpenAI, Gemini, Anthropic, or other LLMs supported by LangChain.
model = init_chat_model(model="openai:gpt-4o")
# model = init_chat_model(model="google_genai:gemini-2.5-flash")
agent = create_deep_agent(
    model=model,
    tools=[internet_search],
    system_prompt=policy_research_instructions,
    subagents=[research_sub_agent, critique_sub_agent],
)
Invoking the agent
query = "What are the latest updates on the EU AI Act and its global impact?"
result = agent.invoke({"messages": [{"role": "user", "content": query}]})
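Once the run finishes, invoke() returns the agent's final state. The sketch below works on a stubbed result rather than a real run (in practice the messages are LangChain message objects with a .content attribute, and the exact state keys are an assumption based on the workflow above), showing one way you might pull out the finished report:

```python
# Sketch of reading the agent's output from an assumed result structure:
# a state dict with "messages" and a "files" mapping. The sample below is
# stubbed, not produced by a real run.
sample_result = {
    "messages": [
        {"role": "user", "content": "What are the latest updates on the EU AI Act?"},
        {"role": "assistant", "content": "## EU AI Act Update\n..."},
    ],
    "files": {
        "question.txt": "What are the latest updates on the EU AI Act?",
        "final_report.md": "## EU AI Act Update\n...",
    },
}

def final_report(result):
    """Prefer the written report file; fall back to the last message."""
    files = result.get("files", {})
    if "final_report.md" in files:
        return files["final_report.md"]
    return result["messages"][-1]["content"]

print(final_report(sample_result)[:20])
```

Reading the report from the virtual file system rather than the chat transcript means you get the full document even if the agent's final message is only a short summary.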

I am a civil engineering student (2022) from Jamia Millia Islamia, New Delhi, and I am very interested in data science, especially neural networks and their application in various fields.



