
A Coding Implementation of an Advanced LangGraph Multi-Agent Research Assistant

In this tutorial, we build an advanced LangGraph multi-agent research assistant powered by Google's Gemini model end to end. We begin by installing the required libraries: langgraph, langchain-google-genai, and langchain-core. Along the way, we show how to simulate web search, perform data analysis, and pass messages between agents to generate a polished report. Check out the full codes here.

!pip install -q langgraph langchain-google-genai langchain-core


import os
from typing import TypedDict, Annotated, List, Dict, Any
from langgraph.graph import StateGraph, END
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage
import operator
import json




os.environ["GOOGLE_API_KEY"] = "Use Your Own API Key"


class AgentState(TypedDict):
   messages: Annotated[List[BaseMessage], operator.add]
   current_agent: str
   research_data: dict
   analysis_complete: bool
   final_report: str


llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash", temperature=0.7)

We install the langgraph and langchain-google-genai packages and import the core modules we need to orchestrate our multi-agent workflow. We set our Google API key, define the AgentState schema to accumulate messages and track workflow progress, and initialize the gemini-1.5-flash model with a temperature of 0.7 for balanced responses.
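The `Annotated[List[BaseMessage], operator.add]` field in `AgentState` tells LangGraph how to merge each node's output into the shared state: the attached reducer concatenates lists, so new messages append to the history instead of replacing it. A minimal pure-Python sketch of that merge behavior (no LangGraph required; plain strings stand in for `BaseMessage` objects):

```python
import operator

# LangGraph applies the reducer declared in Annotated[...] when a node
# returns a partial state update. For messages, operator.add is list
# concatenation, so each node appends to the conversation history.
history = ["user: What are emerging trends in sustainable technology?"]
node_update = ["ai: Research completed on the query."]

merged = operator.add(history, node_update)  # equivalent to history + node_update
print(merged)
```

Fields without a reducer (like `current_agent` here) are simply overwritten by the most recent node's value.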

def simulate_web_search(query: str) -> str:
   """Simulated web search - replace with real API in production"""
   return f"Search results for '{query}': Found relevant information about {query} including recent developments, expert opinions, and statistical data."


def simulate_data_analysis(data: str) -> str:
   """Simulated data analysis tool"""
   return f"Analysis complete: Key insights from the data include emerging trends, statistical patterns, and actionable recommendations."


def research_agent(state: AgentState) -> AgentState:
   """Agent that researches a given topic"""
   messages = state["messages"]
   last_message = messages[-1].content
  
   search_results = simulate_web_search(last_message)
  
   prompt = f"""You are a research agent. Based on the query: "{last_message}"
  
   Here are the search results: {search_results}
  
   Conduct thorough research and gather relevant information. Provide structured findings with:
   1. Key facts and data points
   2. Current trends and developments 
   3. Expert opinions and insights
   4. Relevant statistics
  
   Be comprehensive and analytical in your research summary."""
  
   response = llm.invoke([HumanMessage(content=prompt)])
  
   research_data = {
       "topic": last_message,
       "findings": response.content,
       "search_results": search_results,
       "sources": ["academic_papers", "industry_reports", "expert_analyses"],
       "confidence": 0.88,
       "timestamp": "2024-research-session"
   }
  
   return {
       "messages": state["messages"] + [AIMessage(content=f"Research completed on '{last_message}': {response.content}")],
       "current_agent": "analysis",
       "research_data": research_data,
       "analysis_complete": False,
       "final_report": ""
   }

We define simulate_web_search and simulate_data_analysis as placeholder tools. We then let research_agent run a simulated search on the user's query, prompt Gemini for structured findings, store them in research_data, and hand the workflow off to the analysis stage once the search and the formatted LLM output are complete.
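Since `simulate_web_search` is only a stand-in, one way to make the eventual swap to a real backend painless is to inject the search client rather than hard-code it. Below is a small sketch of that pattern; `search_fn` is any callable you supply (a SerpAPI, Tavily, or custom client), and the helper names here are illustrative, not part of the original code:

```python
from typing import Callable, Optional

def make_search_tool(search_fn: Optional[Callable[[str], str]] = None) -> Callable[[str], str]:
    """Build a search tool with the same signature as simulate_web_search.

    search_fn: an optional real search client; when omitted, we fall back
    to a simulated response so the workflow still runs offline.
    """
    def web_search(query: str) -> str:
        if search_fn is not None:
            return search_fn(query)  # delegate to the injected backend
        return f"Search results for '{query}': simulated placeholder data."
    return web_search

# Offline fallback vs. an injected (here, fake) backend:
offline = make_search_tool()
injected = make_search_tool(lambda q: f"live results for {q}")
print(offline("solar storage"))
print(injected("solar storage"))
```

Because both variants share one signature, `research_agent` does not need to change when you go from simulation to production.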

def analysis_agent(state: AgentState) -> AgentState:
   """Agent that analyzes research data and extracts insights"""
   research_data = state["research_data"]
  
   analysis_results = simulate_data_analysis(research_data.get('findings', ''))
  
   prompt = f"""You are an analysis agent. Analyze this research data in depth:
  
   Topic: {research_data.get('topic', 'Unknown')}
   Research Findings: {research_data.get('findings', 'No findings')}
   Analysis Results: {analysis_results}
  
   Provide deep insights including:
   1. Pattern identification and trend analysis
   2. Comparative analysis with industry standards
   3. Risk assessment and opportunities 
   4. Strategic implications
   5. Actionable recommendations with priority levels
  
   Be analytical and provide evidence-based insights."""
  
   response = llm.invoke([HumanMessage(content=prompt)])
  
   return {
       "messages": state["messages"] + [AIMessage(content=f"Analysis completed: {response.content}")],
       "current_agent": "report",
       "research_data": state["research_data"],
       "analysis_complete": True,
       "final_report": ""
   }




def report_agent(state: AgentState) -> AgentState:
   """Agent that generates final comprehensive reports"""
   research_data = state["research_data"]
  
   analysis_message = None
   for msg in reversed(state["messages"]):
       if isinstance(msg, AIMessage) and "Analysis completed:" in msg.content:
           analysis_message = msg.content.replace("Analysis completed: ", "")
           break
  
   prompt = f"""You are a professional report generation agent. Create a comprehensive executive report based on:
  
   🔍 Research Topic: {research_data.get('topic')}
   📊 Research Findings: {research_data.get('findings')}
   🧠 Analysis Results: {analysis_message or 'Analysis pending'}
  
   Generate a well-structured, professional report with these sections:
  
   ## EXECUTIVE SUMMARY  
   ## KEY RESEARCH FINDINGS 
   [Detail the most important discoveries and data points]
  
   ## ANALYTICAL INSIGHTS
   [Present deep analysis, patterns, and trends identified]
  
   ## STRATEGIC RECOMMENDATIONS
   [Provide actionable recommendations with priority levels]
  
   ## RISK ASSESSMENT & OPPORTUNITIES
   [Identify potential risks and opportunities]
  
   ## CONCLUSION & NEXT STEPS
   [Summarize and suggest follow-up actions]
  
   Make the report professional, data-driven, and actionable."""
  
   response = llm.invoke([HumanMessage(content=prompt)])
  
   return {
        "messages": state["messages"] + [AIMessage(content=f"📄 FINAL REPORT GENERATED:\n\n{response.content}")],
       "current_agent": "complete",
       "research_data": state["research_data"],
       "analysis_complete": True,
       "final_report": response.content
   }

We use analysis_agent to take the research findings, run them through our simulated analysis tool, and prompt Gemini for patterns, risks, and prioritized recommendations before advancing the workflow to the report stage. We then build report_agent, which retrieves the most recent analysis message and asks Gemini for a structured executive report, with sections ranging from the executive summary to next steps. Finally, we mark the workflow as complete by storing the final report in the state.
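The `report_agent` recovers the analysis text by walking the message history backwards until it finds the "Analysis completed:" marker. Here is the same extraction pattern in isolation, with plain strings standing in for `AIMessage` contents:

```python
# Scan the history from newest to oldest and strip the marker prefix,
# mirroring the loop inside report_agent.
messages = [
    "Research completed on 'solar': findings A, B, C",
    "Analysis completed: trend X is accelerating",
]

analysis_text = None
for msg in reversed(messages):
    if "Analysis completed:" in msg:
        analysis_text = msg.replace("Analysis completed: ", "")
        break

print(analysis_text)
```

Scanning in reverse matters: if the workflow ever loops, this picks up the most recent analysis rather than the first one.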

def should_continue(state: AgentState) -> str:
   """Determine which agent should run next based on current state"""
   current_agent = state.get("current_agent", "research")
  
   if current_agent == "research":
       return "analysis"
   elif current_agent == "analysis":
       return "report"
   elif current_agent == "report":
       return END
   else:
       return END


workflow = StateGraph(AgentState)


workflow.add_node("research", research_agent)
workflow.add_node("analysis", analysis_agent)
workflow.add_node("report", report_agent)


workflow.add_conditional_edges(
   "research",
   should_continue,
   {"analysis": "analysis", END: END}
)


workflow.add_conditional_edges(
   "analysis",
   should_continue,
   {"report": "report", END: END}
)


workflow.add_conditional_edges(
   "report",
   should_continue,
   {END: END}
)


workflow.set_entry_point("research")


app = workflow.compile()


def run_research_assistant(query: str):
   """Run the complete research workflow"""
   initial_state = {
       "messages": [HumanMessage(content=query)],
       "current_agent": "research",
       "research_data": {},
       "analysis_complete": False,
       "final_report": ""
   }
  
   print(f"🔍 Starting Multi-Agent Research on: '{query}'")
   print("=" * 60)

   current_state = initial_state

   print("🤖 Research Agent: Gathering information...")
   current_state = research_agent(current_state)
   print("✅ Research phase completed!\n")

   print("🧠 Analysis Agent: Analyzing findings...")
   current_state = analysis_agent(current_state)
   print("✅ Analysis phase completed!\n")

   print("📊 Report Agent: Generating comprehensive report...")
   final_state = report_agent(current_state)
   print("✅ Report generation completed!\n")

   print("=" * 60)
   print("🎯 MULTI-AGENT WORKFLOW COMPLETED SUCCESSFULLY!")
   print("=" * 60)

   final_report = final_state['final_report']
   print(f"\n📋 COMPREHENSIVE RESEARCH REPORT:\n")
   print(final_report)
  
   return final_state

We build the StateGraph, add our three agents as nodes with conditional edges driven by should_continue, set "research" as the entry point, and compile the graph. We then define run_research_assistant, which steps through the research, analysis, and report phases in turn and prints the final report.
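Under the hood, `should_continue` defines a fixed linear route through the graph. A minimal pure-Python sketch of that routing loop (with the string "END" standing in for LangGraph's END sentinel) shows the order in which the compiled graph visits the nodes:

```python
def route(current_agent: str) -> str:
    # Same transitions as should_continue, with "END" standing in
    # for langgraph.graph.END.
    transitions = {"research": "analysis", "analysis": "report", "report": "END"}
    return transitions.get(current_agent, "END")

visited = []
agent = "research"
while agent != "END":
    visited.append(agent)
    agent = route(agent)

print(visited)  # research, then analysis, then report
```

With the real graph, the same traversal happens when you call `app.invoke(initial_state)`, which runs the nodes in this order and returns the final state; run_research_assistant calls the agent functions manually instead so that each phase can be logged.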

if __name__ == "__main__":
   print("🚀 Advanced LangGraph Multi-Agent System Ready!")
   print("🔧 Remember to set your GOOGLE_API_KEY!")
  
   example_queries = [
       "Impact of renewable energy on global markets",
       "Future of remote work post-pandemic"
   ]
  
   print(f"\n💡 Example queries you can try:")
   for i, query in enumerate(example_queries, 1):
       print(f"  {i}. {query}")
  
   print(f"\n🎯 Usage: run_research_assistant('Your research question here')")
  
   result = run_research_assistant("What are emerging trends in sustainable technology?")

The entry point launches our agent program: it prints a readiness message and example queries, and reminds us to set the Google API key. We then run a sample query, "What are emerging trends in sustainable technology?", printing the output of the complete workflow.

In conclusion, this setup gives us a clean separation of concerns in a modular pipeline. Each agent encapsulates a distinct capability, gathering, interpretation, and delivery, which lets us swap in real APIs or extend the pipeline with new tools as our needs evolve. We encourage you to experiment with custom tools, refine the state schema, and try other LLMs. This framework is designed to grow with your research and product goals: as it does, we keep refining our agents and their skills, ensuring that our multi-agent system remains robust and adaptable in any domain.


Check out the full codes here. Feel free to visit our GitHub page for tutorials, code, and notebooks. Also, follow us and don't forget to join our 100K+ ML SubReddit and subscribe to our newsletter.


Asif Razzaq is the CEO of Marktechpost Media Inc. As an entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an AI media platform, Marktechpost, known for its in-depth coverage of machine learning and deep learning news that is technically sound yet easily understandable by a wide audience. The platform draws more than two million monthly visits, reflecting its popularity among readers.

