Build a Multi-Agent AI Assistant with LangChain and Google Gemini Using a Real-Time Streamlit Interface

In this lesson, we build a powerful, interactive application that integrates LangChain, the Google Gemini API, and a suite of advanced tools to create a smart AI assistant. Using Streamlit's intuitive interface, we create a system that can search the web, fetch Wikipedia content, perform calculations, remember key details, and manage chat history, all in real time. Whether we are developers, researchers, or simply exploring AI, this setup lets us interact with a multi-agent system directly from the browser with minimal setup.
!pip install -q streamlit langchain langchain-google-genai langchain-community
!pip install -q pyngrok python-dotenv wikipedia duckduckgo-search
!npm install -g localtunnel
import streamlit as st
import os
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain.tools import Tool, WikipediaQueryRun, DuckDuckGoSearchRun
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts import PromptTemplate
from langchain.callbacks.streamlit import StreamlitCallbackHandler
from langchain_community.utilities import WikipediaAPIWrapper, DuckDuckGoSearchAPIWrapper
import asyncio
import threading
import time
from datetime import datetime
import json
We begin by installing all the required Python and Node.js packages for our app. This includes Streamlit for the frontend, LangChain for agent orchestration, and tools such as Wikipedia, DuckDuckGo, and ngrok/localtunnel for external access. Once installed, we import all the modules needed to start building our multi-tool AI agent.
GOOGLE_API_KEY = "Use Your API Key Here"
NGROK_AUTH_TOKEN = "Use Your Auth Token Here"
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY
Next, we prepare our environment by setting the Google Gemini API key and the ngrok authentication token. We assign these to variables and export GOOGLE_API_KEY as an environment variable so that LangChain's Gemini integration can securely access the model at runtime. Note that the placeholder strings match the checks used later in the script, so the fallback logic only triggers when the tokens are genuinely unconfigured.
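Since python-dotenv is already installed in the first cell, a safer alternative is to load both secrets from a local .env file instead of hardcoding them. A minimal sketch, assuming a .env file with GOOGLE_API_KEY and NGROK_AUTH_TOKEN entries (this file is an assumption, not part of the original setup):

from dotenv import load_dotenv

load_dotenv()  # reads .env from the working directory, if present
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY", GOOGLE_API_KEY)
NGROK_AUTH_TOKEN = os.getenv("NGROK_AUTH_TOKEN", NGROK_AUTH_TOKEN)
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY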
class InnovativeAgentTools:
"""Advanced tool collection for the multi-agent system"""
@staticmethod
def get_calculator_tool():
def calculate(expression: str) -> str:
"""Calculate mathematical expressions safely"""
try:
allowed_chars = set('0123456789+-*/.() ')
if all(c in allowed_chars for c in expression):
result = eval(expression)
return f"Result: {result}"
else:
return "Error: Invalid mathematical expression"
except Exception as e:
return f"Calculation error: {str(e)}"
return Tool(
name="Calculator",
func=calculate,
description="Calculate mathematical expressions. Input should be a valid math expression."
)
@staticmethod
def get_memory_tool(memory_store):
def save_memory(key_value: str) -> str:
"""Save information to memory"""
try:
key, value = key_value.split(":", 1)
memory_store[key.strip()] = value.strip()
return f"Saved '{key.strip()}' to memory"
except ValueError:
return "Error: Use format 'key: value'"
def recall_memory(key: str) -> str:
"""Recall information from memory"""
return memory_store.get(key.strip(), f"No memory found for '{key}'")
return [
Tool(name="SaveMemory", func=save_memory,
description="Save information to memory. Format: 'key: value'"),
Tool(name="RecallMemory", func=recall_memory,
description="Recall saved information. Input: key to recall")
]
@staticmethod
def get_datetime_tool():
def get_current_datetime(format_type: str = "full") -> str:
"""Get current date and time"""
now = datetime.now()
if format_type == "date":
return now.strftime("%Y-%m-%d")
elif format_type == "time":
return now.strftime("%H:%M:%S")
else:
return now.strftime("%Y-%m-%d %H:%M:%S")
return Tool(
name="DateTime",
func=get_current_datetime,
description="Get current date/time. Options: 'date', 'time', or 'full'"
)
Here, we define the InnovativeAgentTools class to equip our agent with specialized skills. We implement a calculator tool for safe expression evaluation, memory tools to save and recall key-value details across turns, and a date-time tool for retrieving the current date and time. These tools enable our Streamlit AI to reason, remember, and respond contextually, much like a true assistant.
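Before wiring these tools into an agent, we can exercise them standalone as a quick sanity check. A minimal sketch, assuming the class above has already been run:

memory_store = {}
calc = InnovativeAgentTools.get_calculator_tool()
save_tool, recall_tool = InnovativeAgentTools.get_memory_tool(memory_store)

print(calc.func("15 * 8 + 32"))                # Result: 152
print(save_tool.func("favorite color: blue"))  # Saved 'favorite color' to memory
print(recall_tool.func("favorite color"))      # blue
print(InnovativeAgentTools.get_datetime_tool().func("date"))  # current date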
class MultiAgentSystem:
"""Innovative multi-agent system with specialized capabilities"""
def __init__(self, api_key: str):
self.llm = ChatGoogleGenerativeAI(
model="gemini-pro",
google_api_key=api_key,
temperature=0.7,
convert_system_message_to_human=True
)
self.memory_store = {}
self.conversation_memory = ConversationBufferWindowMemory(
memory_key="chat_history",
k=10,
return_messages=True
)
self.tools = self._initialize_tools()
self.agent = self._create_agent()
def _initialize_tools(self):
"""Initialize all available tools"""
tools = []
tools.extend([
DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper()),
WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
])
tools.append(InnovativeAgentTools.get_calculator_tool())
tools.append(InnovativeAgentTools.get_datetime_tool())
tools.extend(InnovativeAgentTools.get_memory_tool(self.memory_store))
return tools
def _create_agent(self):
"""Create the ReAct agent with advanced prompt"""
prompt = PromptTemplate.from_template("""
🤖 You are an advanced AI assistant with access to multiple tools and persistent memory.
AVAILABLE TOOLS:
{tools}
TOOL USAGE FORMAT:
- Think step by step about what you need to do
- Use Action: tool_name
- Use Action Input: your input
- Wait for Observation
- Continue until you have a final answer
MEMORY CAPABILITIES:
- You can save important information using SaveMemory
- You can recall previous information using RecallMemory
- Always try to remember user preferences and context
CONVERSATION HISTORY:
{chat_history}
CURRENT QUESTION: {input}
REASONING PROCESS:
{agent_scratchpad}
Begin your response with your thought process, then take action if needed.
""")
agent = create_react_agent(self.llm, self.tools, prompt)
return AgentExecutor(
agent=agent,
tools=self.tools,
memory=self.conversation_memory,
verbose=True,
handle_parsing_errors=True,
max_iterations=5
)
def chat(self, message: str, callback_handler=None):
"""Process user message and return response"""
try:
if callback_handler:
response = self.agent.invoke(
{"input": message},
{"callbacks": [callback_handler]}
)
else:
response = self.agent.invoke({"input": message})
return response["output"]
except Exception as e:
return f"Error processing request: {str(e)}"
In this section, we build the core of our application, the MultiAgentSystem class. Here, we connect the Gemini Pro model through LangChain and initialize all the essential tools, including web search, memory, and calculator functions. We configure a ReAct-style agent using a custom prompt that guides tool usage and memory management. Finally, we define a chat method that lets the agent process user input, invoke tools where necessary, and produce intelligent, context-aware responses.
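Because the class is UI-agnostic, we can also drive it from a plain Python session before touching Streamlit. A minimal sketch, assuming GOOGLE_API_KEY holds a valid Gemini key:

system = MultiAgentSystem(api_key=GOOGLE_API_KEY)
print(system.chat("Remember that my favorite color is blue"))
print(system.chat("Calculate 15 * 8 + 32"))
print(system.chat("What is my favorite color?"))  # should recall it via RecallMemory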
def create_streamlit_app():
"""Create the innovative Streamlit application"""
st.set_page_config(
page_title="🚀 Advanced LangChain Agent with Gemini",
page_icon="🤖",
layout="wide",
initial_sidebar_state="expanded"
)
st.markdown("""
""", unsafe_allow_html=True)
st.markdown("""
Powered by LangChain + Gemini API + Streamlit
""", unsafe_allow_html=True)
with st.sidebar:
st.header("🔧 Configuration")
api_key = st.text_input(
"🔑 Google AI API Key",
type="password",
value=GOOGLE_API_KEY if GOOGLE_API_KEY != "your-gemini-api-key-here" else "",
help="Get your API key from
)
if not api_key:
st.error("Please enter your Google AI API key to continue")
st.stop()
st.success("✅ API Key configured")
st.header("🤖 Agent Capabilities")
st.markdown("""
- 🔍 **Web Search** (DuckDuckGo)
- 📚 **Wikipedia Lookup**
- 🧮 **Mathematical Calculator**
- 🧠 **Persistent Memory**
- 📅 **Date & Time**
- 💬 **Conversation History**
""")
if 'agent_system' in st.session_state:
st.header("🧠 Memory Store")
memory = st.session_state.agent_system.memory_store
if memory:
for key, value in memory.items():
st.markdown(f"""
{key}: {value}
""", unsafe_allow_html=True)
else:
st.info("No memories stored yet")
if 'agent_system' not in st.session_state:
with st.spinner("🔄 Initializing Advanced Agent System..."):
st.session_state.agent_system = MultiAgentSystem(api_key)
st.success("✅ Agent System Ready!")
st.header("💬 Interactive Chat")
if 'messages' not in st.session_state:
st.session_state.messages = [{
"role": "assistant",
"content": """🤖 Hello! I'm your advanced AI assistant powered by Gemini. I can:
• Search the web and Wikipedia for information
• Perform mathematical calculations
• Remember important information across our conversation
• Provide current date and time
• Maintain conversation context
Try asking me something like:
- "Calculate 15 * 8 + 32"
- "Search for recent news about AI"
- "Remember that my favorite color is blue"
- "What's the current time?"
"""
}]
for message in st.session_state.messages:
with st.chat_message(message["role"]):
st.markdown(message["content"])
if prompt := st.chat_input("Ask me anything..."):
st.session_state.messages.append({"role": "user", "content": prompt})
with st.chat_message("user"):
st.markdown(prompt)
with st.chat_message("assistant"):
callback_handler = StreamlitCallbackHandler(st.container())
with st.spinner("🤔 Thinking..."):
response = st.session_state.agent_system.chat(prompt, callback_handler)
st.markdown(f"""
{response}
""", unsafe_allow_html=True)
st.session_state.messages.append({"role": "assistant", "content": response})
st.header("💡 Example Queries")
col1, col2, col3 = st.columns(3)
with col1:
if st.button("🔍 Search Example"):
example = "Search for the latest developments in quantum computing"
st.session_state.example_query = example
with col2:
if st.button("🧮 Math Example"):
example = "Calculate the compound interest on $1000 at 5% for 3 years"
st.session_state.example_query = example
with col3:
if st.button("🧠 Memory Example"):
example = "Remember that I work as a data scientist at TechCorp"
st.session_state.example_query = example
if 'example_query' in st.session_state:
st.info(f"Example query: {st.session_state.example_query}")
At this stage, we bring everything together into a live web app using Streamlit. We configure the page layout, define the sidebar for API key entry and the list of agent capabilities, and initialize the multi-agent system. We persist the message history in session state and enable a chat interface that lets users interact with the agent in real time. To make exploration easy, we also provide buttons for example search, calculation, and memory queries, all in a clean, responsive layout.
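One detail worth noting: as written, the example buttons only display the stored query via st.info; they never submit it to the agent. A small optional extension (a sketch, not part of the original code) could replace that st.info block and push the stored example through the same chat path:

if 'example_query' in st.session_state:
    example = st.session_state.pop('example_query')  # pop so it runs only once
    st.session_state.messages.append({"role": "user", "content": example})
    response = st.session_state.agent_system.chat(example)
    st.session_state.messages.append({"role": "assistant", "content": response})
    st.rerun()  # redraw so the new messages render through the normal message loop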
def setup_ngrok_auth(auth_token):
"""Setup ngrok authentication"""
try:
from pyngrok import ngrok, conf
conf.get_default().auth_token = auth_token
try:
tunnels = ngrok.get_tunnels()
print("✅ Ngrok authentication successful!")
return True
except Exception as e:
print(f"❌ Ngrok authentication failed: {e}")
return False
except ImportError:
print("❌ pyngrok not installed. Installing...")
import subprocess
subprocess.run(['pip', 'install', 'pyngrok'], check=True)
return setup_ngrok_auth(auth_token)
def get_ngrok_token_instructions():
"""Provide instructions for getting ngrok token"""
return """
🔧 NGROK AUTHENTICATION SETUP:
1. Sign up for an ngrok account:
- Visit: https://ngrok.com/
- Create a free account
2. Get your authentication token:
- Go to: https://dashboard.ngrok.com/get-started/your-authtoken
- Copy your authtoken
3. Replace 'your-ngrok-auth-token-here' in the code with your actual token
4. Alternative methods if ngrok fails:
- Use Google Colab's built-in public URL feature
- Use localtunnel: !npx localtunnel --port 8501
- Use serveo.net: !ssh -R 80:localhost:8501 serveo.net
"""
Here, we set up ngrok authentication, which lets us expose our local Streamlit app to the internet. We use the pyngrok library to register the authentication token and verify the connection. If the token is missing or invalid, we provide detailed instructions on how to obtain one, along with fallback options such as localtunnel and serveo, making it easy to host and share our app from Google Colab.
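For reference, pyngrok also ships a one-line helper that performs the same registration by writing the token into the local ngrok config file; a minimal sketch:

from pyngrok import ngrok

ngrok.set_auth_token(NGROK_AUTH_TOKEN)  # persists the token for later ngrok.connect() calls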
def main():
"""Main function to run the application"""
try:
create_streamlit_app()
except Exception as e:
st.error(f"Application error: {str(e)}")
st.info("Please check your API key and try refreshing the page")
The main() function serves as the entry point of our Streamlit application. It simply calls create_streamlit_app() to render the full interface. If anything goes wrong, such as a missing API key or a failed tool initialization, we catch the error gracefully and display a helpful message, ensuring the user can recover and keep using the app smoothly.
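Outside Colab, we would not call main() directly; Streamlit apps are launched through the Streamlit CLI, which runs the script top to bottom inside its own server process. Assuming the code is saved to a file such as streamlit_app.py (the name used later in this tutorial):

!streamlit run streamlit_app.py --server.port=8501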
def run_in_colab():
"""Run the application in Google Colab with proper ngrok setup"""
print("🚀 Starting Advanced LangChain Agent Setup...")
if NGROK_AUTH_TOKEN == "your-ngrok-auth-token-here":
print("⚠️ NGROK_AUTH_TOKEN not configured!")
print(get_ngrok_token_instructions())
print("🔄 Attempting alternative tunnel methods...")
try_alternative_tunnels()
return
print("📦 Installing required packages...")
import subprocess
packages = [
'streamlit',
'langchain',
'langchain-google-genai',
'langchain-community',
'wikipedia',
'duckduckgo-search',
'pyngrok'
]
for package in packages:
try:
subprocess.run(['pip', 'install', package], check=True, capture_output=True)
print(f"✅ {package} installed")
except subprocess.CalledProcessError:
print(f"⚠️ Failed to install {package}")
app_content=""'
import streamlit as st
import os
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain.tools import Tool, WikipediaQueryRun, DuckDuckGoSearchRun
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts import PromptTemplate
from langchain.callbacks.streamlit import StreamlitCallbackHandler
from langchain_community.utilities import WikipediaAPIWrapper, DuckDuckGoSearchAPIWrapper
from datetime import datetime
# Configuration - Replace with your actual keys
GOOGLE_API_KEY = "''' + GOOGLE_API_KEY + '''"
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY
class InnovativeAgentTools:
@staticmethod
def get_calculator_tool():
def calculate(expression: str) -> str:
try:
allowed_chars = set('0123456789+-*/.() ')
if all(c in allowed_chars for c in expression):
result = eval(expression)
return f"Result: {result}"
else:
return "Error: Invalid mathematical expression"
except Exception as e:
return f"Calculation error: {str(e)}"
return Tool(name="Calculator", func=calculate,
description="Calculate mathematical expressions. Input should be a valid math expression.")
@staticmethod
def get_memory_tool(memory_store):
def save_memory(key_value: str) -> str:
try:
key, value = key_value.split(":", 1)
memory_store[key.strip()] = value.strip()
return f"Saved '{key.strip()}' to memory"
except ValueError:
return "Error: Use format 'key: value'"
def recall_memory(key: str) -> str:
return memory_store.get(key.strip(), f"No memory found for '{key}'")
return [
Tool(name="SaveMemory", func=save_memory, description="Save information to memory. Format: 'key: value'"),
Tool(name="RecallMemory", func=recall_memory, description="Recall saved information. Input: key to recall")
]
@staticmethod
def get_datetime_tool():
def get_current_datetime(format_type: str = "full") -> str:
now = datetime.now()
if format_type == "date":
return now.strftime("%Y-%m-%d")
elif format_type == "time":
return now.strftime("%H:%M:%S")
else:
return now.strftime("%Y-%m-%d %H:%M:%S")
return Tool(name="DateTime", func=get_current_datetime,
description="Get current date/time. Options: 'date', 'time', or 'full'")
class MultiAgentSystem:
def __init__(self, api_key: str):
self.llm = ChatGoogleGenerativeAI(
model="gemini-pro",
google_api_key=api_key,
temperature=0.7,
convert_system_message_to_human=True
)
self.memory_store = {}
self.conversation_memory = ConversationBufferWindowMemory(
memory_key="chat_history", k=10, return_messages=True
)
self.tools = self._initialize_tools()
self.agent = self._create_agent()
def _initialize_tools(self):
tools = []
try:
tools.extend([
DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper()),
WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
])
except Exception as e:
st.warning(f"Search tools may have limited functionality: {e}")
tools.append(InnovativeAgentTools.get_calculator_tool())
tools.append(InnovativeAgentTools.get_datetime_tool())
tools.extend(InnovativeAgentTools.get_memory_tool(self.memory_store))
return tools
def _create_agent(self):
prompt = PromptTemplate.from_template("""
🤖 You are an advanced AI assistant with access to multiple tools and persistent memory.
AVAILABLE TOOLS:
{tools}
TOOL USAGE FORMAT:
- Think step by step about what you need to do
- Use Action: tool_name
- Use Action Input: your input
- Wait for Observation
- Continue until you have a final answer
CONVERSATION HISTORY:
{chat_history}
CURRENT QUESTION: {input}
REASONING PROCESS:
{agent_scratchpad}
Begin your response with your thought process, then take action if needed.
""")
agent = create_react_agent(self.llm, self.tools, prompt)
return AgentExecutor(agent=agent, tools=self.tools, memory=self.conversation_memory,
verbose=True, handle_parsing_errors=True, max_iterations=5)
def chat(self, message: str, callback_handler=None):
try:
if callback_handler:
response = self.agent.invoke({"input": message}, {"callbacks": [callback_handler]})
else:
response = self.agent.invoke({"input": message})
return response["output"]
except Exception as e:
return f"Error processing request: {str(e)}"
# Streamlit App
st.set_page_config(page_title="🚀 Advanced LangChain Agent", page_icon="🤖", layout="wide")
st.markdown("""
""", unsafe_allow_html=True)
st.markdown('Powered by LangChain + Gemini API', unsafe_allow_html=True)
with st.sidebar:
st.header("🔧 Configuration")
api_key = st.text_input("🔑 Google AI API Key", type="password", value=GOOGLE_API_KEY)
if not api_key:
st.error("Please enter your Google AI API key")
st.stop()
st.success("✅ API Key configured")
st.header("🤖 Agent Capabilities")
st.markdown("- 🔍 Web Search\n- 📚 Wikipedia\n- 🧮 Calculator\n- 🧠 Memory\n- 📅 Date/Time")
if 'agent_system' in st.session_state and st.session_state.agent_system.memory_store:
st.header("🧠 Memory Store")
for key, value in st.session_state.agent_system.memory_store.items():
st.markdown(f"**{key}**: {value}")
if 'agent_system' not in st.session_state:
with st.spinner("🔄 Initializing Agent..."):
st.session_state.agent_system = MultiAgentSystem(api_key)
st.success("✅ Agent Ready!")
if 'messages' not in st.session_state:
st.session_state.messages = [{
"role": "assistant",
"content": "🤖 Hello! I'm your advanced AI assistant. I can search, calculate, remember information, and more! Try asking me to: calculate something, search for information, or remember a fact about you."
}]
for message in st.session_state.messages:
with st.chat_message(message["role"]):
st.markdown(message["content"])
if prompt := st.chat_input("Ask me anything..."):
st.session_state.messages.append({"role": "user", "content": prompt})
with st.chat_message("user"):
st.markdown(prompt)
with st.chat_message("assistant"):
callback_handler = StreamlitCallbackHandler(st.container())
with st.spinner("🤔 Thinking..."):
response = st.session_state.agent_system.chat(prompt, callback_handler)
st.markdown(response)
st.session_state.messages.append({"role": "assistant", "content": response})
# Example buttons
st.header("💡 Try These Examples")
col1, col2, col3 = st.columns(3)
with col1:
if st.button("🧮 Calculate 15 * 8 + 32"):
st.rerun()
with col2:
if st.button("🔍 Search AI news"):
st.rerun()
with col3:
if st.button("🧠 Remember my name is Alex"):
st.rerun()
'''
with open('streamlit_app.py', 'w') as f:
f.write(app_content)
print("✅ Streamlit app file created successfully!")
if setup_ngrok_auth(NGROK_AUTH_TOKEN):
start_streamlit_with_ngrok()
else:
print("❌ Ngrok authentication failed. Trying alternative methods...")
try_alternative_tunnels()
In the run_in_colab() function, we streamline deploying the Streamlit app directly from Google Colab. We install all the required packages programmatically and write the complete, self-contained app code to a streamlit_app.py file. We then check for a valid ngrok token to enable public access to the app from Colab, and if it is missing or invalid, we print setup instructions and fall back to alternative tunneling options. This setup lets us interact with our AI agent from anywhere, with just a few Colab cells.
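If ngrok is unavailable, the localtunnel fallback can also be scripted rather than typed into a new cell. A sketch, assuming localtunnel was installed by the npm step at the top of the tutorial:

import subprocess

# Launch localtunnel against the Streamlit port and capture its output;
# localtunnel prints the assigned public URL on its first output line.
lt = subprocess.Popen(['npx', 'localtunnel', '--port', '8501'],
                      stdout=subprocess.PIPE, text=True)
print(lt.stdout.readline().strip())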
def start_streamlit_with_ngrok():
"""Start Streamlit with ngrok tunnel"""
import subprocess
import threading
from pyngrok import ngrok
def start_streamlit():
subprocess.run(['streamlit', 'run', 'streamlit_app.py', '--server.port=8501', '--server.headless=true'])
print("🚀 Starting Streamlit server...")
thread = threading.Thread(target=start_streamlit)
thread.daemon = True
thread.start()
time.sleep(5)
try:
print("🌐 Creating ngrok tunnel...")
public_url = ngrok.connect(8501)
print(f"🔗 SUCCESS! Access your app at: {public_url}")
print("✨ Your Advanced LangChain Agent is now running publicly!")
print("📱 You can share this URL with others!")
print("⏳ Keeping tunnel alive... Press Ctrl+C to stop")
try:
ngrok_process = ngrok.get_ngrok_process()
ngrok_process.proc.wait()
except KeyboardInterrupt:
print("👋 Shutting down...")
ngrok.kill()
except Exception as e:
print(f"❌ Ngrok tunnel failed: {e}")
try_alternative_tunnels()
def try_alternative_tunnels():
"""Try alternative tunneling methods"""
print("🔄 Trying alternative tunnel methods...")
import subprocess
import threading
def start_streamlit():
subprocess.run(['streamlit', 'run', 'streamlit_app.py', '--server.port=8501', '--server.headless=true'])
thread = threading.Thread(target=start_streamlit)
thread.daemon = True
thread.start()
time.sleep(3)
print("🌐 Streamlit is running on
print("n📋 ALTERNATIVE TUNNEL OPTIONS:")
print("1. localtunnel: Run this in a new cell:")
print(" !npx localtunnel --port 8501")
print("n2. serveo.net: Run this in a new cell:")
print(" !ssh -R 80:localhost:8501 serveo.net")
print("n3. Colab public URL (if available):")
print(" Use the 'Public URL' button in Colab's interface")
try:
while True:
time.sleep(60)
except KeyboardInterrupt:
print("👋 Shutting down...")
if __name__ == "__main__":
try:
get_ipython()
print("🚀 Google Colab detected - starting setup...")
run_in_colab()
except NameError:
main()
In this final part, we set up the execution logic so the app can run either locally or inside Google Colab. The start_streamlit_with_ngrok() function launches the Streamlit server in a background thread and uses ngrok to expose it publicly, making it easy to access and share. If ngrok fails, the try_alternative_tunnels() function falls back to other options, such as localtunnel and serveo. With the __main__ block, we automatically detect whether we are running in Colab and kick off the appropriate setup, making the whole flow smooth, flexible, and deployable from anywhere.
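The get_ipython() probe works, but an explicit import check against the google.colab package is a slightly more direct alternative. A minimal sketch of the same dispatch logic:

def in_colab() -> bool:
    try:
        import google.colab  # noqa: F401 -- only importable inside Colab
        return True
    except ImportError:
        return False

if __name__ == "__main__":
    run_in_colab() if in_colab() else main()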
In conclusion, we now have a fully functional AI agent running inside a sleek Streamlit interface, able to answer questions, remember user details, and share its work publicly. We have seen how easily Streamlit and ngrok let us wrap advanced AI functionality in an appealing, easy-to-use application. From here, we can extend the agent's tools, plug it into larger workflows, or deploy it as part of our intelligent applications. With Streamlit serving as the front end and LangChain agents powering the logic, we have a solid foundation for building the next generation of AI experiences.




