A Comprehensive Tutorial on the Five Levels of Agentic AI Architectures: From Basic Prompt Responses to Fully Autonomous Code Generation and Execution

In this tutorial, we explore five levels of agentic architecture, from the simplest language model calls to a fully autonomous code-generating system. The tutorial is designed to run seamlessly in Google Colab. Starting with a "simple processor" that merely returns the model's output, we progressively add routing logic, tool integration, and multi-step orchestration, and finally empower the model to plan, validate, and execute its own Python code. Throughout, each section includes runnable demos and clear explanations that show how you can balance human control with machine autonomy in real-world AI applications.
import os
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
import re
import json
import time
import random
from IPython.display import clear_output
We import the core Python and third-party libraries: os and time for environment control and execution pacing, and torch together with Hugging Face's transformers (pipeline, AutoTokenizer, AutoModelForCausalLM) for loading and running the model. We also rely on re and json for parsing LLM outputs, random for seeds and mock data, and clear_output for keeping the Colab interface tidy.
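Before loading the model, it can help to confirm which device PyTorch will use. This is a minimal optional sketch, not part of the original flow; the notebook also runs on CPU, just more slowly.
# Optional sanity check (an assumption, not part of the original flow):
# report which compute device torch will place the model on.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on: {device}")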
MODEL_NAME = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
def get_model_and_tokenizer():
if not hasattr(get_model_and_tokenizer, "model"):
print(f"Loading model {MODEL_NAME}...")
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
torch_dtype=torch.float16,
device_map="auto",
low_cpu_mem_usage=True
)
get_model_and_tokenizer.model = model
get_model_and_tokenizer.tokenizer = tokenizer
print("Model loaded successfully!")
return get_model_and_tokenizer.model, get_model_and_tokenizer.tokenizer
Here we define MODEL_NAME to point to the TinyLlama 1.1B chat model and implement a lazy loader, get_model_and_tokenizer(), that downloads and initializes the tokenizer and model only on the first call, caching them as function attributes so that all subsequent calls skip the expensive reload.
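As a quick illustration of the caching behavior, here is a minimal usage sketch: the first call triggers the download and load, while repeat calls return the same cached objects.
# Usage sketch: the first call prints "Loading model ..."; later calls are instant.
model, tokenizer = get_model_and_tokenizer()
model_again, tokenizer_again = get_model_and_tokenizer()
assert model is model_again and tokenizer is tokenizer_again  # same cached objects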
def generate_text(prompt, max_length=512):
model, tokenizer = get_model_and_tokenizer()
messages = [{"role": "user", "content": prompt}]
formatted_prompt = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
output = model.generate(
**inputs,
max_new_tokens=max_length,
do_sample=True,
temperature=0.7,
top_p=0.9,
)
    generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
    # TinyLlama's chat template introduces the reply with a literal "<|assistant|>"
    # tag, so split on that marker to drop the echoed prompt.
    response = generated_text.split("<|assistant|>")[-1].strip()
return response
The generate_text function wraps the TinyLlama workflow: it retrieves the cached model and tokenizer, formats the prompt with the chat template, tokenizes it, and samples a response with temperature and top-p settings. After generation, it decodes the result and extracts only the assistant's reply by splitting on the assistant marker.
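A minimal usage sketch follows; because sampling is enabled (do_sample=True), the wording of the reply will differ from run to run.
# Usage sketch: generate a short reply; output varies because sampling is enabled.
reply = generate_text("Explain what a tokenizer does, in one sentence.")
print(reply)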
Level 1: Simple Processor
At the simplest level, the code implements a straightforward text-generation pipeline, treating the model purely as a language processor. When the user enters a prompt, the simple_processor function delegates to generate_text and returns the answer directly. Under the hood, generate_text ensures the model and tokenizer are loaded by retrieving them through get_model_and_tokenizer. This is the most basic interaction pattern: input is received, output is produced, and program flow remains entirely under human control.
def simple_processor(prompt):
"""Level 1: Simple Processor - Model has no impact on program flow"""
response = generate_text(prompt)
return response
def demo_level1():
print("n" + "="*50)
print("LEVEL 1: SIMPLE PROCESSOR DEMO")
print("="*50)
print("At this level, the AI has no control over program flow.")
print("It simply takes input and produces output.n")
user_input = input("Enter your question or prompt: ") or "Write a short poem about artificial intelligence."
print("nProcessing your request...n")
output = simple_processor(user_input)
print("OUTPUT:")
print("-"*50)
print(output)
print("-"*50)
The simple_processor function embodies Level 1 of our agent taxonomy by treating the model purely as a text generator: it accepts a user prompt, delegates generation to generate_text, and returns whatever the model produces, with no branching or decision logic of any kind. The accompanying demo_level1 routine provides a minimal interactive loop, printing a clear header, accepting user input (with a sensible default), invoking simple_processor, and displaying the raw output.
Level 2: Router
The second level introduces conditional routing based on the model's classification of the user's query. The router_agent function asks the model to classify the query as "technical", "creative", or "factual", then dispatches it to the matching handler: handle_technical_query, handle_creative_query, or handle_factual_query. Each handler wraps the query in a role-specific system prompt before calling generate_text. Here the model makes a basic decision about program flow, while the predefined handlers remain in control of producing the final result.
def router_agent(user_query):
"""Level 2: Router - Model determines basic program flow"""
category_prompt = f"""Classify the following query into one of these categories:
'technical', 'creative', or 'factual'.
Query: {user_query}
Return ONLY the category name and nothing else."""
category_response = generate_text(category_prompt)
category = category_response.lower()
if "technical" in category:
category = "technical"
elif "creative" in category:
category = "creative"
else:
category = "factual"
print(f"Query classified as: {category}")
if category == "technical":
return handle_technical_query(user_query)
elif category == "creative":
return handle_creative_query(user_query)
else:
return handle_factual_query(user_query)
def handle_technical_query(query):
system_prompt = f"""You are a technical assistant. Provide detailed technical explanations.
User query: {query}"""
response = generate_text(system_prompt)
return f"[Technical Response]n{response}"
def handle_creative_query(query):
system_prompt = f"""You are a creative assistant. Be imaginative and inspiring.
User query: {query}"""
response = generate_text(system_prompt)
return f"[Creative Response]n{response}"
def handle_factual_query(query):
system_prompt = f"""You are a factual assistant. Provide accurate information concisely.
User query: {query}"""
response = generate_text(system_prompt)
return f"[Factual Response]n{response}"
def demo_level2():
print("n" + "="*50)
print("LEVEL 2: ROUTER DEMO")
print("="*50)
print("At this level, the AI determines basic program flow.")
print("It decides which processing path to take.n")
user_query = input("Enter your question or prompt: ") or "How do neural networks work?"
print("nProcessing your request...n")
result = router_agent(user_query)
print("OUTPUT:")
print("-"*50)
print(result)
print("-"*50)
The router_agent function implements Level 2 routing behavior by asking the model to classify the user's query into one of three categories and then dispatching to the corresponding handler. The demo_level2 routine provides a clean CLI-style interface, printing headers, accepting input (with a default), and displaying the routed result. It demonstrates how a model can take basic control of program flow by selecting which processing path the application follows.
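A hedged usage sketch: the printed classification and the response prefix depend on how the model labels the query, so the expected output in the comments is illustrative rather than guaranteed.
# Usage sketch: route a sample query; the category is chosen by the model.
result = router_agent("Write a short story about a robot learning to paint")
print(result)
# Expected (not guaranteed) flow:
#   Query classified as: creative
#   [Creative Response]
#   ...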
Level 3: Tool Calling
At the third level, the code lets the model decide which of several external functions to invoke, using JSON-based tool selection. The tool_calling_agent presents the user's query alongside a menu of potential tools, including a weather lookup, a mock search, date/time retrieval, or a direct response, and instructs the model to reply with the chosen tool and its parameters. A regex extracts the first JSON object from the model's output, and the code falls back to a direct response when parsing fails. Once a valid tool and parameters are identified, the corresponding Python function is executed, its result is captured, and a final model call weaves that result into a coherent answer. This pattern bridges the LLM's reasoning with concrete code by letting the model orchestrate which APIs or services to invoke.
def tool_calling_agent(user_query):
"""Level 3: Tool Calling - Model determines how functions are executed"""
tool_selection_prompt = f"""Based on the user query, select the most appropriate tool from the following list:
1. get_weather: Get the current weather for a location
2. search_information: Search for specific information on a topic
3. get_date_time: Get current date and time
4. direct_response: Provide a direct response without using tools
USER QUERY: {user_query}
INSTRUCTIONS:
- Return your response in valid JSON format
- Include the tool name and any required parameters
- For get_weather, include location parameter
- For search_information, include query and depth parameter (basic or detailed)
- For get_date_time, include timezone parameter (optional)
- For direct_response, no parameters needed
Example output format: {{"tool": "get_weather", "parameters": {{"location": "New York"}}}}"""
tool_selection_response = generate_text(tool_selection_prompt)
try:
json_match = re.search(r'({.*})', tool_selection_response, re.DOTALL)
if json_match:
tool_selection = json.loads(json_match.group(1))
else:
print("Could not parse tool selection. Defaulting to direct response.")
tool_selection = {"tool": "direct_response", "parameters": {}}
except json.JSONDecodeError:
print("Invalid JSON in tool selection. Defaulting to direct response.")
tool_selection = {"tool": "direct_response", "parameters": {}}
tool_name = tool_selection.get("tool", "direct_response")
parameters = tool_selection.get("parameters", {})
print(f"Selected tool: {tool_name}")
if tool_name == "get_weather":
location = parameters.get("location", "Unknown")
tool_result = get_weather(location)
elif tool_name == "search_information":
query = parameters.get("query", user_query)
depth = parameters.get("depth", "basic")
tool_result = search_information(query, depth)
elif tool_name == "get_date_time":
timezone = parameters.get("timezone", "UTC")
tool_result = get_date_time(timezone)
else:
return generate_text(f"Please provide a helpful response to: {user_query}")
final_prompt = f"""User Query: {user_query}
Tool Used: {tool_name}
Tool Result: {json.dumps(tool_result)}
Based on the user's query and the tool result above, provide a helpful response."""
final_response = generate_text(final_prompt)
return final_response
def get_weather(location):
weather_conditions = ["Sunny", "Partly cloudy", "Overcast", "Light rain", "Heavy rain", "Thunderstorms", "Snowy", "Foggy"]
temperatures = {
"cold": list(range(-10, 10)),
"mild": list(range(10, 25)),
"hot": list(range(25, 40))
}
location_hash = sum(ord(c) for c in location)
condition_index = location_hash % len(weather_conditions)
season = ["winter", "spring", "summer", "fall"][location_hash % 4]
temp_range = temperatures["cold"] if season in ["winter", "fall"] else temperatures["hot"] if season == "summer" else temperatures["mild"]
temperature = random.choice(temp_range)
return {
"location": location,
"temperature": f"{temperature}°C",
"conditions": weather_conditions[condition_index],
"humidity": f"{random.randint(30, 90)}%"
}
def search_information(query, depth="basic"):
mock_results = [
f"First result about {query}",
f"Second result discussing {query}",
f"Third result analyzing {query}"
]
if depth == "detailed":
mock_results.extend([
f"Fourth detailed analysis of {query}",
f"Fifth comprehensive overview of {query}",
f"Sixth academic paper on {query}"
])
return {
"query": query,
"results": mock_results,
"depth": depth,
"sources": [f"source{i}.com" for i in range(1, len(mock_results) + 1)]
}
def get_date_time(timezone="UTC"):
current_time = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime())
return {
"current_datetime": current_time,
"timezone": timezone
}
def demo_level3():
print("n" + "="*50)
print("LEVEL 3: TOOL CALLING DEMO")
print("="*50)
print("At this level, the AI selects which tools to use and with what parameters.")
print("It can process the results from tools to create a final response.n")
user_query = input("Enter your question or prompt: ") or "What's the weather like in San Francisco?"
print("nProcessing your request...n")
result = tool_calling_agent(user_query)
print("OUTPUT:")
print("-"*50)
print(result)
print("-"*50)
In the Level 3 demo, tool_calling_agent prompts the model to choose among predefined tools, such as the weather lookup or mock search, by returning a JSON object that names the selected tool and its parameters. It then safely parses that JSON, invokes the corresponding Python function to obtain structured data, and makes a follow-up model call to integrate the tool's result into a coherent, user-facing response.
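For reference, these are hand-written examples of the JSON shapes the selection prompt asks the model to produce; real model output is noisier, which is why tool_calling_agent extracts the object with a regex and falls back to a direct response on parse failure.
# Hand-written examples (not actual model output) of valid tool selections.
example_weather = {"tool": "get_weather", "parameters": {"location": "San Francisco"}}
example_search = {"tool": "search_information",
                  "parameters": {"query": "quantum computing", "depth": "detailed"}}
print(json.dumps(example_weather))
print(json.dumps(example_search))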
Level 4: Multi-Step Agent
The fourth level expands the pattern into a full multi-step agent that manages its own workflow and state. The MultiStepAgent class maintains an internal memory of the user's input, tool outputs, and agent actions. Each iteration builds a planning prompt from that memory, asking the model to select one of several tools, such as web search, information extraction, summarization, or report creation, or to declare the task complete. After executing the selected tool and appending its result back to memory, the loop repeats until the model emits a "complete" action or the maximum number of steps is reached. Finally, the agent collates the memory into a cohesive final response. This structure shows how an LLM can orchestrate complex, multi-stage processes while consulting external functions and reasoning over its own previous results.
class MultiStepAgent:
"""Level 4: Multi-Step Agent - Model controls iteration and program continuation"""
def __init__(self):
self.tools = {
"search_web": self.search_web,
"extract_info": self.extract_info,
"summarize_text": self.summarize_text,
"create_report": self.create_report
}
self.memory = []
self.max_steps = 5
def run(self, user_task):
self.memory.append({"role": "user", "content": user_task})
steps_taken = 0
while steps_taken < self.max_steps:
next_action = self.determine_next_action()
if next_action["action"] == "complete":
return next_action["output"]
tool_name = next_action["tool"]
tool_args = next_action["args"]
print(f"n📌 Step {steps_taken + 1}: Using tool '{tool_name}' with arguments: {tool_args}")
tool_result = self.tools[tool_name](**tool_args)
self.memory.append({
"role": "tool",
"content": json.dumps(tool_result)
})
steps_taken += 1
return self.generate_final_response("Maximum steps reached. Here's what I've found so far.")
def determine_next_action(self):
context = "Current memory state:n"
for item in self.memory:
if item["role"] == "user":
context += f"USER INPUT: {item['content']}nn"
elif item["role"] == "tool":
context += f"TOOL RESULT: {item['content']}nn"
prompt = f"""{context}
Based on the above information, determine the next action to take.
Choose one of the following options:
1. search_web: Search for information (args: query)
2. extract_info: Extract specific information from a text (args: text, target_info)
3. summarize_text: Create a summary of text (args: text)
4. create_report: Create a structured report (args: title, content)
5. complete: Task is complete (include final output)
Respond with a JSON object with the following structure:
For tools: {{"action": "tool", "tool": "tool_name", "args": {{tool-specific arguments}}}}
For completion: {{"action": "complete", "output": "final output text"}}
Only return the JSON object and nothing else."""
next_action_response = generate_text(prompt)
try:
json_match = re.search(r'({.*})', next_action_response, re.DOTALL)
if json_match:
next_action = json.loads(json_match.group(1))
else:
return {"action": "complete", "output": "I encountered an error in planning. Here's what I know so far: " + self.generate_final_response("Error in planning")}
except json.JSONDecodeError:
return {"action": "complete", "output": "I encountered an error in planning. Here's what I know so far: " + self.generate_final_response("Error in planning")}
self.memory.append({"role": "assistant", "content": next_action_response})
return next_action
def generate_final_response(self, prefix=""):
context = "Task history:n"
for item in self.memory:
if item["role"] == "user":
context += f"USER INPUT: {item['content']}nn"
elif item["role"] == "tool":
context += f"TOOL RESULT: {item['content']}nn"
elif item["role"] == "assistant":
context += f"AGENT ACTION: {item['content']}nn"
prompt = f"""{context}
{prefix} Generate a comprehensive final response that addresses the original user task."""
final_response = generate_text(prompt)
return final_response
def search_web(self, query):
time.sleep(1)
query_hash = sum(ord(c) for c in query)
num_results = (query_hash % 3) + 2
results = []
for i in range(num_results):
results.append(f"Result {i+1}: Information about '{query}' related to aspect {chr(97 + i)}.")
return {
"query": query,
"results": results
}
def extract_info(self, text, target_info):
time.sleep(0.5)
return {
"extracted_info": f"Extracted information about '{target_info}' from the text: The text indicates that {target_info} is related to several key aspects mentioned in the content.",
"confidence": round(random.uniform(0.7, 0.95), 2)
}
def summarize_text(self, text):
time.sleep(0.5)
word_count = len(text.split())
return {
"summary": f"Summary of the provided text ({word_count} words): The text discusses key points related to the subject matter, highlighting important aspects and providing context.",
"original_length": word_count,
"summary_length": round(word_count * 0.3)
}
def create_report(self, title, content):
time.sleep(0.7)
report_sections = [
"## Introduction",
f"This report provides an overview of {title}.",
"",
"## Key Findings",
content,
"",
"## Conclusion",
f"This analysis of {title} highlights several important aspects that warrant consideration."
]
return {
"report": "n".join(report_sections),
"word_count": len(content.split()),
"section_count": 3
}
def demo_level4():
print("n" + "="*50)
print("LEVEL 4: MULTI-STEP AGENT DEMO")
print("="*50)
print("At this level, the AI manages the entire workflow, deciding which tools")
print("to use, when to use them, and determining when the task is complete.n")
user_task = input("Enter a research or analysis task: ") or "Research quantum computing recent developments and create a brief report"
print("nProcessing your request... (this may take a minute)n")
agent = MultiStepAgent()
result = agent.run(user_task)
print("nFINAL OUTPUT:")
print("-"*50)
print(result)
print("-"*50)
The MultiStepAgent class keeps a memory of user input and tool results, then repeatedly queries the LLM to decide its next action, either invoking one of the available tools or declaring the task complete, executing each step until the model finishes or the step limit is reached. In doing so, it demonstrates a Level 4 agent that orchestrates multi-step workflows by letting the model control iteration and program continuation.
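A usage sketch for driving the agent outside the demo function; the tool sequence, step count, and final text are decided by the model at run time, so outputs will vary.
# Usage sketch: run the Level 4 agent directly; the model picks the tool sequence.
agent = MultiStepAgent()
final_answer = agent.run("Research the benefits of edge computing and summarize them")
print(final_answer)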
Level 5: Fully Autonomous Agent
At the highest level, the AutonomousAgent class demonstrates a closed-loop system in which the model not only plans and executes but also generates, validates, refines, and runs new Python code. After the user's task is recorded, the agent asks the model to produce a detailed plan, then prompts it for a self-contained solution script, automatically stripping any markdown formatting. A validation step asks the model to inspect the code for syntax or logic issues; if problems are found, the agent asks the model to refine the code. The validated code is then wrapped in a sandbox-style harness, with safe imports, captured print output, and a result variable, and executed in a restricted environment. Finally, the agent synthesizes a professional report explaining what was done, how it was accomplished, and what the final results were. This level illustrates a genuinely autonomous AI system that can extend its own capabilities through dynamic code generation and execution.
class AutonomousAgent:
"""Level 5: Fully Autonomous Agent - Model creates & executes new code"""
def __init__(self):
self.memory = []
def run(self, user_task):
self.memory.append({"role": "user", "content": user_task})
print("🧠 Planning solution approach...")
planning_message = self.plan_solution(user_task)
self.memory.append({"role": "assistant", "content": planning_message})
print("💻 Generating solution code...")
generated_code = self.generate_solution_code()
self.memory.append({"role": "assistant", "content": f"Generated code: ```pythonn{generated_code}n```"})
print("🔍 Validating code...")
validation_result = self.validate_code(generated_code)
if not validation_result["valid"]:
print("⚠️ Code validation found issues - refining...")
refined_code = self.refine_code(generated_code, validation_result["issues"])
self.memory.append({"role": "assistant", "content": f"Refined code: ```pythonn{refined_code}n```"})
generated_code = refined_code
else:
print("✅ Code validation passed")
try:
print("🚀 Executing solution...")
execution_result = self.safe_execute_code(generated_code, user_task)
self.memory.append({"role": "system", "content": f"Execution result: {execution_result}"})
# Generate a final report
print("📝 Creating final report...")
final_report = self.create_final_report(execution_result)
return final_report
except Exception as e:
return f"Error executing the solution: {str(e)}nnGenerated code was:n```pythonn{generated_code}n```"
def plan_solution(self, task):
prompt = f"""Task: {task}
You are an autonomous problem-solving agent. Create a detailed plan to solve this task.
Include:
1. Breaking down the task into subtasks
2. What algorithms or approaches you'll use
3. What data structures are needed
4. Any external resources or libraries required
5. Expected challenges and how to address them
Provide a step-by-step plan.
"""
return generate_text(prompt)
def generate_solution_code(self):
context = "Task and planning information:n"
for item in self.memory:
if item["role"] == "user":
context += f"USER TASK: {item['content']}nn"
elif item["role"] == "assistant":
context += f"PLANNING: {item['content']}nn"
prompt = f"""{context}
Generate clean, efficient Python code that solves this task. Include comments to explain the code.
The code should be self-contained and able to run inside a Python script or notebook.
Only include the Python code itself without any markdown formatting.
"""
code = generate_text(prompt)
        code = re.sub(r'^```python\n|```$', '', code, flags=re.MULTILINE)
return code
def validate_code(self, code):
prompt = f"""Code to validate:
```python
{code}
```
Examine the code for the following issues:
1. Syntax errors
2. Logic errors
3. Inefficient implementations
4. Security concerns
5. Missing error handling
6. Import statements for unavailable libraries
If the code has any issues, describe them in detail. If the code looks good, state "No issues found."
"""
validation_response = generate_text(prompt)
if "no issues" in validation_response.lower() or "code looks good" in validation_response.lower():
return {"valid": True, "issues": None}
else:
return {"valid": False, "issues": validation_response}
def refine_code(self, original_code, issues):
prompt = f"""Original code:
```python
{original_code}
```
Issues identified:
{issues}
Please provide a corrected version of the code that addresses these issues.
Only include the Python code itself without any markdown formatting.
"""
refined_code = generate_text(prompt)
        refined_code = re.sub(r'^```python\n|```$', '', refined_code, flags=re.MULTILINE)
return refined_code
def safe_execute_code(self, code, user_task):
safe_imports = """
# Standard library imports
import math
import random
import re
import time
import json
from datetime import datetime
# Define a function to capture printed output
captured_output = []
original_print = print
def safe_print(*args, **kwargs):
output = " ".join(str(arg) for arg in args)
captured_output.append(output)
original_print(output)
print = safe_print
# Define a result variable to store the final output
result = None
# Function to store the final result
def store_result(value):
global result
result = value
return value
"""
result_capture = """
# Store the final result if not already done
if 'result' not in locals() or result is None:
try:
# Look for variables that might contain the final result
potential_results = [var for var in locals() if not var.startswith('_') and var not in
['math', 'random', 're', 'time', 'json', 'datetime',
'captured_output', 'original_print', 'safe_print',
'result', 'store_result']]
if potential_results:
# Use the last defined variable as the result
store_result(locals()[potential_results[-1]])
except:
pass
"""
        full_code = safe_imports + "\n# User code starts here\n" + code + "\n\n" + result_capture
        code_lines = code.split('\n')
        first_lines = code_lines[:3]
        print(f"\nExecuting (first 3 lines):\n{first_lines}")
        local_env = {}
        try:
            # Pass local_env as both globals and locals so harness helpers such as
            # safe_print can resolve captured_output and original_print at call time.
            exec(full_code, local_env, local_env)
return {
"output": local_env.get('captured_output', []),
"result": local_env.get('result', "No explicit result returned")
}
except Exception as e:
return {"error": str(e)}
def create_final_report(self, execution_result):
if isinstance(execution_result.get('output'), list):
output_text = "n".join(execution_result.get('output', []))
else:
output_text = str(execution_result.get('output', ''))
result_text = str(execution_result.get('result', ''))
error_text = execution_result.get('error', '')
context = "Task history:n"
for item in self.memory:
if item["role"] == "user":
context += f"USER TASK: {item['content']}nn"
prompt = f"""{context}
EXECUTION OUTPUT:
{output_text}
EXECUTION RESULT:
{result_text}
{f"ERROR: {error_text}" if error_text else ""}
Create a final report that explains the solution to the original task. Include:
1. What was done
2. How it was accomplished
3. The final results
4. Any insights or conclusions drawn from the analysis
Format the report in a professional, easy to read manner.
"""
return generate_text(prompt)
def demo_level5():
print("n" + "="*50)
print("LEVEL 5: FULLY AUTONOMOUS AGENT DEMO")
print("="*50)
print("At this level, the AI generates and executes code to solve complex problems.")
print("It can create, validate, refine, and run custom code solutions.n")
user_task = input("Enter a data analysis or computational task: ") or "Analyze a dataset of numbers [10, 45, 65, 23, 76, 12, 89, 32, 50] and create visualizations of the distribution"
print("nProcessing your request... (this may take a minute or two)n")
agent = AutonomousAgent()
result = agent.run(user_task)
print("nFINAL REPORT:")
print("-"*50)
print(result)
print("-"*50)
The AutonomousAgent class embodies the fully autonomous Level 5 paradigm. When run is invoked, the agent prompts the model to produce a detailed plan for solving the task and stores that plan in memory. It then asks the model to generate self-contained Python code based on the plan, strips away any markdown formatting, and has the model validate the code for syntax, logic, performance, and security problems. If validation surfaces issues, the agent instructs the model to refine the code until it passes review. The final code is executed inside a sandbox-style harness, complete with output capture and automatic result extraction. Finally, the agent synthesizes a polished report in which the model explains what was done, how it was accomplished, and what was found. The demo_level5 function provides a straightforward interactive loop that accepts the user's task, runs the agent, and presents the final report.
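A usage sketch for invoking the Level 5 agent programmatically; note that the exec-based sandbox here is illustrative rather than a hardened security boundary, and the run's success depends entirely on the code the model generates.
# Usage sketch: drive the autonomous agent directly; run() catches execution
# errors and returns them in the report rather than raising.
agent = AutonomousAgent()
report = agent.run("Compute the mean, median, and range of [3, 1, 4, 1, 5, 9, 2, 6]")
print(report)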
The Main Function: Tying All the Levels Together
def main():
while True:
clear_output(wait=True)
print("n" + "="*50)
print("AI AGENT LEVELS DEMO")
print("="*50)
print("nThis notebook demonstrates the 5 levels of AI agents:")
print("1. Simple Processor - Model has no impact on program flow")
print("2. Router - Model determines basic program flow")
print("3. Tool Calling - Model determines how functions are executed")
print("4. Multi-Step Agent - Model controls iteration and program continuation")
print("5. Fully Autonomous Agent - Model creates & executes new code")
print("6. Quit")
choice = input("nSelect a level to demo (1-6): ")
if choice == "1":
demo_level1()
elif choice == "2":
demo_level2()
elif choice == "3":
demo_level3()
elif choice == "4":
demo_level4()
elif choice == "5":
demo_level5()
elif choice == "6":
print("nThank you for exploring the AI Agent levels!")
break
else:
print("nInvalid choice. Please select 1-6.")
input("nPress Enter to return to the main menu...")
if __name__ == "__main__":
main()
Finally, the main function presents a simple menu loop that clears the Colab output between runs. This structure provides a clean, CLI-style interface that lets you try each agent level in turn without manually re-running cells.
In conclusion, by working through these five levels, we have gained practical insight into agentic AI design and the trade-offs between control, flexibility, and autonomy. We have seen how a system can evolve from straightforward prompt-response behavior into complex decision-making pipelines and even self-modifying code execution. Whether you aim to build smarter assistants, construct data pipelines, or experiment with advanced AI capabilities, this progression framework provides a roadmap for designing robust and scalable agents.
Check out the Colab Notebook for the full, runnable version of this tutorial.
