A Coding Guide to the Complete nanobot Agent Pipeline, from Wiring and Memory Tools to Skills, Subagents, and Cron Scheduling

In this lesson, we go deeper into nanobot, a lightweight personal AI agent framework from HKUDS that packs complete agent capabilities into roughly 4,000 lines of Python. Rather than just using it out of the box, we open the hood and walk through each of its main subsystems, the agent loop, tool execution, memory persistence, skill loading, session management, subagent spawning, and cron scheduling, to understand exactly how they work. We integrate everything with OpenAI's gpt-4o-mini as our LLM provider, enter our API key securely from the terminal (so it never appears in notebook output), and build up gradually from a single tool call to a multi-step research pipeline that reads and writes files, stores long-term memories, and dispatches tasks to run concurrently. By the end, we don't just know how to use nanobot, we understand how to extend it with custom tools, skills, and behaviors for our own agents.
import sys
import os
import subprocess

def section(title, emoji="🔹"):
    """Pretty-print a section header."""
    width = 72
    print(f"\n{'─' * width}")
    print(f" {emoji} {title}")
    print(f"{'─' * width}\n")

def info(msg):
    print(f"   ℹ️ {msg}")

def success(msg):
    print(f"   ✅ {msg}")

def code_block(code):
    print("   ┌──────────────────────────────────────────────────┐")
    for line in code.strip().split("\n"):
        print(f"   │ {line}")
    print("   └──────────────────────────────────────────────────┘")
section("STEP 1 · Installing nanobot-ai & Dependencies", "📦")
info("Installing nanobot-ai from PyPI (latest stable)...")
subprocess.check_call([
sys.executable, "-m", "pip", "install", "-q",
"nanobot-ai", "openai", "rich", "httpx"
])
success("nanobot-ai installed successfully!")
import importlib.metadata
nanobot_version = importlib.metadata.version("nanobot-ai")
print(f" ๐ nanobot-ai version: {nanobot_version}")
section("STEP 2 · Secure OpenAI API Key Input", "🔐")
info("Your API key will NOT be printed or stored in notebook output.")
info("It is held only in memory for this session.\n")
try:
    from google.colab import userdata
    OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
    if not OPENAI_API_KEY:
        raise ValueError("Not set in Colab secrets")
    success("Loaded API key from Colab Secrets ('OPENAI_API_KEY').")
    info("Tip: You can set this in Colab's 🔑 Secrets panel on the left sidebar.")
except Exception:
    import getpass
    OPENAI_API_KEY = getpass.getpass("Enter your OpenAI API key: ")
    success("API key captured securely via terminal input.")

os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
import openai
client = openai.OpenAI(api_key=OPENAI_API_KEY)
try:
    client.models.list()
    success("OpenAI API key validated — connection successful!")
except Exception as e:
    print(f"   ❌ API key validation failed: {e}")
    print("   Please restart and enter a valid key.")
    sys.exit(1)
section("STEP 3 · Configuring nanobot for OpenAI", "⚙️")
import json
from pathlib import Path
NANOBOT_HOME = Path.home() / ".nanobot"
NANOBOT_HOME.mkdir(parents=True, exist_ok=True)
WORKSPACE = NANOBOT_HOME / "workspace"
WORKSPACE.mkdir(parents=True, exist_ok=True)
(WORKSPACE / "memory").mkdir(parents=True, exist_ok=True)
config = {
"providers": {
"openai": {
"apiKey": OPENAI_API_KEY
}
},
"agents": {
"defaults": {
"model": "openai/gpt-4o-mini",
"maxTokens": 4096,
"workspace": str(WORKSPACE)
}
},
"tools": {
"restrictToWorkspace": True
}
}
config_path = NANOBOT_HOME / "config.json"
config_path.write_text(json.dumps(config, indent=2))
success(f"Config written to {config_path}")
agents_md = WORKSPACE / "AGENTS.md"
agents_md.write_text(
    "# Agent Instructions\n\n"
    "You are nanobot 🤖, an ultra-lightweight personal AI assistant.\n"
    "You are helpful, concise, and use tools when needed.\n"
    "Always explain your reasoning step by step.\n"
)
soul_md = WORKSPACE / "SOUL.md"
soul_md.write_text(
    "# Personality\n\n"
    "- Friendly and approachable\n"
    "- Technically precise\n"
    "- Uses emoji sparingly for warmth\n"
)
user_md = WORKSPACE / "USER.md"
user_md.write_text(
    "# User Profile\n\n"
    "- The user is exploring the nanobot framework.\n"
    "- They are interested in AI agent architectures.\n"
)
memory_md = WORKSPACE / "memory" / "MEMORY.md"
memory_md.write_text("# Long-term Memory\n\n_No memories stored yet._\n")
success("Workspace bootstrap files created:")
for f in [agents_md, soul_md, user_md, memory_md]:
    print(f"   📄 {f.relative_to(NANOBOT_HOME)}")
section("STEP 4 · nanobot Architecture Deep Dive", "🏗️")
info("""nanobot is organized into 7 subsystems in ~4,000 lines of code:

┌────────────────────────────────────────────────────────────┐
│                     USER INTERFACES                        │
│           CLI · Telegram · WhatsApp · Discord              │
└──────────────────┬─────────────────────────────────────────┘
                   │ InboundMessage / OutboundMessage
┌──────────────────▼─────────────────────────────────────────┐
│                      MESSAGE BUS                           │
│          publish_inbound() / publish_outbound()            │
└──────────────────┬─────────────────────────────────────────┘
                   │
┌──────────────────▼─────────────────────────────────────────┐
│                  AGENT LOOP (loop.py)                      │
│   ┌─────────┐    ┌──────────┐    ┌────────────────────┐    │
│   │ Context │ →  │   LLM    │ →  │   Tool Execution   │    │
│   │ Builder │    │   Call   │    │   (if tool_calls)  │    │
│   └─────────┘    └──────────┘    └────────┬───────────┘    │
│        ▲                                  │ loop back      │
│        └──────────────────────────────────┘ until done     │
│   ┌─────────┐    ┌──────────┐    ┌────────────────────┐    │
│   │ Memory  │    │  Skills  │    │    Subagent Mgr    │    │
│   │ Store   │    │  Loader  │    │   (spawn tasks)    │    │
│   └─────────┘    └──────────┘    └────────────────────┘    │
└──────────────────┬─────────────────────────────────────────┘
                   │
┌──────────────────▼─────────────────────────────────────────┐
│                 LLM PROVIDER LAYER                         │
│     OpenAI · Anthropic · OpenRouter · DeepSeek · ...       │
└────────────────────────────────────────────────────────────┘

The Agent Loop iterates up to 40 times (configurable):
  1. ContextBuilder assembles system prompt + memory + skills + history
  2. LLM is called with tool definitions
  3. If response has tool_calls → execute tools, append results, loop
  4. If response is plain text → return as final answer
""")
We set up the full tutorial base by importing the required modules, defining helper functions for clean display, and installing the nanobot dependencies within Google Colab. We then securely load and validate the OpenAI API key so that the notebook can interact with the model without exposing secrets in the output. After that, we prepare the nanobot workspace, create the main bootstrap files (AGENTS.md, SOUL.md, USER.md, and MEMORY.md), and study the high-level architecture to understand how the framework is organized before moving on to implementation.
section("STEP 5 · The Agent Loop — Core Concept in Action", "🔁")
info("We'll manually recreate nanobot's agent loop pattern using OpenAI.")
info("This is exactly what loop.py does internally.\n")
import json as _json
import datetime
TOOLS = [
{
"type": "function",
"function": {
"name": "get_current_time",
"description": "Get the current date and time.",
"parameters": {"type": "object", "properties": {}, "required": []}
}
},
{
"type": "function",
"function": {
"name": "calculate",
"description": "Evaluate a mathematical expression.",
"parameters": {
"type": "object",
"properties": {
"expression": {
"type": "string",
"description": "Math expression to evaluate, e.g. '2**10 + 42'"
}
},
"required": ["expression"]
}
}
},
{
"type": "function",
"function": {
"name": "read_file",
"description": "Read the contents of a file in the workspace.",
"parameters": {
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "Relative file path within the workspace"
}
},
"required": ["path"]
}
}
},
{
"type": "function",
"function": {
"name": "write_file",
"description": "Write content to a file in the workspace.",
"parameters": {
"type": "object",
"properties": {
"path": {"type": "string", "description": "Relative file path"},
"content": {"type": "string", "description": "Content to write"}
},
"required": ["path", "content"]
}
}
},
{
"type": "function",
"function": {
"name": "save_memory",
"description": "Save a fact to the agent's long-term memory.",
"parameters": {
"type": "object",
"properties": {
"fact": {"type": "string", "description": "The fact to remember"}
},
"required": ["fact"]
}
}
}
]
def execute_tool(name: str, arguments: dict) -> str:
    """Execute a tool call — mirrors nanobot's ToolRegistry.execute()."""
    if name == "get_current_time":
        return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    elif name == "calculate":
        expr = arguments.get("expression", "")
        try:
            # Restricted eval: no builtins, only a few safe math helpers
            result = eval(expr, {"__builtins__": {}}, {"abs": abs, "round": round, "min": min, "max": max})
            return str(result)
        except Exception as e:
            return f"Error: {e}"
    elif name == "read_file":
        fpath = WORKSPACE / arguments.get("path", "")
        if fpath.exists():
            return fpath.read_text()[:4000]
        return f"Error: File not found — {arguments.get('path')}"
    elif name == "write_file":
        fpath = WORKSPACE / arguments.get("path", "")
        fpath.parent.mkdir(parents=True, exist_ok=True)
        fpath.write_text(arguments.get("content", ""))
        return f"Successfully wrote {len(arguments.get('content', ''))} chars to {arguments.get('path')}"
    elif name == "save_memory":
        fact = arguments.get("fact", "")
        mem_file = WORKSPACE / "memory" / "MEMORY.md"
        existing = mem_file.read_text()
        timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
        mem_file.write_text(existing + f"\n- [{timestamp}] {fact}\n")
        return f"Memory saved: {fact}"
    return f"Unknown tool: {name}"
def agent_loop(user_message: str, max_iterations: int = 10, verbose: bool = True):
    """
    Recreates nanobot's AgentLoop._process_message() logic.
    The loop:
      1. Build context (system prompt + bootstrap files + memory)
      2. Call LLM with tools
      3. If tool_calls → execute → append results → loop
      4. If text response → return final answer
    """
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md", "USER.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    mem_file = WORKSPACE / "memory" / "MEMORY.md"
    if mem_file.exists():
        system_parts.append(f"\n## Your Memory\n{mem_file.read_text()}")
    system_prompt = "\n\n".join(system_parts)
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ]
    if verbose:
        print(f"   📨 User: {user_message}")
        print(f"   🧠 System prompt: {len(system_prompt)} chars "
              f"(from {len(system_parts)} bootstrap files)")
        print()
    for iteration in range(1, max_iterations + 1):
        if verbose:
            print(f"   ── Iteration {iteration}/{max_iterations} ──")
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=TOOLS,
            tool_choice="auto",
            max_tokens=2048
        )
        choice = response.choices[0]
        message = choice.message
        if message.tool_calls:
            if verbose:
                print(f"   🔧 LLM requested {len(message.tool_calls)} tool call(s):")
            messages.append(message.model_dump())
            for tc in message.tool_calls:
                fname = tc.function.name
                args = _json.loads(tc.function.arguments) if tc.function.arguments else {}
                if verbose:
                    print(f"      → {fname}({_json.dumps(args, ensure_ascii=False)[:80]})")
                result = execute_tool(fname, args)
                if verbose:
                    print(f"      ← {result[:100]}{'...' if len(result) > 100 else ''}")
                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": result
                })
            if verbose:
                print()
        else:
            final = message.content or ""
            if verbose:
                print(f"   💬 Agent: {final}\n")
            return final
    return "⚠️ Max iterations reached without a final response."
print("─" * 60)
print(" DEMO 1: Time-aware calculation with tool chaining")
print("─" * 60)
result1 = agent_loop(
"What is the current time? Also, calculate 2^20 + 42 for me."
)
print("─" * 60)
print(" DEMO 2: File creation + memory storage")
print("─" * 60)
result2 = agent_loop(
"Write a haiku about AI agents to a file called 'haiku.txt'. "
"Then remember that I enjoy poetry about technology."
)
We manually recreate the heart of nanobot by defining tool schemas, implementing their execution functions, and building an iterative agent loop that connects the LLM to the tools. We assemble context from the workspace and memory files, send the conversation to the model, receive tool calls, execute them, append the results back into the conversation, and keep looping until the model returns a final answer. We then exercise this machinery with practical examples involving time lookups, calculations, file writing, and memory saving, to see the loop working exactly like nanobot's internal flow.
section("STEP 6 · Memory System — Persistent Agent Memory", "🧠")
info("""nanobot's memory system (memory.py) uses two storage mechanisms:
  1. MEMORY.md     — Long-term facts (always loaded into context)
  2. YYYY-MM-DD.md — Daily journal entries (loaded for recent days)
Memory consolidation runs periodically to summarize and compress
old entries, keeping the context window manageable.
""")
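The consolidation step described above is easy to sketch. Below is a minimal, hypothetical version (not nanobot's actual memory.py code) that folds daily journal files older than a few days into MEMORY.md; the `summarize` parameter is a simple stand-in for what would be an LLM summarization call in a real agent.

```python
import datetime
from pathlib import Path

def consolidate_memory(memory_dir: Path, keep_days: int = 3,
                       summarize=lambda text: (text.strip().splitlines() or [""])[0][:120]):
    """Fold daily journal files older than `keep_days` into MEMORY.md."""
    cutoff = datetime.date.today() - datetime.timedelta(days=keep_days)
    long_term = memory_dir / "MEMORY.md"
    archived = []
    # Daily journals are named YYYY-MM-DD.md, so a glob picks them out
    for daily in sorted(memory_dir.glob("????-??-??.md")):
        day = datetime.date.fromisoformat(daily.stem)
        if day < cutoff:
            summary = summarize(daily.read_text())  # stand-in for an LLM call
            long_term.write_text(long_term.read_text() + f"- [{day}] {summary}\n")
            daily.unlink()  # the journal is now folded into long-term memory
            archived.append(daily.stem)
    return archived
```

Recent journals stay untouched so the agent keeps fine-grained context for the last few days while older detail is compressed into single long-term facts.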
mem_content = (WORKSPACE / "memory" / "MEMORY.md").read_text()
print("   📝 Current MEMORY.md contents:")
print("   ──────────────────────────────────────────────")
for line in mem_content.strip().split("\n"):
    print(f"   │ {line}")
print("   ──────────────────────────────────────────────\n")
today = datetime.datetime.now().strftime("%Y-%m-%d")
daily_file = WORKSPACE / "memory" / f"{today}.md"
daily_file.write_text(
    f"# Daily Log — {today}\n\n"
    "- User ran the nanobot advanced tutorial\n"
    "- Explored agent loop, tools, and memory\n"
    "- Created a haiku about AI agents\n"
)
success(f"Daily journal created: memory/{today}.md")
print("\n   📁 Workspace contents:")
for item in sorted(WORKSPACE.rglob("*")):
    if item.is_file():
        rel = item.relative_to(WORKSPACE)
        size = item.stat().st_size
        print(f"   {'📝' if item.suffix == '.md' else '📄'} {rel} ({size} bytes)")
section("STEP 7 · Skills System — Extending Agent Capabilities", "🎯")
info("""nanobot's SkillsLoader (skills.py) reads Markdown files from the
skills/ directory. Each skill has:
- A name and description (for the LLM to decide when to use it)
- Instructions the LLM follows when the skill is activated
- Some skills are 'always loaded'; others are loaded on demand
Let's create a custom skill and see how the agent uses it.
""")
skills_dir = WORKSPACE / "skills"
skills_dir.mkdir(exist_ok=True)
data_skill = skills_dir / "data_analyst.md"
data_skill.write_text("""# Data Analyst Skill
## Description
Analyze data, compute statistics, and provide insights from numbers.
## Instructions
When asked to analyze data:
1. Identify the data type and structure
2. Compute relevant statistics (mean, median, range, std dev)
3. Look for patterns and outliers
4. Present findings in a clear, structured format
5. Suggest follow-up questions
## Always Available
false
""")
review_skill = skills_dir / "code_reviewer.md"
review_skill.write_text("""# Code Reviewer Skill
## Description
Review code for bugs, security issues, and best practices.
## Instructions
When reviewing code:
1. Check for common bugs and logic errors
2. Identify security vulnerabilities
3. Suggest performance improvements
4. Evaluate code style and readability
5. Rate the code quality on a 1-10 scale
## Always Available
true
""")
success("Custom skills created:")
for f in skills_dir.iterdir():
    print(f"   🎯 {f.name}")
print("\n   🧪 Testing skill-aware agent interaction:")
print("   " + "─" * 56)
skills_context = "\n\n## Available Skills\n"
for skill_file in skills_dir.glob("*.md"):
    content = skill_file.read_text()
    skills_context += f"\n### {skill_file.stem}\n{content}\n"
result3 = agent_loop(
    # Prepend the skills context so the skill instructions actually reach the model
    skills_context +
    "\nReview this Python code for issues:\n\n"
    "```python\n"
    "def get_user(id):\n"
    "    query = f'SELECT * FROM users WHERE id = {id}'\n"
    "    result = db.execute(query)\n"
    "    return result\n"
    "```"
)
We exercise the persistent memory system by examining the long-term memory file, creating a daily journal entry, and reviewing how the workspace changes after previous interactions. We then extend the agent with a skills system by creating Markdown-based skill files that define specialized behaviors such as data analysis and code review. Finally, we simulate how skill activation works by exposing these skills to the agent and asking it to review a Python function, which helps us see how nanobot can be steered through modular capability descriptions.
section("STEP 8 · Custom Tool Creation — Extending the Agent", "🔧")
info("""nanobot's tool system uses a ToolRegistry with a simple interface.
Each tool needs:
- A name and description
- A JSON Schema for parameters
- An execute() method
Let's create custom tools and wire them into our agent loop.
""")
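Before wiring in the new tools, it helps to see the registry pattern itself in isolation. This is an illustrative sketch of the interface described above (a name, a description, a JSON Schema for parameters, and an execute method); the method names here are ours, not necessarily nanobot's exact API.

```python
from typing import Callable

class ToolRegistry:
    """Minimal tool registry: stores schemas alongside their Python callables."""

    def __init__(self):
        self._tools: dict[str, dict] = {}

    def register(self, name: str, description: str, parameters: dict,
                 fn: Callable[[dict], str]) -> None:
        self._tools[name] = {
            "schema": {
                "type": "function",
                "function": {"name": name, "description": description,
                             "parameters": parameters},
            },
            "fn": fn,
        }

    def schemas(self) -> list[dict]:
        """Tool definitions in the shape the Chat Completions API expects."""
        return [t["schema"] for t in self._tools.values()]

    def execute(self, name: str, arguments: dict) -> str:
        if name not in self._tools:
            return f"Unknown tool: {name}"
        return self._tools[name]["fn"](arguments)
```

With a registry like this, `schemas()` feeds the `tools=` parameter of the API call and `execute()` replaces the long if/elif dispatch chain we write by hand below.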
import random
CUSTOM_TOOLS = [
{
"type": "function",
"function": {
"name": "roll_dice",
"description": "Roll one or more dice with a given number of sides.",
"parameters": {
"type": "object",
"properties": {
"num_dice": {"type": "integer", "description": "Number of dice to roll", "default": 1},
"sides": {"type": "integer", "description": "Number of sides per die", "default": 6}
},
"required": []
}
}
},
{
"type": "function",
"function": {
"name": "text_stats",
"description": "Compute statistics about a text: word count, char count, sentence count, reading time.",
"parameters": {
"type": "object",
"properties": {
"text": {"type": "string", "description": "The text to analyze"}
},
"required": ["text"]
}
}
},
{
"type": "function",
"function": {
"name": "generate_password",
"description": "Generate a random secure password.",
"parameters": {
"type": "object",
"properties": {
"length": {"type": "integer", "description": "Password length", "default": 16}
},
"required": []
}
}
}
]
_original_execute = execute_tool

def execute_tool_extended(name: str, arguments: dict) -> str:
    if name == "roll_dice":
        n = arguments.get("num_dice", 1)
        s = arguments.get("sides", 6)
        rolls = [random.randint(1, s) for _ in range(n)]
        return f"Rolled {n}d{s}: {rolls} (total: {sum(rolls)})"
    elif name == "text_stats":
        text = arguments.get("text", "")
        words = len(text.split())
        chars = len(text)
        sentences = text.count('.') + text.count('!') + text.count('?')
        reading_time = round(words / 200, 1)
        return _json.dumps({
            "words": words,
            "characters": chars,
            "sentences": max(sentences, 1),
            "reading_time_minutes": reading_time
        })
    elif name == "generate_password":
        import string
        length = arguments.get("length", 16)
        chars = string.ascii_letters + string.digits + "!@#$%^&*"
        pwd = ''.join(random.choice(chars) for _ in range(length))
        return f"Generated password ({length} chars): {pwd}"
    # Fall through to the original executor for the built-in tools
    return _original_execute(name, arguments)

execute_tool = execute_tool_extended
ALL_TOOLS = TOOLS + CUSTOM_TOOLS
def agent_loop_v2(user_message: str, max_iterations: int = 10, verbose: bool = True):
    """Agent loop with extended custom tools."""
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md", "USER.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    mem_file = WORKSPACE / "memory" / "MEMORY.md"
    if mem_file.exists():
        system_parts.append(f"\n## Your Memory\n{mem_file.read_text()}")
    system_prompt = "\n\n".join(system_parts)
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ]
    if verbose:
        print(f"   📨 User: {user_message}")
        print()
    for iteration in range(1, max_iterations + 1):
        if verbose:
            print(f"   ── Iteration {iteration}/{max_iterations} ──")
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=ALL_TOOLS,
            tool_choice="auto",
            max_tokens=2048
        )
        choice = response.choices[0]
        message = choice.message
        if message.tool_calls:
            if verbose:
                print(f"   🔧 {len(message.tool_calls)} tool call(s):")
            messages.append(message.model_dump())
            for tc in message.tool_calls:
                fname = tc.function.name
                args = _json.loads(tc.function.arguments) if tc.function.arguments else {}
                if verbose:
                    print(f"      → {fname}({_json.dumps(args, ensure_ascii=False)[:80]})")
                result = execute_tool(fname, args)
                if verbose:
                    print(f"      ← {result[:120]}{'...' if len(result) > 120 else ''}")
                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": result
                })
            if verbose:
                print()
        else:
            final = message.content or ""
            if verbose:
                print(f"   💬 Agent: {final}\n")
            return final
    return "⚠️ Max iterations reached."
print("─" * 60)
print(" DEMO 3: Custom tools in action")
print("─" * 60)
result4 = agent_loop_v2(
"Roll 3 six-sided dice for me, then generate a 20-character password, "
"and finally analyze the text stats of this sentence: "
)
section("STEP 9 · Multi-Turn Conversation — Session Management", "💬")
info("""nanobot's SessionManager (session/manager.py) maintains conversation
history per session_key (format: 'channel:chat_id'). History is stored
in JSON files and loaded into context for each new message.
Let's simulate a multi-turn conversation with persistent state.
""")
We extend the agent's capabilities by defining new custom tools such as dice rolling, text analysis, and password generation, and wiring them into the tool execution pipeline. We wrap the original executor, combine the built-in definitions with the custom tools, and create a second version of the agent loop that can reason over this larger set of capabilities. We then run a demo task that forces the model to chain multiple tool requests, showing how easy it is to extend nanobot with our own tools while keeping the same overall interaction pattern.
class SimpleSessionManager:
    """
    Minimal recreation of nanobot's SessionManager.
    Stores conversation history and provides context continuity.
    """
    def __init__(self, workspace: Path):
        self.workspace = workspace
        self.sessions: dict[str, list[dict]] = {}

    def get_history(self, session_key: str) -> list[dict]:
        return self.sessions.get(session_key, [])

    def add_turn(self, session_key: str, role: str, content: str):
        if session_key not in self.sessions:
            self.sessions[session_key] = []
        self.sessions[session_key].append({"role": role, "content": content})

    def save(self, session_key: str):
        fpath = self.workspace / f"session_{session_key.replace(':', '_')}.json"
        fpath.write_text(_json.dumps(self.sessions.get(session_key, []), indent=2))

    def load(self, session_key: str):
        fpath = self.workspace / f"session_{session_key.replace(':', '_')}.json"
        if fpath.exists():
            self.sessions[session_key] = _json.loads(fpath.read_text())

session_mgr = SimpleSessionManager(WORKSPACE)
SESSION_KEY = "cli:tutorial_user"

def chat(user_message: str, verbose: bool = True):
    """Multi-turn chat with session persistence."""
    session_mgr.add_turn(SESSION_KEY, "user", user_message)
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    system_prompt = "\n\n".join(system_parts)
    history = session_mgr.get_history(SESSION_KEY)
    messages = [{"role": "system", "content": system_prompt}] + history
    if verbose:
        print(f"   👤 You: {user_message}")
        print(f"   (conversation history: {len(history)} messages)")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        max_tokens=1024
    )
    reply = response.choices[0].message.content or ""
    session_mgr.add_turn(SESSION_KEY, "assistant", reply)
    session_mgr.save(SESSION_KEY)
    if verbose:
        print(f"   🤖 nanobot: {reply}\n")
    return reply
print("─" * 60)
print(" DEMO 4: Multi-turn conversation with memory")
print("─" * 60)
chat("Hi! My name is Alex and I'm building an AI agent.")
chat("What's my name? And what am I working on?")
chat("Can you suggest 3 features I should add to my agent?")
success("Session persisted with full conversation history!")
session_file = WORKSPACE / f"session_{SESSION_KEY.replace(':', '_')}.json"
session_data = _json.loads(session_file.read_text())
print(f" ๐ Session file: {session_file.name} ({len(session_data)} messages)")
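Because the session is persisted as plain JSON, a fresh process can rebuild the conversation before the next message is sent to the model. A minimal standalone sketch (the file naming mirrors SimpleSessionManager.save above):

```python
import json
from pathlib import Path

def resume_session(workspace: Path, session_key: str) -> list[dict]:
    """Rebuild history for a session_key like 'cli:tutorial_user' from its JSON file."""
    # save() writes session_<channel>_<chat_id>.json, so invert that naming here
    fpath = workspace / f"session_{session_key.replace(':', '_')}.json"
    if not fpath.exists():
        return []  # brand-new session: start with empty history
    return json.loads(fpath.read_text())
```

Prepending the returned turns to `messages` after the system prompt is all it takes for the agent to pick up the conversation where it left off.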
section("STEP 10 · Subagent Spawning — Background Task Delegation", "🚀")
info("""nanobot's SubagentManager (agent/subagent.py) allows the main agent
to delegate tasks to independent background workers. Each subagent:
- Gets its own tool registry (no SpawnTool to prevent recursion)
- Runs up to 15 iterations independently
- Reports results back via the MessageBus
Let's simulate this pattern with concurrent tasks.
""")
import asyncio
import uuid

async def run_subagent(task_id: str, goal: str, verbose: bool = True):
    """
    Simulates nanobot's SubagentManager._run_subagent().
    Runs an independent LLM loop for a specific goal.
    """
    if verbose:
        print(f"   🔹 Subagent [{task_id[:8]}] started: {goal[:60]}")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a focused research assistant. "
             "Complete the assigned task concisely in 2-3 sentences."},
            {"role": "user", "content": goal}
        ],
        max_tokens=256
    )
    result = response.choices[0].message.content or ""
    if verbose:
        print(f"   ✅ Subagent [{task_id[:8]}] done: {result[:80]}...")
    return {"task_id": task_id, "goal": goal, "result": result}

async def spawn_subagents(goals: list[str]):
    """Spawn multiple subagents concurrently — mirrors SubagentManager.spawn()."""
    tasks = []
    for goal in goals:
        task_id = str(uuid.uuid4())
        tasks.append(run_subagent(task_id, goal))
    print(f"\n   🚀 Spawning {len(tasks)} subagents concurrently...\n")
    results = await asyncio.gather(*tasks)
    return results
goals = [
    "What are the 3 key components of a ReAct agent architecture?",
    "Explain the difference between tool-calling and function-calling in LLMs.",
    "What is MCP (Model Context Protocol) and why does it matter for AI agents?",
]

try:
    loop = asyncio.get_running_loop()  # raises RuntimeError outside a notebook
    import nest_asyncio
    nest_asyncio.apply()
    subagent_results = asyncio.get_event_loop().run_until_complete(spawn_subagents(goals))
except RuntimeError:
    subagent_results = asyncio.run(spawn_subagents(goals))
except ModuleNotFoundError:
    print("   ℹ️ Running subagents sequentially (install nest_asyncio for async)...\n")
    subagent_results = []
    for goal in goals:
        task_id = str(uuid.uuid4())
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Complete the task concisely in 2-3 sentences."},
                {"role": "user", "content": goal}
            ],
            max_tokens=256
        )
        r = response.choices[0].message.content or ""
        print(f"   ✅ Subagent [{task_id[:8]}] done: {r[:80]}...")
        subagent_results.append({"task_id": task_id, "goal": goal, "result": r})

print(f"\n   📊 All {len(subagent_results)} subagent results collected!")
for i, r in enumerate(subagent_results, 1):
    print(f"\n   ── Result {i} ──")
    print(f"   Goal: {r['goal'][:60]}")
    print(f"   Answer: {r['result'][:200]}")
We simulate conversation management by building a lightweight session manager that stores, retrieves, and persists the conversation history across turns. We use that history to maintain continuity, allowing the agent to recall information from earlier in the conversation and respond accordingly. We then model subagent spawning by executing several background tasks concurrently, each carrying its own fixed goal, which helps us understand how nanobot delegates work to independent agent workers.
section("STEP 11 · Scheduled Tasks — The Cron Pattern", "⏰")
info("""nanobot's CronService (cron/service.py) uses APScheduler to trigger
agent actions on a schedule. When a job fires, it creates an
InboundMessage and publishes it to the MessageBus.
Let's demonstrate the pattern with a simulated scheduler.
""")
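Stripped of APScheduler, the core of the cron pattern is just a periodic due-check over registered jobs. A minimal sketch, using plain dicts for jobs and a `fire` callback standing in for publishing an InboundMessage to the bus (both are simplifications of nanobot's actual CronService):

```python
import datetime

def tick(jobs: list[dict], now: datetime.datetime, fire) -> list[str]:
    """Fire every enabled job whose next_run has passed, then reschedule it.

    `fire` stands in for publishing an InboundMessage to the message bus.
    """
    fired = []
    for job in jobs:
        if job["enabled"] and now >= job["next_run"]:
            fire(job["message"])  # in nanobot this would enter the agent loop
            job["last_run"] = now
            job["next_run"] = now + datetime.timedelta(seconds=job["interval"])
            fired.append(job["name"])
    return fired
```

A real scheduler library replaces the polling with OS timers, but the fire-then-reschedule logic is the same.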
from datetime import timedelta

class SimpleCronJob:
    """Mirrors nanobot's cron job structure."""
    def __init__(self, name: str, message: str, interval_seconds: int):
        self.id = str(uuid.uuid4())[:8]
        self.name = name
        self.message = message
        self.interval = interval_seconds
        self.enabled = True
        self.last_run = None
        self.next_run = datetime.datetime.now() + timedelta(seconds=interval_seconds)

jobs = [
    SimpleCronJob("morning_briefing", "Give me a brief morning status update.", 86400),
    SimpleCronJob("memory_cleanup", "Review and consolidate my memories.", 43200),
    SimpleCronJob("health_check", "Run a system health check.", 3600),
]
print("   📋 Registered Cron Jobs:")
print("   ┌──────────┬────────────────────┬──────────┬──────────────────┐")
print("   │ ID       │ Name               │ Interval │ Next Run         │")
print("   ├──────────┼────────────────────┼──────────┼──────────────────┤")
for job in jobs:
    interval_str = f"{job.interval // 3600}h" if job.interval >= 3600 else f"{job.interval}s"
    print(f"   │ {job.id} │ {job.name:<18} │ {interval_str:>8} │ {job.next_run.strftime('%Y-%m-%d %H:%M')} │")
print("   └──────────┴────────────────────┴──────────┴──────────────────┘")
print(f"\n   ⏰ Simulating cron trigger for '{jobs[2].name}'...")
cron_result = agent_loop_v2(jobs[2].message, verbose=True)
section("STEP 12 · Full Agent Pipeline — End-to-End Demo", "🎬")
info("""Now let's run a complex, multi-step task that exercises the full
nanobot pipeline: context building → tool use → memory → file I/O.
""")
print("─" * 60)
print(" DEMO 5: Complex multi-step research task")
print("─" * 60)
complex_result = agent_loop_v2(
    "I need you to help me with a small project:\n"
    "1. First, check the current time\n"
    "2. Write a short project plan to 'project_plan.txt' about building "
    "a personal AI assistant (3-4 bullet points)\n"
    "3. Remember that my current project is 'building a personal AI assistant'\n"
    "4. Read back the project plan file to confirm it was saved correctly\n"
    "Then summarize everything you did.",
    max_iterations=15
)
section("STEP 13 · Final Workspace Summary", "📊")
print("   📂 Complete workspace state after tutorial:\n")
total_files = 0
total_bytes = 0
for item in sorted(WORKSPACE.rglob("*")):
    if item.is_file():
        rel = item.relative_to(WORKSPACE)
        size = item.stat().st_size
        total_files += 1
        total_bytes += size
        icon = {"md": "📝", "txt": "📄", "json": "📊"}.get(item.suffix.lstrip("."), "📄")
        print(f"   {icon} {rel} ({size:,} bytes)")
print("\n   ── Summary ──")
print(f"   Total files: {total_files}")
print(f"   Total size:  {total_bytes:,} bytes")
print(f"   Config:      {config_path}")
print(f"   Workspace:   {WORKSPACE}")
print("\n   🧠 Final Memory State:")
mem_content = (WORKSPACE / "memory" / "MEMORY.md").read_text()
print("   ──────────────────────────────────────────────")
for line in mem_content.strip().split("\n"):
    print(f"   │ {line}")
print("   ──────────────────────────────────────────────")
section("COMPLETE · What's Next?", "🎉")
print(""" You've explored the core internals of nanobot! Here's what to try next:

 🔹 Run the real CLI agent:
      nanobot onboard && nanobot agent
 🔹 Connect to Telegram:
      Add a bot token to config.json and run `nanobot gateway`
 🔹 Enable web search:
      Add a Brave Search API key under tools.web.search.apiKey
 🔹 Try MCP integration:
      nanobot supports Model Context Protocol servers for external tools
 🔹 Explore the source (~4K lines) — key files to read:
      • agent/loop.py     — The agent iteration loop
      • agent/context.py  — Prompt assembly pipeline
      • agent/memory.py   — Persistent memory system
      • agent/tools/      — Built-in tool implementations
      • agent/subagent.py — Background task delegation
""")
We demonstrate the cron-style scheduling pattern by defining simple scheduled jobs, printing their intervals and next run times, and simulating an automated trigger that feeds a message into the agent. We then run a larger end-to-end example that combines context building, tool use, memory updates, and file operations into one multi-step workflow, so we can see the full pipeline working together on a realistic project. Finally, we inspect the final state of the workspace, review the saved memory, and close the tutorial with clear next steps that connect this notebook implementation to the actual nanobot project and its source code.
In conclusion, we have walked through all the major layers of the nanobot architecture, from the iterative LLM tool loop at its core to the session manager that gives our agent turn-by-turn conversational memory. We built five built-in tools, three custom tools, two skills, a session persistence layer, a subagent spawner, and a cron simulator, all while keeping everything in one executable script. What stands out is how nanobot proves that a production-grade agent framework does not require hundreds of thousands of lines of code; the patterns we used here, context assembly, tool dispatch, memory persistence, and background task delegation, are the same patterns that power much larger systems, just distilled to their core. We now have a working mental model of an AI agent's internals and a codebase small enough to be learned in one sitting, making nanobot an ideal choice for anyone looking to build, customize, or research AI agents from the ground up.
Michal Sutter is a data science expert with a Master of Science in Data Science from the University of Padova. With a strong foundation in statistical analysis, machine learning, and data engineering, Michal excels at turning complex data sets into actionable insights.



