In this tutorial, we take a deep dive into nanobot, the ultra-lightweight personal AI agent framework from HKUDS that packs full agent capabilities into roughly 4,000 lines of Python. Rather than merely installing and running it out of the box, we crack open the hood and manually recreate each of its core subsystems: the agent loop, tool execution, memory persistence, skills loading, session management, subagent spawning, and cron scheduling, so we understand exactly how they work. We wire everything up with OpenAI's gpt-4o-mini as our LLM provider, enter our API key securely via the terminal (never exposing it in notebook output), and progressively build from a single tool-calling loop all the way to a multi-step research pipeline that reads and writes files, stores long-term memories, and delegates tasks to concurrent background workers. By the end, we don't just know how to use nanobot, we understand how to extend it with custom tools, skills, and our own agent architectures.
import sys
import os
import subprocess

def section(title, emoji="🔹"):
    """Pretty-print a section header."""
    width = 72
    print(f"\n{'━' * width}")
    print(f" {emoji} {title}")
    print(f"{'━' * width}\n")

def info(msg):
    print(f" ℹ️ {msg}")

def success(msg):
    print(f" ✅ {msg}")

def code_block(code):
    print(" ┌──────────────────────────────────────────────────┐")
    for line in code.strip().split("\n"):
        print(f" │ {line}")
    print(" └──────────────────────────────────────────────────┘")
section("STEP 1 · Installing nanobot-ai & Dependencies", "📦")
info("Installing nanobot-ai from PyPI (latest stable)...")
subprocess.check_call([
    sys.executable, "-m", "pip", "install", "-q",
    "nanobot-ai", "openai", "rich", "httpx"
])
success("nanobot-ai installed successfully!")

import importlib.metadata
nanobot_version = importlib.metadata.version("nanobot-ai")
print(f" 📌 nanobot-ai version: {nanobot_version}")
section("STEP 2 · Secure OpenAI API Key Input", "🔐")
info("Your API key will NOT be printed or stored in notebook output.")
info("It is held only in memory for this session.\n")
try:
    from google.colab import userdata
    OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
    if not OPENAI_API_KEY:
        raise ValueError("Not set in Colab secrets")
    success("Loaded API key from Colab Secrets ('OPENAI_API_KEY').")
    info("Tip: You can set this in Colab's Secrets panel (🔑) in the left sidebar.")
except Exception:
    import getpass
    OPENAI_API_KEY = getpass.getpass("Enter your OpenAI API key: ")
    success("API key captured securely via terminal input.")
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

import openai
client = openai.OpenAI(api_key=OPENAI_API_KEY)
try:
    client.models.list()
    success("OpenAI API key validated; connection successful!")
except Exception as e:
    print(f" ❌ API key validation failed: {e}")
    print("   Please restart and enter a valid key.")
    sys.exit(1)
section("STEP 3 · Configuring nanobot for OpenAI", "⚙️")
import json
from pathlib import Path

NANOBOT_HOME = Path.home() / ".nanobot"
NANOBOT_HOME.mkdir(parents=True, exist_ok=True)
WORKSPACE = NANOBOT_HOME / "workspace"
WORKSPACE.mkdir(parents=True, exist_ok=True)
(WORKSPACE / "memory").mkdir(parents=True, exist_ok=True)

config = {
    "providers": {
        "openai": {
            "apiKey": OPENAI_API_KEY
        }
    },
    "agents": {
        "defaults": {
            "model": "openai/gpt-4o-mini",
            "maxTokens": 4096,
            "workspace": str(WORKSPACE)
        }
    },
    "tools": {
        "restrictToWorkspace": True
    }
}
config_path = NANOBOT_HOME / "config.json"
config_path.write_text(json.dumps(config, indent=2))
success(f"Config written to {config_path}")

agents_md = WORKSPACE / "AGENTS.md"
agents_md.write_text(
    "# Agent Instructions\n\n"
    "You are nanobot 🤖, an ultra-lightweight personal AI assistant.\n"
    "You are helpful, concise, and use tools when needed.\n"
    "Always explain your reasoning step by step.\n"
)
soul_md = WORKSPACE / "SOUL.md"
soul_md.write_text(
    "# Personality\n\n"
    "- Friendly and approachable\n"
    "- Technically precise\n"
    "- Uses emoji sparingly for warmth\n"
)
user_md = WORKSPACE / "USER.md"
user_md.write_text(
    "# User Profile\n\n"
    "- The user is exploring the nanobot framework.\n"
    "- They are interested in AI agent architectures.\n"
)
memory_md = WORKSPACE / "memory" / "MEMORY.md"
memory_md.write_text("# Long-term Memory\n\n_No memories saved yet._\n")
success("Workspace bootstrap files created:")
for f in [agents_md, soul_md, user_md, memory_md]:
    print(f"   📄 {f.relative_to(NANOBOT_HOME)}")
section("STEP 4 · nanobot Architecture Deep Dive", "🏗️")
info("""nanobot is organized into 7 subsystems in ~4,000 lines of code:

┌────────────────────────────────────────────────────────────┐
│                     USER INTERFACES                        │
│            CLI · Telegram · WhatsApp · Discord             │
└──────────────────┬─────────────────────────────────────────┘
                   │ InboundMessage / OutboundMessage
┌──────────────────▼─────────────────────────────────────────┐
│                      MESSAGE BUS                           │
│          publish_inbound() / publish_outbound()            │
└──────────────────┬─────────────────────────────────────────┘
                   │
┌──────────────────▼─────────────────────────────────────────┐
│                  AGENT LOOP (loop.py)                      │
│  ┌─────────┐    ┌─────────┐    ┌─────────────────────┐     │
│  │ Context │──▶ │   LLM   │──▶ │   Tool Execution    │     │
│  │ Builder │    │   Call  │    │   (if tool_calls)   │     │
│  └─────────┘    └─────────┘    └─────────┬───────────┘     │
│       ▲                                  │ loop back       │
│       │      ◀───────────────────────────┘ until done      │
│  ┌────┴────┐    ┌─────────┐    ┌─────────────────────┐     │
│  │ Memory  │    │ Skills  │    │    Subagent Mgr     │     │
│  │ Store   │    │ Loader  │    │    (spawn tasks)    │     │
│  └─────────┘    └─────────┘    └─────────────────────┘     │
└──────────────────┬─────────────────────────────────────────┘
                   │
┌──────────────────▼─────────────────────────────────────────┐
│                  LLM PROVIDER LAYER                        │
│     OpenAI · Anthropic · OpenRouter · DeepSeek · ...       │
└────────────────────────────────────────────────────────────┘

The Agent Loop iterates up to 40 times (configurable):
  1. ContextBuilder assembles system prompt + memory + skills + history
  2. The LLM is called with the tool definitions
  3. If the response has tool_calls → execute tools, append results, loop
  4. If the response is plain text → return it as the final answer
""")
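The MessageBus layer in the diagram is described but never implemented in this notebook, so here is a minimal sketch of the publish/consume pattern it represents. The class and field names are assumptions for illustration, not nanobot's actual API: two asyncio queues decouple the chat channels from the agent loop.

```python
import asyncio
from dataclasses import dataclass

# Hypothetical message shapes; nanobot's real InboundMessage/OutboundMessage
# may carry more fields (attachments, metadata, etc.).
@dataclass
class InboundMessage:
    channel: str   # e.g. "cli", "telegram"
    chat_id: str
    content: str

@dataclass
class OutboundMessage:
    channel: str
    chat_id: str
    content: str

class MessageBus:
    """Two queues: channels publish inbound, the agent loop publishes outbound."""
    def __init__(self):
        self.inbound: asyncio.Queue = asyncio.Queue()
        self.outbound: asyncio.Queue = asyncio.Queue()

    async def publish_inbound(self, msg: InboundMessage):
        await self.inbound.put(msg)

    async def publish_outbound(self, msg: OutboundMessage):
        await self.outbound.put(msg)

async def demo():
    bus = MessageBus()
    await bus.publish_inbound(InboundMessage("cli", "user1", "hello"))
    msg = await bus.inbound.get()
    # The agent loop would process `msg` here, then publish its reply:
    await bus.publish_outbound(OutboundMessage(msg.channel, msg.chat_id, "hi!"))
    reply = await bus.outbound.get()
    return reply.content

print(asyncio.run(demo()))  # prints: hi!
```

The point of the indirection is that the agent loop never talks to Telegram or the CLI directly; it only ever sees queue items, which is what makes the channels in the top box interchangeable.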
We set up the full foundation of the tutorial by importing the required modules, defining helper functions for clean section display, and installing the nanobot dependencies inside Google Colab. We then securely load and validate the OpenAI API key so the rest of the notebook can interact with the model without exposing credentials in the notebook output. After that, we configure the nanobot workspace, create the core bootstrap files (AGENTS.md, SOUL.md, USER.md, and MEMORY.md), and study the high-level architecture so we understand how the framework is organized before moving into implementation.
section("STEP 5 · The Agent Loop – Core Idea in Action", "🔄")
info("We will manually recreate nanobot's agent loop pattern using OpenAI.")
info("This is exactly what loop.py does internally.\n")
import json as _json
import datetime

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Get the current date and time.",
            "parameters": {"type": "object", "properties": {}, "required": []}
        }
    },
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression.",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "Math expression to evaluate, e.g. '2**10 + 42'"
                    }
                },
                "required": ["expression"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read the contents of a file in the workspace.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "Relative file path inside the workspace"
                    }
                },
                "required": ["path"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "write_file",
            "description": "Write content to a file in the workspace.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Relative file path"},
                    "content": {"type": "string", "description": "Content to write"}
                },
                "required": ["path", "content"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "save_memory",
            "description": "Save a fact to the agent's long-term memory.",
            "parameters": {
                "type": "object",
                "properties": {
                    "fact": {"type": "string", "description": "The fact to remember"}
                },
                "required": ["fact"]
            }
        }
    }
]
def execute_tool(name: str, arguments: dict) -> str:
    """Execute a tool call; mirrors nanobot's ToolRegistry.execute()."""
    if name == "get_current_time":
        return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    elif name == "calculate":
        expr = arguments.get("expression", "")
        try:
            result = eval(expr, {"__builtins__": {}}, {"abs": abs, "round": round, "min": min, "max": max})
            return str(result)
        except Exception as e:
            return f"Error: {e}"
    elif name == "read_file":
        fpath = WORKSPACE / arguments.get("path", "")
        if fpath.exists():
            return fpath.read_text()[:4000]
        return f"Error: File not found: {arguments.get('path')}"
    elif name == "write_file":
        fpath = WORKSPACE / arguments.get("path", "")
        fpath.parent.mkdir(parents=True, exist_ok=True)
        fpath.write_text(arguments.get("content", ""))
        return f"Successfully wrote {len(arguments.get('content', ''))} chars to {arguments.get('path')}"
    elif name == "save_memory":
        fact = arguments.get("fact", "")
        mem_file = WORKSPACE / "memory" / "MEMORY.md"
        existing = mem_file.read_text()
        timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
        mem_file.write_text(existing + f"\n- [{timestamp}] {fact}\n")
        return f"Memory saved: {fact}"
    return f"Unknown tool: {name}"
def agent_loop(user_message: str, max_iterations: int = 10, verbose: bool = True):
    """
    Recreates nanobot's AgentLoop._process_message() logic.
    The loop:
      1. Build context (system prompt + bootstrap files + memory)
      2. Call the LLM with tools
      3. If tool_calls → execute → append results → loop
      4. If text response → return the final answer
    """
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md", "USER.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    mem_file = WORKSPACE / "memory" / "MEMORY.md"
    if mem_file.exists():
        system_parts.append(f"\n## Your Memory\n{mem_file.read_text()}")
    system_prompt = "\n\n".join(system_parts)
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ]
    if verbose:
        print(f" 📨 User: {user_message}")
        print(f" 🧠 System prompt: {len(system_prompt)} chars "
              f"(from {len(system_parts)} bootstrap files)")
        print()
    for iteration in range(1, max_iterations + 1):
        if verbose:
            print(f" ── Iteration {iteration}/{max_iterations} ──")
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=TOOLS,
            tool_choice="auto",
            max_tokens=2048
        )
        choice = response.choices[0]
        message = choice.message
        if message.tool_calls:
            if verbose:
                print(f" 🔧 LLM requested {len(message.tool_calls)} tool call(s):")
            messages.append(message.model_dump())
            for tc in message.tool_calls:
                fname = tc.function.name
                args = _json.loads(tc.function.arguments) if tc.function.arguments else {}
                if verbose:
                    print(f"   → {fname}({_json.dumps(args, ensure_ascii=False)[:80]})")
                result = execute_tool(fname, args)
                if verbose:
                    print(f"   ← {result[:100]}{'...' if len(result) > 100 else ''}")
                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": result
                })
            if verbose:
                print()
        else:
            final = message.content or ""
            if verbose:
                print(f" 💬 Agent: {final}\n")
            return final
    return "⚠️ Max iterations reached without a final response."
print("═" * 60)
print(" DEMO 1: Time-aware calculation with tool chaining")
print("═" * 60)
result1 = agent_loop(
    "What is the current time? Also, calculate 2^20 + 42 for me."
)

print("═" * 60)
print(" DEMO 2: File creation + memory storage")
print("═" * 60)
result2 = agent_loop(
    "Write a haiku about AI agents to a file called 'haiku.txt'. "
    "Then remember that I enjoy poetry about technology."
)
We manually recreate the heart of nanobot by defining the tool schemas, implementing their execution logic, and building the iterative agent loop that connects the LLM to tools. We assemble the prompt from the workspace files and memory, send the conversation to the model, detect tool calls, execute them, append the results back into the conversation, and keep looping until the model returns a final answer. We then test this mechanism with practical examples that involve time lookups, calculations, file writing, and memory saving, so we can see the loop operate exactly like the internal nanobot flow.
section("STEP 6 · Memory System – Persistent Agent Memory", "🧠")
info("""nanobot's memory system (memory.py) uses two storage mechanisms:
  1. MEMORY.md – long-term facts (always loaded into context)
  2. YYYY-MM-DD.md – daily journal entries (loaded for recent days)
Memory consolidation runs periodically to summarize and compress
old entries, keeping the context window manageable.
""")
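The consolidation step is described but not implemented in this notebook, so here is a hedged sketch of the idea (an illustration, not nanobot's actual memory.py): daily journals older than a cutoff are folded into MEMORY.md and deleted. The `summarize` callback is a placeholder where nanobot would call the LLM; the default here just keeps each old journal's last line.

```python
import datetime
from pathlib import Path

def consolidate_memory(memory_dir: Path, keep_days: int = 7,
                       summarize=lambda text: text.strip().splitlines()[-1]):
    """Fold daily YYYY-MM-DD.md journals older than `keep_days` into MEMORY.md."""
    cutoff = datetime.date.today() - datetime.timedelta(days=keep_days)
    memory_md = memory_dir / "MEMORY.md"
    consolidated = []
    for daily in sorted(memory_dir.glob("*-*-*.md")):
        try:
            day = datetime.date.fromisoformat(daily.stem)
        except ValueError:
            continue  # not a YYYY-MM-DD journal file
        if day < cutoff:
            consolidated.append(f"- [{daily.stem}] {summarize(daily.read_text())}")
            daily.unlink()  # the old journal now lives in long-term memory
    if consolidated:
        memory_md.write_text(memory_md.read_text() + "\n" + "\n".join(consolidated) + "\n")
    return len(consolidated)
```

The key trade-off this pattern makes is that recent days stay verbatim (high fidelity) while older history is compressed into one line per day, which is what keeps the always-loaded context from growing without bound.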
mem_content = (WORKSPACE / "memory" / "MEMORY.md").read_text()
print(" 📄 Current MEMORY.md contents:")
print(" ┌────────────────────────────────────────────┐")
for line in mem_content.strip().split("\n"):
    print(f" │ {line}")
print(" └────────────────────────────────────────────┘\n")

today = datetime.datetime.now().strftime("%Y-%m-%d")
daily_file = WORKSPACE / "memory" / f"{today}.md"
daily_file.write_text(
    f"# Daily Log – {today}\n\n"
    "- User ran the nanobot advanced tutorial\n"
    "- Explored agent loop, tools, and memory\n"
    "- Created a haiku about AI agents\n"
)
success(f"Daily journal created: memory/{today}.md")
print("\n 📁 Workspace contents:")
for item in sorted(WORKSPACE.rglob("*")):
    if item.is_file():
        rel = item.relative_to(WORKSPACE)
        size = item.stat().st_size
        print(f"   {'📄' if item.suffix == '.md' else '📎'} {rel} ({size} bytes)")
section("STEP 7 · Skills System – Extending Agent Capabilities", "🎯")
info("""nanobot's SkillsLoader (skills.py) reads Markdown files from the
skills/ directory. Each skill has:
  - A name and description (so the LLM can decide when to use it)
  - Instructions the LLM follows when the skill is activated
  - Some skills are 'always loaded'; others are loaded on demand
Let's create a custom skill and see how the agent uses it.
""")
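To make the loader idea concrete, here is a minimal parser for the skill-file layout used in this step. It assumes each skill is Markdown with a `#` title plus `## Description`, `## Instructions`, and `## Always Available` sections; nanobot's real skills.py may parse differently, so treat this as a sketch.

```python
from pathlib import Path

def parse_skill(path: Path) -> dict:
    """Split a skill Markdown file into named sections."""
    sections: dict[str, list[str]] = {"name": []}
    current = "name"
    for line in path.read_text().splitlines():
        if line.startswith("## "):
            current = line[3:].strip().lower()   # e.g. "description"
            sections[current] = []
        elif line.startswith("# "):
            sections["name"] = [line[2:].strip()]  # the skill's title
        else:
            sections.setdefault(current, []).append(line)
    return {
        "name": " ".join(sections.get("name", [path.stem])).strip(),
        "description": "\n".join(sections.get("description", [])).strip(),
        "instructions": "\n".join(sections.get("instructions", [])).strip(),
        # only the description of always-available skills enters every prompt
        "always": "\n".join(sections.get("always available", [])).strip().lower() == "true",
    }
```

With a parser like this, an always-available skill contributes its instructions to every system prompt, while on-demand skills can be injected only when the LLM asks for them, which is the loading distinction the bullet list above describes.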
skills_dir = WORKSPACE / "skills"
skills_dir.mkdir(exist_ok=True)
data_skill = skills_dir / "data_analyst.md"
data_skill.write_text("""# Data Analyst Skill

## Description
Analyze data, compute statistics, and provide insights from numbers.

## Instructions
When asked to analyze data:
1. Identify the data type and structure
2. Compute relevant statistics (mean, median, range, std dev)
3. Look for patterns and outliers
4. Present findings in a clear, structured format
5. Suggest follow-up questions

## Always Available
false
""")
review_skill = skills_dir / "code_reviewer.md"
review_skill.write_text("""# Code Reviewer Skill

## Description
Review code for bugs, security issues, and best practices.

## Instructions
When reviewing code:
1. Check for common bugs and logic errors
2. Identify security vulnerabilities
3. Suggest performance improvements
4. Evaluate code style and readability
5. Rate the code quality on a 1-10 scale

## Always Available
true
""")
success("Custom skills created:")
for f in skills_dir.iterdir():
    print(f"   🎯 {f.name}")

print("\n 🧪 Testing skill-aware agent interaction:")
print(" " + "─" * 56)
skills_context = "\n\n## Available Skills\n"
for skill_file in skills_dir.glob("*.md"):
    content = skill_file.read_text()
    skills_context += f"\n### {skill_file.stem}\n{content}\n"
result3 = agent_loop(
    "Review this Python code for issues:\n\n"
    "```python\n"
    "def get_user(id):\n"
    "    query = f'SELECT * FROM users WHERE id = {id}'\n"
    "    result = db.execute(query)\n"
    "    return result\n"
    "```"
)
We move into the persistent memory system by inspecting the long-term memory file, creating a daily journal entry, and reviewing how the workspace evolves after earlier interactions. We then extend the agent with a skills system by creating Markdown-based skill files that describe specialized behaviors such as data analysis and code review. Finally, we simulate how skill-aware prompting works by exposing these skills to the agent and asking it to review a Python function, which helps us see how nanobot can be guided through modular capability descriptions.
section("STEP 8 · Custom Tool Creation – Extending the Agent", "🔧")
info("""nanobot's tool system uses a ToolRegistry with a simple interface.
Each tool needs:
  - A name and description
  - A JSON Schema for parameters
  - An execute() method
Let's create custom tools and wire them into our agent loop.
""")
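Before the flat-dictionary version used in this notebook, it helps to see the registry interface described above as a class. The names here are assumptions sketched from the description (nanobot's real ToolRegistry may differ): each tool pairs its OpenAI-style schema with a Python callable, and `execute()` dispatches by name.

```python
from typing import Callable

class ToolRegistry:
    """Registry pairing tool schemas with their implementations."""
    def __init__(self):
        self._tools: dict[str, tuple[dict, Callable[..., str]]] = {}

    def register(self, schema: dict, fn: Callable[..., str]):
        self._tools[schema["function"]["name"]] = (schema, fn)

    def definitions(self) -> list[dict]:
        # This list is what gets passed to the LLM as the `tools` parameter.
        return [schema for schema, _ in self._tools.values()]

    def execute(self, name: str, arguments: dict) -> str:
        if name not in self._tools:
            return f"Unknown tool: {name}"
        _, fn = self._tools[name]
        return fn(**arguments)

registry = ToolRegistry()
registry.register(
    {"type": "function", "function": {"name": "echo",
     "description": "Echo text back.",
     "parameters": {"type": "object",
                    "properties": {"text": {"type": "string"}},
                    "required": ["text"]}}},
    lambda text: f"echo: {text}",
)
print(registry.execute("echo", {"text": "hi"}))  # prints: echo: hi
```

The `execute_tool` chain built below achieves the same dispatch with plain `if/elif` branches; the registry form simply makes registration and lookup data-driven.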
import random

CUSTOM_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "roll_dice",
            "description": "Roll one or more dice with a given number of sides.",
            "parameters": {
                "type": "object",
                "properties": {
                    "num_dice": {"type": "integer", "description": "Number of dice to roll", "default": 1},
                    "sides": {"type": "integer", "description": "Number of sides per die", "default": 6}
                },
                "required": []
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "text_stats",
            "description": "Compute statistics about a text: word count, char count, sentence count, reading time.",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {"type": "string", "description": "The text to analyze"}
                },
                "required": ["text"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "generate_password",
            "description": "Generate a random secure password.",
            "parameters": {
                "type": "object",
                "properties": {
                    "length": {"type": "integer", "description": "Password length", "default": 16}
                },
                "required": []
            }
        }
    }
]
_original_execute = execute_tool

def execute_tool_extended(name: str, arguments: dict) -> str:
    if name == "roll_dice":
        n = arguments.get("num_dice", 1)
        s = arguments.get("sides", 6)
        rolls = [random.randint(1, s) for _ in range(n)]
        return f"Rolled {n}d{s}: {rolls} (total: {sum(rolls)})"
    elif name == "text_stats":
        text = arguments.get("text", "")
        words = len(text.split())
        chars = len(text)
        sentences = text.count('.') + text.count('!') + text.count('?')
        reading_time = round(words / 200, 1)
        return _json.dumps({
            "words": words,
            "characters": chars,
            "sentences": max(sentences, 1),
            "reading_time_minutes": reading_time
        })
    elif name == "generate_password":
        import string
        length = arguments.get("length", 16)
        chars = string.ascii_letters + string.digits + "!@#$%^&*"
        pwd = ''.join(random.choice(chars) for _ in range(length))
        return f"Generated password ({length} chars): {pwd}"
    return _original_execute(name, arguments)

execute_tool = execute_tool_extended
ALL_TOOLS = TOOLS + CUSTOM_TOOLS
def agent_loop_v2(user_message: str, max_iterations: int = 10, verbose: bool = True):
    """Agent loop with extended custom tools."""
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md", "USER.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    mem_file = WORKSPACE / "memory" / "MEMORY.md"
    if mem_file.exists():
        system_parts.append(f"\n## Your Memory\n{mem_file.read_text()}")
    system_prompt = "\n\n".join(system_parts)
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ]
    if verbose:
        print(f" 📨 User: {user_message}")
        print()
    for iteration in range(1, max_iterations + 1):
        if verbose:
            print(f" ── Iteration {iteration}/{max_iterations} ──")
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=ALL_TOOLS,
            tool_choice="auto",
            max_tokens=2048
        )
        choice = response.choices[0]
        message = choice.message
        if message.tool_calls:
            if verbose:
                print(f" 🔧 {len(message.tool_calls)} tool call(s):")
            messages.append(message.model_dump())
            for tc in message.tool_calls:
                fname = tc.function.name
                args = _json.loads(tc.function.arguments) if tc.function.arguments else {}
                if verbose:
                    print(f"   → {fname}({_json.dumps(args, ensure_ascii=False)[:80]})")
                result = execute_tool(fname, args)
                if verbose:
                    print(f"   ← {result[:120]}{'...' if len(result) > 120 else ''}")
                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": result
                })
            if verbose:
                print()
        else:
            final = message.content or ""
            if verbose:
                print(f" 💬 Agent: {final}\n")
            return final
    return "⚠️ Max iterations reached."
print("═" * 60)
print(" DEMO 3: Custom tools in action")
print("═" * 60)
result4 = agent_loop_v2(
    "Roll 3 six-sided dice for me, then generate a 20-character password, "
    "and finally analyze the text stats of this sentence: "
)
section("STEP 9 · Multi-Turn Conversation – Session Management", "💬")
info("""nanobot's SessionManager (session/manager.py) maintains conversation
history per session_key (format: 'channel:chat_id'). History is stored
in JSON files and loaded into context for each new message.
Let's simulate a multi-turn conversation with persistent state.
""")
We expand the agent's capabilities by defining new custom tools such as dice rolling, text statistics, and password generation, and then wiring them into the tool execution pipeline. We update the executor, merge the built-in and custom tool definitions, and create a second version of the agent loop that can reason over this larger set of capabilities. We then run a demo task that forces the model to chain multiple tool invocations, demonstrating how easy it is to extend nanobot with our own functions while keeping the same overall interaction pattern.
class SimpleSessionManager:
    """
    Minimal recreation of nanobot's SessionManager.
    Stores conversation history and provides context continuity.
    """
    def __init__(self, workspace: Path):
        self.workspace = workspace
        self.sessions: dict[str, list[dict]] = {}

    def get_history(self, session_key: str) -> list[dict]:
        return self.sessions.get(session_key, [])

    def add_turn(self, session_key: str, role: str, content: str):
        if session_key not in self.sessions:
            self.sessions[session_key] = []
        self.sessions[session_key].append({"role": role, "content": content})

    def save(self, session_key: str):
        fpath = self.workspace / f"session_{session_key.replace(':', '_')}.json"
        fpath.write_text(_json.dumps(self.sessions.get(session_key, []), indent=2))

    def load(self, session_key: str):
        fpath = self.workspace / f"session_{session_key.replace(':', '_')}.json"
        if fpath.exists():
            self.sessions[session_key] = _json.loads(fpath.read_text())

session_mgr = SimpleSessionManager(WORKSPACE)
SESSION_KEY = "cli:tutorial_user"

def chat(user_message: str, verbose: bool = True):
    """Multi-turn chat with session persistence."""
    session_mgr.add_turn(SESSION_KEY, "user", user_message)
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    system_prompt = "\n\n".join(system_parts)
    history = session_mgr.get_history(SESSION_KEY)
    messages = [{"role": "system", "content": system_prompt}] + history
    if verbose:
        print(f" 👤 You: {user_message}")
        print(f"    (conversation history: {len(history)} messages)")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        max_tokens=1024
    )
    reply = response.choices[0].message.content or ""
    session_mgr.add_turn(SESSION_KEY, "assistant", reply)
    session_mgr.save(SESSION_KEY)
    if verbose:
        print(f" 🤖 nanobot: {reply}\n")
    return reply

print("═" * 60)
print(" DEMO 4: Multi-turn conversation with memory")
print("═" * 60)
chat("Hi! My name is Alex and I'm building an AI agent.")
chat("What's my name? And what am I working on?")
chat("Can you suggest 3 features I should add to my agent?")
success("Session persisted with full conversation history!")
session_file = WORKSPACE / f"session_{SESSION_KEY.replace(':', '_')}.json"
session_data = _json.loads(session_file.read_text())
print(f" 💾 Session file: {session_file.name} ({len(session_data)} messages)")
section("STEP 10 · Subagent Spawning – Background Task Delegation", "🚀")
info("""nanobot's SubagentManager (agent/subagent.py) lets the main agent
delegate tasks to independent background workers. Each subagent:
  - Gets its own tool registry (no SpawnTool, to prevent recursion)
  - Runs up to 15 iterations independently
  - Reports results back via the MessageBus
Let's simulate this pattern with concurrent tasks.
""")
import asyncio
import uuid

async def run_subagent(task_id: str, goal: str, verbose: bool = True):
    """
    Simulates nanobot's SubagentManager._run_subagent().
    Runs an independent LLM loop for a specific goal.
    """
    if verbose:
        print(f" 🔹 Subagent [{task_id[:8]}] started: {goal[:60]}")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a focused research assistant. "
             "Complete the assigned task concisely in 2-3 sentences."},
            {"role": "user", "content": goal}
        ],
        max_tokens=256
    )
    result = response.choices[0].message.content or ""
    if verbose:
        print(f" ✅ Subagent [{task_id[:8]}] done: {result[:80]}...")
    return {"task_id": task_id, "goal": goal, "result": result}

async def spawn_subagents(goals: list[str]):
    """Spawn multiple subagents concurrently; mirrors SubagentManager.spawn()."""
    tasks = []
    for goal in goals:
        task_id = str(uuid.uuid4())
        tasks.append(run_subagent(task_id, goal))
    print(f"\n 🚀 Spawning {len(tasks)} subagents concurrently...\n")
    results = await asyncio.gather(*tasks)
    return results

goals = [
    "What are the 3 key components of a ReAct agent architecture?",
    "Explain the difference between tool-calling and function-calling in LLMs.",
    "What is MCP (Model Context Protocol) and why does it matter for AI agents?",
]

try:
    loop = asyncio.get_running_loop()
    import nest_asyncio
    nest_asyncio.apply()
    subagent_results = asyncio.get_event_loop().run_until_complete(spawn_subagents(goals))
except RuntimeError:
    subagent_results = asyncio.run(spawn_subagents(goals))
except ModuleNotFoundError:
    print(" ℹ️ Running subagents sequentially (install nest_asyncio for async)...\n")
    subagent_results = []
    for goal in goals:
        task_id = str(uuid.uuid4())
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Complete the task concisely in 2-3 sentences."},
                {"role": "user", "content": goal}
            ],
            max_tokens=256
        )
        r = response.choices[0].message.content or ""
        print(f" ✅ Subagent [{task_id[:8]}] done: {r[:80]}...")
        subagent_results.append({"task_id": task_id, "goal": goal, "result": r})

print(f"\n 📊 All {len(subagent_results)} subagent results collected!")
for i, r in enumerate(subagent_results, 1):
    print(f"\n ── Result {i} ──")
    print(f"   Goal: {r['goal'][:60]}")
    print(f"   Answer: {r['result'][:200]}")
We simulate multi-turn conversation management by building a lightweight session manager that stores, retrieves, and persists conversation history across turns. We use that history to maintain continuity in the chat, allowing the agent to remember details from earlier in the interaction and respond more coherently and statefully. After that, we model subagent spawning by launching concurrent background tasks that each handle a focused objective, which helps us understand how nanobot can delegate parallel work to independent agent workers.
section("STEP 11 · Scheduled Tasks – The Cron Pattern", "⏰")
info("""nanobot's CronService (cron/service.py) uses APScheduler to trigger
agent actions on a schedule. When a job fires, it creates an
InboundMessage and publishes it to the MessageBus.
Let's demonstrate the pattern with a simulated scheduler.
""")
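The real CronService relies on APScheduler; here is a stdlib-only stand-in that shows the same shape without that dependency (a sketch, not nanobot's code): when a job fires, it turns into an ordinary inbound-message dict handed to a publish callback, exactly as a bus publish would be.

```python
import threading

class MiniCron:
    """Toy interval scheduler: each firing re-arms a threading.Timer."""
    def __init__(self, publish):
        self.publish = publish        # stands in for MessageBus.publish_inbound
        self.timers: list[threading.Timer] = []

    def add_job(self, name: str, message: str, interval_seconds: float):
        def fire():
            # On trigger, the job becomes an ordinary inbound message.
            self.publish({"channel": "cron", "chat_id": name, "content": message})
            t = threading.Timer(interval_seconds, fire)
            t.daemon = True
            self.timers.append(t)
            t.start()
        t = threading.Timer(interval_seconds, fire)
        t.daemon = True
        self.timers.append(t)
        t.start()

    def shutdown(self):
        for t in self.timers:
            t.cancel()

fired = []
cron = MiniCron(fired.append)
cron.add_job("health_check", "Run a system health check.", 0.05)
import time; time.sleep(0.12)    # let the job fire at least once
cron.shutdown()
print(fired[0]["content"])
```

Because the trigger only produces a message, the agent loop needs no special scheduling logic at all; a cron firing is indistinguishable from a user typing the same request.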
from datetime import timedelta

class SimpleCronJob:
    """Mirrors nanobot's cron job structure."""
    def __init__(self, name: str, message: str, interval_seconds: int):
        self.id = str(uuid.uuid4())[:8]
        self.name = name
        self.message = message
        self.interval = interval_seconds
        self.enabled = True
        self.last_run = None
        self.next_run = datetime.datetime.now() + timedelta(seconds=interval_seconds)

jobs = [
    SimpleCronJob("morning_briefing", "Give me a brief morning status update.", 86400),
    SimpleCronJob("memory_cleanup", "Review and consolidate my memories.", 43200),
    SimpleCronJob("health_check", "Run a system health check.", 3600),
]

print(" 📅 Registered Cron Jobs:")
print(" ┌──────────┬────────────────────┬──────────┬──────────────────┐")
print(" │ ID       │ Name               │ Interval │ Next Run         │")
print(" ├──────────┼────────────────────┼──────────┼──────────────────┤")
for job in jobs:
    interval_str = f"{job.interval // 3600}h" if job.interval >= 3600 else f"{job.interval}s"
    print(f" │ {job.id} │ {job.name:<18} │ {interval_str:>8} │ {job.next_run.strftime('%Y-%m-%d %H:%M')} │")
print(" └──────────┴────────────────────┴──────────┴──────────────────┘")
print(f"\n ⏰ Simulating cron trigger for '{jobs[2].name}'...")
cron_result = agent_loop_v2(jobs[2].message, verbose=True)
section("STEP 12 · Full Agent Pipeline – End-to-End Demo", "🎬")
info("""Now let's run a complex, multi-step task that exercises the full
nanobot pipeline: context building → tool use → memory → file I/O.
""")
print("═" * 60)
print(" DEMO 5: Complex multi-step research task")
print("═" * 60)
complex_result = agent_loop_v2(
    "I want you to help me with a small project:\n"
    "1. First, check the current time\n"
    "2. Write a short project plan to 'project_plan.txt' about building "
    "a personal AI assistant (3-4 bullet points)\n"
    "3. Remember that my current project is 'building a personal AI assistant'\n"
    "4. Read back the project plan file to confirm it was saved correctly\n"
    "Then summarize everything you did.",
    max_iterations=15
)
section("STEP 13 · Final Workspace Summary", "📊")
print(" 📁 Full workspace state after the tutorial:\n")
total_files = 0
total_bytes = 0
for item in sorted(WORKSPACE.rglob("*")):
    if item.is_file():
        rel = item.relative_to(WORKSPACE)
        size = item.stat().st_size
        total_files += 1
        total_bytes += size
        icon = {"md": "📄", "txt": "📝", "json": "💾"}.get(item.suffix.lstrip("."), "📎")
        print(f"   {icon} {rel} ({size:,} bytes)")
print("\n ── Summary ──")
print(f"   Total files: {total_files}")
print(f"   Total size: {total_bytes:,} bytes")
print(f"   Config: {config_path}")
print(f"   Workspace: {WORKSPACE}")
print("\n 🧠 Final Memory State:")
mem_content = (WORKSPACE / "memory" / "MEMORY.md").read_text()
print(" ┌────────────────────────────────────────────┐")
for line in mem_content.strip().split("\n"):
    print(f" │ {line}")
print(" └────────────────────────────────────────────┘")
section("COMPLETE · What's Next?", "🎓")
print(""" You have explored the core internals of nanobot! Here's what to try next:

 🔹 Run the real CLI agent:
      nanobot onboard && nanobot agent
 🔹 Connect to Telegram:
      Add a bot token to config.json and run `nanobot gateway`
 🔹 Enable web search:
      Add a Brave Search API key under tools.web.search.apiKey
 🔹 Try MCP integration:
      nanobot supports Model Context Protocol servers for external tools
 🔹 Explore the source (~4K lines):
      https://github.com/HKUDS/nanobot
 🔹 Key files to read:
      • agent/loop.py     – the agent iteration loop
      • agent/context.py  – prompt assembly pipeline
      • agent/memory.py   – persistent memory system
      • agent/tools/      – built-in tool implementations
      • agent/subagent.py – background task delegation
""")
We demonstrate the cron-style scheduling pattern by defining simple scheduled jobs, listing their intervals and next run times, and simulating the triggering of an automated agent task. We then run a larger end-to-end example that combines context building, tool use, memory updates, and file operations into a single multi-step workflow, so we can see the full pipeline working together in a realistic task. At the end, we inspect the final workspace state, review the saved memory, and close the tutorial with clear next steps that connect this notebook implementation to the real nanobot project and its source code.
In conclusion, we walked through every major layer of nanobot's architecture, from the iterative LLM-tool loop at its core to the session manager that gives our agent conversational memory across turns. We built five built-in tools, three custom tools, two skills, a session persistence layer, a subagent spawner, and a cron simulator, all while keeping everything in a single runnable script. What stands out is how nanobot proves that a production-grade agent framework doesn't need hundreds of thousands of lines of code; the patterns we implemented here, context assembly, tool dispatch, memory consolidation, and background task delegation, are the same patterns that power far larger systems, just stripped down to their essence. We now have a working mental model of agentic AI internals and a codebase small enough to read in a single sitting, which makes nanobot an ideal choice for anyone looking to build, customize, or research AI agents from the ground up.
Michal Sutter is a data science professional with a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michal excels at transforming complex datasets into actionable insights.
