A technical design document for a Clawdbot-like system native to Urbit
Last updated: 2026-01-23
- Overview
- Clawdbot Architecture Reference
- Urbit Native Architecture
- Core Components
- Hoon Agent Structure
- LLM Integration Patterns
- Chat Integration
- Tool System
- Memory & Persistence
- Identity & Trust
- Implementation Roadmap
- Open Questions
This document outlines how to build a personal AI assistant that runs natively on Urbit, inspired by Clawdbot's architecture but adapted to leverage Urbit's unique properties:
- Sovereign identity (@p)
- Persistent state (Gall agents)
- Native networking (Ames)
- Immutable filesystem (%clay)
- Built-in chat (%chat / Groups)
The goal is an AI that lives on your ship, maintains context across conversations, can take actions on your behalf, and doesn't depend on corporate infrastructure beyond the LLM API calls themselves.
Understanding what we're mapping from:
- Long-running process owning all messaging connections
- WebSocket API for control plane (CLI, apps, automations)
- Single port multiplexes WS + HTTP (default 18789)
- Manages agent runs, sessions, cron jobs, health checks
The core AI interaction cycle:
Message In → Context Assembly → LLM Inference → Tool Execution → Stream Reply → Persist
Key properties:
- Serialized per-session (prevents races)
- Streaming responses via WebSocket events
- Tool execution with sandboxing options
- Automatic compaction when context exceeds limits
Modular adapters for messaging platforms:
- WhatsApp (Baileys), Telegram (grammY), Slack, Discord, Signal, iMessage, Tlon
- Each handles: inbound events → agent, agent replies → outbound
- Registered via plugin manifest with capabilities declared
- Conversation history per chat/user
- Workspace files (AGENTS.md, SCRATCHPAD.md, HEARTBEAT.md)
- Daily memory logs
- Compaction summarizes old context when hitting token limits
- SKILL.md files with instructions + scripts
- Dynamically loaded based on task relevance
- Can include shell scripts, reference docs, examples
- Persistent job definitions
- System events trigger agent runs
- Used for reminders, periodic checks, heartbeats
| Clawdbot | Urbit Equivalent |
|---|---|
| Gateway daemon | Gall agent (always-on, persistent) |
| WebSocket API | Airlock / pokes / scries |
| Channel plugins | Direct %chat / Groups integration |
| Session state | Agent state (persistent across restarts) |
| Workspace files | %clay filesystem |
| Identity/auth | @p (cryptographic, built-in) |
| Cron/timers | Behn timers |
| Tool execution | Mix of native + sidecar |
- Identity is solved: @p is your identity. No OAuth, no API keys for identity. The AI assistant is a ship (or runs on one).
- Persistence is native: Gall agent state survives restarts. No external database needed.
- Networking is built-in: Ames handles ship-to-ship communication. Your AI can message other ships natively.
- Chat is native: %chat and Groups are first-class. No need for external bridges.
- Trust model is clear: You control your ship. The AI runs in your context with your permissions.
- HTTP is async: Calling external APIs requires %iris and handling responses in ++on-arvo. No synchronous fetch.
- No shell access: Urbit can't exec shell commands. A sidecar is needed for file-system operations outside %clay.
- Hoon learning curve: Building in Hoon vs. TypeScript/JavaScript.
- Streaming responses: Subscription-based updates are needed rather than WebSocket streams (see the sketch after this list).
- urWASM is new: Local model inference via WASM is possible but immature.
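One way to approximate streaming: push partial or final output to subscribers as Gall facts on the /responses path that the agent's ++on-watch opens below. A minimal sketch, assuming a hypothetical %ai-update mark:
:: Push a chunk of assistant output to everyone watching /responses.
:: The %ai-update mark and the [text done] shape are assumptions.
++  give-response
  |=  [text=@t done=?]
  ^-  card
  [%give %fact ~[/responses] %ai-update !>([text done])]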
The central coordinator that:
- Receives chat messages
- Manages conversation state
- Calls LLM APIs
- Executes tools
- Sends replies
Handles async LLM API calls:
- Builds request payloads
- Manages API keys
- Parses responses
- Handles errors/retries
Integrates with Urbit's chat:
- Subscribes to specified channels/DMs
- Filters messages (mentions, DMs, all)
- Routes to main agent
Modular tool implementations:
- %clay file operations
- HTTP fetch (%iris)
- Timer scheduling (Behn)
- Ship-to-ship messaging
Handles persistence:
- Conversation history
- Long-term memory
- Configuration
- Skills/prompts
:: /app/ai-assistant.hoon
::
/- *ai-assistant
/+ default-agent, dbug, ai-lib
::
|%
+$ versioned-state
$% [%0 state-0]
==
+$ state-0
$: config=assistant-config
conversations=(map @p conversation)
pending-requests=(map @ta pending-request)
memory=assistant-memory
==
+$ assistant-config
$: api-key=@t
model=@t
system-prompt=@t
allowed-ships=(set @p)
==
+$ conversation
$: history=(list message)
created=@da
updated=@da
==
+$ message
$: role=?(%user %assistant %system)
content=@t
timestamp=@da
==
+$ pending-request
$: ship=@p
prompt=@t
started=@da
==
+$ assistant-memory
$: facts=(list @t)
preferences=(map @t @t)
==
--
::
=| state-0
=* state -
::
%- agent:dbug
^- agent:gall
|_ =bowl:gall
+* this .
def ~(. (default-agent this %.n) bowl)
::
++ on-init
^- (quip card _this)
~& > "%ai-assistant initialized"
[~ this]
::
++ on-save
^- vase
!>(state)
::
++ on-load
|= old-vase=vase
^- (quip card _this)
[~ this(state !<(state-0 old-vase))]
::
++ on-poke
|= [=mark =vase]
^- (quip card _this)
?+ mark (on-poke:def mark vase)
:: Handle chat message
%ai-message
=/ msg !<(incoming-message vase)
(handle-message msg)
:: Handle configuration
%ai-config
=/ cfg !<(assistant-config vase)
[~ this(config.state cfg)]
:: Handle tool results
%ai-tool-result
=/ result !<(tool-result vase)
(handle-tool-result result)
==
::
++ on-watch
|= =path
^- (quip card _this)
?+ path (on-watch:def path)
[%responses ~]
:: Allow subscriptions for streaming responses
[~ this]
==
::
++ on-arvo
|= [=wire =sign-arvo]
^- (quip card _this)
?+ wire (on-arvo:def wire sign-arvo)
:: Handle HTTP response from LLM API
[%llm-request @ ~]
=/ request-id i.t.wire
?+ sign-arvo (on-arvo:def wire sign-arvo)
[%iris %http-response *]
(handle-llm-response request-id http-response.sign-arvo)
==
:: Handle timer fires
[%reminder @ ~]
=/ reminder-id i.t.wire
(handle-reminder reminder-id)
==
::
++ on-agent
|= [=wire =sign:agent:gall]
^- (quip card _this)
?+ wire (on-agent:def wire sign)
:: Handle chat agent responses
[%chat @ ~]
?+ sign (on-agent:def wire sign)
[%fact *]
(handle-chat-update !<(chat-update q.cage.sign))
==
==
::
++ on-peek on-peek:def
++ on-leave on-leave:def
++ on-fail on-fail:def
--
:: Handle incoming message
++ handle-message
|= msg=incoming-message
^- (quip card _this)
:: Build conversation context
=/ conv (~(gut by conversations.state) ship.msg *conversation)
=/ new-history (snoc history.conv [%user content.msg now.bowl])
:: Build LLM request
=/ request-id (scot %uv (sham eny.bowl))
=/ llm-request (build-llm-request new-history system-prompt.config.state)
:: Send HTTP request to LLM API
=/ cards
:~ (send-llm-request request-id llm-request)
==
:: Update state
=/ new-conv conv(history new-history, updated now.bowl)
=/ new-conversations (~(put by conversations.state) ship.msg new-conv)
=/ new-pending (~(put by pending-requests.state) request-id [ship.msg content.msg now.bowl])
[cards this(conversations.state new-conversations, pending-requests.state new-pending)]
::
:: Build HTTP request to LLM API
++ send-llm-request
|= [request-id=@ta payload=json]
^- card
=/ url "https://api.anthropic.com/v1/messages"
=/ headers
:~ ['x-api-key' api-key.config.state]
['content-type' 'application/json']
['anthropic-version' '2023-06-01']
==
=/ body (as-octt:mimes:html (en:json:html payload))
:* %pass
/llm-request/[request-id]
%arvo %i
%request
[%'POST' url headers `body]
*outbound-config:iris
==
::
:: Handle LLM response
++ handle-llm-response
|= [request-id=@ta response=http-response]
^- (quip card _this)
:: Parse response
=/ pending (~(got by pending-requests.state) request-id)
=/ body ?~(body.response ~ (trip q.u.body.response))
=/ parsed (de:json:html body)
:: Extract assistant message
=/ assistant-text (extract-content parsed)
:: Update conversation
=/ conv (~(got by conversations.state) ship.pending)
=/ new-history (snoc history.conv [%assistant assistant-text now.bowl])
=/ new-conv conv(history new-history, updated now.bowl)
:: Send reply to chat
=/ reply-card (send-chat-message ship.pending assistant-text)
:: Clean up
=/ new-pending (~(del by pending-requests.state) request-id)
=/ new-conversations (~(put by conversations.state) ship.pending new-conv)
[~[reply-card] this(conversations.state new-conversations, pending-requests.state new-pending)]
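++extract-content is assumed above. A minimal sketch that walks the parsed JSON by hand, assuming an Anthropic-style response shape of {"content": [{"type": "text", "text": "..."}]}:
:: Pull the first text block out of a parsed Anthropic-style response.
:: Any malformed case falls through to the empty cord.
++  extract-content
  |=  parsed=(unit json)
  ^-  @t
  ?~  parsed  ''
  ?.  ?=([%o *] u.parsed)  ''
  =/  content  (~(get by p.u.parsed) 'content')
  ?~  content  ''
  ?.  ?=([%a *] u.content)  ''
  ?~  p.u.content  ''
  =/  first  i.p.u.content
  ?.  ?=([%o *] first)  ''
  =/  text  (~(get by p.first) 'text')
  ?~  text  ''
  ?.  ?=([%s *] u.text)  ''
  p.u.text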
Ship → %iris HTTP POST → api.anthropic.com → Response → Parse → Reply
Pros:
- Simple, well-understood
- Works with any LLM provider
- Full model capabilities
Cons:
- Requires external API key
- Latency from HTTP round-trip
- Data leaves your ship
Planet → Poke → Star (running inference) → Response → Planet
The star operator runs inference hardware and offers it to their planets:
:: On planet: request inference from star
++ request-inference
|= [star=@p prompt=@t]
^- card
:* %pass /inference %agent [star %inference-provider]
%poke %inference-request !>([prompt model-params])
==
Pros:
- Data stays in Urbit network
- Star can batch/optimize
- Economic model (planets pay stars)
Cons:
- Requires star infrastructure
- Limited model selection
- Dependency on star uptime
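On the star side, the %inference-provider agent would accept that poke, run or proxy the model, and poke the result back to the planet. A rough sketch of the receiving side, with the run-model helper and the %inference-result mark assumed:
:: On the star: handle an %inference-request poke from a planet and poke
:: the result back. run-model (the actual inference) and %inference-result
:: are assumptions, not existing code.
++  handle-inference-request
  |=  [=bowl:gall prompt=@t]
  ^-  card
  =/  result=@t  (run-model prompt)
  :*  %pass  /inference-result  %agent  [src.bowl %ai-assistant]
      %poke  %inference-result  !>([prompt result])
  ==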
Ship → urWASM runtime → Local model → Response
Using compiled WASM models:
:: Hypothetical urWASM inference call
++ local-inference
|= [model=@t prompt=@t]
^- @t
=/ wasm-module .^(@t %cx /===/models/[model]/wasm)
(run-wasm wasm-module prompt)
Pros:
- Fully local, no external calls
- Maximum sovereignty
- No API costs
Cons:
- Limited to smaller models
- Requires significant compute
- urWASM still maturing
Start with Pattern 1 (direct API) for capability, design for Pattern 2/3 migration:
+$ inference-config
$% [%api url=@t key=@t model=@t]
[%star star=@p model=@t]
[%local model=@t]
==
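A single dispatch arm can then pick the backend at call time, so the rest of the agent doesn't care which pattern is configured. A minimal sketch reusing the helpers above; the %local branch stays empty until urWASM-backed inference exists:
:: Route an inference request to the configured backend.
:: send-llm-request and request-inference are sketched earlier;
:: the %local case is a placeholder for urWASM inference.
++  dispatch-inference
  |=  [cfg=inference-config request-id=@ta prompt=@t payload=json]
  ^-  (list card)
  ?-  -.cfg
    %api    ~[(send-llm-request request-id payload)]
    %star   ~[(request-inference star.cfg prompt)]
    %local  ~
  ==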
:: Subscribe to a chat channel on init or config
++ subscribe-to-chat
|= [host=@p name=@t]
^- card
:* %pass /chat/[name] %agent [host %chat]
%watch /mailbox/[name]
==
::
:: Handle incoming chat updates
++ handle-chat-update
|= upd=chat-update
^- (quip card _this)
?+ -.upd [~ this]
%message
:: Check if we should respond (mentions, DM, allowlist)
?. (should-respond message.upd)
[~ this]
:: Route to main message handler
(handle-message [author.message.upd content.message.upd])
==
::
:: Determine if we should respond to a message
++ should-respond
|= msg=chat-message
^- ?
?| :: Always respond to DMs
(is-dm msg)
:: Respond to mentions of our ship
(has-mention our.bowl content.msg)
:: Respond if from allowed ship
(~(has in allowed-ships.config.state) author.msg)
==
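++should-respond assumes a couple of small helpers. A sketch of ++has-mention, treating a mention as a plain-text occurrence of the ship name (a real integration would use the channel's structured mention data); ++is-dm is omitted since it depends on how the chat integration distinguishes DMs from group posts:
:: Sketch of the mention check assumed by ++should-respond: a plain
:: substring search for the ship's printed name (e.g. "~sampel-palnet").
++  has-mention
  |=  [who=@p content=@t]
  ^-  ?
  ?=(^ (find (scow %p who) (trip content)))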
:: Send a message to a chat channel
++ send-chat-message
|= [to=@p content=@t]
^- card
=/ memo
:* ~
our.bowl
now.bowl
:~ [%inline [%text content] ~]
==
==
:* %pass /chat-send %agent [to %chat]
%poke %chat-action !>([%post channel-id memo])
==
| Tool | Implementation | Notes |
|---|---|---|
| Read file | %clay scry | .^(@t %cx /path) |
| Write file | %clay poke | Requires desk permissions |
| HTTP fetch | %iris | Async, handle in ++on-arvo |
| Set timer | %behn | For reminders/cron |
| Send message | %chat poke | To any ship |
| Query state | Scry | Any exposed Gall state |
:: Tool definitions
+$ tool
$% [%read-file path=@t]
[%write-file path=@t content=@t]
[%http-fetch url=@t]
[%set-timer at=@da message=@t]
[%send-message to=@p content=@t]
==
::
:: Execute a tool
++ execute-tool
|= =tool
^- (quip card tool-result)
?- -.tool
%read-file
=/ content .^(@t %cx (parse-path path.tool))
[~ [%ok content]]
::
%write-file
=/ card (write-clay path.tool content.tool)
[~[card] [%pending %write]]
::
%http-fetch
=/ card (fetch-url url.tool)
[~[card] [%pending %fetch]]
::
%set-timer
=/ card (set-behn-timer at.tool message.tool)
[~[card] [%ok "Timer set"]]
::
%send-message
=/ card (send-chat-message to.tool content.tool)
[~[card] [%ok "Message sent"]]
==
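Of the helpers above, ++set-behn-timer is the only one that talks to a vane not shown yet. A minimal sketch; the reminder text is assumed to be stashed in agent state under the same id that goes into the wire, so ++on-arvo can recover it when the timer fires:
:: Ask Behn to wake the agent at the given time. Behn only echoes the wire
:: back on %wake, so the message is assumed to be stored in state keyed by
:: the wire segment.
++  set-behn-timer
  |=  [at=@da message=@t]
  ^-  card
  [%pass /reminder/(scot %da at) %arvo %b %wait at]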
For capabilities Urbit can't provide natively (shell exec, browser, etc.), run a sidecar:
┌─────────────────┐         ┌─────────────────┐
│   Urbit Ship    │  HTTP   │     Sidecar     │
│  %ai-assistant  │◄───────►│   (Node/Rust)   │
│                 │         │                 │
│  - Core logic   │         │  - Shell exec   │
│  - State        │         │  - Browser      │
│  - Chat         │         │  - File system  │
└─────────────────┘         └─────────────────┘
The sidecar exposes an HTTP API on localhost that the Urbit agent can call via %iris.
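A minimal sketch of that call path, assuming a hypothetical sidecar listening on localhost port 8090 with a /run route that takes a plain-text command body; the reply comes back through ++on-arvo like any other %iris response:
:: Call a hypothetical sidecar at localhost:8090. The /run route and the
:: plain-text body are assumptions about the sidecar's API.
++  call-sidecar
  |=  [request-id=@ta command=@t]
  ^-  card
  =/  url  'http://localhost:8090/run'
  =/  headers  ~[['content-type' 'text/plain']]
  =/  body  (as-octs:mimes:html command)
  :*  %pass  /sidecar/[request-id]
      %arvo  %i  %request
      [%'POST' url headers `body]
      *outbound-config:iris
  ==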
Stored in agent state, automatically persisted:
+$ conversation
$: history=(list message)
summary=(unit @t) :: Compacted summary
created=@da
updated=@da
token-count=@ud :: Track for compaction
==
Store durable facts in %clay:
/app/ai-assistant/memory/
facts.txt :: Key facts about user
preferences.txt :: User preferences
daily/
2026-01-23.txt :: Daily log
2026-01-22.txt
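Reading these back into a prompt is a %clay scry. A minimal sketch, assuming the files live on a desk named %ai-assistant and use the %txt mark (a wain, i.e. a list of cords):
:: Read the long-term facts file from %clay. The desk name and %txt mark
:: are assumptions; this crashes if the file is missing, so real code
:: would check existence with a %cu scry first.
++  read-facts
  |=  =bowl:gall
  ^-  wain
  =/  pax
    /(scot %p our.bowl)/ai-assistant/(scot %da now.bowl)/memory/facts/txt
  .^(wain %cx pax)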
When conversation exceeds token limit:
++ maybe-compact
|= conv=conversation
^- conversation
?. (gth token-count.conv max-tokens)
conv
:: Summarize old messages, keep recent
=/ old-messages (scag (sub (lent history.conv) keep-recent) history.conv)
=/ recent (slag (sub (lent history.conv) keep-recent) history.conv)
=/ summary (summarize-messages old-messages)
conv(history recent, summary `summary, token-count (count-tokens recent))
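++count-tokens and ++summarize-messages are assumed above. Exact token counts aren't available on-ship, so a rough heuristic is probably enough to decide when to compact; the sketch below assumes roughly four characters per token:
:: Rough token estimate for compaction, assuming ~4 characters per token.
++  count-tokens
  |=  history=(list message)
  ^-  @ud
  %+  roll  history
  |=  [msg=message acc=@ud]
  (add acc (div (met 3 content.msg) 4))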
- Cryptographic identity: @p is tied to a key pair
- Reputation possible: Track behavior over time per-@p
- Network effects: Ships can attest to other ships
+$ trust-config
$: :: Who can message the AI
allowed-ships=(set @p)
:: Or allow anyone
open-access=?
:: DM-only mode
dm-only=?
:: Require mention in groups
require-mention=?
==
The AI could run as its own ship (@p), building reputation:
- Other ships interact with it as a peer
- Identity ownership and key history are on-chain (Azimuth)
- Can hold assets, make transactions
- Identity persists across infrastructure changes
Goal: Basic chat → AI → reply loop
- Gall agent skeleton
- HTTP client for Anthropic API
- Single-channel chat integration
- Basic conversation state
- Configuration via pokes
Deliverable: Ship that responds to DMs with AI-generated replies
Goal: Useful capabilities
- %clay read/write tools
- HTTP fetch tool
- Behn timer integration (reminders)
- Conversation persistence
- Daily memory logs
- Compaction
Deliverable: AI that can read/write files, set reminders, remember context
Goal: Production-ready
- Multiple chat channel support
- Mention detection
- Allowlist/blocklist
- Error handling & retries
- Configuration UI (via Landscape)
- System prompt customization
Deliverable: Fully functional personal AI assistant
Goal: Sovereignty & federation
- Federated inference via stars
- urWASM local model support
- Sidecar integration for extended tools
- Multi-agent coordination
- Skill/plugin system
- Model selection: Which models work best with Urbit's async HTTP pattern? Streaming vs. full response?
- Token management: How to handle API keys securely in Urbit? Encrypted in state?
- Compaction prompts: Which model generates summaries during compaction? The same model? A cheaper one?
- Star economics: What's the pricing model for federated inference? Per-token? Subscription?
- urWASM readiness: What is the timeline for urWASM to support meaningful model sizes?
- Groups vs. %chat: Which chat system to target? Both? An abstraction layer?
- Tool sandboxing: How to limit what the AI can do? A permission system?
- Multi-ship coordination: Can AIs on different ships collaborate? What protocol?
- Urbit Developers Documentation
- Gall Guide
- Sovereign Intelligence Blog Post
- ~niblyx-malnus MCP Server
- Clawdbot Documentation
- urWASM Discussion
- Set up a development ship (fakezod)
- Create basic Gall agent structure
- Test %iris HTTP calls to Anthropic
- Integrate with %chat for message flow
- Iterate from there
Document maintained by ~tolber-nocneb (Nimbus)