# OpenWalrus Documentation
Build and run autonomous AI agents with OpenWalrus — a Rust-powered composable agent runtime.
OpenWalrus is an open-source agent runtime that runs autonomous AI agents on your machine. Built in Rust, it connects to LLM providers and dispatches tools through composable extensions.
## How it works
OpenWalrus runs a background daemon that manages agents, routes events, and dispatches tool calls. You interact with agents through the CLI, or connect them to Telegram via the gateway service.
```
[Sources] ──────→ [Event Loop] ──────→ [Agents]
 Socket (UDS)     mpsc channel          Agent 1
 Socket (TCP)                           Agent 2
 Gateway                                ...
                                        Tool calls
```

Every event flows through a single `mpsc::unbounded` channel. The event loop dispatches each event as its own async task, so agents never block each other.
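The fan-out pattern above can be sketched in a few lines. This is an illustrative stand-in, not the walrus-daemon API: it uses std's synchronous `mpsc` channel and one thread per event where the real daemon uses an unbounded async channel and async tasks, and the `Event` variants and `dispatch` function are hypothetical.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical event type; the daemon's real events carry richer payloads.
#[derive(Debug)]
enum Event {
    Message(String),
    ToolCall(String),
}

// Handle one event in isolation. Because each event gets its own
// thread below, a slow handler never blocks the others.
fn dispatch(event: Event) -> String {
    match event {
        Event::Message(text) => format!("agent handled: {text}"),
        Event::ToolCall(name) => format!("tool dispatched: {name}"),
    }
}

fn main() {
    // Single channel: every source sends into the same queue.
    let (tx, rx) = mpsc::channel::<Event>();

    tx.send(Event::Message("hello".into())).unwrap();
    tx.send(Event::ToolCall("shell".into())).unwrap();
    drop(tx); // close the channel so the receive loop terminates

    // Event loop: receive in order, hand each event to its own worker.
    let handles: Vec<_> = rx
        .into_iter()
        .map(|event| thread::spawn(move || dispatch(event)))
        .collect();

    for handle in handles {
        println!("{}", handle.join().unwrap());
    }
}
```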
## Architecture
The runtime is composed of focused crates:
| Crate | Purpose |
|---|---|
| `walrus-core` | Agent execution, runtime, sessions, hooks, model trait |
| `walrus-model` | LLM providers (OpenAI, Claude, DeepSeek, Ollama) |
| `walrus-daemon` | Background service, tasks, permissions, event loop, service manager |
| `walrus-socket` | Unix domain socket transport |
| `walrus-tcp` | TCP transport |
| `walrus-cli` | CLI interface and REPL |
Each crate can be used independently. The daemon composes them all into a running system.
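For example, independent use might look like implementing a model provider against walrus-core's model trait without the rest of the runtime. The trait below is a hypothetical stand-in for illustration; the real trait in `walrus-core` has a different name and signature (it would be async, with streaming and tool-call support):

```rust
// Hypothetical stand-in for walrus-core's model trait. Only the shape
// matters here: providers are swappable behind a common interface.
trait Model {
    fn complete(&self, prompt: &str) -> String;
}

// A toy provider that echoes the prompt. A real provider would call
// an LLM API such as OpenAI or Ollama.
struct EchoModel;

impl Model for EchoModel {
    fn complete(&self, prompt: &str) -> String {
        format!("echo: {prompt}")
    }
}

fn main() {
    // Any code written against `dyn Model` accepts this provider.
    let model: Box<dyn Model> = Box::new(EchoModel);
    println!("{}", model.complete("hello"));
}
```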
## What you can do
- Connect to model providers — OpenAI, Claude, DeepSeek, Ollama, and more
- Give agents access to your filesystem and shell
- Extend agents with skills and MCP servers
- Deploy bots on Telegram via the gateway service
- Add search and custom capabilities through extensions
- Delegate work across agents with multi-agent tasks