OpenWalrus

Configuration Reference

Complete walrus.toml reference — daemon config, models, services, MCP servers, agents, tasks, and permissions.

The OpenWalrus daemon is configured via ~/.openwalrus/walrus.toml. This page documents every available field.

Full example

```toml
# Daemon agent config
[walrus]
model = "claude"
thinking = false

# Embedding model (optional)
[model]
embedding = "text-embedding-3-small"

# Provider definitions
[model.deepseek]
model = "deepseek-chat"
api_key = "${DEEPSEEK_API_KEY}"

[model.claude]
model = "claude-opus-4-6"
api_key = "${ANTHROPIC_API_KEY}"
api_standard = "anthropic"

[model.gpt-4o]
model = "gpt-4o"
api_key = "${OPENAI_API_KEY}"

# Extensions
[services.search]
kind = "extension"
crate = "walrus-search"
enabled = true

[services.telegram]
kind = "gateway"
crate = "walrus-telegram"
restart = "on_failure"
config = { telegram = { token = "${TELEGRAM_BOT_TOKEN}" } }

# MCP servers
[mcps.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/home/user/docs"]

[mcps.github]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
env = { GITHUB_TOKEN = "${GITHUB_TOKEN}" }

# Sub-agents
[agents.researcher]
description = "Research specialist"
model = "claude"
max_iterations = 32
thinking = true
members = ["walrus"]
skills = ["web_search", "file_analysis"]
mcps = ["github"]

# Task concurrency
[tasks]
max_concurrent = 4
viewable_window = 16
task_timeout = 300

# Tool permissions
[permissions]
bash = "ask"
write = "deny"

[permissions.researcher]
bash = "allow"
write = "allow"
```

walrus

The daemon's own agent configuration:

| Field | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | string | Default model name (must match a `[model.*]` key) | |
| `thinking` | bool | Enable reasoning mode | `false` |
| `heartbeat.interval` | u64 | Minutes between auto-runs (`0` = disabled) | `0` |
| `heartbeat.prompt` | string | Message sent on each tick | |
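For example, a daemon agent that wakes itself every 30 minutes might look like this (the interval and prompt values here are illustrative, using TOML dotted keys for the `heartbeat.*` fields):

```toml
[walrus]
model = "claude"
thinking = false
heartbeat.interval = 30                       # minutes between auto-runs
heartbeat.prompt = "Check for pending follow-ups."
```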

model

model (top-level)

| Field | Type | Description |
| --- | --- | --- |
| `embedding` | string? | Default embedding model name |

model.<name>

Each remote provider is a [model.<name>] section:

| Field | Type | Description |
| --- | --- | --- |
| `model` | string | Model identifier sent to the API |
| `api_key` | string? | API key (supports `${ENV_VAR}` syntax) |
| `api_standard` | string? | `"openai"` (default) or `"anthropic"` |
| `base_url` | string? | Custom API endpoint |

See providers for API standard details and auto-detection.
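As a sketch of `base_url` in use, a locally hosted OpenAI-compatible endpoint could be registered as its own provider (the model name and port below are illustrative, not defaults):

```toml
[model.local]
model = "llama-3.1-8b-instruct"
base_url = "http://localhost:8080/v1"
api_standard = "openai"
```

Since no `api_key` is set, none is sent; agents can then select this provider with `model = "local"`.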

services.<name>

Managed child processes for extensions:

| Field | Type | Description | Default |
| --- | --- | --- | --- |
| `kind` | string | `"extension"` or `"gateway"` | `"extension"` |
| `crate` | string | Cargo package name (used as binary name) | required |
| `restart` | string | `"never"`, `"on_failure"`, or `"always"` | `"never"` |
| `enabled` | bool | Whether to start this extension | `true` |
| `env` | map? | Environment variables for the child process | `{}` |
| `config` | table? | Extension-specific configuration (passed as JSON) | `{}` |
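The inline `config` table from the full example can equivalently be written as nested TOML sections, which reads better once an extension's configuration grows beyond one key (this is standard TOML, not an OpenWalrus-specific form):

```toml
[services.telegram]
kind = "gateway"
crate = "walrus-telegram"
restart = "on_failure"

[services.telegram.config.telegram]
token = "${TELEGRAM_BOT_TOKEN}"
```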

mcps.<name>

| Field | Type | Description | Default |
| --- | --- | --- | --- |
| `command` | string | Command to start the MCP server | required |
| `args` | string[] | Command arguments | `[]` |
| `env` | map? | Environment variables for the server process | `{}` |

agents.<name>

Sub-agent definitions (can also use Markdown files in ~/.openwalrus/agents/):

| Field | Type | Description | Default |
| --- | --- | --- | --- |
| `description` | string | Human-readable description | |
| `model` | string? | Model name | walrus default |
| `max_iterations` | usize | Maximum tool-use rounds | `16` |
| `thinking` | bool | Enable reasoning mode | `false` |
| `heartbeat.interval` | u64 | Minutes between ticks (`0` = disabled) | `0` |
| `heartbeat.prompt` | string | Message sent on each tick | |
| `members` | string[] | Agents this agent can delegate to | `[]` |
| `skills` | string[] | Allowed skills (empty = all) | `[]` |
| `mcps` | string[] | Allowed MCP servers (empty = all) | `[]` |

See agents for full details.

tasks

Configure how delegated agent tasks run:

| Field | Type | Description | Default |
| --- | --- | --- | --- |
| `task_timeout` | u64 | Per-task timeout in seconds | `300` |

The full example above also sets `max_concurrent` and `viewable_window` under `[tasks]`.

permissions

Control tool access per tool and per agent:

```toml
# Global defaults
[permissions]
bash = "ask"      # Block and prompt user
write = "deny"    # Reject outright

# Per-agent overrides
[permissions.researcher]
bash = "allow"
read = "allow"
```

Three levels: `allow` (default), `ask` (block the task for user approval), `deny` (reject with an error).

Resolution order: agent override > global default > `allow`.
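The resolution order can be sketched as a small lookup function. This is a hypothetical illustration of the rule above, not the daemon's actual code; the function and dictionary names are invented for the example:

```python
DEFAULT = "allow"

def resolve(tool: str, agent: str, permissions: dict) -> str:
    """Return "allow", "ask", or "deny" for a tool call by an agent."""
    agent_overrides = permissions.get(agent, {})
    if tool in agent_overrides:
        return agent_overrides[tool]          # 1. per-agent override wins
    global_defaults = permissions.get("global", {})
    if tool in global_defaults:
        return global_defaults[tool]          # 2. then the global default
    return DEFAULT                            # 3. otherwise implicitly allow

perms = {
    "global": {"bash": "ask", "write": "deny"},
    "researcher": {"bash": "allow", "write": "allow"},
}

resolve("bash", "researcher", perms)   # "allow" (agent override)
resolve("write", "coder", perms)       # "deny"  (global default)
resolve("read", "coder", perms)        # "allow" (implicit default)
```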

Environment variables

API keys and tokens support ${ENV_VAR} syntax. The daemon resolves these from your shell environment at startup.
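The substitution behaves like ordinary shell-style variable expansion. A minimal sketch of that behavior, assuming unset variables expand to an empty string (the daemon's real implementation may instead report an error):

```python
import os
import re

# Matches ${NAME} where NAME is a typical environment-variable identifier.
_VAR = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def expand(value: str) -> str:
    """Replace each ${NAME} in value with the environment variable NAME."""
    return _VAR.sub(lambda m: os.environ.get(m.group(1), ""), value)

os.environ["DEEPSEEK_API_KEY"] = "sk-demo"
expand("${DEEPSEEK_API_KEY}")   # "sk-demo"
```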
