# Providers
How OpenWalrus routes model requests — two API standards, flattened config, and provider management.
OpenWalrus supports multiple LLM providers through a unified `Model` trait. All providers are API-based — configure an endpoint, point the daemon at it, and go.
## Two API standards
All providers use one of two wire formats, selected by the `api_standard` field in config:
| Standard | Protocol | Used by |
|---|---|---|
| `openai` (default) | OpenAI chat completions API | OpenAI, DeepSeek, Grok, Qwen, Kimi, Ollama, and any compatible endpoint |
| `anthropic` | Anthropic Messages API | Claude |
If `api_standard` is omitted, OpenWalrus defaults to `openai`. If the `base_url` contains `"anthropic"`, the Anthropic standard is auto-detected.
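A minimal sketch of both defaulting rules, assuming a `base_url` field sits alongside `model` and `api_key` in a provider section (the field is named above, but its exact placement is an assumption):

```toml
# No api_standard and no "anthropic" in the URL: the openai standard applies.
# This pattern covers any OpenAI-compatible endpoint, such as a local Ollama server.
[model.local]
model = "llama3.1"
base_url = "http://localhost:11434/v1"

# base_url contains "anthropic", so the Anthropic standard is auto-detected
# even without an explicit api_standard field.
[model.claude]
model = "claude-sonnet-4-20250514"
api_key = "${ANTHROPIC_API_KEY}"
base_url = "https://api.anthropic.com/v1"
```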
## Provider configuration

Each provider is a `[model.<name>]` section in `walrus.toml`:
```toml
[model.deepseek]
model = "deepseek-chat"
api_key = "${DEEPSEEK_API_KEY}"
```

Configure multiple providers and switch between them:
```toml
[model.gpt-4o]
model = "gpt-4o"
api_key = "${OPENAI_API_KEY}"

[model.claude]
model = "claude-sonnet-4-20250514"
api_key = "${ANTHROPIC_API_KEY}"
api_standard = "anthropic"
```

Any model name is valid — the `api_standard` field (not the model name) determines which API protocol to use.
## Selecting the active model

Set the default model in the `[walrus]` section:
```toml
[walrus]
model = "claude"
```

Override per agent in `[agents.*]`:
```toml
[agents.researcher]
model = "deepseek"
```

## Provider manager
The `ProviderManager` holds all configured providers and routes requests by model name. It supports hot-reload — update the config and the active provider changes without restarting the daemon.
## What's next
- Remote providers — OpenAI, Claude, DeepSeek, and more
- Configuration — full config setup