Hello from CrabTalk
Introducing CrabTalk — a local-first autonomous AI agent runtime built in Rust.
Update (v0.0.7): Local LLM inference has been removed. CrabTalk now connects to remote providers (OpenAI, Claude, DeepSeek, Ollama), and memory and search are handled by external WHS services.
Why We Built CrabTalk
AI agents are powerful — but running them shouldn't cost hundreds of dollars a month, expose your credentials to the internet, or require a multi-service Docker stack just to get started.
We built CrabTalk because we wanted agents that run on your machine, with built-in LLM inference, zero cloud dependencies, and a single binary you can install in seconds.
What Makes It Different
- Unlimited tokens, zero cost — a built-in model registry auto-selects the best model and quantization for your hardware. No API bills, no token budgets, no metering: run agents 24/7 without worrying about a bill.
- Private by architecture — nothing is exposed to the internet. No API keys to leak, no credentials in plaintext, no telemetry.
- Single binary — download and run. No Docker, no gateway, no dependency hell. Cold start in under 10 ms; models load asynchronously.
- Batteries included — shell access, browser control, messaging channels, and persistent memory are built in. No marketplace, no supply-chain risk from unvetted third-party plugins.
- Built in Rust — fast, safe, and resource-efficient. One binary per platform, auto-detected.
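To make the "auto-selects the best model and quantization" idea concrete, here is a minimal Rust sketch of hardware-aware quantization selection. This is purely illustrative: the names (`Quant`, `pick_quant`), the RAM thresholds, and the selection rule are all hypothetical assumptions, not CrabTalk's actual registry logic.

```rust
// Illustrative sketch only — not CrabTalk's real model registry.
// All names and thresholds below are hypothetical.

#[derive(Debug, PartialEq)]
enum Quant {
    Q4,  // 4-bit: fits modest RAM, lowest quality
    Q8,  // 8-bit: better quality, more memory
    F16, // half precision: highest quality, most memory
}

/// Pick a quantization level from the machine's available RAM in GiB.
/// More memory headroom permits a higher-fidelity quantization.
fn pick_quant(ram_gib: u64) -> Quant {
    match ram_gib {
        0..=7 => Quant::Q4,
        8..=31 => Quant::Q8,
        _ => Quant::F16,
    }
}

fn main() {
    // A 16 GiB machine lands in the 8-bit tier under these made-up thresholds.
    println!("{:?}", pick_quant(16)); // prints "Q8"
}
```

In a real registry the same idea would extend to VRAM, CPU features, and a per-model memory footprint table, but the core decision is just this kind of capability-to-tier mapping.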
Want the full breakdown of the problems we're solving? Read Why we built CrabTalk.
Get Started
Install CrabTalk and run your first agent in under a minute:
curl -sSL https://crabtalk.ai/install | sh

Check out the documentation to learn more.