
Introducing Lobu


I started working on this last summer as a Slack bot called Peerbot. The idea: mention the bot in any channel, get a sandboxed Claude Code instance just for you. I spent months on the hard infrastructure — worker isolation, persistent volumes, a credential proxy so workers never touch real API keys.

By October I had a working product, but it was Slack-only. The core infrastructure wasn’t platform-specific at all — only the message handling was. So I refactored the platform layer into thin adapters and opened it up to Telegram and WhatsApp.

The biggest unlock came in February when I integrated OpenClaw. I’d been managing tool execution, session lifecycle, and process state manually — OpenClaw handles all of that. My gateway became purely about orchestration while OpenClaw handled the agent runtime.

Messaging platforms

  • Slack: Block Kit, interactive actions
  • Telegram: Mini App, inline buttons
  • WhatsApp: reply buttons, list menus
  • Discord: servers, DMs, markdown replies
  • Teams: channels, bots, enterprise workflows
  • Google Chat: Cards v2, Workspace spaces
  • Link users across platforms with single sign-on
  • Approval flows, rich cards, buttons, and more
Bring your own agent, equip it with Lobu Skills and Lobu Memory, and the Lobu control plane runs one isolated OpenClaw runtime per user.

Control plane:
  • Workers never see secrets
  • HTTP proxy with domain allowlist
  • MCP proxy with per-user OAuth
  • BYO provider keys (Anthropic etc.)

Sandboxes:
  • One sandbox per user and channel
  • Subprocess isolation with just-bash virtual filesystems
  • systemd-run hardening on Linux production hosts
  • No direct internet access (gateway proxy only)
  • Nix reproducible environments
  • OpenTelemetry for observability

What we solve on top of OpenClaw

OpenClaw is a great agent runtime. But running it for a team exposes real problems that the runtime itself doesn’t address.

Serverless execution. Stock OpenClaw runs as a long-lived process — you start it, it stays on, waiting for input. That’s fine on your laptop, but it doesn’t work for multi-tenant infrastructure: you’d need one always-on process per user, burning compute 24/7. Lobu runs OpenClaw as serverless workers that scale to zero when idle and wake on the next message. Persistent volumes keep session state across restarts, so the agent picks up right where it left off.
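The wake-on-message / scale-to-zero lifecycle can be sketched as a tiny state machine. This is a toy model with an injected clock; the class and method names are invented for illustration, and the real gateway presumably drives actual worker processes rather than a boolean flag:

```typescript
// Toy sketch of scale-to-zero scheduling. `IdleWorker` and its methods
// are hypothetical names, not Lobu's API.
class IdleWorker {
  running = false;
  private lastActivity: number;

  constructor(private idleMs: number, private now: () => number) {
    this.lastActivity = now();
  }

  // A message wakes the worker and resets its idle clock.
  onMessage(): void {
    this.running = true;
    this.lastActivity = this.now();
  }

  // A periodic sweep stops workers idle past the threshold; session
  // state would survive on the persistent volume.
  sweep(): void {
    if (this.running && this.now() - this.lastActivity >= this.idleMs) {
      this.running = false;
    }
  }
}
```

The key property: nothing runs between messages, yet the next message resumes the same session because state lives on the volume, not in the process.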

Credential isolation. OpenClaw needs API keys to talk to LLM providers. In a multi-tenant setup, you can’t just set environment variables — every user has their own keys, and a compromised agent shouldn’t be able to read them. Workers don’t receive real API keys, ever. The gateway generates placeholder tokens (lobu_secret_<uuid>) and passes those instead. The real credentials stay encrypted in Postgres. All outbound traffic flows through the gateway’s HTTP proxy, which swaps placeholders for real keys at request time. A compromised worker literally doesn’t have the secrets.
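The swap step might look something like this minimal sketch. The `Vault` map, header handling, and placeholder regex are illustrative assumptions, not Lobu's actual implementation:

```typescript
// Hypothetical sketch: the proxy rewrites placeholder tokens to real
// credentials at request time. Names here are assumptions.
type Vault = Map<string, string>; // placeholder token -> real key

const PLACEHOLDER = /^lobu_secret_[0-9a-f-]{36}$/;

function swapCredentials(
  headers: Record<string, string>,
  vault: Vault
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    // Strip an optional "Bearer " prefix, then check for a placeholder.
    const token = value.replace(/^Bearer\s+/, "");
    if (PLACEHOLDER.test(token)) {
      const real = vault.get(token);
      if (!real) throw new Error(`unknown placeholder in header ${name}`);
      // The real key appears only here, inside the gateway process.
      out[name] = value.startsWith("Bearer ") ? `Bearer ${real}` : real;
    } else {
      out[name] = value;
    }
  }
  return out;
}
```

Because the substitution happens inside the gateway, the worker's environment, logs, and memory only ever contain the placeholder.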

Network isolation. All outbound worker traffic goes through the gateway HTTP proxy and is denied by default; allowlists control which domains each worker can reach. On Linux production hosts, systemd-run adds kernel-level egress blocking so workers can only reach loopback.
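A deny-by-default allowlist check is conceptually simple. This sketch assumes an allowlist of exact hosts plus `*.example.com` wildcards; that syntax is an assumption for illustration, not Lobu's actual format:

```typescript
// Hypothetical deny-by-default domain check: a host is reachable only
// if some allowlist entry matches it.
function isAllowed(host: string, allowlist: string[]): boolean {
  return allowlist.some((entry) =>
    entry.startsWith("*.")
      ? // "*.x.com" matches the apex "x.com" and any subdomain of it.
        host === entry.slice(2) || host.endsWith(entry.slice(1))
      : host === entry
  );
}
```

An empty allowlist means every connection fails, which is the safe starting point for a fresh sandbox.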

Stability. OpenClaw agents can brick their own environment — install a bad package, corrupt the shell config, fill the disk. In a multi-tenant system that’s unacceptable. Each Lobu worker runs as its own subprocess with a persistent workspace, just-bash virtual filesystem isolation, and host-enforced resource limits where available. If an agent trashes its environment, it only affects that one user’s sandbox.

MCP proxy. OpenClaw supports MCP servers, but in a multi-tenant setup you need per-user authentication. Lobu’s gateway proxies MCP calls so each user authenticates once via OAuth, and the gateway injects their credentials transparently. Workers don’t manage MCP tokens — they just call the MCP endpoint and the gateway handles auth.
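The injection step can be sketched as a per-user header rewrite at the proxy. The token store and header name below are illustrative assumptions:

```typescript
// Hypothetical sketch of per-user credential injection for proxied MCP
// calls; `TokenStore` and the header shape are assumptions.
type TokenStore = Map<string, string>; // userId -> OAuth access token

function withUserAuth(
  userId: string,
  headers: Record<string, string>,
  tokens: TokenStore
): Record<string, string> {
  const token = tokens.get(userId);
  // No token means the user hasn't completed the one-time OAuth flow.
  if (!token) throw new Error(`user ${userId} has not completed OAuth`);
  // The worker sent no credentials; the gateway attaches them here.
  return { ...headers, Authorization: `Bearer ${token}` };
}
```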

Skills and Nix

Every agent is configurable through a settings page — providers, skills, MCP servers, Nix packages, and permissions. All without touching config files.

Skills are modular bundles of instructions, MCP servers, system packages, and network requirements. A skill declares what it needs: integrations, Nix packages, and domains to allowlist. Tool visibility and approval policy live separately in lobu.toml, which keeps the capability manifest distinct from security controls. Lobu ships a bundled starter skill with project and memory guidance that you can install with npx @lobu/cli@latest skills add lobu. Teams can still create project-owned local skills, and agents can request skill installation mid-conversation — the user gets a prefilled settings link, approves, and the agent resumes.
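To make the manifest idea concrete, here is one possible shape for what a skill declares, written as a TypeScript type. The field names and the sample skill are invented from the description above, not Lobu's published schema:

```typescript
// Illustrative manifest shape; field names are assumptions, not
// Lobu's actual schema. Approval policy lives in lobu.toml, not here.
interface SkillManifest {
  name: string;
  instructions: string;     // prompt guidance bundled with the skill
  mcpServers: string[];     // MCP endpoints the skill relies on
  nixPackages: string[];    // system packages to provision via Nix
  allowedDomains: string[]; // domains the gateway should allowlist
}

// A hypothetical skill for video work.
const videoSkill: SkillManifest = {
  name: "video-tools",
  instructions: "Use ffmpeg for transcoding; prefer streaming output.",
  mcpServers: [],
  nixPackages: ["ffmpeg"],
  allowedDomains: ["uploads.example.com"],
};
```

Keeping the capability manifest (what the skill needs) separate from lobu.toml (what the team permits) means installing a skill never silently widens the security policy.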

Nix is how we handle reproducible environments. Instead of baking every possible tool into a runtime image, users install what they need from the settings page — ffmpeg, python, curl, whatever. Nix gives us deterministic, conflict-free package management across sandboxes. It’s the same approach Replit uses for their development environments, and for the same reason: when you have many isolated environments, you need package management that’s reproducible and doesn’t break between runs.

One bot, everything in-app

The most important user flows still happen inside the same bot thread: messaging, auth handoffs, permission grants, and connection prompts. Teams can also manage agents from the admin UI when they want a broader control plane.

On Telegram, settings open as a native Mini App inside the chat. Authentication is handled by Telegram’s built-in signed payload — no tokens in URLs, no login screens. On Slack, the same settings page opens via Block Kit buttons with short-lived claim codes.

Both platforms share the same React settings page. When an agent needs a permission grant or a new integration mid-conversation, it posts the right UI natively — inline keyboard on Telegram, Block Kit button on Slack — back into the same thread. The user approves, and the agent continues.

Pricing

Pricing has evolved since launch. Today you can self-host the open-source stack for free, use managed cloud while it is free in beta, or work with us on an expert implementation. The latest details live on the pricing page.

Where this is going

The gateway is being rewritten with multi-tenancy as a first-class concern — usage tracking, audit logs, and a control plane so teams can manage agents without maintaining separate infrastructure per user. The end state: push your agent config and Lobu handles the rest.

Try it

Add to Slack — free, BYO keys, nothing to deploy. For self-hosting: see the getting started guide — Lobu boots as a single Node process; bring your own Postgres.