Self-Host OpenClaw: Personal AI Assistant 2026
TL;DR
OpenClaw is the personal AI assistant that went from 9,000 to 247,000+ GitHub stars in under six months, one of the fastest growth curves any open-source project has recorded. It runs on your own hardware, connects to messaging apps you already use (WhatsApp, Telegram, Slack, Discord, iMessage), and routes your queries to any LLM provider you choose. This guide covers the full Docker Compose self-hosting setup, from zero to a running AI agent in under 30 minutes.
Key Takeaways
- 247K+ GitHub stars — fastest-growing OSS project in history, surpassing even Docker and Node.js at comparable stages
- Self-hosted, no subscription — bring your own API key (Anthropic, OpenAI, Ollama, or any OpenAI-compatible endpoint)
- 50+ messaging integrations — WhatsApp, Telegram, Slack, Discord, Signal, iMessage, Teams, Matrix, LINE, and more
- Sandbox mode — tool executions run in isolated sub-containers, keeping your host secure
- Minimum 2 GB RAM, 1 vCPU — runs on any $5/month VPS
- Skill ecosystem — install community skills to extend OpenClaw beyond built-in capabilities
Why OpenClaw Is Different
Most AI assistants live in a browser tab. OpenClaw lives where you already communicate. The core concept: a self-hosted runtime that connects your AI provider of choice to your messaging apps as a first-class integration — not a webhook hack.
The result: you can message your own AI from WhatsApp at 11pm, ask it to check your calendar, draft a reply to an email, and run a web search — all without leaving your phone or giving any third party access to your data.
The explosion from 9,000 to 247,000 stars (January–March 2026) happened because OpenClaw solved the gap between "I want an AI like Claude at work" and "I don't want another SaaS subscription or cloud service that owns my data." The project even has an official AWS Lightsail AMI and DigitalOcean 1-Click install, which signals serious infrastructure investment for an OSS project.
Prerequisites
- A Linux VPS (Ubuntu 22.04+ recommended) or Docker Desktop on macOS/Windows
- Docker Engine + Docker Compose v2
- Minimum: 2 GB RAM, 1 vCPU, 5 GB disk (4 GB RAM recommended for production with multiple integrations)
- An API key for at least one LLM provider (Anthropic, OpenAI, or a local Ollama instance)
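If Docker isn't installed yet, Docker's official convenience script is the fastest route on a fresh Ubuntu VPS (review the script before running it, as with anything fetched from the network):

# Download and run Docker's install script, then verify Engine and Compose v2
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
docker --version
docker compose version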
Step 1: Directory Setup
Create a dedicated directory and pull the official Docker Compose file:
mkdir openclaw && cd openclaw
curl -fsSL https://docs.openclaw.ai/install/docker-compose.yml -o docker-compose.yml
Or create the Compose file manually:
# docker-compose.yml
version: "3.9"
services:
  openclaw:
    image: openclaw/openclaw:latest
    restart: unless-stopped
    ports:
      - "3100:3100" # Web UI + API gateway
    volumes:
      - ${OPENCLAW_CONFIG_DIR:-./config}:/home/node/.openclaw
      - ${OPENCLAW_WORKSPACE_DIR:-./workspace}:/home/node/.openclaw/workspace
    environment:
      - NODE_ENV=production
      - SETUP_PASSWORD=${SETUP_PASSWORD}
      - OPENCLAW_GATEWAY_TOKEN=${OPENCLAW_GATEWAY_TOKEN}
      - OPENCLAW_SANDBOX=${OPENCLAW_SANDBOX:-1}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      # OR: OPENAI_API_KEY=${OPENAI_API_KEY}
      # OR: OLLAMA_BASE_URL=http://host.docker.internal:11434
    extra_hosts:
      - "host.docker.internal:host-gateway" # Required on Linux for Ollama
Step 2: Environment Configuration
Create your .env file:
# .env — never commit this to git
SETUP_PASSWORD=your-long-random-setup-password-here
OPENCLAW_GATEWAY_TOKEN=your-long-random-gateway-token-here
OPENCLAW_SANDBOX=1
# Choose ONE LLM provider:
ANTHROPIC_API_KEY=sk-ant-...
# OPENAI_API_KEY=sk-...
# Optional: local Ollama
# OLLAMA_BASE_URL=http://host.docker.internal:11434
Security notes:
- `SETUP_PASSWORD` is used for the initial web UI setup; use a strong random password (32+ chars)
- `OPENCLAW_GATEWAY_TOKEN` authenticates API requests; treat it like an API key
- `OPENCLAW_SANDBOX=1` is strongly recommended; it runs tool executions in isolated sub-containers, preventing any tool from accessing your host filesystem
Generate secure values:
echo "SETUP_PASSWORD=$(openssl rand -base64 32)"
echo "OPENCLAW_GATEWAY_TOKEN=$(openssl rand -base64 32)"
Step 3: Start OpenClaw
docker compose up -d
# Verify it started
docker compose logs -f openclaw
# Look for: "OpenClaw is running on port 3100"
Open the web UI at http://your-server-ip:3100. Complete the setup wizard using your SETUP_PASSWORD.
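If the wizard doesn't load, confirm the container is up and the port answers locally before suspecting firewalls:

# Check container state, then probe the web UI from the server itself
docker compose ps
curl -I http://localhost:3100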
Step 4: Connect Your First Messaging Platform
OpenClaw's power comes from connecting to the platforms you already use. Here are the three most popular integrations:
Telegram (Easiest — 5 minutes)
- Message `@BotFather` on Telegram
- Send `/newbot`, choose a name, copy the token
- In the OpenClaw web UI → Channels → Telegram → paste the token
- Message your new bot; it routes to your configured AI
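You can verify the token directly against Telegram's Bot API before pasting it into OpenClaw; `getMe` is the standard identity check:

# A valid token returns {"ok":true,...} with the bot's username
curl -s "https://api.telegram.org/bot<TOKEN>/getMe"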
Slack
- Go to api.slack.com/apps → Create New App → From scratch
- Enable Socket Mode, generate an App-Level Token (scope: `connections:write`)
- Enable Event Subscriptions, subscribe to the `message.im` and `app_mention` events
- Install to your workspace, copy the Bot Token (`xoxb-...`)
- In OpenClaw → Channels → Slack → paste both tokens
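Slack's `auth.test` endpoint does the same sanity check for the bot token:

# A valid token returns {"ok":true,...} with the workspace and bot identity
curl -s -H "Authorization: Bearer xoxb-your-token" https://slack.com/api/auth.test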
Discord
- Create a new app at discord.com/developers
- Under Bot, enable Message Content Intent and copy the token
- Generate an OAuth2 URL with the `bot` scope and the Send Messages and Read Message History permissions
- Invite the bot to your server
- In OpenClaw → Channels → Discord → paste token + server ID
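And for Discord, a call to the REST API's `users/@me` route confirms the bot token before you paste it in:

# A valid token returns the bot's own user object as JSON
curl -s -H "Authorization: Bot your-bot-token" https://discord.com/api/v10/users/@me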
WhatsApp (via WhatsApp Business API)
WhatsApp requires a Meta Business Account and WhatsApp Business API access. Once set up:
- OpenClaw → Channels → WhatsApp → enter your phone number ID and permanent token
Step 5: Configure Your LLM Provider
In the OpenClaw web UI → Settings → LLM Provider:
| Provider | Best for | Notes |
|---|---|---|
| Anthropic Claude | Complex reasoning, long context | claude-3-7-sonnet recommended |
| OpenAI GPT-4o | General purpose, tool use | Widely supported |
| Ollama (local) | Privacy, no API costs | Requires local Llama/Mistral |
| Groq | Speed | Fast inference, rate limits |
For local-first privacy, point OpenClaw at a local Ollama instance:
OLLAMA_BASE_URL=http://host.docker.internal:11434
OLLAMA_MODEL=llama3.2:latest
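Assuming a stock Ollama install on the host, pull the model first and confirm the API responds; both are standard Ollama commands:

# Fetch the model referenced by OLLAMA_MODEL, then list installed models
ollama pull llama3.2
curl -s http://localhost:11434/api/tags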
Step 6: Reverse Proxy with SSL (Production)
For a public-facing deployment, put OpenClaw behind Nginx or Caddy:
Caddy (Recommended — automatic HTTPS)
# /etc/caddy/Caddyfile
openclaw.yourdomain.com {
reverse_proxy localhost:3100
}
systemctl reload caddy
# That's it — Caddy handles Let's Encrypt automatically
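Point your domain's DNS A record at the server before reloading, or the ACME challenge will fail. Caddy also ships a validator, so you can check the file without touching the running server:

# Check the Caddyfile for syntax and configuration errors
caddy validate --config /etc/caddy/Caddyfile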
Nginx
server {
    listen 443 ssl;
    server_name openclaw.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/openclaw.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/openclaw.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://localhost:3100;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
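The certificate paths above assume a certificate already issued by Certbot. A minimal sequence, assuming Certbot's nginx plugin is installed:

# Obtain the certificate, then verify and reload the nginx config
certbot certonly --nginx -d openclaw.yourdomain.com
nginx -t && systemctl reload nginx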
Skills: Extending OpenClaw
OpenClaw uses a skills system (similar in concept to Claude Code skills) to add capabilities beyond built-in tools. Install community skills from the OpenClaw Skills Registry:
# Inside the OpenClaw web UI → Skills → Browse Registry
# Or via API:
curl -X POST http://localhost:3100/api/skills/install \
  -H "Authorization: Bearer $OPENCLAW_GATEWAY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"skill": "calendar-google"}'
Popular community skills:
- `calendar-google`: read/write Google Calendar
- `email-gmail`: draft and send emails
- `github-tools`: create issues, PRs, check CI
- `home-assistant`: control smart home devices
- `file-manager`: read/write files in the workspace directory
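To see what's already installed, a companion GET on the same path would be the natural call; note that the `/api/skills` list route below is an assumption mirroring the install endpoint above, not something confirmed by the docs:

# Hypothetical list endpoint; confirm the actual route in the OpenClaw docs
curl -s http://localhost:3100/api/skills \
  -H "Authorization: Bearer $OPENCLAW_GATEWAY_TOKEN"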
OpenClaw vs Other Self-Hosted AI Assistants
| | OpenClaw | Open WebUI | Jan | LibreChat |
|---|---|---|---|---|
| Messaging integrations | 50+ | None | None | Discord only |
| Self-hosted | ✅ | ✅ | ✅ | ✅ |
| GitHub stars | 247K | 52K | 24K | 19K |
| Proactive notifications | ✅ | ❌ | ❌ | ❌ |
| Skills/plugins | ✅ | Extensions | ❌ | Plugins |
| Runs offline | With Ollama | With Ollama | ✅ | With Ollama |
| Setup complexity | Medium | Low | Low | Medium |
OpenClaw's key differentiator: it comes to you (via messaging), rather than requiring you to open a browser tab. For automation, scheduling, and AI that works proactively, it's in a different category from chat UI tools like Open WebUI or Jan.
Troubleshooting Common Issues
Container won't start — port 3100 in use:
lsof -i :3100
# Kill the conflicting process or change OpenClaw port in docker-compose.yml
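To move OpenClaw to a different host port, change only the left-hand side of the mapping in docker-compose.yml (3200 here is an arbitrary example), then re-run docker compose up -d to recreate the container:

ports:
  - "3200:3100" # host port 3200 now fronts the container's port 3100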
Telegram messages not received:
- Check bot is not blocked
- Verify the token is correct in OpenClaw settings
- Inspect logs: `docker compose logs openclaw | grep telegram`
Sandbox mode breaking tool execution:
- Ensure the Docker socket is accessible: `ls -la /var/run/docker.sock`
- Add the socket to the docker-compose.yml volumes: `- /var/run/docker.sock:/var/run/docker.sock` (note that exposing the Docker socket gives the container broad control over the host's Docker daemon, so do this only on deployments you trust)
Ollama not reachable:
- On Linux, use `host.docker.internal` with `extra_hosts` in docker-compose.yml (shown above)
- Verify Ollama is running: `curl http://localhost:11434/api/tags`
Recommendations
Use OpenClaw if:
- You want AI accessible from your phone without opening new apps
- Privacy is a priority — all data stays on your server
- You want proactive AI (scheduled tasks, notifications, reminders)
- You already use Telegram/Discord/Slack for communication
Stick with cloud alternatives if:
- You need the absolute latest AI capabilities with zero setup
- Your team needs shared AI workspace features (Notion-style)
- You're not comfortable managing a VPS
Methodology
- Sources: github.com/openclaw/openclaw (247K+ stars), docs.openclaw.ai/install/docker, AWS blog (Lightsail AMI launch), DigitalOcean 1-Click docs, Simon Willison's TIL on OpenClaw Docker, Towards AI Medium article (Feb 2026)
- Data as of: March 2026
Comparing self-hosted AI assistants? See LocalAI vs Ollama vs LM Studio 2026 and Open WebUI vs LibreChat vs Jan 2026.
Want to automate your AI workflows without self-hosting? Compare n8n vs Zapier alternatives 2026.
Operational Criteria That Matter More Than Feature Checklists
Most self-hosting decisions are framed as feature comparisons, but the better question is operational fit. Can the tool be upgraded without a maintenance window that panics the team? Is configuration stored as code or trapped in a UI? Are secrets rotated cleanly? Can one engineer explain the recovery process to another in twenty minutes? These are the properties that decide whether a self-hosted service remains in production or gets abandoned after the first incident. Fancy template libraries and long integration lists help at evaluation time, but the long-term win comes from boring traits: transparent backups, predictable networking, obvious logs, and a permission model that does not require guesswork.
That is also why platform articles benefit from linking horizontally across the stack. A deployment layer does not live alone. Coolify guide is relevant whenever the real goal is reducing friction for application deploys. Dokploy guide matters when multi-node Docker or simpler PaaS ergonomics drive the decision. Gitea guide becomes part of the same conversation because source control, CI triggers, and deployment permissions are tightly coupled in practice. Treating those services as a system instead of isolated products leads to much better architecture decisions.
A Practical Adoption Path for Teams Replacing SaaS
For teams moving from SaaS, the most reliable adoption path is phased substitution. Replace one expensive or strategically sensitive service first, document the real support burden for a month, and only then expand. This does two things. First, it keeps the migration politically survivable because there is always a rollback point. Second, it turns vague arguments about self-hosting into measured trade-offs around uptime, maintenance hours, vendor lock-in, and annual spend. A good article should push readers toward that discipline rather than implying that replacing ten SaaS products in a weekend is responsible.
Another overlooked issue is platform standardization. The more heterogeneous the stack, the more hidden cost accrues in upgrades, documentation, and debugging. When two tools solve adjacent problems, teams should prefer the one that matches their existing operational model unless the feature gap is material. That is why the best self-hosting guides talk about package boundaries, reverse proxy habits, backup patterns, and team runbooks. They are not just product recommendations. They are deployment strategy.