How to Self-Host OpenStatus in 2026
Why Self-Host OpenStatus
OpenStatus is an open source uptime monitor and status page in one — the same job Better Stack, Statuspage, and Checkly do commercially. The project (AGPL-3.0, ~8.5K GitHub stars) is written in TypeScript on Next.js with Drizzle as the ORM, which means the stack is boring in the good way: familiar to most web teams, friendly to existing Vercel-adjacent deployments, and straightforward to operate.
Teams pick self-hosted OpenStatus when SaaS status-page pricing starts to scale badly (per-check or per-seat), when regulated environments require logs and data to stay in-house, or when the public status page needs to live on the same infrastructure as the app it reports on.
What the Stack Includes
OpenStatus ships three conceptual pieces:
- Web app — dashboard, status page renderer, API
- Checker — HTTP/TCP probes from one or more regions
- Storage — Postgres (via Drizzle) for config, metadata, and incidents; time-series data typically lands in Tinybird or a compatible analytics store
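The checker's job is conceptually simple: fetch a URL, record status and latency, flag failures. A minimal TypeScript sketch of that loop (illustrative only — not OpenStatus's actual checker code, which adds regions, retries, TCP probes, and persistence):

```typescript
// Minimal HTTP probe sketch: fetch a URL, record status and latency.
// Illustrative only — OpenStatus's real checker is more involved.
interface ProbeResult {
  url: string;
  ok: boolean;
  status: number;   // 0 when the request never completed
  latencyMs: number;
}

async function probe(url: string, timeoutMs = 10_000): Promise<ProbeResult> {
  const start = performance.now();
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
    return {
      url,
      ok: res.ok,
      status: res.status,
      latencyMs: Math.round(performance.now() - start),
    };
  } catch {
    // Timeout, DNS failure, and connection refused all count as down.
    return { url, ok: false, status: 0, latencyMs: Math.round(performance.now() - start) };
  }
}
```

Run on an interval per monitor; a real deployment fans this out across regions and writes results to the analytics store.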
Alerting plugs into Slack, Discord, email, and generic webhooks. Incident management is built in, so you can publish scheduled maintenance windows and post mortems directly on the status page.
Deployment Prerequisites
Before deploying, decide on four things:
- Regions. A single-region checker will mark your app down when a local network hiccup occurs. Budget for at least two regions if uptime numbers need to be credible.
- Database. Any managed Postgres works. Self-hosted Postgres is fine for small setups.
- Analytics backend. OpenStatus uses a time-series store for historical latency and response data. Tinybird is the default in the managed product; self-hosted deployments often point at a compatible OLAP store.
- Domain and TLS. Public status pages usually live on status.yourdomain.com. Provision DNS and a certificate through your reverse proxy of choice.
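As a concrete example of the TLS step, here is a hedged nginx server block — hostnames and certificate paths are placeholders (the cert paths assume certbot's default layout), and the upstream port assumes the web app's default of 3000:

```nginx
# Reverse proxy for the public status page. Placeholders throughout:
# swap in your own hostname, cert paths, and upstream port.
server {
    listen 443 ssl;
    server_name status.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/status.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/status.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```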
Docker or Platform Setup
The simplest path for most teams is Docker Compose on a single VM, with the checker running as a separate service so it can be scaled or replicated across regions later.
```yaml
# docker-compose.yml
services:
  db:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_USER: openstatus
      POSTGRES_PASSWORD: change-me
      POSTGRES_DB: openstatus
    volumes:
      - openstatus_db:/var/lib/postgresql/data
  web:
    image: ghcr.io/openstatushq/openstatus-web:latest
    restart: unless-stopped
    depends_on: [db]
    ports:
      - "3000:3000"
    env_file: .env
  checker:
    image: ghcr.io/openstatushq/openstatus-checker:latest
    restart: unless-stopped
    depends_on: [db, web]
    env_file: .env
volumes:
  openstatus_db:
```
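With the compose file and a populated `.env` in place, bringing the stack up is the standard Compose workflow:

```shell
# Start everything in the background
docker compose up -d

# Confirm all three services are running
docker compose ps

# Tail the web app while verifying first login
docker compose logs -f web
```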
For teams already running a PaaS, OpenStatus deploys cleanly on Dokploy, Coolify, or Railway — the web app is a standard Next.js build and the checker is a Node service.
Environment Variables and Storage
Key variables to set in .env:
- `DATABASE_URL` — Postgres connection string
- `AUTH_SECRET` — generated with `openssl rand -hex 32`
- `NEXT_PUBLIC_APP_URL` — public URL of the dashboard
- `TINYBIRD_TOKEN` / analytics backend credentials
- Provider secrets for email (Resend, SES) and Slack/Discord webhooks
Persist the Postgres volume and the checker's state directory. Do not mount them on network storage with variable latency — cron-style checks are sensitive to clock drift.
Status Pages, Alerts, and Multi-Region Caveats
Three operational details catch teams out.
Cron accuracy. OpenStatus checks run on a scheduled interval. On a single-VM deployment with a noisy neighbor, 1-minute checks can skew by 5–15 seconds. For high-precision SLO reporting, dedicate a small VM to the checker and avoid packing it next to the web app.
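The drift is easy to observe directly. A small Node sketch (illustrative; it uses a 100 ms interval rather than 1 minute so the effect shows up in seconds — on a loaded host the same mechanism stretches a "1-minute" check interval):

```typescript
// Measure how far setInterval callbacks drift from their ideal schedule.
// Event-loop contention (a noisy neighbor, a busy web app on the same VM)
// shows up as growing drift.
const intervalMs = 100;
const ticks = 20;

function measureDrift(): Promise<number> {
  return new Promise((resolve) => {
    const start = Date.now();
    let count = 0;
    let maxDriftMs = 0;
    const timer = setInterval(() => {
      count += 1;
      const ideal = start + count * intervalMs; // when this tick *should* fire
      maxDriftMs = Math.max(maxDriftMs, Math.abs(Date.now() - ideal));
      if (count === ticks) {
        clearInterval(timer);
        resolve(maxDriftMs);
      }
    }, intervalMs);
  });
}

measureDrift().then((drift) =>
  console.log(`max drift over ${ticks} ticks: ${drift}ms`)
);
```

If the drift you measure approaches the tolerance of your SLO reporting, move the checker to its own small VM.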
Single-region blind spots. If your checker and your app share an AZ or provider, an outage of that provider marks your app down even when external users are fine. Deploy checkers in at least one different provider (Hetzner + Fly, or AWS + Cloudflare) to catch region-scoped failures rather than reporting them as incidents.
Public page CDN. The status page is the one thing that must stay up when your app is down. Put Cloudflare or another CDN in front of it, and make sure the status page renders from cache even if the API or database is unreachable.
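With nginx in front, serve-stale behavior can be expressed directly. A sketch, assuming the status page is proxied from port 3000 (zone names and cache sizes are placeholders):

```nginx
# Cache rendered status pages and keep serving the last good copy
# when the upstream (app, API, or database) is unreachable.
proxy_cache_path /var/cache/nginx/status keys_zone=statuspage:10m max_size=100m;

server {
    listen 443 ssl;
    server_name status.yourdomain.com;
    # ssl_certificate directives omitted for brevity

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_cache statuspage;
        proxy_cache_valid 200 30s;
        # Serve stale content on upstream errors instead of a 502
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    }
}
```

CDN-level page rules (Cloudflare's "serve stale content" behavior) achieve the same effect without touching the origin config.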
Alerting supports Slack, Discord, email, and generic webhooks. Route webhooks through a low-dependency path (e.g., a dedicated SMTP provider, not the same mail server your app uses) so alert delivery does not depend on the system under test.
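The generic-webhook path can be sketched in a few lines of TypeScript. The payload shape below is invented for illustration — it is not OpenStatus's actual alert schema — but the pattern (build a small JSON document, POST it, check the response) is the same for Slack, Discord, or a custom receiver:

```typescript
// Build and send a minimal alert to an incoming webhook.
// Field names here are illustrative, not OpenStatus's wire format.
interface Alert {
  monitor: string;
  status: "down" | "degraded" | "recovered";
  region: string;
  checkedAt: string; // ISO-8601 timestamp
}

function buildAlert(monitor: string, status: Alert["status"], region: string): Alert {
  return { monitor, status, region, checkedAt: new Date().toISOString() };
}

async function sendAlert(webhookUrl: string, alert: Alert): Promise<boolean> {
  const res = await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(alert),
  });
  return res.ok; // a non-2xx response means the alert did not land
}
```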
Operations, Backups, and Upgrades
- Backups: daily `pg_dump` of the OpenStatus database to object storage; weekly restore tests.
- Upgrades: OpenStatus releases frequently; pin image tags per environment and promote after a short soak in staging.
- Migrations: Drizzle migrations run on boot. Always snapshot before a major version bump.
- Observability: export OpenStatus container logs to your log sink. Watch for elevated checker error rates — they usually indicate network issues, not application downtime.
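The backup routine above, sketched as a nightly script — the backup directory, bucket name, and upload command are placeholders for your own object-storage tooling:

```shell
#!/usr/bin/env sh
# Nightly OpenStatus backup sketch. Assumes DATABASE_URL is set and an
# s3-compatible CLI is configured; paths and bucket are placeholders.
set -eu
STAMP=$(date +%Y-%m-%d)
FILE="openstatus-${STAMP}.sql.gz"
pg_dump "$DATABASE_URL" | gzip > "/var/backups/${FILE}"
# Upload with your object-storage CLI of choice, e.g.:
# aws s3 cp "/var/backups/${FILE}" "s3://your-backup-bucket/openstatus/"
```

Pair it with the weekly restore test: an untested backup is a hope, not a backup.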
For a broader hardening checklist, see the self-hosting security checklist.
When OpenStatus Is the Right Fit
OpenStatus fits teams that want both uptime monitoring and a branded public status page in the same tool, run inside their own infrastructure, on a modern TypeScript stack. Pick Uptime Kuma if you only need internal dashboards and do not care about a polished public page; see Uptime Kuma vs OpenStatus. If you were previously evaluating Better Stack and want to compare the rest of the OSS category, see alternatives to Better Stack.
Explore this tool
Find openstatus alternatives on OSSAlt →