
Self-Hosting Uptime Kuma for Monitoring in 2026

OSSAlt Team

Uptime Kuma is the most popular open source monitoring tool, with 62K+ GitHub stars. It can replace Better Stack, Pingdom, and UptimeRobot: one Docker command gives you monitoring for all your services, with 90+ notification channels.

Requirements

  • VPS with 512 MB RAM minimum
  • Docker
  • Domain name (e.g., status.yourdomain.com)
  • 5 GB disk

Step 1: Deploy with Docker

docker run -d \
  --name uptime-kuma \
  --restart unless-stopped \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  louislam/uptime-kuma:latest

That's it. Uptime Kuma runs on port 3001.
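Before moving on to the reverse proxy, it helps to confirm the app actually answers on port 3001. A small polling helper, written for this guide (the wait_for name and retry logic are not part of Uptime Kuma):

```shell
#!/bin/sh
# wait_for: run a probe command repeatedly until it succeeds or we give up.
# (Helper invented for this guide -- not an Uptime Kuma feature.)
wait_for() {
    tries="$1"; shift
    i=0
    while [ "$i" -lt "$tries" ]; do
        if "$@" >/dev/null 2>&1; then
            echo "up after ${i} retries"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "gave up after ${tries} tries"
    return 1
}

# Once the container is started:
# wait_for 30 curl -fsS http://localhost:3001
```

If it gives up after 30 tries, check `docker logs uptime-kuma` before continuing.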

Step 2: Reverse Proxy (Caddy)

# /etc/caddy/Caddyfile
status.yourdomain.com {
    reverse_proxy localhost:3001
}

Then reload Caddy to apply the config:

sudo systemctl restart caddy

Step 3: Initial Setup

  1. Open https://status.yourdomain.com
  2. Create your admin account
  3. Start adding monitors

Step 4: Add Monitors

Monitor types available:

Type                 Use Case
HTTP(s)              Website availability, API endpoints
TCP Port             Database, Redis, custom services
Ping                 Server reachability
DNS                  DNS record verification
Docker Container     Container health via Docker socket
Steam Game Server    Game server monitoring
MQTT                 IoT broker monitoring
gRPC                 gRPC service health
Keyword              Check if a page contains specific text
JSON Query           Validate API response values
Push                 Receive heartbeats from your services

Example monitors to set up:

Website:         https://yourdomain.com          (HTTP, 60s interval)
API:             https://api.yourdomain.com/health (HTTP, 30s interval)
Database:        db.internal:5432                 (TCP, 60s interval)
Redis:           redis.internal:6379              (TCP, 60s interval)
Mail server:     mail.yourdomain.com:587          (TCP, 300s interval)
DNS:             yourdomain.com (A record)        (DNS, 300s interval)
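The TCP monitors in that list boil down to "can I open a socket to this host and port before a timeout?". The same idea as an illustration (this is not Uptime Kuma's code — just the equivalent check in bash, using its /dev/tcp feature):

```shell
# Illustration of what a TCP monitor does: attempt a socket connection,
# report up/down. Assumes bash and coreutils' timeout are available.
tcp_check() {
    host="$1"; port="$2"
    if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "up"
    else
        echo "down"
    fi
}

# tcp_check db.internal 5432
```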

Step 5: Configure Notifications

Uptime Kuma supports 90+ notification services. Most popular:

Service         Setup
Slack           Incoming webhook URL
Discord         Webhook URL from channel settings
Telegram        Bot token + chat ID
Email (SMTP)    Host, port, username, password
PagerDuty       Integration key
Pushover        User key + app token
Ntfy            Topic URL (self-hostable too)
Gotify          Server URL + app token

Setting up Discord notifications:

  1. Discord channel → Edit Channel → Integrations → Webhooks
  2. Copy webhook URL
  3. In Uptime Kuma → Settings → Notifications → Setup Notification
  4. Select Discord, paste webhook URL
  5. Set as default notification for all monitors
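If alerts never arrive, it is worth testing the webhook itself outside Uptime Kuma first. Discord webhooks accept a JSON body with a content field; a quick shell pre-test (the webhook URL is a placeholder):

```shell
# Build the minimal JSON body a Discord webhook expects.
# (Hypothetical helper for testing -- paste your real webhook URL below.)
discord_payload() {
    printf '{"content": "%s"}' "$1"
}

# curl -fsS -H 'Content-Type: application/json' \
#   -d "$(discord_payload 'Test alert from Uptime Kuma setup')" \
#   "https://discord.com/api/webhooks/XXXX/YYYY"
```

If the curl succeeds but Uptime Kuma's test fails, the problem is in the Uptime Kuma configuration, not the webhook.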

Step 6: Create Status Pages

  1. Click Status Pages in the sidebar
  2. Create a new status page
  3. Add monitor groups (e.g., "Website", "API", "Infrastructure")
  4. Assign monitors to groups
  5. Customize with your logo and description
  6. Share the public URL with your team or users

Custom domain for status page: a status page can be mapped to its own domain in the status page settings. Point that domain's DNS at your server and add a matching site block in Caddy (the hostname below is an example):

# /etc/caddy/Caddyfile
status.yourproduct.com {
    reverse_proxy localhost:3001
}

Step 7: Docker Socket Monitoring (Optional)

Monitor Docker containers directly:

# docker-compose.yml (equivalent to the docker run command, plus the socket mount)
services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - uptime-kuma:/app/data
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  uptime-kuma:

Now add monitors of type Docker Container to track container health.

Step 8: Push Monitors for Cron Jobs

For monitoring cron jobs and background tasks:

  1. Create a Push type monitor in Uptime Kuma
  2. Copy the push URL
  3. Add to your cron job:
# At the end of your cron job
curl -s "https://status.yourdomain.com/api/push/YOUR_PUSH_TOKEN?status=up&msg=OK"

If the push doesn't arrive within the heartbeat interval, Uptime Kuma alerts you.
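The curl call above only ever reports success. A slightly more defensive pattern reports failures too, with the exit code in the message, so a broken cron job shows up as "down" with a reason rather than just going silent (sketch; PUSH_URL and run_and_report are names invented for this guide):

```shell
#!/bin/sh
# Wrap any cron job so both success and failure reach the Push monitor.
# PUSH_URL is a placeholder -- copy the real URL from your Push monitor.
PUSH_URL="https://status.yourdomain.com/api/push/YOUR_PUSH_TOKEN"

run_and_report() {
    "$@"
    rc=$?
    if [ "$rc" -eq 0 ]; then
        status="up"; msg="OK"
    else
        status="down"; msg="exit-${rc}"
    fi
    echo "${PUSH_URL}?status=${status}&msg=${msg}"
    # In the real cron script, send it instead of printing:
    # curl -fsS "${PUSH_URL}?status=${status}&msg=${msg}" >/dev/null
}

# Usage inside a cron script:
# run_and_report /path/to/nightly-task.sh
```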

Production Hardening

Backups:

# Backup SQLite database (daily cron)
docker cp uptime-kuma:/app/data/kuma.db /backups/kuma-$(date +%Y%m%d).db
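Two caveats on that one-liner: copying a live SQLite file can catch it mid-write (stopping the container briefly, or using sqlite3's .backup command, is safer), and it never deletes old copies, so /backups grows forever. A minimal retention sketch, assuming GNU find and the /backups path from the example:

```shell
#!/bin/sh
# Delete kuma-*.db backups older than 14 days (GNU find's -mtime semantics:
# +14 matches files last modified more than 14 full days ago).
BACKUP_DIR="${BACKUP_DIR:-/backups}"

prune_backups() {
    find "$BACKUP_DIR" -type f -name 'kuma-*.db' -mtime +14 -delete
}

# Pair it with the docker cp line in the same daily cron.
```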

Updates:

docker pull louislam/uptime-kuma:latest
docker stop uptime-kuma
docker rm uptime-kuma
docker run -d \
  --name uptime-kuma \
  --restart unless-stopped \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  louislam/uptime-kuma:latest
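The same four commands work fine typed by hand, but since you'll run them for every release, a small script helps. This sketch adds a DRY_RUN switch (an addition of this guide, not a Docker feature) so you can preview the commands before executing them:

```shell
#!/bin/sh
# Update helper: pull, replace, and restart with the same flags as the
# install step. DRY_RUN=1 prints each command instead of executing it.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi
}

update_uptime_kuma() {
    run docker pull louislam/uptime-kuma:latest
    run docker stop uptime-kuma
    run docker rm uptime-kuma
    run docker run -d --name uptime-kuma --restart unless-stopped \
        -p 3001:3001 -v uptime-kuma:/app/data louislam/uptime-kuma:latest
}

# DRY_RUN=1 update_uptime_kuma   # preview the commands
# update_uptime_kuma             # actually update
```

Because the data lives in the uptime-kuma named volume, removing and recreating the container loses nothing.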

Security:

  • Enable 2FA in account settings
  • Use a strong admin password
  • Put behind reverse proxy with HTTPS
  • Consider restricting dashboard access by IP

Resource Usage

Monitors    RAM       CPU        Disk
1-50        256 MB    1 core     1 GB
50-200      512 MB    1 core     5 GB
200+        1 GB      2 cores    10 GB

VPS Recommendations

Provider        Spec                Price
Hetzner         2 vCPU, 2 GB RAM    €4.50/month
DigitalOcean    1 vCPU, 1 GB RAM    $6/month
Linode          1 vCPU, 1 GB RAM    $5/month

Uptime Kuma is extremely lightweight — it can share a VPS with other services easily.

Why Self-Host Uptime Kuma

Managed uptime monitoring services charge per monitor and per notification channel. UptimeRobot's Free plan covers 50 monitors with 5-minute check intervals — workable for personal projects, but insufficient for production infrastructure. Their Pro plan ($7/month) adds 1-minute intervals and 50 monitors. Better Stack's Developer plan is free for 10 monitors; their Startup plan runs $24/month for 30 monitors with 30-second intervals and on-call escalations. Pingdom charges $15–115/month depending on monitor count and check frequency.

Self-hosted Uptime Kuma gives you unlimited monitors, 20-second check intervals, and 90+ notification channels for the cost of a $5/month VPS — or often nothing extra if it shares a server with your other services. A team monitoring 50 services at 30-second intervals pays $0 versus $24–50/month on managed platforms. Over three years, that's $1,500+ in savings for a setup that takes 5 minutes to deploy.

90+ notification channels: The breadth of Uptime Kuma's notification integrations is genuinely remarkable. It covers every major platform: Slack, Discord, Telegram, email, PagerDuty, OpsGenie, Pushover, Ntfy, Matrix, Gotify, Rocket.Chat, and dozens more. Managed services often charge extra for certain notification channels or limit integrations to specific plans.

Status pages at no extra cost: Most managed uptime services charge separately for public status pages. Better Stack's status pages are a paid feature. With Uptime Kuma, status pages are included and you can create as many as you need — one per product, one for internal teams, one for external customers.

When NOT to self-host Uptime Kuma: The main limitation of self-hosted Uptime Kuma is that monitoring runs from a single location. Managed services check from multiple global locations, so they can distinguish between a global outage and a region-specific network issue. If you have global users and need multi-region monitoring, consider whether the savings justify running checks from only one server location. Tools like UptimeRobot still offer free multi-location checks on their free tier, which may complement a self-hosted setup.

Prerequisites (Expanded)

512 MB RAM minimum: Uptime Kuma is a Node.js application backed by SQLite. At 50–100 monitors, it uses around 150–200 MB of RAM. The 512 MB minimum provides comfortable headroom, but in practice Uptime Kuma can share a 1 GB VPS with other lightweight services (a blog, a small API, etc.) without issue. The resource table above shows realistic usage at different monitor counts.

Docker: Uptime Kuma's Docker image is the recommended deployment method. It bundles all dependencies and handles the SQLite data volume correctly. If you're not already running Docker on your server, installing it takes about 2 minutes:

curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER

Domain name: While you can access Uptime Kuma directly via IP, a domain lets Caddy obtain a Let's Encrypt certificate automatically, so the admin interface is served over HTTPS instead of sending your admin password in the clear. A subdomain like status.yourdomain.com also makes it easy to share a public status page with customers.

5 GB disk: SQLite stores check history and response time data. With 100 monitors checking every 60 seconds, expect roughly 200–500 MB per year of history. 5 GB is generous for most deployments — you won't need to worry about disk for years.

Ubuntu 22.04 LTS is the recommended OS. Avoid running Uptime Kuma directly on macOS or Windows in production — the Docker volume handling and file permissions behave differently than on Linux. If you're on a budget, any major provider's cheapest tier is sufficient.

See the VPS comparison for self-hosters for a side-by-side comparison of the smallest tier across Hetzner, DigitalOcean, Vultr, and Linode — including network performance data that matters for monitoring accuracy.

Production Security Hardening

Uptime Kuma's admin interface exposes a list of every service you monitor, their check history, and — crucially — authentication credentials stored for services that require login-based health checks. Secure it properly.

Firewall with UFW: Port 3001 (Uptime Kuma's internal port) should never be directly accessible from the internet. Only expose ports 80, 443, and 22.

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable

Fail2ban for SSH: Brute-force SSH attacks are a constant background noise for any public server.

sudo apt install fail2ban
sudo systemctl enable fail2ban
sudo systemctl start fail2ban

Enable 2FA in Uptime Kuma: Go to Settings → Security and enable two-factor authentication with a TOTP app (Google Authenticator, Authy, etc.). This is one of the most important security steps for a publicly accessible admin interface.

Keep the admin interface private: Consider blocking public access to the admin dashboard entirely and only exposing the public status page. With Caddy, you can serve different paths with different access controls:

status.yourdomain.com {
    # Public status page
    handle /status/* {
        reverse_proxy localhost:3001
    }

    # Admin interface: allow only your IP, 403 for everyone else
    handle {
        @not_my_ip not remote_ip YOUR_IP
        respond @not_my_ip 403
        reverse_proxy localhost:3001
    }
}

Disable SSH password auth:

sudo sed -i 's/#PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart sshd

Enable automatic security updates:

sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades

Back up the SQLite database: The kuma.db file contains all your monitors, history, and settings. Loss of this file means reconfiguring everything from scratch. Set up daily backups with off-site copies using automated server backups with restic.

For a complete server hardening checklist, see the self-hosting security checklist.

Troubleshooting Common Issues

Uptime Kuma shows services as "down" that are clearly up

The most common cause is the Docker network. If you're monitoring services running in other Docker containers using their internal hostnames, Uptime Kuma must be on the same Docker network. By default, Uptime Kuma's container is on the default bridge network, but your other containers may be on custom networks. Add Uptime Kuma to the relevant networks:

services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    networks:
      - default
      - myapp_network

networks:
  myapp_network:
    external: true

Notifications not being sent

Test your notification channel from Settings → Notifications → the notification you configured → Test. If the test fails, check the error message. For Slack and Discord webhooks, the URL must be the full webhook URL including the path. For email, verify that your SMTP credentials are correct and that the provider allows SMTP authentication.

If notifications work during the test but not for real alerts, check the notification assignment on each monitor — notifications must be explicitly assigned to monitors, not just created globally.

SSL certificate check showing wrong expiry date

Uptime Kuma's SSL certificate monitoring checks the certificate served by your domain, not the one stored on disk. If you recently renewed a certificate but the old one is still cached, wait a few minutes for propagation. If Uptime Kuma is behind a load balancer or CDN, it may be checking the CDN's certificate rather than your origin's.

Container takes more than 2 minutes to start

Uptime Kuma runs database migrations on startup for each new version. This is normal and can take 30–60 seconds on slow storage. If the container is consistently slow to start, check the logs:

docker logs uptime-kuma --tail 50

Look for migration progress messages. If you see errors, the SQLite database may be corrupted — restore from your backup.

Push monitors going red even when the cron job runs

The heartbeat interval on Push monitors must be slightly longer than your cron schedule. If your cron runs every 5 minutes but the heartbeat interval is set to 5 minutes, clock drift or a few seconds of execution delay can cause false alarms. Set the heartbeat interval to 6–7 minutes for a cron that runs every 5 minutes.
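The rule of thumb above as arithmetic (the 90-second margin is an arbitrary example, not an Uptime Kuma default — size it to your job's worst-case runtime):

```shell
# Heartbeat interval = cron interval + a safety margin for clock drift
# and job runtime. Example: a 5-minute cron with a 90s margin.
cron_interval_s=300
margin_s=90
heartbeat_s=$((cron_interval_s + margin_s))
echo "set heartbeat to ${heartbeat_s}s"
```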

Building a Complete Observability Stack

Uptime Kuma excels at uptime monitoring — checking that services respond and notifying you when they don't. But a complete observability strategy typically combines multiple tools that each cover different signals.

For infrastructure-level metrics (CPU, memory, disk, network I/O), pair Uptime Kuma with Netdata or Prometheus + Grafana. These capture the performance data that explains why a service went down, not just that it did. An Uptime Kuma alert tells you the service is unreachable; Netdata's resource graphs tell you whether the server ran out of memory 30 seconds before the alert fired.

For log aggregation, Loki + Grafana captures the application-level log output that explains error conditions. When Uptime Kuma alerts on a failed health check, the first step in diagnosis is usually reviewing the application logs from the failure window — Grafana Loki makes that instant with correlated time ranges.

For application error tracking, an error monitoring tool catches exceptions and stack traces from your application code before they accumulate into service failures. Uptime Kuma monitors the surface (is the endpoint responding?); error tracking monitors the interior (is the application throwing unhandled exceptions?).

This layered approach covers the monitoring spectrum: Uptime Kuma for external availability, Netdata/Prometheus for resource metrics, Loki for log aggregation, and error tracking for application-level failures. Each layer is independently valuable but the combination gives you rapid root-cause analysis when incidents occur.

See the best open source monitoring tools for a full evaluation of tools at each layer of the observability stack.

Compare monitoring tools on OSSAlt — features, notification channels, and self-hosting options side by side.

See open source alternatives to Uptime Kuma on OSSAlt.
