
Uptime Kuma: Self-Hosted Monitoring for Homelabs 2026

OSSAlt Team

Tags: uptime-kuma, monitoring, homelab, self-hosting, docker, status-page, alerts


Better Stack charges $20/month for 100 monitors. Uptime Kuma (84,100 GitHub stars) gives you unlimited monitors, 90+ notification channels, and public status pages for the cost of a VPS you probably already have. It runs in a single Docker container, uses under 400 MB of RAM for most homelabs, and takes 5 minutes to set up.

This is the monitoring tool the homelab community has standardized on — and for good reason.

TL;DR

Uptime Kuma is a self-hosted uptime monitoring tool with a polished web UI, real-time status pages, and support for HTTP/HTTPS, TCP, DNS, Docker container, ping, SSL certificate expiry, and push monitoring. The main trade-off: it monitors from a single location (your server), so if your server goes down, alerts stop. SaaS tools like Better Stack and Freshping check from globally distributed nodes. For homelab use — where your primary concern is "is my Nextcloud/Plex/Vaultwarden up?" — Uptime Kuma is the right tool.

Key Takeaways

  • 84,100 GitHub stars, MIT license, v2.1.3 (February 2026)
  • Monitor types: HTTP/HTTPS, TCP, ping, DNS, Docker containers, push/heartbeat, SSL expiry, domain expiry, WebSocket, Steam servers
  • 90+ notification channels: Telegram, Discord, Slack, PagerDuty, OpsGenie, Ntfy, Gotify, Apprise, and dozens more
  • Public status pages: multiple, customizable, real-time via WebSocket — no page refresh
  • Minimum RAM: ~400 MB in practice; 1 GB VPS handles most homelab deployments comfortably
  • v2.0 added MariaDB support — previously SQLite-only; large deployments hit performance limits with v1

Docker Compose Setup

Uptime Kuma runs as a single container with a volume for persistent data.

version: "3.8"

services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    restart: always
    ports:
      - "3001:3001"
    volumes:
      - uptime-kuma:/app/data
      # Uncomment to enable Docker container monitoring
      # - /var/run/docker.sock:/var/run/docker.sock

volumes:
  uptime-kuma:
Then start the container:

docker compose up -d

Open http://your-server:3001 and create your admin account on first launch. No database setup required — Uptime Kuma uses SQLite by default (v2 adds MariaDB for larger deployments).

With Nginx Proxy Manager or Traefik: Uptime Kuma works behind a reverse proxy with standard configuration. Use the Docker service name uptime-kuma:3001 as the upstream when proxying internally.
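If you proxy with plain nginx instead, the one detail that trips people up is WebSocket support — Uptime Kuma's dashboard and status pages need the HTTP upgrade headers passed through. A minimal sketch (the server_name is a placeholder for your own domain):

```nginx
server {
    listen 80;
    server_name status.example.com;

    location / {
        proxy_pass http://uptime-kuma:3001;
        # Required: Uptime Kuma's live updates run over WebSocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

In Nginx Proxy Manager, the equivalent is the "Websockets Support" toggle on the proxy host.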


What You Can Monitor

HTTP/HTTPS — The core use case. Point Uptime Kuma at any URL and it checks availability on a configurable interval (minimum 20 seconds). Options include:

  • Expected status code (default: any 2xx)
  • Keyword match (page must contain specific text — useful for checking app login pages)
  • JSON query (extract a value from a JSON response and compare)
  • Certificate validation toggle (for self-signed certs on internal services)

TCP Port — Check if a port is open without making an HTTP request. Useful for databases, SMTP servers, or any service that doesn't expose HTTP.

DNS Record — Query a DNS server for a specific record type and verify the response matches an expected value. Catch DNS misconfigurations before users do.

Docker Container — Mount /var/run/docker.sock into the Uptime Kuma container to monitor whether specific Docker containers are in a running state. Shows container status directly in the dashboard without needing to SSH in.

Push/Heartbeat — Inverted monitoring: instead of Uptime Kuma polling a service, the service sends periodic "I'm alive" pings to an Uptime Kuma endpoint. If a ping doesn't arrive within the expected interval, an alert fires. Ideal for monitoring cron jobs, scheduled scripts, and background workers that run silently.

SSL Certificate Expiry — Separate from HTTPS monitoring. Alerts X days before a certificate expires, giving you time to renew before users hit SSL errors.

Domain Expiry — Added in v2.1 (February 2026). Monitors domain registration expiry — critical if your homelab is accessible via a custom domain.

Ping (ICMP) — Basic reachability check. Useful for network devices (routers, switches, NAS units) that don't expose HTTP.


Notification Channels

Uptime Kuma supports 90+ notification integrations. Configure multiple channels per monitor — for example, send non-critical alerts to Telegram and critical outages to PagerDuty.

Most popular for homelabs:

| Channel | Setup complexity | Best for |
|---|---|---|
| Telegram | Low (bot token + chat ID) | Personal alerts |
| Discord | Low (webhook URL) | Shared homelab with friends/family |
| Slack | Low (webhook URL) | Team homelabs |
| Ntfy | Low (topic URL) | Push notifications, self-hosted option |
| Gotify | Low (server URL + token) | Fully self-hosted push |
| Email (SMTP) | Medium (SMTP credentials) | Traditional alerting |
| PagerDuty | Medium (integration key) | Production-grade on-call |
| Apprise | High (URL notation) | 78+ services via single config |

Configuring Telegram alerts (example):

  1. Message @BotFather on Telegram → /newbot → save the bot token
  2. Send any message to your new bot to create a chat
  3. Get your chat ID: https://api.telegram.org/bot<TOKEN>/getUpdates
  4. In Uptime Kuma: Settings → Notifications → Add → Telegram → paste token + chat ID
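Before pasting the values into Uptime Kuma, it's worth sanity-checking them against the Bot API's sendMessage endpoint — the same endpoint alerts go through. A sketch with placeholder credentials:

```shell
#!/bin/sh
# Placeholder credentials — substitute your real bot token and chat ID.
BOT_TOKEN="123456:ABC-example-token"
CHAT_ID="987654321"

# The endpoint used for alert delivery:
URL="https://api.telegram.org/bot${BOT_TOKEN}/sendMessage"
echo "Would POST to: ${URL}"

# Uncomment to send a real test message once the values above are filled in:
# curl -s "${URL}" --data-urlencode "chat_id=${CHAT_ID}" \
#      --data-urlencode "text=Uptime Kuma test alert"
```

If the curl call returns `"ok":true`, the same token and chat ID will work in Uptime Kuma.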

Setting Up a Public Status Page

Status pages are one of Uptime Kuma's most useful features — a publicly accessible page showing your services' uptime history and current status.

  1. Status Pages tab → New Status Page
  2. Set a slug (e.g., status) — accessible at http://your-server:3001/status/status
  3. Add monitors to the page (drag from your monitor list)
  4. Optionally assign a custom domain for external-facing status pages

Custom domain with Nginx Proxy Manager:

  • Proxy status.yourdomain.com → uptime-kuma:3001
  • No path rewriting needed — the status page slug handles routing

Status pages update in real time via WebSocket. Visitors don't need to refresh — the page reflects live status changes within seconds of detection.


Homelab Monitor Checklist

Here's a practical starting configuration for a typical homelab:

| Service | Monitor type | Check interval | Alert |
|---|---|---|---|
| Nextcloud | HTTPS keyword | 60s | Telegram |
| Plex / Jellyfin | TCP :32400 / :8096 | 60s | Telegram |
| Vaultwarden | HTTPS | 30s | Telegram + Email |
| Home Assistant | HTTPS | 60s | Telegram |
| Pi-hole | HTTP :80 | 120s | Telegram |
| Nginx Proxy Manager | HTTPS | 60s | Telegram |
| Portainer | HTTPS :9443 | 120s | Telegram |
| Router/NAS | Ping | 60s | Telegram |
| SSL certificate (main domain) | SSL expiry | Daily | Email (30 days before) |
| Domain expiry | Domain expiry | Daily | Email (60 days before) |
| Backup job | Push/heartbeat | 25 hours | Telegram |

The backup job heartbeat is underused but valuable: have your backup script curl the Uptime Kuma push URL after each successful run. If the push doesn't arrive within 25 hours, you know the backup silently failed.
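A sketch of that wrapper — the push token (abc123) and the backup command are placeholders; copy the real push URL from the monitor's settings page:

```shell
#!/bin/sh
# Ping an Uptime Kuma push monitor when the backup finishes.
# PUSH_URL is hypothetical — each push monitor generates its own token.
PUSH_URL="http://uptime-kuma:3001/api/push/abc123"

notify() {
  # Swap 'echo' for 'curl -fsS -o /dev/null' once PUSH_URL is real.
  echo "GET ${PUSH_URL}?status=$1&msg=$2"
}

if true; then                  # stand-in for your actual backup command
  notify up backup-ok          # resets the 25-hour countdown
else
  notify down backup-failed    # alerts immediately instead of waiting for the timeout
fi
```

Schedule it daily from cron and set the monitor's heartbeat interval slightly longer than the schedule — hence 25 hours for a daily job, so normal runtime jitter doesn't trigger false alerts.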


v2 Migration: What Changed

Uptime Kuma v2 (released 2024/2025) brought significant architectural changes:

MariaDB support — The v1 SQLite backend worked fine for small deployments but degraded at scale (hundreds of monitors with short check intervals). v2 adds MariaDB as an optional backend. For most homelabs, SQLite is still adequate. For production or large deployments:

# Add under the services: key in docker-compose.yaml for a MariaDB backend
  db:
    image: mariadb:11
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
      MYSQL_DATABASE: uptime_kuma
      MYSQL_USER: uptime_kuma
      MYSQL_PASSWORD: ${DB_PASSWORD}
    volumes:
      - db:/var/lib/mysql

  uptime-kuma:
    environment:
      DATABASE_TYPE: mariadb
      DATABASE_HOSTNAME: db
      DATABASE_NAME: uptime_kuma
      DATABASE_USERNAME: uptime_kuma
      DATABASE_PASSWORD: ${DB_PASSWORD}

Heartbeat table restructure — The migration from v1 aggregates historical heartbeat data. For large databases (years of data, many monitors), this migration can take several minutes. Back up before upgrading.
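One way to take that backup before pulling the v2 image — a sketch that assumes the named volume uptime-kuma from the compose file above:

```shell
# Stop the app so SQLite isn't mid-write, archive the data volume, restart.
docker compose stop uptime-kuma
docker run --rm \
  -v uptime-kuma:/data -v "$(pwd)":/backup \
  alpine tar czf /backup/uptime-kuma-pre-v2.tar.gz -C /data .
docker compose start uptime-kuma
```

If the migration goes wrong, restoring is the reverse: extract the archive back into the volume before starting the old image.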

New v2.1 features (February 2026):

  • Globalping integration — check from worldwide distributed probes (useful for public-facing services)
  • Domain expiry monitoring — separate from SSL, tracks domain registration dates

Honest Limitations

Single-location monitoring — Uptime Kuma checks from wherever it's running. If your server loses internet connectivity or goes down, no alerts fire for anything. SaaS tools (Better Stack, Freshping) check from globally distributed nodes and remain available even when your server is down.

Workaround: Run a second Uptime Kuma instance on a cloud VPS (not at home) and have it monitor just your home server's external IP. Two free VPSes (Oracle Cloud's always-free tier, for example) cover the single-point-of-failure problem.

No RBAC / multi-user — One admin account with full control. Not suitable for teams where different people should have different access levels.

No on-call scheduling — Uptime Kuma fires alerts immediately via your configured channels. There's no "on-call rotation" or "business hours only" alerting. For simple homelabs, this is fine. For production environments, look at Grafana OnCall or PagerDuty.

Limited historical data UI — Graphs display roughly 1 week of data in the default view. Long-term uptime history is stored but not easily browsable. Uptime percentages shown are calculated over ~180 days.

No REST API for management — You can't programmatically add/remove monitors via API (this is in development). Configuration is UI-only.


Uptime Kuma vs Alternatives

| | Uptime Kuma | Better Stack | Freshping | Grafana + Prometheus |
|---|---|---|---|---|
| Self-hosted | Yes | No | No | Yes |
| Cost | Free | $20+/month | Free (50 monitors) | Free (infra) |
| Multi-region | No | Yes | Yes | No |
| Notification channels | 90+ | Many | Limited | Via Alertmanager |
| Status page | Yes (multiple) | Yes | Yes | No (Grafana public) |
| Docker monitoring | Yes | No | No | Yes (cAdvisor) |
| Push/heartbeat | Yes | Yes | No | Via Pushgateway |
| SLA reports | Limited | Yes | Limited | Custom dashboards |
| On-call scheduling | No | Yes | No | Via OnCall |
| Setup time | 5 minutes | Instant | Instant | 2–4 hours |

For homelab use, Uptime Kuma wins every comparison. For production services where downtime is business-critical, Better Stack's multi-region checking and SLA reports justify the cost.


Integration with Home Assistant

Uptime Kuma has a native Home Assistant integration. Once configured, monitor status appears as binary sensors in HA — enabling automations like:

  • Turn on a dashboard LED when any critical service goes down
  • Announce via smart speaker when Nextcloud comes back online
  • Log service outage events to HA's history for analysis

Configure in configuration.yaml:

uptime_kuma:
  host: http://uptime-kuma:3001
  verify_ssl: false
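From there, a hypothetical automation might look like this — the entity ID depends on how the integration names your monitors, and the notify service is a placeholder for your own:

```yaml
automation:
  - alias: "Notify when Vaultwarden goes down"
    trigger:
      - platform: state
        entity_id: binary_sensor.uptime_kuma_vaultwarden  # hypothetical entity name
        to: "off"
    action:
      - service: notify.mobile_app_phone  # substitute your notify service
        data:
          message: "Vaultwarden is unreachable"
```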

This is a compelling combination for home automation enthusiasts who already run Home Assistant — service health becomes part of the same automation platform as lights, sensors, and climate control.


Uptime Kuma is the rare self-hosted tool that genuinely competes with its commercial equivalents for the target use case. For homelab monitoring, it's not a budget compromise — it's the right tool. The limitations (single-location, no RBAC, no API) are real but rarely matter for personal and family homelab deployments.


Browse all uptime monitoring alternatives at OSSAlt. Related: Grafana vs Uptime Kuma comparison, complete homelab software stack guide.
