Traefik vs Caddy vs Nginx (2026)
Traefik vs Caddy vs Nginx as reverse proxies for self-hosted apps in 2026. Auto TLS, Docker routing, and config complexity compared, plus guidance on which reverse proxy to choose for your setup.
TL;DR
Three great reverse proxies for self-hosters, each with a different philosophy: Caddy (Apache 2.0, ~58K stars, Go) is the simplest — automatic HTTPS with zero config, readable Caddyfile syntax. Traefik (MIT, ~52K stars, Go) is Docker-native — auto-discovers containers via labels, no config file changes needed when adding new services. Nginx (BSD) is the battle-tested veteran — maximum flexibility and performance, steeper config curve. Most homelabs start with Caddy; teams running many Docker services prefer Traefik.
Key Takeaways
- Caddy: Simplest, automatic HTTPS, great for static sites and straightforward proxying
- Traefik: Docker-native, zero-downtime reloads, automatic service discovery via labels
- Nginx: Most flexible, highest performance; TLS needs external tooling (Certbot, or nginx-proxy for label-style routing)
- Auto TLS: Caddy and Traefik handle Let's Encrypt automatically; Nginx needs Certbot
- Config hot-reload: All three reload without dropping connections; Traefik additionally picks up Docker container changes with no reload at all
- Performance: All can handle tens of thousands of concurrent connections — not a bottleneck
Feature Comparison
| Feature | Caddy | Traefik v3 | Nginx |
|---|---|---|---|
| License | Apache 2.0 | MIT | BSD-2-Clause |
| GitHub Stars | ~58K | ~52K | ~20K (mainline) |
| Auto HTTPS | Yes (built-in) | Yes (ACME) | No (need Certbot) |
| Docker label routing | No (manual) | Yes (native) | No (need nginx-proxy) |
| Config language | Caddyfile | TOML/YAML/labels | Nginx config (nginx.conf) |
| Config hot-reload | Yes (caddy reload) | Yes (automatic via providers) | Yes (nginx -s reload) |
| Built-in auth | Basic auth | Basic auth | Basic auth |
| Rate limiting | Plugin | Middleware | Built-in (limit_req) |
| Load balancing | Yes | Yes | Yes |
| WebSocket support | Yes | Yes | Yes |
| gRPC support | Yes | Yes | Yes |
| Metrics | /metrics endpoint | Prometheus built-in | With stub_status |
| RAM usage | ~30MB | ~50MB | ~10MB |
Option 1: Caddy — Simplest Auto-HTTPS
Caddy has the most human-friendly configuration. One line per site, automatic HTTPS, no certificate management.
Caddy Docker Setup
# docker-compose.yml
services:
caddy:
image: caddy:alpine
container_name: caddy
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "443:443/udp" # HTTP/3
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
- caddy_data:/data
- caddy_config:/config
networks:
- proxy
volumes:
caddy_data:
caddy_config:
networks:
proxy:
external: true
# Create shared network first:
docker network create proxy
Caddyfile Examples
# Single site — auto HTTPS:
app.yourdomain.com {
reverse_proxy localhost:8080
}
# Multiple sites:
grafana.yourdomain.com {
reverse_proxy grafana:3000
}
paperless.yourdomain.com {
reverse_proxy paperless:8000
}
# Basic auth (the directive was named "basicauth" before Caddy 2.8):
private.yourdomain.com {
basic_auth {
admin JDJhJDE0JG... # caddy hash-password
}
reverse_proxy localhost:9000
}
# Strip path prefix:
yourdomain.com/api/* {
uri strip_prefix /api
reverse_proxy api-service:8080
}
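Caddy's `handle_path` directive is a tidier equivalent: it strips the matched prefix automatically, so no separate `uri` directive is needed (a sketch, using the same hypothetical `api-service` upstream as above):

```caddyfile
yourdomain.com {
    handle_path /api/* {
        reverse_proxy api-service:8080
    }
}
```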
# Multiple upstreams (load balance):
api.yourdomain.com {
reverse_proxy {
to localhost:8081
to localhost:8082
lb_policy round_robin
}
}
Wildcard TLS (DNS challenge)
For *.yourdomain.com wildcard certs (requires DNS API):
*.yourdomain.com {
tls {
dns cloudflare {env.CF_API_TOKEN}
}
@grafana host grafana.yourdomain.com
handle @grafana {
reverse_proxy grafana:3000
}
}
Use a Caddy image built with the Cloudflare DNS plugin, for example:
image: ghcr.io/caddybuilds/caddy-cloudflare:latest
Option 2: Traefik — Docker-Native Auto-Discovery
Traefik discovers services automatically by reading Docker container labels. Add a new container → Traefik routes to it without any config file changes.
Traefik Docker Setup
# docker-compose.yml
services:
traefik:
image: traefik:v3
container_name: traefik
restart: unless-stopped
command:
- "--api.dashboard=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--providers.docker.network=proxy"
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
- "--certificatesresolvers.letsencrypt.acme.email=you@yourdomain.com"
- "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
- "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
- "--entrypoints.web.http.redirections.entrypoint.to=websecure"
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- traefik_certs:/letsencrypt
labels:
- "traefik.enable=true"
- "traefik.http.routers.dashboard.rule=Host(`traefik.yourdomain.com`)"
- "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"
- "traefik.http.routers.dashboard.service=api@internal"
- "traefik.http.routers.dashboard.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.users=admin:$$apr1$$hash"
networks:
- proxy
volumes:
traefik_certs:
networks:
proxy:
external: true
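The `basicauth.users` label above expects an htpasswd-style hash, and Docker Compose treats `$` as the start of a variable, so every `$` in the hash must be doubled. A sketch of the workflow (the hash value here is an illustrative placeholder; generate a real one with `htpasswd -nB admin` from apache2-utils, or `openssl passwd -apr1`):

```shell
# Escape an htpasswd hash for docker-compose.yml: double every "$"
# so Compose does not try to interpolate it as a variable.
# 'admin:$apr1$a1b2c3d4$somehash' is a placeholder, not a real hash.
echo 'admin:$apr1$a1b2c3d4$somehash' | sed -e 's/\$/\$\$/g'
# → admin:$$apr1$$a1b2c3d4$$somehash
```

In a plain `.env` file or a Traefik file provider, the hash is used as-is with single `$` characters; the doubling is a Compose-only quirk.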
Adding Services via Labels
Any container on the proxy network with Traefik labels gets automatically routed:
# Any other service's docker-compose.yml:
services:
grafana:
image: grafana/grafana:latest
networks:
- proxy
labels:
- "traefik.enable=true"
- "traefik.http.routers.grafana.rule=Host(`grafana.yourdomain.com`)"
- "traefik.http.routers.grafana.tls.certresolver=letsencrypt"
- "traefik.http.services.grafana.loadbalancer.server.port=3000"
networks:
proxy:
external: true
No Traefik config changes needed. Start the container → routing is live.
Traefik Middlewares
Apply reusable middleware via labels:
labels:
# Rate limiting:
- "traefik.http.middlewares.ratelimit.ratelimit.average=100"
- "traefik.http.middlewares.ratelimit.ratelimit.burst=50"
# Apply to router:
- "traefik.http.routers.myapp.middlewares=ratelimit@docker"
# Strip prefix:
- "traefik.http.middlewares.stripprefix.stripprefix.prefixes=/app"
# Forward auth (SSO via Authentik):
- "traefik.http.middlewares.authentik.forwardauth.address=https://auth.yourdomain.com/outpost.goauthentik.io/auth/traefik"
Option 3: Nginx — Battle-Tested Flexibility
Nginx is the most widely deployed web server and reverse proxy. It offers maximum flexibility and is the best fit for complex routing rules, but TLS management is manual (typically via Certbot).
Nginx Docker Setup
services:
nginx:
image: nginx:alpine
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
- ./conf.d:/etc/nginx/conf.d:ro
- /etc/letsencrypt:/etc/letsencrypt:ro
- certbot_webroot:/var/www/certbot:ro
certbot:
image: certbot/certbot:latest
volumes:
- /etc/letsencrypt:/etc/letsencrypt
- certbot_webroot:/var/www/certbot
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
volumes:
certbot_webroot:
Nginx Config Example
# /etc/nginx/conf.d/app.conf
server {
listen 80;
server_name app.yourdomain.com;
# Let's Encrypt renewal:
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name app.yourdomain.com;
ssl_certificate /etc/letsencrypt/live/app.yourdomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/app.yourdomain.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
location / {
proxy_pass http://localhost:8080; # from inside a container, use the upstream's service name instead of localhost
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Get certificates with Certbot:
certbot certonly --webroot -w /var/www/certbot \
-d app.yourdomain.com --email you@example.com --agree-tos
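Before relying on the renewal loop, confirm renewal will actually succeed; `--dry-run` exercises the full flow against the Let's Encrypt staging environment without issuing real certificates (the `--entrypoint` override bypasses the renewal loop defined in the compose file above):

```shell
docker compose run --rm --entrypoint certbot certbot renew --dry-run
```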
Decision Guide
Choose Caddy if:
- You're new to reverse proxies
- You want the fastest path to HTTPS — minimal config
- You run a small number of services (under ~20)
- You prefer editing a config file to managing labels
- Your sites are mostly static or simple reverse proxies
Choose Traefik if:
- You run many Docker services and want zero-touch routing
- Services are added/removed frequently
- You want a dashboard showing all routes
- You're already in a Docker Compose-heavy workflow
- You need advanced middleware (rate limiting, auth, headers) per-service
Choose Nginx if:
- You need maximum flexibility and fine-grained control
- You have complex routing rules (regex, geo-IP, etc.)
- You serve static files at high scale
- You need the nginx ecosystem (ModSecurity WAF, GeoIP2, etc.)
- You're familiar with Nginx and don't want to learn a new tool
See all open source infrastructure tools at OSSAlt.com/categories/infrastructure.
Choosing a Deployment Platform
Before selecting a self-hosting stack, decide whether you want to manage Docker Compose files manually or use a platform that abstracts deployment, SSL, and domain management.
Manual Docker Compose gives you maximum control. You manage nginx or Traefik configuration, Let's Encrypt certificate renewal, and compose file versions yourself. This is the right approach if you want to understand every layer of your infrastructure or have highly custom requirements.
Managed PaaS platforms like Coolify or Dokploy deploy Docker Compose applications with SSL, custom domains, and rolling deployments through a web UI. You lose some control but gain significant operational simplicity — especially for multi-service deployments where managing compose files across servers becomes complex.
Server sizing: Self-hosted services have widely varying resource requirements. Most lightweight services (Uptime Kuma, AdGuard Home, Vaultwarden) run comfortably on a $5-6/month VPS with 1GB RAM. Medium services (Nextcloud, Gitea, n8n) need 2-4GB RAM. AI services with local model inference need 16-32GB RAM and ideally a GPU.
Networking and DNS: Point your domain to your server's public IP before deploying. Use Cloudflare as your DNS provider — it provides DDoS protection, free SSL termination at the edge, and the ability to hide your server's real IP. Enable Cloudflare's proxy mode for public-facing services; disable it for services that need direct TCP connections (like game servers or custom protocols).
Monitoring your stack: Use Uptime Kuma to monitor all services from a single dashboard with alerting to your preferred notification channel.
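For example, Uptime Kuma drops into the Traefik setup above with the usual label pattern (image name, data path, and port 3001 come from the Uptime Kuma project; the domain is a placeholder):

```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    volumes:
      - uptime_kuma_data:/app/data
    networks:
      - proxy
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.uptime.rule=Host(`status.yourdomain.com`)"
      - "traefik.http.routers.uptime.tls.certresolver=letsencrypt"
      - "traefik.http.services.uptime.loadbalancer.server.port=3001"
volumes:
  uptime_kuma_data:
networks:
  proxy:
    external: true
```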
Related Self-Hosting Guides
Reverse proxies work best when combined with authentication middleware. Authelia adds 2FA and SSO in front of any application you proxy. For automated deployment of new services behind your reverse proxy, Coolify handles SSL certificate provisioning and proxy configuration automatically.
Network Security and Hardening
Self-hosted services exposed to the internet require baseline hardening. The default Docker networking model exposes container ports directly — without additional configuration, any open port is accessible from anywhere.
Firewall configuration: Use ufw (Uncomplicated Firewall) on Ubuntu/Debian or firewalld on RHEL-based systems. Allow only ports 22 (SSH), 80 (HTTP redirect), and 443 (HTTPS), and block all other inbound ports. Note that Docker writes its own iptables rules, so published container ports bypass ufw by default; install the ufw-docker package or configure Docker's iptables integration so containers cannot expose ports your firewall never allowed.
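A baseline ufw policy matching those rules (a sketch for Ubuntu/Debian; adjust the SSH port if you have changed it from the default):

```shell
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp    # SSH
sudo ufw allow 80/tcp    # HTTP (redirects to HTTPS)
sudo ufw allow 443/tcp   # HTTPS
sudo ufw allow 443/udp   # HTTP/3 (QUIC), if your proxy serves it
sudo ufw enable
```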
SSH hardening: Disable password authentication and root login in /etc/ssh/sshd_config. Use key-based authentication only. Consider changing the default SSH port (22) to a non-standard port to reduce brute-force noise in your logs.
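The key sshd_config lines for this (validate with `sudo sshd -t`, then restart the ssh service):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
```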
Fail2ban: Install fail2ban to automatically ban IPs that make repeated failed authentication attempts. Configure jails for SSH, Nginx, and any application-level authentication endpoints.
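A minimal jail.local sketch (the `[sshd]` jail ships with fail2ban; the thresholds here are a starting point, not a recommendation):

```ini
# /etc/fail2ban/jail.local
[DEFAULT]
bantime  = 1h
findtime = 10m
maxretry = 5

[sshd]
enabled = true
```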
TLS/SSL: Use Let's Encrypt certificates via Certbot or Traefik's automatic ACME integration. Never expose services over HTTP in production. Configure HSTS headers to prevent protocol downgrade attacks. Check your SSL configuration with SSL Labs' server test — aim for an A or A+ rating.
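In Nginx, HSTS is a single header in the HTTPS server block (the `always` flag makes nginx send it on error responses too; start with a shorter max-age until you are confident in your TLS setup):

```nginx
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```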
Container isolation: Avoid running containers as root. Add user: "1000:1000" to your docker-compose.yml service definitions where the application supports non-root execution. Use read-only volumes (volumes: - /host/path:/container/path:ro) for configuration files the container only needs to read.
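Put together, a hardened service definition might look like this (the image name is a placeholder; not every application tolerates a non-root user or a read-only root filesystem, so check the app's documentation before applying each option):

```yaml
services:
  myapp:
    image: myapp:1.2.3
    user: "1000:1000"        # run as a non-root UID/GID
    read_only: true          # read-only root filesystem
    cap_drop: [ALL]          # drop all Linux capabilities
    security_opt:
      - no-new-privileges:true
    volumes:
      - ./config.yml:/app/config.yml:ro   # read-only config mount
```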
Secrets management: Never put passwords and API keys directly in docker-compose.yml files committed to version control. Use Docker secrets, environment files (.env), or a secrets manager like Vault for sensitive configuration. Add .env to your .gitignore before your first commit.
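A minimal env-file pattern (the image and variable names are illustrative):

```yaml
# docker-compose.yml — values come from .env, which stays out of git:
services:
  myapp:
    image: myapp:1.2.3
    env_file:
      - .env
# .env (listed in .gitignore):
#   DB_PASSWORD=change-me
#   API_KEY=change-me
```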
Production Deployment Checklist
Before treating any self-hosted service as production-ready, work through this checklist. Each item represents a class of failure that will eventually affect your service if left unaddressed.
Infrastructure
- Server OS is running latest security patches (`apt upgrade` / `dnf upgrade`)
- Firewall configured: only ports 22, 80, 443 open
- SSH key-only authentication (password auth disabled)
- Docker and Docker Compose are current stable versions
- Swap space configured (at minimum equal to RAM for <4GB servers)
Application
- Docker image version pinned (not `latest`) in docker-compose.yml
- Data directories backed by named volumes (not bind mounts to ephemeral paths)
- Environment variables stored in `.env` file (not hardcoded in compose)
- Container restart policy set to `unless-stopped` or `always`
- Health check configured in Compose or Dockerfile
Networking
- SSL certificate issued and auto-renewal configured
- HTTP requests redirect to HTTPS
- Domain points to server IP (verify with `dig +short your.domain`)
- Reverse proxy (Nginx/Traefik) handles SSL termination
Monitoring and Backup
- Uptime monitoring configured with alerting
- Automated daily backup of Docker volumes to remote storage
- Backup tested with a successful restore drill
- Log retention configured (no unbounded log accumulation)
Access Control
- Default admin credentials changed
- Email confirmation configured if the app supports it
- User registration disabled if the service is private
- Authentication middleware added if the service lacks native login
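The "automated daily backup" item above can be sketched as a cron-driven script: archive the data directory with a date stamp, prune archives older than 14 days, then sync the backup directory to remote storage (rclone, restic, etc.). Paths and retention here are illustrative:

```shell
# Nightly backup sketch: date-stamped tarball plus 14-day retention.
DATA_DIR="${DATA_DIR:-./data}"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
mkdir -p "$DATA_DIR" "$BACKUP_DIR"
tar -czf "$BACKUP_DIR/app-$(date +%F).tar.gz" -C "$DATA_DIR" .
find "$BACKUP_DIR" -name 'app-*.tar.gz' -mtime +14 -delete
```

Remember that a backup only counts once you have restored from it; schedule a restore drill, not just the backup job.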
Conclusion and Getting Started
The self-hosting ecosystem has matured dramatically. What required significant Linux expertise in 2015 is now achievable for any developer comfortable with Docker Compose and a basic understanding of DNS. The tools have gotten better, the documentation has improved, and the community has built enough tutorials that most common configurations have been solved publicly.
The operational overhead that remains is real but manageable. A stable self-hosted service — one that is properly monitored, backed up, and kept updated — requires roughly 30-60 minutes of attention per month once the initial deployment is complete. That time investment is justified for services where data ownership, cost savings, or customization requirements make the cloud alternative unsuitable.
Start with one service. Trying to migrate your entire stack to self-hosted infrastructure at once is a recipe for an overwhelming weekend project that doesn't get finished. Pick the service where the cloud alternative is most expensive or where data ownership matters most, run it for 30 days, and then evaluate whether to expand.
Build your operational foundation before adding services. Get monitoring, backup, and SSL configured correctly for your first service before adding a second. These cross-cutting concerns become easier to extend to new services once the pattern is established, and much harder to retrofit to a fleet of services that were deployed without them.
Treat this like a product. Your self-hosted services have users (even if that's just you). Write a runbook. Document the restore procedure. Create a status page. These practices don't take long but they transform self-hosting from a series of experiments into reliable infrastructure you can depend on.
The community around self-hosted software is active and helpful. Reddit's r/selfhosted, the Awesome-Selfhosted GitHub list, and Discord servers for specific applications all have people who have already solved the problem you're encountering. The configuration questions that feel unique usually aren't.
All three reverse proxies handle the core use case — SSL termination, domain routing, and Docker integration — reliably. The choice between them is a tooling preference decision rather than a correctness decision. Caddy's automatic HTTPS and minimal configuration make it the fastest path to a working reverse proxy; Traefik's Docker provider and dashboard make it the best fit for dynamic container environments; Nginx's ubiquity makes it the best choice when configuration resources and community documentation matter. Run whichever one you already know, or pick Caddy if you're starting fresh.