Docker Swarm vs Kubernetes (2026)
Docker Swarm vs Kubernetes for self-hosted workloads in 2026. Complexity, resource usage, rolling deploys, and when each orchestrator is the right choice.
TL;DR
For most self-hosters and small teams: Docker Swarm. It's built into Docker, takes 10 minutes to set up, handles rolling deploys and secrets, and requires zero additional tooling. Kubernetes is the right choice when you need advanced autoscaling, complex networking, a large ecosystem of operators, or when you're building a platform for other developers. The self-hosting community increasingly uses K3s (lightweight Kubernetes) as a middle ground — the full Kubernetes API at a fraction of the usual resource overhead.
Key Takeaways
- Docker Swarm: Built into Docker, simple, zero learning curve for Docker Compose users
- Kubernetes (k8s): Most powerful, vast ecosystem, required for cloud-native at scale
- K3s: Lightweight Kubernetes by Rancher — full K8s API, ~512MB RAM, perfect for self-hosting
- K3s adoption: Increasingly the default for homelab and small VPS Kubernetes deployments
- Swarm for: <10 nodes, simple rolling deploys, Docker Compose migration, low ops overhead
- K8s/K3s for: Complex microservices, autoscaling, Helm chart ecosystem, team platform
The Complexity Reality Check
Before comparing features, the honest overhead assessment:
| Task | Docker Swarm | Kubernetes (k3s) | Kubernetes (full) |
|---|---|---|---|
| Initial setup | 10 minutes | 30 minutes | 2–8 hours |
| Mental model | Docker Compose + clustering | New paradigm | New paradigm |
| YAML verbosity | Low | High | Very high |
| Debugging | docker service logs | kubectl logs/describe/events | Same + cluster logs |
| Upgrades | Automatic with image update | Helm + ArgoCD or manual | Complex |
| On-call risk | Low | Medium | High |
| Required knowledge | Docker Compose | Pods/Deployments/Services/Ingress/PVC | + cluster ops |
If your team hasn't used Kubernetes before, the operational burden is real. Kubernetes is powerful but unforgiving — misconfigured RBAC, broken networking, or resource limits crashing pods all require K8s-specific debugging skills.
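When a pod does misbehave, the triage loop is a handful of standard kubectl commands. A sketch (the pod name, namespace, and service account are illustrative):

```shell
# Find pods that are not running, across all namespaces:
kubectl get pods -A --field-selector=status.phase!=Running

# The Events section surfaces OOMKilled, FailedScheduling, failed probes,
# and image pull errors:
kubectl describe pod web-app-6f7d9c-abcde

# Logs from the previous (crashed) container instance:
kubectl logs web-app-6f7d9c-abcde --previous

# Check whether a service account is allowed to do what the app attempts
# (useful when debugging RBAC):
kubectl auth can-i create deployments --as=system:serviceaccount:myapp:default
```

None of these commands exist in Swarm's vocabulary, which is the point: budget time for the team to internalize them before the first incident.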
Docker Swarm: Production-Ready Simplicity
Docker Swarm is built into Docker Engine — no additional installation required. It's a direct evolution of Docker Compose for multi-node deployments.
Set Up a Swarm (3 commands)
# On the manager node:
docker swarm init --advertise-addr YOUR_IP
# Output:
# Swarm initialized: current node is now a manager.
# To add a worker to this swarm, run the following command:
# docker swarm join --token SWMTKN-1-... <manager-ip>:2377
# On worker nodes — paste the join command:
docker swarm join --token SWMTKN-1-... <manager-ip>:2377
# Verify cluster:
docker node ls
# ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
# abc123 * node1 Ready Active Leader
# def456 node2 Ready Active
# ghi789 node3 Ready Active
Deploy a Stack (Compose → Swarm)
Swarm uses Docker Compose files with a deploy section:
# docker-compose.yml
version: '3.8'
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      replicas: 3                  # Run 3 containers across the cluster
      update_config:
        parallelism: 1             # Update 1 replica at a time
        delay: 10s                 # Wait 10s between updates
        failure_action: rollback
        order: start-first         # Start new before stopping old (zero downtime)
      restart_policy:
        condition: on-failure
        max_attempts: 3
      resources:
        limits:
          cpus: '0.5'
          memory: 128M
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
    volumes:
      - db_data:/var/lib/postgresql/data
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.role == database   # Pin to specific node
secrets:
  db_password:
    external: true   # Created via: echo "password" | docker secret create db_password -
volumes:
  db_data:
# Deploy the stack:
docker stack deploy -c docker-compose.yml myapp
# List services:
docker service ls
# Check rolling update status:
docker service ps myapp_web
# Scale a service:
docker service scale myapp_web=5
# Rolling update (update image):
docker service update --image nginx:1.25 myapp_web
# Remove stack:
docker stack rm myapp
Swarm Secrets
# Create encrypted secret:
echo "my-db-password" | docker secret create db_password -
# Reference in compose:
secrets:
  - db_password
# Accessible in container at: /run/secrets/db_password
Secrets are encrypted at rest and only sent to nodes that run services that need them.
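Swarm secrets are also immutable, so rotation means creating a versioned secret and repointing the service at it. A sketch, reusing the service and secret names from the stack above:

```shell
# Create the replacement secret under a new name:
echo "new-db-password" | docker secret create db_password_v2 -

# Swap it in; target= keeps the in-container path stable:
docker service update \
  --secret-rm db_password \
  --secret-add source=db_password_v2,target=db_password \
  myapp_db

# The container still reads /run/secrets/db_password; only the backing
# secret changed. Remove the old secret once nothing references it:
docker secret rm db_password
```

The update triggers a rolling restart of the service's replicas, so rotation follows the same zero-downtime path as an image update.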
Swarm Networking
# Create overlay network (spans all nodes):
docker network create --driver overlay myapp-network
# Services on the same overlay network can reach each other by service name
# (Docker Swarm built-in DNS)
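As a hedged sketch, a stack attaching two services to that network might look like this (the api image is a placeholder). The api container reaches Postgres at db:5432 purely by service name:

```yaml
version: '3.8'
services:
  api:
    image: myorg/api:1.0          # illustrative image
    networks: [myapp-network]
  db:
    image: postgres:16-alpine
    networks: [myapp-network]
networks:
  myapp-network:
    external: true   # created with: docker network create --driver overlay myapp-network
```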
Kubernetes: The Enterprise Standard
Kubernetes is the dominant container orchestration platform and the de facto enterprise standard. The ecosystem is vast: thousands of Helm charts, operators for nearly every major database, and built-in horizontal pod autoscaling.
K3s: Lightweight Kubernetes for Self-Hosters
K3s, a CNCF sandbox project originally created by Rancher (now part of SUSE), is the recommended Kubernetes distribution for self-hosting. It ships as a single binary under 100MB, uses SQLite instead of etcd by default, and strips out legacy cloud-provider code.
# Install K3s on server (single node):
curl -sfL https://get.k3s.io | sh -
# Check cluster:
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# myserver Ready control-plane,master 1m v1.32.0+k3s1
# Join worker nodes:
# On server, get node token:
cat /var/lib/rancher/k3s/server/node-token
# On worker:
curl -sfL https://get.k3s.io | K3S_URL=https://SERVER_IP:6443 \
K3S_TOKEN=mynodetoken sh -
Deploy an Application on K3s/Kubernetes
# deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-service
spec:
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress
spec:
  ingressClassName: traefik   # K3s ships Traefik as its default ingress controller
  rules:
    - host: myapp.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app-service
                port:
                  number: 80
kubectl apply -f deployment.yml
# Check status:
kubectl get pods
kubectl get deployments
kubectl get services
# Rolling update:
kubectl set image deployment/web-app web=nginx:1.25
# Scale:
kubectl scale deployment web-app --replicas=5
# Logs:
kubectl logs -f deployment/web-app
# Describe (debug):
kubectl describe pod web-app-xxx
Helm: Package Manager for Kubernetes
Helm charts make deploying complex applications to Kubernetes as simple as:
# Install Helm:
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# Add a chart repository:
helm repo add bitnami https://charts.bitnami.com/bitnami
# Install PostgreSQL:
helm install my-postgres bitnami/postgresql \
--set auth.postgresPassword=mypassword \
--set primary.persistence.size=20Gi
# Install Grafana (add the grafana chart repo first):
helm repo add grafana https://grafana.github.io/helm-charts
helm install grafana grafana/grafana \
  --namespace monitoring --create-namespace \
  --set persistence.enabled=true
# Upgrade:
helm upgrade my-postgres bitnami/postgresql --set image.tag=16.2.0
# List releases:
helm list --all-namespaces
Horizontal Pod Autoscaler
Kubernetes can automatically scale pods based on CPU/memory:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
Resource Requirements
| Setup | Manager RAM | Worker RAM | Min Nodes |
|---|---|---|---|
| Docker Swarm (manager) | ~50MB overhead | ~10MB overhead | 1 |
| K3s single node | ~512MB | ~256MB per worker | 1 |
| K3s HA (3 servers) | ~512MB each | ~256MB each | 3+ |
| Full K8s (kubeadm) | 2GB+ | 1GB+ | 3+ |
| K3s + Rancher UI | 4GB+ | 256MB | 3+ |
For a single VPS (2GB RAM): K3s or Docker Swarm both work. Full Kubernetes needs 4GB+ minimum.
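On a constrained VPS you can trim K3s further at install time. --disable traefik and --disable servicelb are documented K3s server flags, useful when you bring your own ingress controller or load balancer:

```shell
# Install K3s without the bundled Traefik ingress and ServiceLB load balancer:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik --disable servicelb" sh -

# Verify the packaged components are absent:
kubectl -n kube-system get pods
```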
GitOps with Kubernetes (ArgoCD)
# argocd-app.yml — deploy your app declaratively:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.yourdomain.com/username/myapp.git
    targetRevision: main
    path: k8s/
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true      # Delete resources removed from Git
      selfHeal: true   # Revert manual cluster changes
Install ArgoCD on K3s:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
Git push → ArgoCD detects change → automatically syncs cluster state. This is the gold standard for production Kubernetes deployments.
Decision Guide
Choose Docker Swarm if:
→ You already use Docker Compose (minimal learning curve)
→ <10 nodes, <20 services
→ Simple rolling deploys are all you need
→ Small team, low ops budget
→ No need for Helm chart ecosystem
→ Your existing Compose files work with minor additions
Choose K3s (lightweight Kubernetes) if:
→ You want full Kubernetes API without massive resource overhead
→ Helm chart ecosystem matters (databases, monitoring, ingress controllers)
→ Planning to scale to cloud Kubernetes (AKS, EKS, GKE) later
→ Need HPA (horizontal pod autoscaling)
→ GitOps workflow with ArgoCD or Flux
→ Self-hosting on small VPS (2–4GB RAM)
Choose Full Kubernetes (kubeadm/RKE2) if:
→ Enterprise production workloads
→ Team already experienced with Kubernetes
→ Need advanced multi-cluster or cloud provider features
→ Running 10+ nodes with dedicated ops team
Avoid Full Kubernetes if:
→ Solo developer or team < 5 engineers
→ No existing Kubernetes experience
→ Budget < $100/month for infra
→ Simple web app without complex scaling needs
Migration Path: Swarm → K3s
If you start with Swarm and outgrow it:
# Kompose converts Docker Compose/Swarm files to Kubernetes manifests:
brew install kompose   # macOS
# On Linux, download the kompose binary from the project's GitHub releases page
kompose convert -f docker-compose.yml
# Generates: web-deployment.yaml, web-service.yaml, db-deployment.yaml, etc.
# Review and adjust, then:
kubectl apply -f .
See all open source container orchestration tools at OSSAlt.com/categories/containers.
Choosing a Deployment Platform
Before selecting a self-hosting stack, decide whether you want to manage Docker Compose files manually or use a platform that abstracts deployment, SSL, and domain management.
Manual Docker Compose gives you maximum control. You manage nginx or Traefik configuration, Let's Encrypt certificate renewal, and compose file versions yourself. This is the right approach if you want to understand every layer of your infrastructure or have highly custom requirements.
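For the manual route, a minimal Traefik setup with automatic Let's Encrypt certificates might look like this. It's a hedged sketch: the email address, hostname, resolver name (le), and the whoami demo backend are all placeholders to swap for your own services.

```yaml
services:
  traefik:
    image: traefik:v3.1
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # Traefik discovers containers via labels
      - letsencrypt:/letsencrypt                       # persists issued certificates
  whoami:
    image: traefik/whoami   # demo backend
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`myapp.yourdomain.com`)
      - traefik.http.routers.whoami.entrypoints=websecure
      - traefik.http.routers.whoami.tls.certresolver=le
volumes:
  letsencrypt:
```

Adding a new service then means attaching three or four labels rather than editing a central nginx config.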
Managed PaaS platforms like Coolify or Dokploy deploy Docker Compose applications with SSL, custom domains, and rolling deployments through a web UI. You lose some control but gain significant operational simplicity — especially for multi-service deployments where managing compose files across servers becomes complex.
Server sizing: Self-hosted services have widely varying resource requirements. Most lightweight services (Uptime Kuma, AdGuard Home, Vaultwarden) run comfortably on a $5-6/month VPS with 1GB RAM. Medium services (Nextcloud, Gitea, n8n) need 2-4GB RAM. AI services with local model inference need 16-32GB RAM and ideally a GPU.
Networking and DNS: Point your domain to your server's public IP before deploying. Use Cloudflare as your DNS provider — it provides DDoS protection, free SSL termination at the edge, and the ability to hide your server's real IP. Enable Cloudflare's proxy mode for public-facing services; disable it for services that need direct TCP connections (like game servers or custom protocols).
Monitoring your stack: Use Uptime Kuma to monitor all services from a single dashboard with alerting to your preferred notification channel.
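A minimal Uptime Kuma deployment, using the project's documented image and port (the volume name is illustrative):

```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    ports:
      - "3001:3001"              # web UI
    volumes:
      - uptime-kuma-data:/app/data   # monitors, history, and settings
volumes:
  uptime-kuma-data:
```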
Network Security and Hardening
Self-hosted services exposed to the internet require baseline hardening. The default Docker networking model exposes container ports directly — without additional configuration, any open port is accessible from anywhere.
Firewall configuration: Use ufw (Uncomplicated Firewall) on Ubuntu/Debian or firewalld on RHEL-based systems. Allow only ports 22 (SSH), 80 (HTTP redirect), and 443 (HTTPS), and block all other inbound traffic. Be aware that Docker writes its own iptables rules, so published container ports bypass ufw by default. Use the ufw-docker script or publish ports only on 127.0.0.1 behind a reverse proxy so your firewall rules stay authoritative.
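A baseline ufw policy for a single public server (run as root; ufw enable will warn about possibly disrupting the current SSH session):

```shell
ufw default deny incoming    # drop everything inbound by default
ufw default allow outgoing
ufw allow 22/tcp             # SSH
ufw allow 80/tcp             # HTTP (redirects to HTTPS)
ufw allow 443/tcp            # HTTPS
ufw enable
ufw status verbose           # confirm the active rule set
```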
SSH hardening: Disable password authentication and root login in /etc/ssh/sshd_config. Use key-based authentication only. Consider changing the default SSH port (22) to a non-standard port to reduce brute-force noise in your logs.
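A hedged sketch of the sshd change, validating the config before reloading. Back up sshd_config and keep an existing SSH session open until you have confirmed key-based login works:

```shell
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
sudo sed -i \
  -e 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' \
  -e 's/^#\?PermitRootLogin .*/PermitRootLogin no/' \
  /etc/ssh/sshd_config
sudo sshd -t && sudo systemctl reload ssh   # sshd -t validates the config first
```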
Fail2ban: Install fail2ban to automatically ban IPs that make repeated failed authentication attempts. Configure jails for SSH, Nginx, and any application-level authentication endpoints.
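An illustrative SSH jail, written to /tmp for review before copying into place (the thresholds are reasonable starting points, not canonical values):

```shell
cat > /tmp/jail.local <<'EOF'
[sshd]
enabled  = true
maxretry = 5     # failed attempts before a ban
findtime = 10m   # window in which attempts are counted
bantime  = 1h    # how long the IP stays banned
EOF
cat /tmp/jail.local
# Review, then:
#   sudo cp /tmp/jail.local /etc/fail2ban/jail.local && sudo systemctl restart fail2ban
```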
TLS/SSL: Use Let's Encrypt certificates via Certbot or Traefik's automatic ACME integration. Never expose services over HTTP in production. Configure HSTS headers to prevent protocol downgrade attacks. Check your SSL configuration with SSL Labs' server test — aim for an A or A+ rating.
Container isolation: Avoid running containers as root. Add user: "1000:1000" to your docker-compose.yml service definitions where the application supports non-root execution. Use read-only volumes (volumes: - /host/path:/container/path:ro) for configuration files the container only needs to read.
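A compose fragment sketching both techniques (the image name and paths are illustrative; read_only additionally makes the root filesystem immutable where the app tolerates it):

```yaml
services:
  app:
    image: myorg/app:1.0
    user: "1000:1000"          # non-root UID:GID
    read_only: true            # optional: read-only root filesystem
    volumes:
      - ./config/app.yml:/etc/app/app.yml:ro   # config mounted read-only
      - app-data:/var/lib/app                  # writable data volume
volumes:
  app-data:
```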
Secrets management: Never put passwords and API keys directly in docker-compose.yml files committed to version control. Use Docker secrets, environment files (.env), or a secrets manager like Vault for sensitive configuration. Add .env to your .gitignore before your first commit.
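A sketch of the .env pattern with a .gitignore guard, using a throwaway directory (/tmp/myapp) so nothing here touches a real project:

```shell
mkdir -p /tmp/myapp && cd /tmp/myapp

# Compose reads variables from .env automatically:
cat > .env <<'EOF'
POSTGRES_PASSWORD=change-me
API_KEY=change-me
EOF

# Add .env to .gitignore before the first commit (idempotent):
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
cat .gitignore
```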
Production Deployment Checklist
Before treating any self-hosted service as production-ready, work through this checklist. Each item represents a class of failure that will eventually affect your service if left unaddressed.
Infrastructure
- Server OS is running latest security patches (apt upgrade / dnf upgrade)
- Firewall configured: only ports 22, 80, 443 open
- SSH key-only authentication (password auth disabled)
- Docker and Docker Compose are current stable versions
- Swap space configured (at minimum equal to RAM for <4GB servers)
Application
- Docker image version pinned (not latest) in docker-compose.yml
- Data directories backed by named volumes (not bind mounts to ephemeral paths)
- Environment variables stored in a .env file (not hardcoded in compose)
- Container restart policy set to unless-stopped or always
- Health check configured in Compose or Dockerfile
Networking
- SSL certificate issued and auto-renewal configured
- HTTP requests redirect to HTTPS
- Domain points to server IP (verify with dig +short your.domain)
- Reverse proxy (Nginx/Traefik) handles SSL termination
Monitoring and Backup
- Uptime monitoring configured with alerting
- Automated daily backup of Docker volumes to remote storage
- Backup tested with a successful restore drill
- Log retention configured (no unbounded log accumulation)
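A hedged sketch of the volume-backup item, archiving a named Docker volume (here myapp_db_data, the name Swarm gives the db_data volume from the myapp stack earlier) via a throwaway Alpine container:

```shell
mkdir -p ~/backups

# Mount the volume read-only and tar it to the host:
docker run --rm \
  -v myapp_db_data:/data:ro \
  -v "$HOME/backups:/backup" \
  alpine tar czf "/backup/db_data-$(date +%F).tar.gz" -C /data .

# For databases, stop the service first or use pg_dump instead, since tarring
# a live data directory can produce an inconsistent snapshot.
# Then ship the archive offsite, e.g.:
#   rclone copy ~/backups remote:myapp-backups
```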
Access Control
- Default admin credentials changed
- Email confirmation configured if the app supports it
- User registration disabled if the service is private
- Authentication middleware added if the service lacks native login
Conclusion and Getting Started
The self-hosting ecosystem has matured dramatically. What required significant Linux expertise in 2015 is now achievable for any developer comfortable with Docker Compose and a basic understanding of DNS. The tools have gotten better, the documentation has improved, and the community has built enough tutorials that most common configurations have been solved publicly.
The operational overhead that remains is real but manageable. A stable self-hosted service — one that is properly monitored, backed up, and kept updated — requires roughly 30-60 minutes of attention per month once the initial deployment is complete. That time investment is justified for services where data ownership, cost savings, or customization requirements make the cloud alternative unsuitable.
Start with one service. Trying to migrate your entire stack to self-hosted infrastructure at once is a recipe for an overwhelming weekend project that doesn't get finished. Pick the service where the cloud alternative is most expensive or where data ownership matters most, run it for 30 days, and then evaluate whether to expand.
Build your operational foundation before adding services. Get monitoring, backup, and SSL configured correctly for your first service before adding a second. These cross-cutting concerns become easier to extend to new services once the pattern is established, and much harder to retrofit to a fleet of services that were deployed without them.
Treat this like a product. Your self-hosted services have users (even if that's just you). Write a runbook. Document the restore procedure. Create a status page. These practices don't take long but they transform self-hosting from a series of experiments into reliable infrastructure you can depend on.
The community around self-hosted software is active and helpful. Reddit's r/selfhosted, the Awesome-Selfhosted GitHub list, and Discord servers for specific applications all have people who have already solved the problem you're encountering. The configuration questions that feel unique usually aren't.
For most self-hosting use cases, Docker Swarm's simplicity makes it the right choice over Kubernetes. Kubernetes' operational overhead — etcd management, control plane maintenance, and complex networking primitives — is justified at scale but is excessive for single-organization deployments running 10-50 services. Swarm's rolling updates, service replication, and overlay networking cover the majority of production requirements with configuration that can be understood and maintained by a single operator. Choose Kubernetes when you have a dedicated infrastructure team, need advanced scheduling policies (GPU affinity, pod topology spread), or are operating at a scale where Swarm's limitations become real constraints rather than theoretical ones.