Docker Swarm vs Kubernetes: Which Orchestrator for Self-Hosters in 2026?

OSSAlt Team
docker-swarm · kubernetes · k8s · containers · orchestration · self-hosting · devops · 2026

TL;DR

For most self-hosters and small teams: Docker Swarm. It's built into Docker, takes 10 minutes to set up, handles rolling deploys and secrets, and requires zero additional tooling. Kubernetes is the right choice when you need advanced autoscaling, complex networking, a large ecosystem of operators, or when you're building a platform for other developers. The self-hosting community increasingly uses K3s (lightweight Kubernetes) as a middle ground — the full Kubernetes API at a fraction of the resource overhead.

Key Takeaways

  • Docker Swarm: Built into Docker, simple, zero learning curve for Docker Compose users
  • Kubernetes (k8s): Most powerful, vast ecosystem, required for cloud-native at scale
  • K3s: Lightweight Kubernetes by Rancher — full K8s API, ~512MB RAM, perfect for self-hosting
  • K3s adoption: Increasingly the default for homelab and small VPS Kubernetes deployments
  • Swarm for: <10 nodes, simple rolling deploys, Docker Compose migration, low ops overhead
  • K8s/K3s for: Complex microservices, autoscaling, Helm chart ecosystem, team platform

The Complexity Reality Check

Before comparing features, the honest overhead assessment:

| Task | Docker Swarm | Kubernetes (K3s) | Kubernetes (full) |
|------|--------------|------------------|-------------------|
| Initial setup | 10 minutes | 30 minutes | 2–8 hours |
| Mental model | Docker Compose + clustering | New paradigm | New paradigm |
| YAML verbosity | Low | High | Very high |
| Debugging | docker service logs | kubectl logs/describe/events | Same + cluster logs |
| Upgrades | Automatic with image update | Helm + ArgoCD or manual | Complex |
| On-call risk | Low | Medium | High |
| Required knowledge | Docker Compose | Pods/Deployments/Services/Ingress/PVC | + cluster ops |

If your team hasn't used Kubernetes before, the operational burden is real. Kubernetes is powerful but unforgiving: misconfigured RBAC, broken cluster networking, or pods killed by their resource limits all require Kubernetes-specific debugging skills.


Docker Swarm: Production-Ready Simplicity

Docker Swarm is built into Docker Engine — no additional installation required. It's a direct evolution of Docker Compose for multi-node deployments.

Set Up a Swarm (3 commands)

# On the manager node:
docker swarm init --advertise-addr YOUR_IP

# Output:
# Swarm initialized: current node is now a manager.
# To add a worker to this swarm, run the following command:
#   docker swarm join --token SWMTKN-1-... <manager-ip>:2377

# On worker nodes — paste the join command:
docker swarm join --token SWMTKN-1-... <manager-ip>:2377

# Verify cluster:
docker node ls
# ID          HOSTNAME    STATUS    AVAILABILITY  MANAGER STATUS
# abc123 *    node1       Ready     Active        Leader
# def456      node2       Ready     Active
# ghi789      node3       Ready     Active

Deploy a Stack (Compose → Swarm)

Swarm uses Docker Compose files with a deploy section:

# docker-compose.yml
version: '3.8'

services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      replicas: 3              # Run 3 containers across the cluster
      update_config:
        parallelism: 1         # Update 1 replica at a time
        delay: 10s             # Wait 10s between updates
        failure_action: rollback
        order: start-first     # Start new before stopping old (zero downtime)
      restart_policy:
        condition: on-failure
        max_attempts: 3
      resources:
        limits:
          cpus: '0.5'
          memory: 128M

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
    volumes:
      - db_data:/var/lib/postgresql/data
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.role == database   # Pin to specific node

secrets:
  db_password:
    external: true   # Created via: echo "password" | docker secret create db_password -

volumes:
  db_data:

# Deploy the stack:
docker stack deploy -c docker-compose.yml myapp

# List services:
docker service ls

# Check rolling update status:
docker service ps myapp_web

# Scale a service:
docker service scale myapp_web=5

# Rolling update (update image):
docker service update --image nginx:1.25 myapp_web

# Remove stack:
docker stack rm myapp
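The db service above pins itself to a node labeled role == database, but that label has to be set first. A sketch with docker node update (node2 is a placeholder hostname from the earlier docker node ls output):

```shell
# Label the node that should host the database ("node2" is a placeholder):
docker node update --label-add role=database node2

# Verify the label was applied:
docker node inspect node2 --format '{{ .Spec.Labels }}'

# Re-deploy the stack so the placement constraint takes effect:
docker stack deploy -c docker-compose.yml myapp
```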

Swarm Secrets

# Create encrypted secret:
echo "my-db-password" | docker secret create db_password -

# Reference in compose:
secrets:
  - db_password
# Accessible in container at: /run/secrets/db_password

Secrets are encrypted at rest and only sent to nodes that run services that need them.
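One caveat: Swarm secrets are immutable, so rotating one means creating a new secret under a new name and re-pointing the service at it. A sketch, reusing the stack and secret names from the example above:

```shell
# Secrets can't be updated in place — create the new version under a new name:
echo "new-db-password" | docker secret create db_password_v2 -

# Swap the service over: remove the old secret, mount the new one
# at the same in-container path (/run/secrets/db_password):
docker service update \
  --secret-rm db_password \
  --secret-add source=db_password_v2,target=db_password \
  myapp_db

# Once no service references it, delete the old secret:
docker secret rm db_password
```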

Swarm Networking

# Create overlay network (spans all nodes):
docker network create --driver overlay myapp-network

# Services on the same overlay network can reach each other by service name
# (Docker Swarm built-in DNS)
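If the overlay is created with --attachable, one-off containers can join it, which is handy for verifying DNS-based discovery (the service name web follows the earlier stack example):

```shell
# An attachable overlay lets standalone containers join it for debugging:
docker network create --driver overlay --attachable myapp-network

# Resolve a service name from a throwaway container on that network:
docker run --rm --network myapp-network alpine nslookup web   # or the stack-prefixed name, e.g. myapp_web
```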

Kubernetes: The Enterprise Standard

Kubernetes is the dominant container orchestration platform; CNCF surveys consistently report that the large majority of organizations running containers in production run them on Kubernetes. The ecosystem is unmatched: thousands of Helm charts, operators for virtually every database, and built-in horizontal pod autoscaling.

K3s: Lightweight Kubernetes for Self-Hosters

K3s, created by Rancher and now a CNCF sandbox project, is the recommended Kubernetes distribution for self-hosting. It ships as a single binary (<100MB), uses SQLite instead of etcd by default, and strips out cloud-provider-specific code.

# Install K3s on server (single node):
curl -sfL https://get.k3s.io | sh -

# Check cluster:
kubectl get nodes
# NAME       STATUS   ROLES                  AGE   VERSION
# myserver   Ready    control-plane,master   1m    v1.32.0+k3s1

# Join worker nodes:
# On server, get node token:
cat /var/lib/rancher/k3s/server/node-token

# On worker:
curl -sfL https://get.k3s.io | K3S_URL=https://SERVER_IP:6443 \
  K3S_TOKEN=mynodetoken sh -
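K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml, which is root-readable only. To run kubectl from a workstation, copy the file and rewrite the server address; a sketch where SERVER_IP, user, and paths are placeholders:

```shell
# On the server: the kubeconfig is root-only, so stage a readable copy
sudo cat /etc/rancher/k3s/k3s.yaml > /tmp/k3s.yaml

# On your workstation: fetch it and point it at the server's real address
scp user@SERVER_IP:/tmp/k3s.yaml ~/.kube/k3s.yaml
sed -i 's/127.0.0.1/SERVER_IP/' ~/.kube/k3s.yaml
export KUBECONFIG=~/.kube/k3s.yaml
kubectl get nodes
```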

Deploy an Application on K3s/Kubernetes

# deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-service
spec:
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress
spec:
  ingressClassName: traefik   # K3s ships Traefik as its default ingress controller
  rules:
  - host: myapp.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app-service
            port:
              number: 80

# Apply the manifests:
kubectl apply -f deployment.yml

# Check status:
kubectl get pods
kubectl get deployments
kubectl get services

# Rolling update:
kubectl set image deployment/web-app web=nginx:1.25

# Scale:
kubectl scale deployment web-app --replicas=5

# Logs:
kubectl logs -f deployment/web-app

# Describe (debug):
kubectl describe pod web-app-xxx
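For quick local testing without setting up an Ingress, kubectl port-forward tunnels a local port to the Service defined above:

```shell
# Forward localhost:8080 to port 80 of the ClusterIP service:
kubectl port-forward service/web-app-service 8080:80

# In another terminal:
curl http://localhost:8080/
```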

Helm: Package Manager for Kubernetes

Helm charts make deploying complex applications to Kubernetes as simple as:

# Install Helm:
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Add a chart repository:
helm repo add bitnami https://charts.bitnami.com/bitnami

# Install PostgreSQL:
helm install my-postgres bitnami/postgresql \
  --set auth.postgresPassword=mypassword \
  --set primary.persistence.size=20Gi

# Install Grafana (the Grafana chart lives in its own repo):
helm repo add grafana https://grafana.github.io/helm-charts
helm install grafana grafana/grafana \
  --namespace monitoring --create-namespace \
  --set persistence.enabled=true

# Upgrade:
helm upgrade my-postgres bitnami/postgresql --set image.tag=16.2.0

# List releases:
helm list --all-namespaces
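For reproducible installs it helps to inspect a chart's defaults and pin the chart version instead of passing ad-hoc --set flags. A sketch using the bitnami chart from above (CHART_VERSION is a placeholder for a version you've tested):

```shell
# Dump the chart's default values into a file you can edit and commit:
helm show values bitnami/postgresql > postgres-values.yml

# Install (or upgrade in place) with a pinned chart version and your values file:
helm upgrade --install my-postgres bitnami/postgresql \
  --version CHART_VERSION \
  -f postgres-values.yml
```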

Horizontal Pod Autoscaler

Kubernetes can automatically scale pods based on CPU/memory:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
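The HPA reads utilization from the metrics API, so a metrics-server must be running; K3s bundles one by default. Assuming the manifest above is saved as hpa.yml (the filename is arbitrary):

```shell
kubectl apply -f hpa.yml

# Confirm metrics are flowing (requires metrics-server; bundled with K3s):
kubectl top pods

# Watch the autoscaler's current utilization and replica count:
kubectl get hpa web-app-hpa --watch
```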

Resource Requirements

| Setup | Manager RAM | Worker RAM | Min nodes |
|-------|-------------|------------|-----------|
| Docker Swarm (manager) | ~50MB overhead | ~10MB overhead | 1 |
| K3s single node | ~512MB | ~256MB per worker | 1 |
| K3s HA (3 servers) | ~512MB each | ~256MB each | 3+ |
| Full K8s (kubeadm) | 2GB+ | 1GB+ | 3+ |
| K3s + Rancher UI | 4GB+ | ~256MB | 3+ |

For a single VPS (2GB RAM): K3s or Docker Swarm both work. Full Kubernetes needs 4GB+ minimum.


GitOps with Kubernetes (ArgoCD)

# argocd-app.yml — deploy your app declaratively:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.yourdomain.com/username/myapp.git
    targetRevision: main
    path: k8s/
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true      # Delete resources removed from Git
      selfHeal: true   # Revert manual cluster changes

Install ArgoCD on K3s:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Git push → ArgoCD detects change → automatically syncs cluster state. This is the gold standard for production Kubernetes deployments.
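After installation, the web UI's admin password sits in a Secret (argocd-initial-admin-secret is ArgoCD's documented default name):

```shell
# Fetch and decode the initial admin password:
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d; echo

# Reach the UI locally without an Ingress:
kubectl -n argocd port-forward svc/argocd-server 8080:443
```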


Decision Guide

Choose Docker Swarm if:
  → You already use Docker Compose (minimal learning curve)
  → <10 nodes, <20 services
  → Simple rolling deploys are all you need
  → Small team, low ops budget
  → No need for Helm chart ecosystem
  → Your existing Compose files work with minor additions

Choose K3s (lightweight Kubernetes) if:
  → You want full Kubernetes API without massive resource overhead
  → Helm chart ecosystem matters (databases, monitoring, ingress controllers)
  → Planning to scale to cloud Kubernetes (AKS, EKS, GKE) later
  → Need HPA (horizontal pod autoscaling)
  → GitOps workflow with ArgoCD or Flux
  → Self-hosting on small VPS (2–4GB RAM)

Choose Full Kubernetes (kubeadm/RKE2) if:
  → Enterprise production workloads
  → Team already experienced with Kubernetes
  → Need advanced multi-cluster or cloud provider features
  → Running 10+ nodes with dedicated ops team

Avoid Full Kubernetes if:
  → Solo developer or team < 5 engineers
  → No existing Kubernetes experience
  → Budget < $100/month for infra
  → Simple web app without complex scaling needs

Migration Path: Swarm → K3s

If you start with Swarm and outgrow it:

# Kompose converts Docker Compose/Swarm files to Kubernetes manifests:
brew install kompose   # macOS
# Linux: download the binary from kompose's GitHub releases page (not in most distro repos)

kompose convert -f docker-compose.yml

# Generates: web-deployment.yaml, web-service.yaml, db-deployment.yaml, etc.
# Review and adjust, then:
kubectl apply -f .
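Kompose output is a starting point, not a finished deployment; validating with a client-side dry run before touching the cluster is cheap (the deployment name web assumes the compose file above):

```shell
# Validate the generated manifests without sending them to the cluster:
kubectl apply --dry-run=client -f .

# After applying, watch the rollout complete:
kubectl rollout status deployment/web
```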

See all open source container orchestration tools at OSSAlt.com/categories/containers.
