
Open-source alternatives guide

Self-Host on Hetzner ARM Servers 2026

Hetzner's ARM CAX servers offer 30-50% better value than x86 for self-hosted workloads. This guide covers which apps run on ARM, server selection, initial setup, and ongoing operation.

OSSAlt Team

Why Hetzner ARM in 2026?

Hetzner's ARM-based CAX series servers (powered by Ampere Altra processors) offer the best price-to-performance ratio in European cloud hosting. Compared to equivalent x86 instances, ARM servers provide more RAM and cores at the same or lower cost.

Why ARM specifically works well for self-hosting:

  • Most self-hosted applications are I/O-bound, not compute-bound. ARM CPUs handle these workloads efficiently.
  • The Docker ecosystem has broad ARM64 (arm64/aarch64) image support in 2026 — the compatibility gap that existed in 2020-2022 has largely closed.
  • Ampere Altra processors have excellent performance-per-watt characteristics, which Hetzner passes on as lower prices.

The key constraint: Not every Docker image has an ARM64 variant. Before deploying on ARM, verify your specific applications support it.

Hetzner CAX Series Pricing

Server | vCPU       | RAM  | Storage   | Monthly
CAX11  | 2 (ARM64)  | 4GB  | 40GB SSD  | ~$4
CAX21  | 4 (ARM64)  | 8GB  | 80GB SSD  | ~$6
CAX31  | 8 (ARM64)  | 16GB | 160GB SSD | ~$12
CAX41  | 16 (ARM64) | 32GB | 320GB SSD | ~$24

Prices vary by region and may change — check hetzner.com for current pricing.

Comparison to equivalent x86 (CPX series):

Metric      | CAX21 (ARM) | CPX21 (x86) | Advantage
vCPU        | 4           | 3           | ARM: +1 core
RAM         | 8GB         | 4GB         | ARM: 2x more RAM
Monthly     | ~$6         | ~$6.50      | ARM: cheaper
Performance | ~Similar    | ~Similar    | ARM: better value

For self-hosting purposes, the CAX21 at ~$6/month with 4 cores and 8GB RAM is one of the best value propositions in cloud hosting.

What Runs on Hetzner ARM in 2026

Works Well (Multi-Architecture Docker Images)

The following commonly self-hosted applications have fully supported ARM64 Docker images:

Productivity & Communication:

  • Nextcloud
  • Docmost
  • Outline Wiki
  • Rallly
  • Cal.com (most features)

Infrastructure & DevOps:

  • Portainer
  • Dockge
  • Nginx Proxy Manager
  • Traefik
  • Caddy
  • Gitea
  • Forgejo
  • Uptime Kuma
  • Grafana + Prometheus

Data Management:

  • PostgreSQL
  • MySQL/MariaDB
  • MongoDB
  • Redis
  • MinIO (S3-compatible object storage)
  • PocketBase
  • NocoDB

AI and ML:

  • Ollama (native ARM support)
  • Open WebUI
  • Weaviate
  • Qdrant

Communication:

  • Matrix (Synapse, Dendrite)
  • Mattermost
  • Rocket.Chat

Automation:

  • n8n
  • Kestra

Monitoring:

  • Grafana
  • Prometheus
  • Netdata

Limited or No ARM Support

Some self-hosted tools have limited or no ARM64 support:

Check before deploying:

  • Tabby (AI code assistant): Suspended ARM support as of recent releases — verify current status
  • Some Immich ML container configurations: GPU ML acceleration ARM support varies
  • Older enterprise tools: Legacy applications often ship x86-only

Rule of thumb: If an image is on Docker Hub or ghcr.io with linux/arm64 listed under supported platforms, it runs on Hetzner ARM. Check the "Tags" page of any Docker image for platform support.

Step 1: Choose Your ARM Server

For Personal Use (1-5 services)

CAX11 (~$4/month): 2 vCPUs, 4GB RAM

  • Suitable for: Docmost, Rallly, Uptime Kuma, Vaultwarden, simple services
  • Not suitable for: Nextcloud (needs more RAM), AI workloads

For Small Team (5-20 services)

CAX21 (~$6/month): 4 vCPUs, 8GB RAM

  • Suitable for: Most productivity tools, Nextcloud with limited users, Gitea, n8n
  • Run 5-10 lightweight services concurrently

For Production / AI Workloads

CAX31 (~$12/month): 8 vCPUs, 16GB RAM

  • Suitable for: Ollama (7B models), Dify, multiple services, databases with real load

For Heavy Workloads

CAX41 (~$24/month): 16 vCPUs, 32GB RAM

  • Suitable for: Ollama (30B models), full self-hosted AI stack, many concurrent users

Step 2: Create and Configure Your Server

Create via Hetzner Cloud Console

  1. Log in to console.hetzner.cloud
  2. Servers → Add Server
  3. Select location: Nuremberg, Falkenstein, or Helsinki (EU)
  4. Select image: Ubuntu 24.04 (LTS)
  5. Select type: Shared vCPU → ARM64 (Ampere) → choose a CAX tier
  6. Add your SSH key (create one first if needed)
  7. Create server

Or via Hetzner CLI

# Install hcloud CLI
brew install hcloud  # macOS
# or download from github.com/hetznercloud/cli

# Create API token at console.hetzner.cloud
hcloud context create my-project

# Create SSH key
hcloud ssh-key create --name my-key --public-key-from-file ~/.ssh/id_ed25519.pub

# Create ARM server
hcloud server create \
  --name my-arm-server \
  --type cax21 \
  --image ubuntu-24.04 \
  --location nbg1 \
  --ssh-key my-key

Initial Server Setup

# Connect to server
ssh root@your-server-ip

# Update system
apt update && apt upgrade -y

# Install Docker
curl -fsSL https://get.docker.com | sh
# If you later add a non-root user, let them run Docker without sudo:
# usermod -aG docker <username>   # then re-log-in, or run: newgrp docker

# Verify ARM architecture
uname -m
# Expected: aarch64

docker info | grep Architecture
# Expected: aarch64

Step 3: Verify ARM64 Image Support

Before pulling any image, check architecture support:

# Method 1: Check Docker Hub/registry
docker manifest inspect nextcloud:latest | grep -A1 '"architecture"'
# Look for "arm64" or "aarch64" in the output

# Method 2: Try pulling — the pull errors ("no matching manifest") if no ARM64 variant exists
docker pull nextcloud:latest
# On ARM, Docker automatically selects the arm64 variant when one is available

Alternatively, check on Docker Hub:

  1. Visit hub.docker.com/r/[image-name]
  2. Click "Tags" → find latest
  3. Click the latest tag → look for linux/arm64 in the Supported Platforms list
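As a sketch, the Method 1 check can be wrapped in a small helper so you can screen several images before committing to ARM. The grep pattern assumes the pretty-printed JSON that `docker manifest inspect` emits:

```shell
#!/bin/sh
# has_arm64: read `docker manifest inspect` output on stdin and report
# whether a linux/arm64 entry is present (exit 0 = yes, 1 = no).
has_arm64() {
  grep -q '"architecture": *"arm64"'
}

# Usage (needs network access to the registry):
#   docker manifest inspect nextcloud:latest | has_arm64 \
#     && echo "arm64 OK" || echo "no arm64 image"
```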

Step 4: Deploy Your Stack

Example Docker Compose for a productivity stack on CAX21:

services:
  # Document management
  docmost:
    image: docmost/docmost:latest
    ports:
      - "127.0.0.1:3000:3000"
    environment:
      APP_URL: https://docs.yourdomain.com
      APP_SECRET: ${DOCMOST_SECRET}
      DATABASE_URL: postgresql://docmost:${DB_PASS}@db:5432/docmost
      REDIS_URL: redis://redis:6379
    depends_on:
      - db
      - redis
    restart: unless-stopped

  # Uptime monitoring
  uptime-kuma:
    image: louislam/uptime-kuma:1
    ports:
      - "127.0.0.1:3001:3001"
    volumes:
      - uptime_data:/app/data
    restart: unless-stopped

  # Scheduling
  rallly:
    image: lukevella/rallly:latest
    ports:
      - "127.0.0.1:3002:3000"
    env_file: rallly.env
    depends_on:
      - db
    restart: unless-stopped

  # Shared database
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: ${DB_PASS}
    volumes:
      - pg_data:/var/lib/postgresql/data
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
    restart: unless-stopped

volumes:
  uptime_data:
  pg_data:
  redis_data:

This stack runs comfortably on a CAX21 (4 cores, 8GB RAM).
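The compose file reads `${DOCMOST_SECRET}` and `${DB_PASS}` from the environment; Docker Compose also picks them up from a `.env` file next to the compose file. A minimal sketch for generating them, assuming `openssl` is installed (the `ENV_FILE` variable is just this script's convention):

```shell
#!/bin/sh
# Generate the secrets the compose file reads (${DOCMOST_SECRET}, ${DB_PASS})
# into a .env file. Refuses to clobber an existing file.
ENV_FILE="${ENV_FILE:-.env}"
if [ -e "$ENV_FILE" ]; then
  echo "$ENV_FILE already exists; refusing to overwrite" >&2
else
  umask 077   # keep the secrets file private to the owner
  {
    echo "DOCMOST_SECRET=$(openssl rand -hex 32)"
    echo "DB_PASS=$(openssl rand -hex 24)"
  } > "$ENV_FILE"
  echo "Wrote $ENV_FILE"
fi
```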

Step 5: Configure HTTPS with Caddy

Caddy is the simplest reverse proxy for ARM self-hosting — no manual SSL management:

# Install Caddy
apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | tee /etc/apt/sources.list.d/caddy-stable.list
apt update && apt install caddy

/etc/caddy/Caddyfile:

docs.yourdomain.com {
    reverse_proxy localhost:3000
}

status.yourdomain.com {
    reverse_proxy localhost:3001
}

rallly.yourdomain.com {
    reverse_proxy localhost:3002
}

Then apply the new config:

systemctl restart caddy

Step 6: Set Up Firewall

# Install UFW
apt install -y ufw

# Default deny incoming
ufw default deny incoming
ufw default allow outgoing

# Allow SSH
ufw allow 22

# Allow HTTP and HTTPS (Caddy)
ufw allow 80
ufw allow 443

# Enable firewall
ufw enable

# Verify
ufw status

Step 7: Monitor Resource Usage

Check Current Usage

# Overall system
htop  # Install: apt install htop

# Docker containers
docker stats --no-stream

# Disk usage
df -h
docker system df

Resource Guidelines for CAX Series

Service           | Typical RAM | Typical CPU
Docmost           | 300-500MB   | 1-5% idle
n8n               | 200-400MB   | 1-5% idle
PostgreSQL        | 100-300MB   | 2-10% under load
Uptime Kuma       | 50-100MB    | <1% idle
Caddy             | 20-50MB     | <1% idle
Ollama (7B, idle) | 4-8GB       | <5% idle

A CAX21 (8GB RAM) comfortably runs: Docmost + Rallly + Uptime Kuma + PostgreSQL + Redis + Caddy with headroom to spare.

Adding Ollama with a 7B model requires the CAX31 (16GB RAM).

Step 8: Backup Strategy

Database Backups

# Automated PostgreSQL backup (add to crontab)
mkdir -p /opt/backups
cat > /opt/backup.sh << 'EOF'
#!/bin/bash
# Adjust the container name to match your deployment — the compose stack
# above names the service "db", so the container may be e.g. myproject-db-1
docker exec postgres pg_dumpall -U postgres | gzip > /opt/backups/pg-$(date +%Y%m%d).sql.gz
find /opt/backups -name "pg-*.sql.gz" -mtime +7 -delete
EOF
chmod +x /opt/backup.sh
(crontab -l 2>/dev/null; echo "0 2 * * * /opt/backup.sh") | crontab -
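Backups fail silently more often than they error loudly, so it is worth pairing the cron job with a freshness check. The helper below is a hypothetical sketch (not part of any tool); the `pg-*.sql.gz` naming matches the backup script above:

```shell
#!/bin/sh
# check_backup_fresh DIR [MAX_AGE_HOURS]
# Exit 0 if DIR contains a pg-*.sql.gz newer than MAX_AGE_HOURS (default 25),
# else exit 1.
check_backup_fresh() {
  dir="$1"
  max_hours="${2:-25}"
  # find -mmin -N: files modified less than N minutes ago
  recent=$(find "$dir" -name 'pg-*.sql.gz' -mmin "-$((max_hours * 60))" | head -n 1)
  [ -n "$recent" ]
}

# Usage, e.g. from a second cron entry a few hours after the dump:
#   check_backup_fresh /opt/backups || echo "WARNING: stale backup" >&2
```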

Off-Site Backup with Rclone

# Install rclone
curl https://rclone.org/install.sh | bash

# Configure Backblaze B2 or S3
rclone config

# Sync backups off-site
rclone sync /opt/backups b2:your-backup-bucket/server-name/

Performance Expectations

Based on real-world self-hosting workloads on CAX servers:

CAX11 (2 cores, 4GB)

  • Handles 50-100 concurrent users for simple web apps
  • PostgreSQL: Comfortable for single-app databases
  • Startup time: Similar to CPX11

CAX21 (4 cores, 8GB)

  • Handles 200-500 concurrent users
  • Can run 5-10 containerized services simultaneously
  • Ollama: Runs 3B models at acceptable speed

CAX31 (8 cores, 16GB)

  • Handles 500-1000 concurrent users
  • Full productivity stack (docs, wiki, scheduling, monitoring)
  • Ollama: Runs 7B models comfortably

Cost Comparison: ARM vs x86 vs Cloud

For a 3-service stack (wiki, scheduling, monitoring):

Option              | Specs        | Monthly | Annual
Hetzner CAX21 (ARM) | 4 cores, 8GB | ~$6     | ~$72
Hetzner CPX21 (x86) | 3 cores, 4GB | ~$6.50  | ~$78
DigitalOcean 4GB    | 2 cores, 4GB | $24     | $288
AWS t3.medium       | 2 cores, 4GB | ~$30    | ~$360
SaaS equivalents    | n/a          | $150+   | $1,800+

Hetzner ARM delivers more resources for less money than any comparable cloud provider. For self-hosters, it's the obvious choice when ARM compatibility is confirmed.
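The annual column is just the monthly price times twelve; a quick sanity check of the headline gap against DigitalOcean (prices approximate, per the table):

```shell
#!/bin/sh
# Annual-cost arithmetic behind the comparison table (USD, approximate)
cax21=6      # Hetzner CAX21, per month
do4gb=24     # DigitalOcean 4GB droplet, per month
echo "CAX21 annual:  \$$((cax21 * 12))"
echo "DO 4GB annual: \$$((do4gb * 12))"
echo "Savings:       \$$(( (do4gb - cax21) * 12 ))/yr"
```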

Find What to Self-Host

Browse all self-hosted alternatives on OSSAlt — find open source tools for every use case with deployment guides optimized for Hetzner ARM and other self-hosting environments.

Operational Criteria That Matter More Than Feature Checklists

Most self-hosting decisions are framed as feature comparisons, but the better question is operational fit. Can the tool be upgraded without a maintenance window that panics the team? Is configuration stored as code or trapped in a UI? Are secrets rotated cleanly? Can one engineer explain the recovery process to another in twenty minutes? These are the properties that decide whether a self-hosted service remains in production or gets abandoned after the first incident. Fancy template libraries and long integration lists help at evaluation time, but the long-term win comes from boring traits: transparent backups, predictable networking, obvious logs, and a permission model that does not require guesswork.

That is also why platform articles benefit from linking horizontally across the stack. A deployment layer does not live alone. The Coolify guide is relevant whenever the real goal is reducing friction for application deploys. The Dokploy guide matters when multi-node Docker or simpler PaaS ergonomics drive the decision. The Gitea guide becomes part of the same conversation because source control, CI triggers, and deployment permissions are tightly coupled in practice. Treating those services as a system instead of isolated products leads to much better architecture decisions.

A Practical Adoption Path for Teams Replacing SaaS

For teams moving from SaaS, the most reliable adoption path is phased substitution. Replace one expensive or strategically sensitive service first, document the real support burden for a month, and only then expand. This does two things. First, it keeps the migration politically survivable because there is always a rollback point. Second, it turns vague arguments about self-hosting into measured trade-offs around uptime, maintenance hours, vendor lock-in, and annual spend. A good article should push readers toward that discipline rather than implying that replacing ten SaaS products in a weekend is responsible.

Another overlooked issue is platform standardization. The more heterogeneous the stack, the more hidden cost accrues in upgrades, documentation, and debugging. When two tools solve adjacent problems, teams should prefer the one that matches their existing operational model unless the feature gap is material. That is why the best self-hosting guides talk about package boundaries, reverse proxy habits, backup patterns, and team runbooks. They are not just product recommendations. They are deployment strategy.
