
Open-source alternatives guide

How to Self-Host n8n in 2026

Self-host n8n workflow automation on Docker in 2026. Replace Zapier ($50+/month) with a visual automation platform — 400+ integrations, webhooks, code nodes.

OSSAlt Team

TL;DR

n8n is a visual workflow automation platform with 50K GitHub stars. It connects 400+ services, runs JavaScript/Python code inline, and handles webhooks, cron schedules, and event-based triggers. Self-hosted n8n runs unlimited workflows for server cost only ($6/month). Zapier charges $20–$50+/month for limited tasks. For developers who want automation power without task/run limits: n8n is the answer.

Key Takeaways

  • n8n: ~50K GitHub stars, TypeScript, Sustainable Use License (free for self-hosting), 400+ nodes
  • Zapier comparison: n8n has no task limits when self-hosted; Zapier charges per task
  • Code nodes: Run JavaScript or Python inline, far beyond the limited code steps in Zapier and Make
  • Webhook triggers: Instant HTTP webhooks, no polling delays
  • Setup: Docker + SQLite (default) or Postgres — 10 minutes
  • Use cases: API integrations, data pipelines, DevOps automation, notifications, ETL

n8n vs Zapier vs Make

| Feature | n8n (self-hosted) | Zapier | Make (Integromat) |
|---|---|---|---|
| Cost | ~$6/month (server) | $20–600+/month | $9–100+/month |
| Task limits | None (self-hosted) | 750–2M tasks/month | 1K–800K ops/month |
| Code execution | ✅ JavaScript + Python | Limited | Limited |
| Custom HTTP calls | ✅ | ✅ | ✅ |
| Webhook triggers | ✅ | ✅ | ✅ |
| Integrations | 400+ | 6,000+ | 1,500+ |
| Visual builder | ✅ | ✅ | ✅ |
| Air-gapped deployment | ✅ | ❌ | ❌ |
| Open source | Partial (SUL) | No | No |

When Zapier/Make beats n8n: If you need 6,000+ native integrations and don't want to self-host. n8n's HTTP Request node covers any API without a dedicated node, but requires more setup.


Server Requirements

  • Minimum: 512MB RAM (SQLite backend, few workflows)
  • Recommended: 1–2GB RAM with Postgres (production workloads)
  • VPS: Hetzner CX22 (€4.35/month) works fine
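Before installing, it's worth a quick look at what the box actually has. These are standard Linux tools, shown here purely as a sanity check against the minimums above:

```shell
# Check available memory and free space on the root filesystem
free -h
df -h /
```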

Part 1: Docker Setup

Option A: Quick Start (SQLite — Personal Use)

docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  n8nio/n8n

Option B: Docker Compose with PostgreSQL (Production)

# docker-compose.yml
version: '3.8'

services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      # Required:
      N8N_HOST: "n8n.yourdomain.com"
      N8N_PORT: 5678
      N8N_PROTOCOL: "https"
      WEBHOOK_URL: "https://n8n.yourdomain.com/"

      # Database:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_PORT: 5432
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: "${POSTGRES_PASSWORD}"

      # Security (note: n8n v1.0+ ships built-in user management;
      # these basic-auth variables only apply to older pre-1.0 images):
      N8N_BASIC_AUTH_ACTIVE: "true"
      N8N_BASIC_AUTH_USER: "${N8N_USERNAME}"
      N8N_BASIC_AUTH_PASSWORD: "${N8N_PASSWORD}"

      # Execution:
      EXECUTIONS_DATA_SAVE_ON_SUCCESS: all
      EXECUTIONS_DATA_SAVE_ON_ERROR: all
      EXECUTIONS_DATA_MAX_AGE: 336   # Keep 14 days

      # Email for notifications:
      N8N_EMAIL_MODE: smtp
      N8N_SMTP_HOST: "${SMTP_HOST}"
      N8N_SMTP_PORT: 587
      N8N_SMTP_USER: "${SMTP_USER}"
      N8N_SMTP_PASS: "${SMTP_PASSWORD}"
      N8N_SMTP_SENDER: "n8n@yourdomain.com"

    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres

  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  n8n_data:
  postgres_data:
Create a .env file next to the compose file:

# .env
POSTGRES_PASSWORD=strong-password-here
N8N_USERNAME=admin
N8N_PASSWORD=your-n8n-password
SMTP_HOST=smtp.yourdomain.com
SMTP_USER=noreply@yourdomain.com
SMTP_PASSWORD=smtp-password

Then start the stack:

docker compose up -d
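The compose file reads its secrets from .env. One way to generate a strong value for POSTGRES_PASSWORD and friends (a sketch; openssl ships with nearly every Linux distribution):

```shell
# Print a random 32-character password suitable for the .env secrets
openssl rand -base64 24
```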

Part 2: HTTPS with Caddy

n8n.yourdomain.com {
    # Caddy v2 proxies WebSocket upgrades automatically, so a single
    # reverse_proxy directive also covers n8n's real-time UI updates:
    reverse_proxy localhost:5678
}

Part 3: Your First Workflow

Visit https://n8n.yourdomain.com and log in.

Example: Webhook → HTTP Request → Slack Notification

This workflow: receives a webhook, calls an API, and sends results to Slack.

  1. Add a Webhook node:

    • Click + → Search "Webhook"
    • Method: GET or POST
    • Copy the webhook URL shown: https://n8n.yourdomain.com/webhook/your-unique-id
  2. Add HTTP Request node (connect to Webhook):

    • URL: https://api.github.com/repos/{{ $json.body.repo }}/issues (the Webhook node nests a POST body under body)
    • Authentication: None (or add GitHub token for private repos)
    • Method: GET
  3. Add Slack node (connect to HTTP Request):

    • Add your Slack OAuth credential
    • Channel: #alerts
    • Message: {{ $json.length }} new issues in {{ $('Webhook').item.json.body.repo }}
  4. Activate the workflow (toggle in top right)

Test: Send a POST to your webhook URL:

curl -X POST https://n8n.yourdomain.com/webhook/your-id \
  -H "Content-Type: application/json" \
  -d '{"repo": "your-org/your-repo"}'

Part 4: Schedule-Based Workflows

Run workflows on a cron schedule:

  1. Add Schedule Trigger node instead of Webhook
  2. Set interval: Every day at 9am, every hour, every Monday, etc.
  3. Use cron expression for custom schedules: 0 9 * * 1-5 (9am weekdays)

Example: Daily Slack digest of GitHub PRs:

Schedule (9am daily)
  → HTTP Request (GitHub API: list open PRs)
  → Code (format PR list as message)
  → Slack (post to #dev-standup)

Part 5: Code Node (JavaScript / Python)

The Code node is n8n's superpower — execute arbitrary code inline:

// Transform and filter data using JavaScript:
const items = $input.all();

return items
  .filter(item => item.json.status === 'open')
  .map(item => ({
    json: {
      id: item.json.id,
      title: item.json.title,
      url: `https://github.com/issues/${item.json.number}`,
      age_days: Math.floor(
        (Date.now() - new Date(item.json.created_at)) / 86400000
      ),
    }
  }));

The equivalent in a Python Code node:

# Python code node:
issues = [item["json"] for item in _input.all()]
stale = [i for i in issues if i.get("age_days", 0) > 14]
return [{"json": i} for i in stale]

Part 6: Common Workflow Patterns

API Webhook → Database

Webhook (receive form submission)
  → Set (extract relevant fields)
  → Postgres (INSERT INTO leads ...)
  → Email (send confirmation)

GitHub Actions Notification

Webhook (GitHub Actions webhook)
  → If (check job.status == 'failure')
  → Slack (notify #ci-alerts: "Build failed: {{ $json.workflow }}")

Daily Report

Schedule (6am daily)
  → HTTP Request (fetch yesterday's data from your API)
  → Code (calculate metrics, format report)
  → Email (send to team@company.com)

File Processing

FTP Trigger (new file detected)
  → Read Binary File
  → HTTP Request (upload to S3)
  → Postgres (log upload record)
  → Slack (notify #uploads)

Part 7: Sharing and Credentials

Credential Management

n8n stores credentials encrypted. Add them once, reuse across workflows:

  1. Credentials → New → Select service (Slack, GitHub, Postgres, etc.)
  2. Enter OAuth tokens or API keys
  3. Credentials are encrypted in the database — never visible in workflow JSON
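Credential encryption depends on n8n's encryption key, which is auto-generated on first start and stored inside the data volume. Pinning it explicitly means a restored database stays decryptable even if the volume is lost (the variable below is n8n's documented setting; the value is yours to generate):

```yaml
# Add to the n8n service's environment in docker-compose.yml:
N8N_ENCRYPTION_KEY: "${N8N_ENCRYPTION_KEY}"   # long random string, kept in .env
```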

Export and Import Workflows

# Export all workflows (the n8n CLI lives inside the container):
docker compose exec n8n n8n export:workflow --all \
  --output=/home/node/.n8n/workflows-backup.json

# Import:
docker compose exec n8n n8n import:workflow \
  --input=/home/node/.n8n/workflows-backup.json

Or use the UI: Workflows → Export / Import.


Part 8: Scaling

Queue Mode (Multiple Workers)

For high-volume workflows:

environment:
  EXECUTIONS_MODE: queue
  QUEUE_BULL_REDIS_HOST: redis
  QUEUE_BULL_REDIS_PORT: 6379

Add Redis and multiple n8n worker instances for parallel execution.
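A minimal sketch of those extra services, assuming the service names from the compose file above (a worker also needs the same database and encryption-key settings as the main instance):

```yaml
  redis:
    image: redis:7-alpine
    restart: unless-stopped

  n8n-worker:
    image: n8nio/n8n:latest
    restart: unless-stopped
    command: worker               # runs "n8n worker" via the image entrypoint
    environment:
      EXECUTIONS_MODE: queue
      QUEUE_BULL_REDIS_HOST: redis
      QUEUE_BULL_REDIS_PORT: 6379
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
    depends_on:
      - redis
      - postgres
```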

Concurrency Limits

environment:
  N8N_CONCURRENCY_PRODUCTION_LIMIT: 10  # Max parallel executions

Maintenance

# Update n8n:
docker compose pull
docker compose up -d

# Check execution logs:
docker compose logs -f n8n

# Backup database:
docker compose exec -T postgres pg_dump -U n8n n8n | \
  gzip > n8n-backup-$(date +%Y%m%d).sql.gz
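Dumps accumulate quietly until the disk fills. A sketch of a retention step to run after each backup (BACKUP_DIR is an assumption; point it at wherever your cron job writes the dumps):

```shell
# Prune database dumps older than 14 days
BACKUP_DIR=${BACKUP_DIR:-$HOME/backups/n8n}
mkdir -p "$BACKUP_DIR"
find "$BACKUP_DIR" -name 'n8n-backup-*.sql.gz' -mtime +14 -delete
```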

Cost Comparison

| Service | Monthly Cost | Task/Run Limits |
|---|---|---|
| Zapier Starter | $20/month | 750 tasks |
| Zapier Professional | $50/month | 2,000 tasks |
| Make Core | $9/month | 10,000 ops |
| n8n self-hosted | ~$6/month | Unlimited |
| n8n Cloud (managed) | $20/month | 2,500 executions |

Compare all open source Zapier alternatives at OSSAlt.com/alternatives/zapier.

See open source alternatives to n8n on OSSAlt.

Monitoring and Operational Health

Deploying a self-hosted service without monitoring is running blind. At minimum, set up three layers: uptime monitoring, resource monitoring, and log retention.

Uptime monitoring with Uptime Kuma gives you HTTP endpoint checks every 30-60 seconds with alerts to Telegram, Slack, email, or webhook. Create a monitor for your primary application URL and any API health endpoints. The status page feature lets you communicate incidents to users without custom tooling.
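Uptime Kuma is itself a single container; a sketch of its compose service using the official image and default port:

```yaml
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - kuma_data:/app/data   # add kuma_data to the top-level volumes: block
```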

Resource monitoring tells you when a container is leaking memory or when disk is filling up. Prometheus + Grafana is the standard self-hosted monitoring stack — Prometheus scrapes container metrics via cAdvisor, Grafana visualizes them with pre-built Docker dashboards. Set alerts for memory above 80% and disk above 75%; both give you time to act before they become incidents.

Log retention: Docker container logs grow without bound by default. Limit log growth per service in docker-compose.yml:

  logging:
    driver: json-file
    options:
      max-size: "100m"
      max-file: "3"

This caps each container at three 100MB log files while still retaining recent logs for debugging. For centralized log search across multiple containers, Loki integrates with the same Grafana instance.

Backup discipline: Schedule automated backups of your Docker volumes using Duplicati or Restic. Back up to remote storage (Backblaze B2 or Cloudflare R2 cost $0.006/GB/month). Run a restore drill monthly — a backup that has never been tested is not a reliable backup. Your restore procedure documentation should live somewhere accessible from outside the failed server.

Update strategy: Pin Docker image versions in your compose file rather than using latest. Create a monthly maintenance window to review changelogs and update images. Major version updates often require running migration scripts before the new container starts — check the release notes before pulling.

n8n becomes more powerful when connected to other self-hosted tools. Authentik provides SSO for your n8n instance, letting team members log in with existing accounts rather than managing separate n8n credentials. Uptime Kuma monitors n8n's availability and can trigger webhook alerts to your n8n instance when other services go down — creating automated incident response workflows.

Network Security and Hardening

Self-hosted services exposed to the internet require baseline hardening. The default Docker networking model exposes container ports directly — without additional configuration, any open port is accessible from anywhere.

Firewall configuration: Use ufw (Uncomplicated Firewall) on Ubuntu/Debian or firewalld on RHEL-based systems. Allow only ports 22 (SSH), 80 (HTTP redirect), and 443 (HTTPS). Block all other inbound ports. Note that Docker publishes ports through its own iptables rules and bypasses ufw entirely by default; install the ufw-docker script or adjust Docker's iptables integration so containers cannot expose ports your firewall never approved.

SSH hardening: Disable password authentication and root login in /etc/ssh/sshd_config. Use key-based authentication only. Consider changing the default SSH port (22) to a non-standard port to reduce brute-force noise in your logs.
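The relevant sshd_config lines look like this (restart sshd afterwards, and verify key-based login from a second session before closing the first):

```text
# /etc/ssh/sshd_config
PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
```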

Fail2ban: Install fail2ban to automatically ban IPs that make repeated failed authentication attempts. Configure jails for SSH, Nginx, and any application-level authentication endpoints.
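A minimal jail configuration sketch; the retry and ban values here are suggestions, not fail2ban defaults:

```text
# /etc/fail2ban/jail.local
[sshd]
enabled = true
maxretry = 5
findtime = 10m
bantime = 1h
```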

TLS/SSL: Use Let's Encrypt certificates via Certbot or Traefik's automatic ACME integration. Never expose services over HTTP in production. Configure HSTS headers to prevent protocol downgrade attacks. Check your SSL configuration with SSL Labs' server test — aim for an A or A+ rating.

Container isolation: Avoid running containers as root. Add user: "1000:1000" to your docker-compose.yml service definitions where the application supports non-root execution. Use read-only volumes (volumes: - /host/path:/container/path:ro) for configuration files the container only needs to read.
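Applied to a compose service, those two measures look like this (the service name and mounted path are placeholders):

```yaml
  some-service:
    user: "1000:1000"                             # run as non-root where supported
    volumes:
      - ./app-config.yml:/etc/app/config.yml:ro   # read-only config mount
```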

Secrets management: Never put passwords and API keys directly in docker-compose.yml files committed to version control. Use Docker secrets, environment files (.env), or a secrets manager like Vault for sensitive configuration. Add .env to your .gitignore before your first commit.

Production Deployment Checklist

Before treating any self-hosted service as production-ready, work through this checklist. Each item represents a class of failure that will eventually affect your service if left unaddressed.

Infrastructure

  • Server OS is running latest security patches (apt upgrade / dnf upgrade)
  • Firewall configured: only ports 22, 80, 443 open
  • SSH key-only authentication (password auth disabled)
  • Docker and Docker Compose are current stable versions
  • Swap space configured (at minimum equal to RAM for <4GB servers)

Application

  • Docker image version pinned (not latest) in docker-compose.yml
  • Data directories backed by named volumes (not bind mounts to ephemeral paths)
  • Environment variables stored in .env file (not hardcoded in compose)
  • Container restart policy set to unless-stopped or always
  • Health check configured in Compose or Dockerfile
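For n8n specifically, the instance exposes a /healthz endpoint; a sketch of a compose health check against it (wget is available in the official image's Alpine base, but verify for your tag):

```yaml
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:5678/healthz || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
```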

Networking

  • SSL certificate issued and auto-renewal configured
  • HTTP requests redirect to HTTPS
  • Domain points to server IP (verify with dig +short your.domain)
  • Reverse proxy (Nginx/Traefik) handles SSL termination

Monitoring and Backup

  • Uptime monitoring configured with alerting
  • Automated daily backup of Docker volumes to remote storage
  • Backup tested with a successful restore drill
  • Log retention configured (no unbounded log accumulation)

Access Control

  • Default admin credentials changed
  • Email confirmation configured if the app supports it
  • User registration disabled if the service is private
  • Authentication middleware added if the service lacks native login

Getting Started

The best time to set up monitoring and backups is before you need them. Deploy your service, configure it, and immediately add it to your monitoring stack and backup schedule. These three steps — deploy, monitor, backup — are the complete foundation of a reliable self-hosted service. Everything else is incremental improvement.

n8n's workflow automation becomes most valuable as the connective tissue of your self-hosted stack. Each new self-hosted service you deploy creates potential automation opportunities — Gitea webhooks triggering notifications, Plausible Analytics events creating records, Uptime Kuma alerts routing to PagerDuty or Slack. The investment in learning n8n pays back across every service you add afterward. Start with one workflow that saves 15 minutes per week; the operational overhead of n8n itself is minimal once deployed, and the compounding value of workflow automation accumulates over time.

n8n's self-hosted deployment includes the full workflow editor, execution history, and credential management. The webhook node creates HTTP endpoints that external services can call to trigger workflows, making it useful as an integration hub for your entire self-hosted stack. n8n executions are logged with full input/output for debugging — when a workflow fails, the execution history shows exactly which node failed and what data it received. Configure webhook authentication and IP allowlisting for any webhook URLs you expose publicly, as these endpoints provide direct access to your workflow triggers and should be treated with the same care as any external API endpoint.

The SaaS-to-Self-Hosted Migration Guide (Free PDF)

Step-by-step: infrastructure setup, data migration, backups, and security for 15+ common SaaS replacements. Used by 300+ developers.
