
Open-source alternatives guide

How to Self-Host Activepieces 2026

Self-host Activepieces in 2026. MIT license, ~10K stars, TypeScript — open source Zapier alternative with 100+ pieces. Docker Compose setup, flow builder.

OSSAlt Team

TL;DR

Activepieces (MIT, ~10K GitHub stars, TypeScript) is a fully open source Zapier alternative — no usage limits, no locked features, 100% MIT licensed. Zapier charges $29.99/month for 750 tasks. Activepieces self-hosted gives you unlimited flows, unlimited runs, and all 100+ integrations free. The visual flow builder is clean, each step is a "piece" (Activepieces terminology for integration), and you can write custom TypeScript pieces to add any integration.

Key Takeaways

  • Activepieces: MIT, ~10K stars, TypeScript — 100% open source, no feature gating
  • 100+ pieces: Slack, Gmail, GitHub, Notion, PostgreSQL, HTTP requests, and more
  • Unlimited flows: No task limits on self-hosted — limited only by your hardware
  • Custom pieces: Write TypeScript to add any integration not in the catalog
  • Simple Docker: Single docker compose up -d to get started
  • vs n8n: Activepieces is simpler/cleaner UI; n8n has more integrations and code flexibility

Activepieces vs n8n vs Zapier

Feature               Activepieces        n8n                 Zapier
License               MIT                 Sustainable Use     Proprietary
GitHub Stars          ~10K                ~51K                — (closed source)
Cost (self-hosted)    Free                Free                $29.99/mo (hosted only)
Integrations          100+                400+                6,000+
Code nodes            TypeScript          JS + Python         Limited
Visual builder        Excellent           Good                Excellent
AI/LLM nodes          Yes (basic)         Yes (LangChain)     Limited
Custom integrations   TypeScript pieces   Custom nodes        No
Learning curve        Low                 Medium              Low
Best for              Simple automations  Complex workflows   Hosted simplicity

Part 1: Docker Setup

# docker-compose.yml
services:
  activepieces:
    image: activepieces/activepieces:latest
    container_name: activepieces
    restart: unless-stopped
    ports:
      - "8080:80"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    environment:
      AP_ENGINE_EXECUTABLE_PATH: dist/packages/engine/main.js
      AP_ENVIRONMENT: prod
      AP_FRONTEND_URL: "https://flows.yourdomain.com"
      AP_WEBHOOK_TIMEOUT_SECONDS: 30
      AP_MAX_FILE_SIZE_MB: 10
      AP_TELEMETRY_ENABLED: "false"
      AP_SIGN_UP_ENABLED: "true"   # Disable after first user setup
      # Database:
      AP_POSTGRES_DATABASE: activepieces
      AP_POSTGRES_HOST: postgres
      AP_POSTGRES_PORT: 5432
      AP_POSTGRES_USERNAME: activepieces
      AP_POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
      AP_POSTGRES_USE_SSL: "false"
      # Redis:
      AP_REDIS_HOST: redis
      AP_REDIS_PORT: 6379
      # Encryption:
      AP_ENCRYPTION_KEY: "${ENCRYPTION_KEY}"
      AP_JWT_SECRET: "${JWT_SECRET}"

  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: activepieces
      POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
      POSTGRES_DB: activepieces
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U activepieces"]
      interval: 10s
      start_period: 20s

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:
# Generate secrets first (Docker Compose does NOT expand $(...) inside .env,
# so run these commands and paste the output literally):
#   openssl rand -hex 16   # -> ENCRYPTION_KEY (must be exactly 32 hex chars)
#   openssl rand -hex 32   # -> JWT_SECRET

# .env
POSTGRES_PASSWORD=your-secure-db-password
ENCRYPTION_KEY=paste-32-hex-chars-here
JWT_SECRET=paste-64-hex-chars-here

docker compose up -d

Visit https://flows.yourdomain.com → create admin account.

Part 2: HTTPS with Caddy

flows.yourdomain.com {
    reverse_proxy localhost:8080
}

Part 3: Building Flows

A flow in Activepieces is:

  • One trigger (webhook, schedule, or app event)
  • One or more steps (actions from pieces)

Create your first flow

  1. + New Flow
  2. Click Trigger → choose trigger type:
    • Webhook: Gets a unique URL for external services to call
    • Schedule: Cron expression (every hour, daily, etc.)
    • App trigger: Event from a specific app (new GitHub issue, Slack message, etc.)
  3. Click + to add steps
  4. Each step is a piece — choose app + action
  5. Reference data from previous steps using {{step_1.value}}
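Conceptually, a step reference is just a path lookup into prior step output. As an illustration only (this is not Activepieces' actual template engine), a resolver for `{{step_1.value}}`-style references could look like:

```typescript
// Illustrative sketch only -- not Activepieces' real template engine.
// Resolves {{step_1.value}}-style references against a context of prior step outputs.
export function resolveTemplate(template: string, ctx: Record<string, unknown>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_match, path: string) => {
    // Walk the dot-separated path, e.g. "step_1.value" -> ctx.step_1.value
    const value = path.split('.').reduce((obj: any, key: string) => obj?.[key], ctx as any);
    return value === undefined ? '' : String(value);
  });
}
```

In the real builder you pick these references from a data picker rather than typing paths by hand.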

Part 4: Example Flows

GitHub stars → Slack notification

Trigger: Webhook (configure on GitHub as push event webhook)
  OR
Trigger: GitHub — New Star on Repository
  → Repo: your-org/your-repo

Step 1: Slack — Send Message
  Channel: #metrics
  Message: "New star! {{ trigger.stargazer.login }} starred {{ trigger.repository.full_name }}
            Total stars: {{ trigger.repository.stargazers_count }}"

Form → Google Sheets + email

Trigger: Webhook (POST from your contact form)

Step 1: Data transformation (using Code piece)
  Code: return {
    name: trigger.body.name,
    email: trigger.body.email,
    timestamp: new Date().toISOString()
  };

Step 2: Google Sheets — Insert Row
  Spreadsheet: "Contact Form Submissions"
  Sheet: "Leads"
  Values: {{ step1.name }}, {{ step1.email }}, {{ step1.timestamp }}

Step 3: Gmail — Send Email
  To: sales@yourcompany.com
  Subject: "New contact form submission from {{ step1.name }}"
  Body: "Name: {{ step1.name }}\nEmail: {{ step1.email }}"
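The Code step in this flow passes form fields through as-is. A slightly hardened version (a sketch; the `name` and `email` field names are assumptions matching the example payload above) could validate and normalize before the row ever reaches the sheet:

```typescript
// Sketch of a hardened Code-step transformation for the form flow above.
// Field names (name, email) are assumptions matching the example webhook payload.
interface FormSubmission { name: string; email: string; timestamp: string; }

export function normalizeSubmission(body: Record<string, unknown>): FormSubmission {
  const name = typeof body.name === 'string' ? body.name.trim() : '';
  const email = typeof body.email === 'string' ? body.email.trim().toLowerCase() : '';
  // Reject obviously malformed submissions so bad rows never hit Google Sheets
  if (!name || !email.includes('@')) {
    throw new Error('Invalid form submission');
  }
  return { name, email, timestamp: new Date().toISOString() };
}
```

A thrown error fails the flow run visibly in the execution history instead of silently inserting junk rows.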

Scheduled daily report

Trigger: Schedule — 0 9 * * 1-5 (weekdays at 9 AM)

Step 1: HTTP Request — GET https://yourapi.com/metrics
  Headers: Authorization: Bearer {{ connections.myapi.token }}

Step 2: Code — Format metrics
  Code: const data = step1.body;
        return {
          summary: `Users: ${data.users}, Revenue: $${data.revenue}`,
          html: `<b>Users:</b> ${data.users}<br><b>Revenue:</b> $${data.revenue}`
        };

Step 3: Slack — Send Message
  Channel: #daily-metrics
  Message: "Daily Report\n{{ step2.summary }}"

Part 5: Webhook Triggers

# Your flow's webhook URL (from the trigger config panel):
https://flows.yourdomain.com/api/v1/webhooks/FLOW_ID

# Test a webhook:
curl -X POST https://flows.yourdomain.com/api/v1/webhooks/FLOW_ID \
  -H "Content-Type: application/json" \
  -d '{"event": "payment.completed", "amount": 99.99, "customer": "alice@example.com"}'

# Use as Stripe webhook:
# Stripe Dashboard → Developers → Webhooks → Add endpoint
# URL: https://flows.yourdomain.com/api/v1/webhooks/FLOW_ID
# Events: payment_intent.succeeded

# Use as GitHub webhook:
# Repo Settings → Webhooks → Add webhook
# Payload URL: https://flows.yourdomain.com/api/v1/webhooks/FLOW_ID
# Content type: application/json

Part 6: Custom TypeScript Pieces

If you need an integration that doesn't exist:

// packages/pieces/custom/my-service/src/lib/my-service.piece.ts
import { createPiece, PieceAuth } from '@activepieces/pieces-framework';
import { myServiceSendMessage } from './actions/send-message';
import { myServiceNewEvent } from './triggers/new-event';

export const MyServiceAuth = PieceAuth.SecretText({
  displayName: 'API Key',
  description: 'Your service API key',
  required: true,
});

export const myService = createPiece({
  displayName: 'My Service',
  auth: MyServiceAuth,
  minimumSupportedRelease: '0.20.0',
  logoUrl: 'https://yourservice.com/logo.png',
  authors: ['yourname'],
  actions: [myServiceSendMessage],
  triggers: [myServiceNewEvent],
});
// packages/pieces/custom/my-service/src/lib/actions/send-message.ts
import { createAction, Property } from '@activepieces/pieces-framework';
import { MyServiceAuth } from '../my-service.piece';

export const myServiceSendMessage = createAction({
  name: 'send_message',
  auth: MyServiceAuth,
  displayName: 'Send Message',
  description: 'Send a message via My Service',
  props: {
    recipient: Property.ShortText({ displayName: 'Recipient', required: true }),
    message: Property.LongText({ displayName: 'Message', required: true }),
  },
  async run(context) {
    const { recipient, message } = context.propsValue;
    const apiKey = context.auth;

    const response = await fetch('https://api.yourservice.com/messages', {
      method: 'POST',
      headers: { 'Authorization': `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
      body: JSON.stringify({ to: recipient, body: message }),
    });

    return response.json();
  },
});
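Because run() is plain TypeScript, you can smoke-test the HTTP logic outside the framework. Below is a standalone sketch of the same request (https://api.yourservice.com is the same placeholder endpoint), with fetch injectable so it can be stubbed in a test:

```typescript
// Standalone sketch of the run() body above, decoupled from the pieces framework.
// https://api.yourservice.com is the same placeholder endpoint as in the action.
export async function sendMessage(
  apiKey: string,
  recipient: string,
  message: string,
  fetchImpl: typeof fetch = fetch,
): Promise<unknown> {
  const response = await fetchImpl('https://api.yourservice.com/messages', {
    method: 'POST',
    headers: { 'Authorization': `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ to: recipient, body: message }),
  });
  // Surface HTTP errors instead of silently parsing an error body
  if (!response.ok) throw new Error(`My Service returned ${response.status}`);
  return response.json();
}
```

Keeping the API call in a function like this also lets the piece's run() stay a thin wrapper around tested logic.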

Part 7: Connections (Credentials)

Manage credentials centrally in Connections:

  1. Connections → + New Connection
  2. Select a piece type (GitHub, Slack, Google, etc.)
  3. OAuth2 pieces: click Connect → authenticate
  4. API key pieces: enter your key
  5. All flows in your workspace can reuse connections

Maintenance

# Update Activepieces:
docker compose pull
docker compose up -d

# Backup database (default container name when the compose project directory
# is "activepieces" -- confirm yours with `docker ps`):
docker exec activepieces-postgres-1 pg_dump -U activepieces activepieces \
  | gzip > activepieces-backup-$(date +%Y%m%d).sql.gz

# Logs:
docker compose logs -f activepieces

# Disable new signups after admin setup:
# Set AP_SIGN_UP_ENABLED=false in docker-compose.yml then restart

Why Self-Host Activepieces

Activepieces occupies a specific niche: fully MIT-licensed automation with a clean UI, no feature gating, and a lower learning curve than n8n. The MIT license is significant — it means no usage restrictions, no commercial licensing concerns, and no risk of the Sustainable Use License restrictions that n8n carries.

On cost: Zapier's Starter plan ($29.99/month, 750 tasks) gets consumed quickly. A flow that sends a Slack notification for every GitHub commit uses 1 task per commit. An active team pushing 50 commits/day across 5 repos hits 1,500 tasks/month on that single workflow alone — already over the Starter limit. The Professional plan ($73.50/month, 2,000 tasks) runs out almost as fast if you have multiple active workflows. Activepieces self-hosted removes task counting entirely.

The visual flow builder is genuinely excellent. Where n8n's interface can feel overwhelming to newcomers with its complex node system, Activepieces' step-based builder is approachable for non-technical team members. Marketing folks can build and modify Slack notification flows. Ops teams can wire up webhook-to-spreadsheet automations. The learning curve is shallow.

Custom TypeScript pieces solve the integration gap. When Activepieces doesn't have a specific piece for your internal API or niche SaaS tool, you can build it in TypeScript using the pieces framework. The development experience is good — hot reload, TypeScript types, a clear structure. Once built, your custom piece appears in the UI alongside official pieces.

When NOT to self-host Activepieces: If you need n8n's depth — complex multi-branch logic, Python code nodes, LangChain AI agents, or 400+ integrations — Activepieces may be too limited. It shines for straightforward automation tasks but struggles with highly complex workflow logic. Also, 100+ pieces is significantly fewer than Zapier's 6,000+ — verify your required integrations before committing.

Prerequisites

Activepieces requires PostgreSQL and Redis in addition to the main application. The infrastructure is more demanding than a single-container setup but well within reach of a modest VPS.

Server specs: 2 vCPUs and 2GB RAM is the comfortable minimum. Activepieces, PostgreSQL, and Redis together use about 1-1.5GB RAM at idle. Workflow execution spikes CPU when flows run. For teams running frequent automations, 4 vCPUs and 4GB RAM gives comfortable headroom. Check our VPS comparison for self-hosters — the $8-12/month tier from most providers fits Activepieces well.

PostgreSQL is required: Unlike some self-hosted tools that offer a SQLite fallback, Activepieces requires PostgreSQL. The Docker Compose setup handles this automatically. If you're already running PostgreSQL for another service, you can create a separate database instead of adding another PostgreSQL container.

Redis for flow execution: Redis handles the job queue for flow execution. It's lightweight (typically under 100MB RAM) and requires no special configuration beyond the defaults in the Docker Compose setup.

Domain and HTTPS: Webhook triggers require your Activepieces URL to be publicly accessible over HTTPS. External services (GitHub, Stripe, etc.) need to reach your webhook endpoint. Configure your domain, SSL (via Caddy), and set AP_FRONTEND_URL to your public HTTPS URL before testing webhooks.

Encryption key constraint: The AP_ENCRYPTION_KEY must be exactly 32 characters (16 bytes as hex). Generate it with openssl rand -hex 16 — this produces exactly 32 hex characters. If the key is the wrong length, Activepieces fails to start.
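A quick pre-deploy sanity check (illustrative, not part of Activepieces) mirrors that constraint:

```typescript
// Illustrative check: AP_ENCRYPTION_KEY must be exactly 32 hex characters (16 bytes).
export function isValidEncryptionKey(key: string): boolean {
  return /^[0-9a-f]{32}$/i.test(key);
}
```

Run it against the value in your .env before docker compose up; a wrong-length key fails at startup.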

Skill level: Beginner. The Docker Compose setup is clean, and the web UI is intuitive. Initial setup takes 15-20 minutes.

Production Security Hardening

Activepieces holds credentials for every service it connects to — your Slack token, GitHub API key, Google OAuth refresh token, database passwords. These are encrypted at rest using your AP_ENCRYPTION_KEY, but you still need to protect the running system. Follow the self-hosting security checklist and apply these measures:

Firewall (UFW): Port 8080 should never be exposed directly. Route through Caddy for HTTPS.

sudo ufw default deny incoming
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable

Disable signups immediately: After creating your admin account, set AP_SIGN_UP_ENABLED: "false" and restart. An open signup page allows anyone to create an account on your automation server.

Secrets management: Store all secrets in .env and add it to .gitignore immediately:

# .env (never commit this)
POSTGRES_PASSWORD=your-strong-db-password
ENCRYPTION_KEY=your-32-char-hex-key
JWT_SECRET=your-jwt-secret

# Keep it out of version control:
echo ".env" >> .gitignore

Webhook authentication: By default, Activepieces webhook URLs are security-through-obscurity (a long random ID). For production webhooks receiving sensitive data, add webhook signature verification using the Code piece to validate request signatures from services like Stripe or GitHub.
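As a sketch of what that Code-piece check could look like for GitHub (which signs the raw request body with HMAC-SHA256 and sends sha256=&lt;digest&gt; in the X-Hub-Signature-256 header):

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Sketch of GitHub-style webhook signature verification for a Code piece.
// GitHub sends "sha256=<hex HMAC-SHA256 of the raw body>" in X-Hub-Signature-256.
export function verifyGithubSignature(
  rawBody: string,
  signatureHeader: string,
  secret: string,
): boolean {
  const expected = 'sha256=' + createHmac('sha256', secret).update(rawBody).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  // timingSafeEqual throws on length mismatch, so compare lengths first
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Stripe uses a similar HMAC scheme with its own header format; check each provider's docs for the exact header name and encoding.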

Disable SSH password authentication: Edit /etc/ssh/sshd_config: set PasswordAuthentication no and PermitRootLogin no. Restart: sudo systemctl restart ssh.

Automatic security updates:

sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades

Regular backups: Your flows and connections live in PostgreSQL. A database dump is all you need to restore your entire Activepieces setup. See automated server backups with restic for automated daily backups to object storage — configure it to dump PostgreSQL and upload the result.

Troubleshooting Common Issues

Flows trigger but steps fail with "Connection not found"

This usually means the connection (credential) used in a step was deleted, expired, or belongs to a different workspace. Go to Connections and verify the connection exists and is valid. For OAuth2 connections, tokens expire — reconnect by clicking the connection and re-authenticating. If you recently migrated or restored from backup, connections may need to be re-created since encrypted credentials may not survive a key change.

Webhook flow doesn't trigger from external service

First confirm the endpoint itself responds: curl -X POST https://flows.yourdomain.com/api/v1/webhooks/FLOW_ID -H "Content-Type: application/json" -d '{}'. If that works but GitHub/Stripe/etc. doesn't trigger the flow, the issue is on the external service's side — check its webhook delivery logs for errors. Common problems: your URL isn't publicly reachable (VPN/firewall blocking), the webhook expects a specific content type, or the service requires HTTPS with a trusted certificate.

Encryption key errors preventing startup

If you see encryption-related errors in docker compose logs activepieces, the AP_ENCRYPTION_KEY is the likely culprit. It must be exactly 32 hex characters (16 bytes). Verify with echo -n "your-key" | wc -c. If you changed the key after setup, all stored credentials are invalid and need to be re-entered. Never change the encryption key on a running production instance.

Flows queued but not executing

Redis is responsible for the job queue. If Redis is unavailable, flows queue but never run. Check docker compose ps — Redis should be in a running state. Also check Activepieces logs for Redis connection errors. If Redis crashed and lost queue data, manually re-trigger failed flows from the execution history.

High memory or CPU during flow execution

Each flow execution spawns a worker process. Many concurrent executions strain the server. If you're hitting resource limits, stagger your scheduled flows to avoid simultaneous execution. Also check for runaway flows that loop infinitely — a misconfigured flow that keeps triggering itself can exhaust resources quickly. Set execution timeout limits in Activepieces settings.

Choosing Between Activepieces and n8n

The choice between Activepieces and n8n comes down to use case complexity and team composition. Activepieces was deliberately designed to be approachable — its step-based flow model is conceptually simpler than n8n's node graph, and the TypeScript piece system makes custom integrations straightforward for developers already writing TypeScript.

If your automation use cases are primarily notification-and-trigger patterns (new Slack message triggers a database write, GitHub webhook fires an email, schedule runs a data export), Activepieces handles these cleanly with less configuration overhead. The UI is cleaner and less intimidating for non-developers who need to maintain automations.

For more complex scenarios — multi-branch conditional logic, Python data processing, LangChain AI agent chains, or a need for 400+ pre-built integrations — n8n is the better fit. Both tools are free to self-host, so there's no cost reason to compromise. If your team has mixed technical backgrounds and handles mostly straightforward automation, Activepieces is the pragmatic choice. If your engineering team runs complex data pipelines and needs code-first flexibility, n8n serves better. See the best open source automation tools for a side-by-side comparison including Windmill, Temporal, and other options.

The MIT license is worth emphasizing once more. Unlike n8n's Sustainable Use License (which requires a commercial license if you embed n8n in a product), Activepieces is fully MIT — use it however you want, modify it, embed it, build products with it. For startups building automation directly into their product offering, this particular licensing difference matters significantly over the long term.

See all open source automation tools at OSSAlt.com/categories/automation.

See open source alternatives to Zapier on OSSAlt.
