Docker Compose Templates for Every Self-Hosted Tool (2026)
Stop digging through documentation. Here's a copy-paste Docker Compose template for every popular self-hosted tool. Configure your .env, run docker compose up -d, done.
How to Use
- Copy the template for your tool
- Create a .env file with your settings
- Run docker compose up -d
- Set up a reverse proxy (Caddy recommended)
Reverse proxy template (used by all tools below):
# /etc/caddy/Caddyfile
app.yourdomain.com {
reverse_proxy localhost:PORT
}
Team Communication
Mattermost
services:
mattermost:
image: mattermost/mattermost-team-edition:latest
restart: unless-stopped
ports:
- "8065:8065"
volumes:
- mm_config:/mattermost/config
- mm_data:/mattermost/data
- mm_logs:/mattermost/logs
- mm_plugins:/mattermost/plugins
environment:
- MM_SQLSETTINGS_DRIVERNAME=postgres
- MM_SQLSETTINGS_DATASOURCE=postgres://${DB_USER}:${DB_PASS}@db:5432/${DB_NAME}?sslmode=disable
- MM_SERVICESETTINGS_SITEURL=https://${DOMAIN}
depends_on:
- db
db:
image: postgres:16-alpine
restart: unless-stopped
volumes:
- db_data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=${DB_NAME}
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASS}
volumes:
mm_config:
mm_data:
mm_logs:
mm_plugins:
db_data:
Rocket.Chat
services:
rocketchat:
image: registry.rocket.chat/rocketchat/rocket.chat:latest
restart: unless-stopped
ports:
- "3000:3000"
environment:
- ROOT_URL=https://${DOMAIN}
- MONGO_URL=mongodb://mongo:27017/rocketchat
- MONGO_OPLOG_URL=mongodb://mongo:27017/local
depends_on:
- mongo
mongo:
image: mongo:6
restart: unless-stopped
volumes:
- mongo_data:/data/db
command: --oplogSize 128 --replSet rs0
volumes:
mongo_data:
Knowledge Base & Documentation
Outline
services:
outline:
image: outlinewiki/outline:latest
restart: unless-stopped
ports:
- "3000:3000"
env_file: .env
depends_on:
- db
- redis
db:
image: postgres:16-alpine
restart: unless-stopped
volumes:
- db_data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=outline
- POSTGRES_USER=outline
- POSTGRES_PASSWORD=${DB_PASS}
redis:
image: redis:7-alpine
restart: unless-stopped
volumes:
- redis_data:/data
volumes:
db_data:
redis_data:
BookStack
services:
bookstack:
image: lscr.io/linuxserver/bookstack:latest
restart: unless-stopped
ports:
- "6875:80"
volumes:
- bookstack_data:/config
environment:
- APP_URL=https://${DOMAIN}
- DB_HOST=db
- DB_PORT=3306
- DB_USER=bookstack
- DB_PASS=${DB_PASS}
- DB_DATABASE=bookstack
depends_on:
- db
db:
image: mariadb:11
restart: unless-stopped
volumes:
- db_data:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=${DB_ROOT_PASS}
- MYSQL_DATABASE=bookstack
- MYSQL_USER=bookstack
- MYSQL_PASSWORD=${DB_PASS}
volumes:
bookstack_data:
db_data:
Analytics
Plausible
services:
plausible:
image: ghcr.io/plausible/community-edition:latest
restart: unless-stopped
ports:
- "8000:8000"
env_file: .env
depends_on:
- db
- clickhouse
db:
image: postgres:16-alpine
restart: unless-stopped
volumes:
- db_data:/var/lib/postgresql/data
environment:
- POSTGRES_PASSWORD=${DB_PASS}
clickhouse:
image: clickhouse/clickhouse-server:latest
restart: unless-stopped
volumes:
- ch_data:/var/lib/clickhouse
volumes:
db_data:
ch_data:
Umami
services:
umami:
image: ghcr.io/umami-software/umami:postgresql-latest
restart: unless-stopped
ports:
- "3000:3000"
environment:
- DATABASE_URL=postgresql://umami:${DB_PASS}@db:5432/umami
depends_on:
- db
db:
image: postgres:16-alpine
restart: unless-stopped
volumes:
- db_data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=umami
- POSTGRES_USER=umami
- POSTGRES_PASSWORD=${DB_PASS}
volumes:
db_data:
Monitoring
Uptime Kuma
services:
uptime-kuma:
image: louislam/uptime-kuma:latest
restart: unless-stopped
ports:
- "3001:3001"
volumes:
- kuma_data:/app/data
volumes:
kuma_data:
Grafana + Prometheus
services:
grafana:
image: grafana/grafana:latest
restart: unless-stopped
ports:
- "3000:3000"
volumes:
- grafana_data:/var/lib/grafana
environment:
- GF_SECURITY_ADMIN_PASSWORD=${ADMIN_PASS}
prometheus:
image: prom/prometheus:latest
restart: unless-stopped
ports:
- "9090:9090"
volumes:
- prom_data:/prometheus
- ./prometheus.yml:/etc/prometheus/prometheus.yml
node-exporter:
image: prom/node-exporter:latest
restart: unless-stopped
ports:
- "9100:9100"
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
volumes:
grafana_data:
prom_data:
Automation
n8n
services:
n8n:
image: n8nio/n8n:latest
restart: unless-stopped
ports:
- "5678:5678"
volumes:
- n8n_data:/home/node/.n8n
environment:
- N8N_HOST=${DOMAIN}
- N8N_PROTOCOL=https
- WEBHOOK_URL=https://${DOMAIN}/
- N8N_ENCRYPTION_KEY=${ENCRYPTION_KEY}
- DB_TYPE=postgresdb
- DB_POSTGRESDB_HOST=db
- DB_POSTGRESDB_DATABASE=n8n
- DB_POSTGRESDB_USER=n8n
- DB_POSTGRESDB_PASSWORD=${DB_PASS}
depends_on:
- db
db:
image: postgres:16-alpine
restart: unless-stopped
volumes:
- db_data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=n8n
- POSTGRES_USER=n8n
- POSTGRES_PASSWORD=${DB_PASS}
volumes:
n8n_data:
db_data:
Authentication
Keycloak
services:
keycloak:
image: quay.io/keycloak/keycloak:latest
restart: unless-stopped
ports:
- "8080:8080"
environment:
- KC_DB=postgres
- KC_DB_URL_HOST=db
- KC_DB_URL_DATABASE=keycloak
- KC_DB_USERNAME=keycloak
- KC_DB_PASSWORD=${DB_PASS}
- KC_HOSTNAME=${DOMAIN}
- KC_PROXY_HEADERS=xforwarded
- KC_HTTP_ENABLED=true
- KEYCLOAK_ADMIN=${ADMIN_USER}
- KEYCLOAK_ADMIN_PASSWORD=${ADMIN_PASS}
command: start
depends_on:
- db
db:
image: postgres:16-alpine
restart: unless-stopped
volumes:
- db_data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=keycloak
- POSTGRES_USER=keycloak
- POSTGRES_PASSWORD=${DB_PASS}
volumes:
db_data:
Password Management
Vaultwarden
services:
vaultwarden:
image: vaultwarden/server:latest
restart: unless-stopped
ports:
- "8080:80"
volumes:
- vw_data:/data
environment:
- DOMAIN=https://${DOMAIN}
- SIGNUPS_ALLOWED=${SIGNUPS_ALLOWED:-false}
- ADMIN_TOKEN=${ADMIN_TOKEN}
- SMTP_HOST=${SMTP_HOST}
- SMTP_PORT=587
- SMTP_SECURITY=starttls
- SMTP_USERNAME=${SMTP_USER}
- SMTP_PASSWORD=${SMTP_PASS}
- SMTP_FROM=${SMTP_FROM}
volumes:
vw_data:
Customer Support
Chatwoot
services:
chatwoot:
image: chatwoot/chatwoot:latest
restart: unless-stopped
ports:
- "3000:3000"
env_file: .env
command: bundle exec rails s -p 3000 -b 0.0.0.0
depends_on:
- db
- redis
sidekiq:
image: chatwoot/chatwoot:latest
restart: unless-stopped
env_file: .env
command: bundle exec sidekiq -C config/sidekiq.yml
depends_on:
- db
- redis
db:
image: postgres:16-alpine
restart: unless-stopped
volumes:
- db_data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=chatwoot
- POSTGRES_USER=chatwoot
- POSTGRES_PASSWORD=${DB_PASS}
redis:
image: redis:7-alpine
restart: unless-stopped
volumes:
- redis_data:/data
volumes:
db_data:
redis_data:
Listmonk
services:
listmonk:
image: listmonk/listmonk:latest
restart: unless-stopped
ports:
- "9000:9000"
volumes:
- ./config.toml:/listmonk/config.toml
depends_on:
- db
db:
image: postgres:16-alpine
restart: unless-stopped
volumes:
- db_data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=listmonk
- POSTGRES_USER=listmonk
- POSTGRES_PASSWORD=${DB_PASS}
volumes:
db_data:
Git Hosting
Gitea
services:
gitea:
image: gitea/gitea:latest
restart: unless-stopped
ports:
- "3000:3000"
- "222:22"
volumes:
- gitea_data:/data
environment:
- GITEA__database__DB_TYPE=postgres
- GITEA__database__HOST=db:5432
- GITEA__database__NAME=gitea
- GITEA__database__USER=gitea
- GITEA__database__PASSWD=${DB_PASS}
depends_on:
- db
db:
image: postgres:16-alpine
restart: unless-stopped
volumes:
- db_data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=gitea
- POSTGRES_USER=gitea
- POSTGRES_PASSWORD=${DB_PASS}
volumes:
gitea_data:
db_data:
Search
Meilisearch
services:
meilisearch:
image: getmeili/meilisearch:latest
restart: unless-stopped
ports:
- "7700:7700"
volumes:
- meili_data:/meili_data
environment:
- MEILI_MASTER_KEY=${MASTER_KEY}
- MEILI_ENV=production
volumes:
meili_data:
.env Template
Create a .env file for each service:
# Domain
DOMAIN=app.yourdomain.com
# Database
DB_NAME=myapp
DB_USER=myapp
DB_PASS=CHANGE_ME_GENERATED_PASSWORD
DB_ROOT_PASS=CHANGE_ME_ROOT_PASSWORD
# Admin
ADMIN_USER=admin
ADMIN_PASS=CHANGE_ME_ADMIN_PASSWORD
# SMTP
SMTP_HOST=smtp.resend.com
SMTP_USER=resend
SMTP_PASS=re_your_api_key
SMTP_FROM=app@yourdomain.com
# Secrets (generate with: openssl rand -hex 32)
SECRET_KEY=CHANGE_ME_64_CHAR_HEX
ENCRYPTION_KEY=CHANGE_ME_64_CHAR_HEX
ADMIN_TOKEN=CHANGE_ME_64_CHAR_HEX
MASTER_KEY=CHANGE_ME_64_CHAR_HEX
Generate all secrets at once:
echo "SECRET_KEY=$(openssl rand -hex 32)"
echo "ENCRYPTION_KEY=$(openssl rand -hex 32)"
echo "ADMIN_TOKEN=$(openssl rand -hex 32)"
echo "MASTER_KEY=$(openssl rand -hex 32)"
echo "DB_PASS=$(openssl rand -hex 16)"
echo "DB_ROOT_PASS=$(openssl rand -hex 16)"
echo "ADMIN_PASS=$(openssl rand -base64 24)"
Why Docker Compose Is the Best Way to Self-Host in 2026
If you have been running self-hosted tools for any length of time, you have almost certainly wrestled with the gap between "this tool runs in Docker" and "this tool is actually running reliably on your server." Docker Compose closes that gap more cleanly than any other approach available to small and medium teams today.
The core advantage is repeatability. A docker-compose.yml file is a complete, auditable declaration of how a service runs: which image, which ports, which volumes, which environment variables, which dependencies. When something breaks three months from now, you have the exact specification in version control. When you want to move a service to a new server, you copy two files and run one command.
Kubernetes offers more power but requires significantly more operational complexity to set up and maintain. Nomad and similar tools occupy a useful middle ground but lack the ecosystem of pre-built compose files that Docker Compose has accumulated over years of community use. For teams without dedicated infrastructure engineers, Docker Compose on a well-provisioned VPS is almost always the right call.
The templates in this guide follow a consistent pattern: named volumes instead of bind mounts (for easier backup and migration), restart: unless-stopped for automatic recovery after reboots, and externalized secrets via .env files so credentials never live in version control. This is the pattern used by virtually every production self-hosted deployment.
If you are evaluating which platform-level tool to use for managing these Docker Compose deployments, the Coolify vs CapRover vs Dokku comparison covers the tradeoffs. Coolify wraps Docker Compose natively and provides a UI for managing services, SSL, and deployments — worth considering once you have more than five or six services running.
Organizing Your Services: Production Patterns
The .env template above shows the basics, but production setups need a few additional patterns.
Shared databases. Running a separate PostgreSQL container per service wastes RAM and makes backup more complex. The better pattern is one shared PostgreSQL and Redis instance, with each service using its own database. This means your docker-compose.yml files reference an external database rather than spinning up their own.
On the security front, never use the same database user across services. Create a dedicated user with permissions scoped to a single database for each tool. If one service is compromised, the attacker has access to that service's data only — not your entire PostgreSQL instance.
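As a sketch, the per-service scoping looks like this in PostgreSQL (role names and the password are placeholders — use your own generated secrets):

```sql
-- One scoped role and one database per service
CREATE ROLE umami WITH LOGIN PASSWORD 'CHANGE_ME';
CREATE DATABASE umami OWNER umami;
-- Keep every other role out of this database
REVOKE CONNECT ON DATABASE umami FROM PUBLIC;
```

Repeat the same three statements for each tool; a compromised service then exposes one database, not all of them.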
Reverse proxy first. Caddy is the recommended reverse proxy for self-hosted setups because it handles Let's Encrypt certificate provisioning and renewal automatically with essentially zero configuration. The pattern is: all services listen on localhost-only ports (no public exposure), and Caddy proxies named subdomains to each service. This means you only expose ports 80 and 443 on the public interface.
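Extending the Caddyfile template from the top of this guide, a multi-service setup is simply one block per subdomain. The hostnames here are examples; the ports match the templates above:

```
# /etc/caddy/Caddyfile
chat.yourdomain.com {
    reverse_proxy localhost:8065    # Mattermost
}
analytics.yourdomain.com {
    reverse_proxy localhost:8000    # Plausible
}
git.yourdomain.com {
    reverse_proxy localhost:3000    # Gitea
}
```

Caddy obtains and renews a certificate for each hostname automatically; no extra TLS configuration is needed.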
Named volumes and backup. Every stateful service in the templates above uses named Docker volumes. This makes backup straightforward with a simple pattern: stop the container, use docker run with an Alpine image to tar the volume, upload to object storage, restart the container. A cron job running this at 2 AM gives you daily backups of every service.
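A minimal sketch of that backup loop, assuming a vaultwarden service with a vw_data volume and an rclone remote named remote (all placeholders — substitute your own names):

```shell
#!/bin/sh
# Nightly volume backup sketch. SERVICE, VOLUME, and the rclone
# remote are placeholders.
SERVICE=vaultwarden
VOLUME=vw_data
STAMP=$(date +%F)

# Note: Compose prefixes volume names with the project directory
# (e.g. vaultwarden_vw_data) — check `docker volume ls` first.
docker stop "$SERVICE"
docker run --rm \
  -v "${VOLUME}:/data:ro" \
  -v "$(pwd):/backup" \
  alpine tar czf "/backup/${VOLUME}-${STAMP}.tar.gz" -C /data .
docker start "$SERVICE"

# Upload to object storage, e.g. with rclone (remote name is a placeholder):
# rclone copy "${VOLUME}-${STAMP}.tar.gz" remote:backups/
```

Schedule it with a cron entry such as `0 2 * * * /srv/backup.sh`, and test a restore at least once before you rely on it.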
Environment segregation. Keep a separate directory per service with its own .env and docker-compose.yml. Avoid using a monolithic compose file for all services — it becomes hard to restart individual services and makes it easy to accidentally affect unrelated tools when editing configuration.
Common Deployment Mistakes and How to Avoid Them
Not setting resource limits. Without CPU and memory limits, a single misbehaving service can starve the entire host. Add mem_limit and cpus constraints to any service that handles variable workloads — especially analytics tools that process data in background jobs.
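As an example of what those constraints look like in a compose file (the specific values are illustrative, not recommendations — size them to your workload):

```yaml
services:
  umami:
    image: ghcr.io/umami-software/umami:postgresql-latest
    restart: unless-stopped
    mem_limit: 512m   # container is OOM-killed if it exceeds this
    cpus: 1.0         # capped at one CPU core
```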
Logging to disk without rotation. By default, Docker writes container logs to disk without rotation, so a high-traffic service logging verbosely can fill your disk within days. Configure the json-file logging driver with max-size (e.g., "50m") and max-file (e.g., "5") in each compose file, or set the same options globally in /etc/docker/daemon.json so every container inherits them.
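Per service, the rotation settings look like this (the same keys go under "log-opts" in /etc/docker/daemon.json for a global default):

```yaml
services:
  app:   # placeholder service name
    logging:
      driver: json-file
      options:
        max-size: "50m"   # rotate the log file after 50 MB
        max-file: "5"     # keep at most 5 rotated files
```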
Not using health checks. Docker Compose supports healthcheck blocks that let orchestration tools verify a service is actually responding before marking it healthy. Without health checks, a container that starts but immediately hangs will show as "Up" indefinitely. Adding a simple HTTP health check to web services saves significant debugging time.
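A sketch of an HTTP health check, assuming the image ships wget and exposes a heartbeat endpoint — both are assumptions worth verifying for your specific tool:

```yaml
services:
  umami:
    image: ghcr.io/umami-software/umami:postgresql-latest
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:3000/api/heartbeat"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 30s   # grace period before failures count
```

With this in place, depends_on can use condition: service_healthy so dependent services wait for a genuinely ready container rather than a merely started one.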
Forgetting SMTP. Almost every tool in the templates above sends email notifications — password resets, alerts, user invitations. Set up transactional email (Amazon SES at $0.10/1,000 emails, or a similar service) before you need it. Self-hosting your own mail server is possible but adds significant operational complexity; a managed SMTP relay is almost always the better choice for notification email.
Pulling latest in production. The templates above use latest tags for simplicity, but production deployments should pin specific image versions. Pin to the current stable version, review changelogs before upgrades, and upgrade intentionally rather than having docker compose pull silently change behavior.
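Pinning is a one-line change per service; the tag shown is illustrative, so check the project's releases page for the current stable version:

```yaml
services:
  gitea:
    # Pinned tag instead of :latest — upgrades now happen
    # only when you deliberately edit this line
    image: gitea/gitea:1.22
```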
Choosing the Right Tools for Each Category
The templates in this guide cover the most commonly self-hosted categories, but the decision of which tool to run in each category deserves its own analysis. For backend-as-a-service options that work well with Docker Compose deployments, the Supabase vs Appwrite comparison is a good starting point. For analytics, you have several solid options covered in the best open source analytics tools roundup.
The right answer depends on your team's familiarity, your data model, and your compliance requirements. What the Docker Compose templates above give you is the deployment layer — the operational pattern for running any of these tools reliably on your own infrastructure. Pick the tools that fit your use case, apply the templates, and you have a self-hosted stack that costs a fraction of the SaaS equivalent.
Networking Between Services
One of the most common Docker Compose configuration issues is inter-service networking. When multiple services are defined in the same docker-compose.yml file, they can communicate with each other using the service name as a hostname. This is why the database configuration in the Mattermost template uses db:5432 rather than localhost:5432 — Docker Compose creates an internal network and registers each service by its name as a DNS entry on that network.
When services are in separate compose files (the recommended pattern for keeping configurations manageable), they need to share a Docker network. Define an external network and reference it in each compose file. This allows, for example, your Grafana compose file to reach the PostgreSQL container defined in a separate compose file, without exposing PostgreSQL on a public port.
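A sketch of the external-network pattern (the network name shared is a placeholder):

```yaml
# Create the network once on the host:
#   docker network create shared
services:
  grafana:
    image: grafana/grafana:latest
    networks:
      - shared
networks:
  shared:
    external: true   # defined outside this compose file, so it is shared
```

Add the same networks stanza to the compose file holding PostgreSQL, and Grafana can reach it at its service name with no public port exposed.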
The reverse proxy sits at the boundary between the internal Docker network and the public internet. Only Caddy (or your chosen reverse proxy) should have ports bound to the host's public interface. All other services bind to internal ports only — this is what the 127.0.0.1:PORT:PORT notation achieves when you want to further restrict which host interfaces are exposed.
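For example, restricting a service to the loopback interface so only the local reverse proxy can reach it:

```yaml
services:
  umami:
    ports:
      - "127.0.0.1:3000:3000"   # unreachable from outside the host; Caddy proxies in
```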
Monitoring Your Self-Hosted Stack
Running 10 services without monitoring is flying blind. At minimum, deploy Uptime Kuma to monitor HTTP endpoints for each service — it sends alerts when any service stops responding and provides a public status page if you want to communicate service health to users.
For deeper observability, the Grafana and Prometheus template above provides infrastructure metrics: CPU, memory, disk, and network for your host servers. The node-exporter service exposes these metrics, Prometheus scrapes them on a regular interval, and Grafana provides dashboards and alerting on the collected data.
Docker-specific metrics (container CPU and memory usage, container restarts, network traffic per container) can be added via cAdvisor, which runs as a container and exposes metrics in the Prometheus format. Adding cAdvisor to your monitoring stack gives you the ability to identify which specific service is consuming unexpected resources, which is the first diagnostic step for most production performance issues.
For a complete picture of how monitoring fits into a bootstrapped self-hosted stack — alongside the communication, analytics, and productivity tools — the free open source SaaS stack guide shows how to prioritize which tools to deploy first and how they fit together into a coherent infrastructure at minimal cost.
Keeping Secrets Out of Docker Compose Files
The .env file approach shown in this guide is secure when managed correctly, but there are failure modes worth understanding.
The most common security mistake is committing a .env file to a version control repository. Every secret in that file — database passwords, SMTP credentials, admin tokens — becomes permanently part of the repository history. Add .env to your .gitignore from the beginning of every project. Commit a .env.example file with placeholder values and document which values are required, but never commit real credentials.
The second common mistake is using weak secrets. The secret generation commands in the template above use openssl rand -hex 32, which generates 256 bits of random data. This is appropriate for secrets like JWT signing keys, encryption keys, and admin tokens. Do not use short passwords, dictionary words, or any predictable values for these. A compromised admin token for Mattermost or Chatwoot gives an attacker access to all your communications and customer support data.
For teams running many services, a self-hosted secret manager eliminates the need to manually track which secrets go in which .env files. Infisical (open source) and self-hosted HashiCorp Vault both provide centralized secret management with audit trails, secret rotation, and per-service access control. These are worth deploying when you have more than 10 services or more than 5 engineers managing infrastructure.
The startup open source stack guide covers the security posture setup for a self-hosted stack holistically — including secret management alongside the broader deployment workflow for teams scaling from one to many services.
Upgrade Strategy for Long-Running Deployments
Self-hosted tools run indefinitely on the same server, and without a deliberate upgrade strategy they fall behind on security patches and new features. The recommended approach is monthly upgrades for most tools and immediate upgrades for security-critical CVEs.
Monthly upgrade workflow: pull updated Docker images (docker compose pull), review the changelog for breaking changes, test the new version on a development instance if the changelog mentions database migrations or configuration changes, then deploy to production with docker compose up -d. The entire process takes 15–30 minutes for a typical service.
Security-critical upgrades need faster turnaround. Follow the GitHub Security Advisories for the tools you run — subscribe to the repository notifications for each project. When a critical CVE is disclosed, upgrade within 24–48 hours rather than waiting for the next maintenance window.
Database migrations during upgrades deserve special care. Some tools run database migrations automatically on startup, others require running a migration command explicitly. Read the upgrade guide for any version that shows a major version bump (v1 to v2, v2 to v3). Always take a database backup immediately before an upgrade with database migrations. If the upgrade fails, the backup allows restoration to the pre-upgrade state without data loss.
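A pre-upgrade dump might look like this, assuming the db service and n8n database from the templates above (adjust names to your setup):

```shell
# Point-in-time dump taken before pulling the new image
docker compose exec db pg_dump -U n8n n8n > "n8n-pre-upgrade-$(date +%F).sql"
```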
The complete business stack guide covers the full operational picture of running 20 services — including upgrade scheduling, backup strategies, and monitoring for a production-grade self-hosted environment.
Find the right self-hosted tool for your needs on OSSAlt — features, Docker support, and deployment guides side by side.