How to Back Up Your Self-Hosted Services Automatically
The #1 risk of self-hosting is data loss. No SaaS vendor is handling backups for you. Here's a complete, automated backup strategy that protects everything you self-host.
The 3-2-1 Backup Rule
- 3 copies of your data
- 2 different storage types
- 1 copy off-site
For self-hosting:
- Primary: Live data on your VPS
- Local backup: Compressed archives on the same VPS (different disk)
- Off-site: Synced to S3, another VPS, or local NAS
What to Back Up
| Category | What | How Often | Priority |
|---|---|---|---|
| Databases | PostgreSQL, MySQL, SQLite | Daily | Critical |
| File uploads | User files, attachments, images | Daily | Critical |
| Configuration | Docker Compose, .env files, config.toml | On change | High |
| Secrets | Encryption keys, API keys, certs | On change | Critical |
| Docker volumes | App data not in databases | Daily | Medium |
| Cron jobs | Backup scripts, scheduled tasks | On change | Low |
DO NOT back up: Docker images (re-pullable), temporary files, logs older than 7 days.
Database Backup Scripts
PostgreSQL (Most common for self-hosted tools)
#!/bin/bash
# backup-postgres.sh
BACKUP_DIR="/backups/postgres"
DATE=$(date +%Y%m%d_%H%M)
mkdir -p $BACKUP_DIR
# Dump all databases from a shared PostgreSQL
docker exec shared-postgres pg_dumpall -U postgres | gzip > $BACKUP_DIR/all-$DATE.sql.gz
# Or dump individual databases
for DB in mattermost outline plane keycloak chatwoot n8n listmonk; do
docker exec shared-postgres pg_dump -U postgres $DB | gzip > $BACKUP_DIR/$DB-$DATE.sql.gz
done
# Remove backups older than 30 days
find $BACKUP_DIR -name "*.sql.gz" -mtime +30 -delete
echo "[$(date)] PostgreSQL backup completed" >> /var/log/backups.log
SQLite (PocketBase, Uptime Kuma, Vaultwarden)
#!/bin/bash
# backup-sqlite.sh
BACKUP_DIR="/backups/sqlite"
DATE=$(date +%Y%m%d_%H%M)
mkdir -p $BACKUP_DIR
# Use sqlite3 .backup rather than a plain cp — copying a live SQLite file
# can capture a mid-write, inconsistent snapshot
# Vaultwarden (CRITICAL — password vault)
docker run --rm -v vw-data:/data -v $BACKUP_DIR:/backup alpine sh -c \
  "apk add --no-cache sqlite >/dev/null && sqlite3 /data/db.sqlite3 '.backup /backup/vaultwarden-$DATE.db'"
# Uptime Kuma
docker run --rm -v uptime-kuma:/data -v $BACKUP_DIR:/backup alpine sh -c \
  "apk add --no-cache sqlite >/dev/null && sqlite3 /data/kuma.db '.backup /backup/uptime-kuma-$DATE.db'"
# PocketBase (runs on the host; needs sqlite3 installed — or stop the service and cp)
sqlite3 /opt/pocketbase/pb_data/data.db ".backup $BACKUP_DIR/pocketbase-$DATE.db"
# Compress all
gzip $BACKUP_DIR/*-$DATE.db
find $BACKUP_DIR -name "*.db.gz" -mtime +30 -delete
MySQL/MariaDB
#!/bin/bash
# backup-mysql.sh
BACKUP_DIR="/backups/mysql"
DATE=$(date +%Y%m%d_%H%M)
mkdir -p $BACKUP_DIR
# Read the password from the container's environment instead of hardcoding it
# (a literal -p'password' is visible to every user via `ps`)
docker exec nextcloud-db sh -c 'mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases' | gzip > $BACKUP_DIR/all-$DATE.sql.gz
find $BACKUP_DIR -name "*.sql.gz" -mtime +30 -delete
File Backup Scripts
Docker Volumes
#!/bin/bash
# backup-volumes.sh
BACKUP_DIR="/backups/volumes"
DATE=$(date +%Y%m%d)
mkdir -p $BACKUP_DIR
# Backup specific Docker volumes
declare -A VOLUMES=(
["nextcloud"]="nextcloud_data"
["mattermost"]="mattermost_data"
["outline"]="minio_data"
["chatwoot"]="chatwoot_storage"
)
for NAME in "${!VOLUMES[@]}"; do
VOL=${VOLUMES[$NAME]}
docker run --rm -v $VOL:/data -v $BACKUP_DIR:/backup alpine \
tar czf /backup/$NAME-$DATE.tar.gz -C /data .
done
find $BACKUP_DIR -name "*.tar.gz" -mtime +14 -delete
Configuration Files
#!/bin/bash
# backup-config.sh
BACKUP_DIR="/backups/config"
DATE=$(date +%Y%m%d)
mkdir -p $BACKUP_DIR
# Backup all compose files and environment configs
tar czf $BACKUP_DIR/configs-$DATE.tar.gz \
/opt/*/docker-compose.yml \
/opt/*/.env \
/opt/*/config.toml \
/etc/caddy/Caddyfile \
/etc/systemd/system/pocketbase.service
find $BACKUP_DIR -name "configs-*.tar.gz" -mtime +90 -delete
Off-Site Sync with rclone
Set Up rclone
# Install rclone
curl https://rclone.org/install.sh | sudo bash
# Configure remote (S3 example)
rclone config
# Name: s3backup
# Type: s3
# Provider: AWS/Wasabi/Backblaze/MinIO
# Access key, secret key, region, bucket
Sync Backups Off-Site
#!/bin/bash
# sync-offsite.sh
# Copy all local backups to S3. Use copy, not sync: sync mirrors local
# deletions, so pruning local backups would also erase the off-site copies
rclone copy /backups s3backup:my-server-backups/ \
--transfers 4 \
--log-file /var/log/rclone-backup.log
# Enforce the longer off-site retention separately, per data type
rclone delete s3backup:my-server-backups/postgres --min-age 90d
rclone delete s3backup:my-server-backups/sqlite --min-age 90d
rclone delete s3backup:my-server-backups/mysql --min-age 90d
rclone delete s3backup:my-server-backups/volumes --min-age 30d
rclone delete s3backup:my-server-backups/config --min-age 1y
echo "[$(date)] Off-site sync completed" >> /var/log/backups.log
Recommended Off-Site Storage
| Provider | Cost | Notes |
|---|---|---|
| Backblaze B2 | $0.006/GB/month | Cheap hot storage. First 10 GB free |
| Wasabi | $0.007/GB/month | No egress fees |
| AWS S3 Glacier | $0.004/GB/month | Cheapest for archival |
| Hetzner Storage Box | €3.50/month (1 TB) | EU, SFTP/rclone |
| Another VPS | €3.30+/month | Full control |
100 GB of backups costs ~$0.60-0.70/month on Backblaze B2 or Wasabi.
The Master Backup Script
#!/bin/bash
# master-backup.sh — runs all backup scripts
set -e
LOG="/var/log/backups.log"
echo "========================================" >> $LOG
echo "[$(date)] Starting full backup" >> $LOG
# 1. Database backups
/opt/scripts/backup-postgres.sh
/opt/scripts/backup-sqlite.sh
# 2. File backups
/opt/scripts/backup-volumes.sh
# 3. Config backup (weekly)
if [ "$(date +%u)" = "1" ]; then
/opt/scripts/backup-config.sh
fi
# 4. Sync off-site
/opt/scripts/sync-offsite.sh
# 5. Health check — notify if backup succeeds
curl -s "https://status.yourdomain.com/api/push/BACKUP_TOKEN?status=up&msg=OK"
echo "[$(date)] Full backup completed" >> $LOG
Schedule with Cron
# Edit crontab
crontab -e
# Daily full backup at 3 AM
0 3 * * * /opt/scripts/master-backup.sh >> /var/log/backups.log 2>&1
# Hourly database backup for critical services
0 * * * * /opt/scripts/backup-postgres.sh >> /var/log/backups.log 2>&1
Retention Policy
| Data Type | Local Retention | Off-Site Retention |
|---|---|---|
| Database dumps | 30 days | 90 days |
| File backups | 14 days | 30 days |
| Config backups | 90 days | 1 year |
| Vaultwarden | 90 days | 1 year |
Testing Restores
Backups are worthless if you can't restore. Test quarterly:
# 1. Spin up a test PostgreSQL container
docker run -d --name test-restore -e POSTGRES_PASSWORD=test postgres:16-alpine
# 2. Restore a backup
gunzip -c /backups/postgres/outline-20260308_0300.sql.gz | \
docker exec -i test-restore psql -U postgres
# 3. Verify data
docker exec test-restore psql -U postgres -c "SELECT count(*) FROM documents;"
# 4. Clean up
docker stop test-restore && docker rm test-restore
Disaster Recovery Checklist
If your server dies, here's how to recover:
- Provision new VPS (same specs or bigger)
- Install Docker and Caddy
- Restore config files from off-site backup
- Create Docker volumes
- Restore databases from latest dump
- Restore file volumes from latest archive
- Start Docker Compose services
- Update DNS to new server IP
- Verify all services
- Update backup scripts for new server
Recovery time objective: 1-2 hours with a tested recovery plan.
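The first few restore steps can be sketched as a bootstrap script. The rclone remote and paths follow the examples in this guide; treat them as assumptions, and the apply commands are left commented for review:

```shell
#!/bin/bash
# restore-bootstrap.sh — sketch of the config-restore steps on a fresh server
# Newest config archive in a directory (configs-YYYYMMDD.tar.gz naming
# from backup-config.sh above)
latest_archive() {
  ls -t "$1"/configs-*.tar.gz 2>/dev/null | head -1
}
# Pull the off-site config backups down (remote name from the rclone setup above)
# rclone copy s3backup:my-server-backups/config /backups/config
# Unpack compose files, .env files, and the Caddyfile back into place
# tar xzf "$(latest_archive /backups/config)" -C /
```

With the configs restored, `docker compose up -d` in each /opt/* directory re-pulls images and recreates containers; only the database and volume restores remain.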
Monitoring Your Backups
Use Uptime Kuma push monitors:
- Create a Push monitor for each backup script
- Add a curl ping to the end of each script (as shown in the master backup script)
- If a backup doesn't push within the expected interval, you get alerted
Alert on:
- Backup script didn't complete
- Off-site sync failed
- Disk space below 20%
- Backup file size is suspiciously small (corruption)
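The last two checks can be scripted. The helper below is a sketch — the paths, the 1 KB floor, and the 80% threshold are assumptions, not part of the scripts above:

```shell
#!/bin/bash
# check-backup-health.sh — hypothetical helper; paths and thresholds are assumptions
# True if the newest *.sql.gz under $1 is smaller than $2 bytes (likely corrupt)
backup_too_small() {
  local latest
  latest=$(ls -t "$1"/*.sql.gz 2>/dev/null | head -1)
  [ -n "$latest" ] && [ "$(stat -c%s "$latest")" -lt "$2" ]
}
# Percent of disk used on the filesystem holding $1 (empty if the path is missing)
disk_usage_pct() {
  df --output=pcent "$1" 2>/dev/null | tail -1 | tr -dc '0-9'
}
if backup_too_small /backups/postgres 1024; then
  echo "[$(date)] WARNING: latest PostgreSQL dump is under 1 KB" >> /var/log/backups.log
fi
usage=$(disk_usage_pct /backups)
if [ -n "$usage" ] && [ "$usage" -gt 80 ]; then
  echo "[$(date)] WARNING: backup disk is over 80% full" >> /var/log/backups.log
fi
```

Run it from cron after the master backup, or fold the checks into master-backup.sh itself.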
Find the best self-hosting tools and guides on OSSAlt — complete deployment and backup strategies side by side.
Backup Strategy for Self-Hosted Services
A self-hosted service without an automated backup strategy is a liability, not an asset. Disk failures, accidental deletions, and misconfigured updates happen — the question is whether you can recover when they do.
What to back up: Docker named volumes contain all persistent application state — databases, uploaded files, configuration. Your docker-compose.yml and any .env files with secrets are equally critical. Together, these are the complete recovery artifact.
Automated encrypted backups with Duplicati can back up Docker volumes to Backblaze B2, Cloudflare R2, or any S3-compatible storage on a daily schedule. Duplicati encrypts before upload and performs incremental backups — only changed blocks are transferred, keeping storage costs low.
Remote storage costs: Backblaze B2 charges $0.006/GB/month. Cloudflare R2 offers zero egress fees. For most single-server self-hosting setups, 30 days of backup retention costs under $3/month.
Testing restores: A backup that has never been tested is not a reliable backup. Monthly restore drills are the minimum — spin up the backup on a different server, restore the volumes, and verify the application functions correctly. This surfaces backup corruption, missing files, and procedure gaps before they matter.
Database-specific backup: SQL databases (PostgreSQL, MySQL) should be dumped with pg_dump or mysqldump rather than copying raw data files, which may be in an inconsistent state. Schedule daily dumps as separate backup artifacts so you can restore to any day without restoring all volumes.
Monitoring backup success: Uptime Kuma supports heartbeat monitoring — Duplicati can ping a URL after each successful backup, and Uptime Kuma alerts you if the heartbeat is missed, giving you backup failure detection without building custom alerting.
Related Self-Hosting Guides
A reliable backup system is one component of a complete self-hosting stack. For service health monitoring alongside backup heartbeat tracking, Uptime Kuma monitors all your self-hosted services and can alert when backup heartbeats are missed. For a complete deployment platform that simplifies managing multiple services together, Coolify provides container management with built-in SSL and domain handling.
Network Security and Hardening
Self-hosted services exposed to the internet require baseline hardening. The default Docker networking model exposes container ports directly — without additional configuration, any open port is accessible from anywhere.
Firewall configuration: Use ufw (Uncomplicated Firewall) on Ubuntu/Debian or firewalld on RHEL-based systems. Allow only ports 22 (SSH), 80 (HTTP redirect), and 443 (HTTPS). Block all other inbound ports. Note that Docker writes its own iptables rules, so published container ports bypass ufw's filtering by default — use the ufw-docker script or bind container ports to 127.0.0.1 so only the reverse proxy is reachable from outside.
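A minimal ufw baseline matching that policy can be sketched as below. The script defaults to a dry run that only prints each command, so it can be reviewed before applying with DRY_RUN=0 as root:

```shell
#!/bin/bash
# harden-firewall.sh — ufw baseline sketch for Ubuntu/Debian
# Defaults to printing commands; set DRY_RUN=0 and run as root to apply
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }
run ufw default deny incoming
run ufw default allow outgoing
for port in 22 80 443; do
  run ufw allow "$port/tcp"
done
run ufw --force enable
```

Verify afterwards with `sudo ufw status verbose` — only the three allowed ports should appear.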
SSH hardening: Disable password authentication and root login in /etc/ssh/sshd_config. Use key-based authentication only. Consider changing the default SSH port (22) to a non-standard port to reduce brute-force noise in your logs.
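The corresponding sshd_config directives (edit /etc/ssh/sshd_config, restart sshd, and test from a second session before closing your current one):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
```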
Fail2ban: Install fail2ban to automatically ban IPs that make repeated failed authentication attempts. Configure jails for SSH, Nginx, and any application-level authentication endpoints.
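A minimal SSH jail might look like this — the values are a reasonable starting point, not fail2ban's defaults:

```ini
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```

Reload with `sudo systemctl restart fail2ban` and confirm the jail is active with `sudo fail2ban-client status sshd`.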
TLS/SSL: Use Let's Encrypt certificates via Certbot or Traefik's automatic ACME integration. Never expose services over HTTP in production. Configure HSTS headers to prevent protocol downgrade attacks. Check your SSL configuration with SSL Labs' server test — aim for an A or A+ rating.
Container isolation: Avoid running containers as root. Add user: "1000:1000" to your docker-compose.yml service definitions where the application supports non-root execution. Use read-only volumes (volumes: - /host/path:/container/path:ro) for configuration files the container only needs to read.
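In Compose terms, assuming a hypothetical service (the names and image tag are placeholders):

```yaml
services:
  app:
    image: example/app:1.2.3
    user: "1000:1000"                      # non-root, where the image supports it
    volumes:
      - ./config.toml:/app/config.toml:ro  # read-only config mount
```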
Secrets management: Never put passwords and API keys directly in docker-compose.yml files committed to version control. Use Docker secrets, environment files (.env), or a secrets manager like Vault for sensitive configuration. Add .env to your .gitignore before your first commit.
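The pattern, sketched with a placeholder variable: keep the value in .env and let the compose file interpolate it:

```
# .env — listed in .gitignore, never committed
POSTGRES_PASSWORD=use-a-long-generated-value
```

The compose file then references `${POSTGRES_PASSWORD}` in its environment section; docker compose loads .env from the project directory automatically.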
Production Deployment Checklist
Before treating any self-hosted service as production-ready, work through this checklist. Each item represents a class of failure that will eventually affect your service if left unaddressed.
Infrastructure
- Server OS is running latest security patches (`apt upgrade` / `dnf upgrade`)
- Firewall configured: only ports 22, 80, 443 open
- SSH key-only authentication (password auth disabled)
- Docker and Docker Compose are current stable versions
- Swap space configured (at minimum equal to RAM for <4GB servers)
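Creating the swapfile can be sketched as below. The 2 GB default is an assumption — match it to your RAM on small servers — and the function is only applied when you uncomment the call as root:

```shell
#!/bin/bash
# enable-swap.sh — swapfile sketch; run as root to apply
create_swap() {
  local size=${1:-2G}
  fallocate -l "$size" /swapfile
  chmod 600 /swapfile
  mkswap /swapfile
  swapon /swapfile
  # Persist across reboots
  grep -q '^/swapfile' /etc/fstab || echo '/swapfile none swap sw 0 0' >> /etc/fstab
}
# create_swap 2G   # uncomment to apply
```

Confirm with `swapon --show` and `free -h` after enabling.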
Application
- Docker image version pinned (not `latest`) in docker-compose.yml
- Data directories backed by named volumes (not bind mounts to ephemeral paths)
- Environment variables stored in a `.env` file (not hardcoded in compose)
- Container restart policy set to `unless-stopped` or `always`
- Health check configured in Compose or Dockerfile
Networking
- SSL certificate issued and auto-renewal configured
- HTTP requests redirect to HTTPS
- Domain points to server IP (verify with `dig +short your.domain`)
- Reverse proxy (Nginx/Traefik) handles SSL termination
Monitoring and Backup
- Uptime monitoring configured with alerting
- Automated daily backup of Docker volumes to remote storage
- Backup tested with a successful restore drill
- Log retention configured (no unbounded log accumulation)
Access Control
- Default admin credentials changed
- Email confirmation configured if the app supports it
- User registration disabled if the service is private
- Authentication middleware added if the service lacks native login
Conclusion and Getting Started
The self-hosting ecosystem has matured dramatically. What required significant Linux expertise in 2015 is now achievable for any developer comfortable with Docker Compose and a basic understanding of DNS. The tools have gotten better, the documentation has improved, and the community has built enough tutorials that most common configurations have been solved publicly.
The operational overhead that remains is real but manageable. A stable self-hosted service — one that is properly monitored, backed up, and kept updated — requires roughly 30-60 minutes of attention per month once the initial deployment is complete. That time investment is justified for services where data ownership, cost savings, or customization requirements make the cloud alternative unsuitable.
Start with one service. Trying to migrate your entire stack to self-hosted infrastructure at once is a recipe for an overwhelming weekend project that doesn't get finished. Pick the service where the cloud alternative is most expensive or where data ownership matters most, run it for 30 days, and then evaluate whether to expand.
Build your operational foundation before adding services. Get monitoring, backup, and SSL configured correctly for your first service before adding a second. These cross-cutting concerns become easier to extend to new services once the pattern is established, and much harder to retrofit to a fleet of services that were deployed without them.
Treat this like a product. Your self-hosted services have users (even if that's just you). Write a runbook. Document the restore procedure. Create a status page. These practices don't take long but they transform self-hosting from a series of experiments into reliable infrastructure you can depend on.
The community around self-hosted software is active and helpful. Reddit's r/selfhosted, the Awesome-Selfhosted GitHub list, and Discord servers for specific applications all have people who have already solved the problem you're encountering. The configuration questions that feel unique usually aren't.
Backup discipline is not about implementing the perfect system — it is about having any system and actually running it. A daily backup that runs and succeeds is infinitely more valuable than a theoretically perfect backup architecture that hasn't been set up. Start with the simplest thing that works: a cron job that runs duplicati-cli or restic backup nightly and copies the result to Backblaze B2. Verify it ran the next morning. Add complexity only when you have a concrete problem that simplicity can't solve. The self-hosting community has collectively learned that most backup failures are failures of consistency, not failures of technology.