# How to Back Up Your Self-Hosted Services Automatically
*OSSAlt Team*

Tags: backup, self-hosting, docker, disaster-recovery, guide
The #1 risk of self-hosting is data loss. No SaaS vendor is handling backups for you. Here's a complete, automated backup strategy that protects everything you self-host.
## The 3-2-1 Backup Rule

- **3** copies of your data
- **2** different storage types
- **1** copy off-site

For self-hosting, that maps to:

- **Primary:** Live data on your VPS
- **Local backup:** Compressed archives on the same VPS (different disk)
- **Off-site:** Synced to S3, another VPS, or a local NAS
## What to Back Up
| Category | What | How Often | Priority |
|---|---|---|---|
| Databases | PostgreSQL, MySQL, SQLite | Daily | Critical |
| File uploads | User files, attachments, images | Daily | Critical |
| Configuration | Docker Compose, .env files, config.toml | On change | High |
| Secrets | Encryption keys, API keys, certs | On change | Critical |
| Docker volumes | App data not in databases | Daily | Medium |
| Cron jobs | Backup scripts, scheduled tasks | On change | Low |
**Do not back up:** Docker images (re-pullable), temporary files, or logs older than 7 days.
## Database Backup Scripts

### PostgreSQL (most common for self-hosted tools)

```bash
#!/bin/bash
# backup-postgres.sh
BACKUP_DIR="/backups/postgres"
DATE=$(date +%Y%m%d_%H%M)
mkdir -p "$BACKUP_DIR"

# Dump all databases from a shared PostgreSQL instance
docker exec shared-postgres pg_dumpall -U postgres | gzip > "$BACKUP_DIR/all-$DATE.sql.gz"

# Or dump individual databases
for DB in mattermost outline plane keycloak chatwoot n8n listmonk; do
  docker exec shared-postgres pg_dump -U postgres "$DB" | gzip > "$BACKUP_DIR/$DB-$DATE.sql.gz"
done

# Remove local backups older than 30 days
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +30 -delete
echo "[$(date)] PostgreSQL backup completed" >> /var/log/backups.log
```
### SQLite (PocketBase, Uptime Kuma, Vaultwarden)

```bash
#!/bin/bash
# backup-sqlite.sh
BACKUP_DIR="/backups/sqlite"
DATE=$(date +%Y%m%d_%H%M)
mkdir -p "$BACKUP_DIR"

# Note: copying a live SQLite file can capture a mid-write state. For a
# guaranteed-consistent snapshot, stop the container first or run
# sqlite3's ".backup" command inside the container.

# Vaultwarden (CRITICAL — password vault)
docker run --rm -v vw-data:/data -v "$BACKUP_DIR":/backup alpine \
  cp /data/db.sqlite3 "/backup/vaultwarden-$DATE.db"
# Uptime Kuma
docker run --rm -v uptime-kuma:/data -v "$BACKUP_DIR":/backup alpine \
  cp /data/kuma.db "/backup/uptime-kuma-$DATE.db"
# PocketBase
cp /opt/pocketbase/pb_data/data.db "$BACKUP_DIR/pocketbase-$DATE.db"

# Compress today's copies
gzip "$BACKUP_DIR"/*-"$DATE".db

# Keep Vaultwarden backups for 90 days (see retention policy), the rest for 30
find "$BACKUP_DIR" -name "vaultwarden-*.db.gz" -mtime +90 -delete
find "$BACKUP_DIR" -name "*.db.gz" ! -name "vaultwarden-*" -mtime +30 -delete
```
### MySQL/MariaDB

```bash
#!/bin/bash
# backup-mysql.sh
BACKUP_DIR="/backups/mysql"
DATE=$(date +%Y%m%d_%H%M)
mkdir -p "$BACKUP_DIR"

# 'password' is a placeholder — read the real root password from a secrets
# file, or pass it via the MYSQL_PWD environment variable to keep it out
# of the process list.
docker exec nextcloud-db mysqldump -u root -p'password' --all-databases | gzip > "$BACKUP_DIR/all-$DATE.sql.gz"

find "$BACKUP_DIR" -name "*.sql.gz" -mtime +30 -delete
```
## File Backup Scripts

### Docker Volumes

```bash
#!/bin/bash
# backup-volumes.sh
BACKUP_DIR="/backups/volumes"
DATE=$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"

# Back up specific Docker volumes (friendly name → volume name)
declare -A VOLUMES=(
  ["nextcloud"]="nextcloud_data"
  ["mattermost"]="mattermost_data"
  ["outline"]="minio_data"
  ["chatwoot"]="chatwoot_storage"
)

for NAME in "${!VOLUMES[@]}"; do
  VOL=${VOLUMES[$NAME]}
  docker run --rm -v "$VOL":/data -v "$BACKUP_DIR":/backup alpine \
    tar czf "/backup/$NAME-$DATE.tar.gz" -C /data .
done

find "$BACKUP_DIR" -name "*.tar.gz" -mtime +14 -delete
```
### Configuration Files

```bash
#!/bin/bash
# backup-config.sh
BACKUP_DIR="/backups/config"
DATE=$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"

# Archive all compose files and environment configs
tar czf "$BACKUP_DIR/configs-$DATE.tar.gz" \
  /opt/*/docker-compose.yml \
  /opt/*/.env \
  /opt/*/config.toml \
  /etc/caddy/Caddyfile \
  /etc/systemd/system/pocketbase.service

find "$BACKUP_DIR" -name "configs-*.tar.gz" -mtime +90 -delete
```
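Because configs change incrementally, tarballs can also be complemented with version control. A sketch of a git-based alternative, assuming GNU coreutils and an installed git (`snapshot_configs` is a hypothetical helper; adjust the paths to your layout):

```bash
#!/bin/bash
# snapshot-configs-git.sh — keep config history in a local git repo
# so every change is diffable (sketch, not a drop-in replacement).
snapshot_configs() {
  local repo="$1"; shift
  mkdir -p "$repo"
  # Initialize the repo on first run
  git -C "$repo" rev-parse --git-dir >/dev/null 2>&1 || git -C "$repo" init -q
  # --parents (GNU cp) preserves source paths so identically named
  # docker-compose.yml files from different apps don't collide
  cp --parents -f "$@" "$repo"/
  git -C "$repo" add -A
  # Commit only when something actually changed
  git -C "$repo" diff --cached --quiet || \
    git -C "$repo" -c user.name=backup -c user.email=backup@localhost \
      commit -qm "config snapshot $(date +%Y%m%d_%H%M)"
}

# Example: snapshot_configs /opt/configs-repo /opt/*/docker-compose.yml /etc/caddy/Caddyfile
```

Pair this with the tar archive above rather than replacing it; git gives you history, the tarball gives you a single restorable artifact.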
## Off-Site Sync with rclone

### Set Up rclone

```bash
# Install rclone
curl https://rclone.org/install.sh | sudo bash

# Configure a remote interactively (S3 example)
rclone config
# Name: s3backup
# Type: s3
# Provider: AWS/Wasabi/Backblaze/MinIO
# Then supply access key, secret key, region, and bucket
```
### Sync Backups Off-Site

```bash
#!/bin/bash
# sync-offsite.sh — sync all local backups to S3
rclone sync /backups s3backup:my-server-backups/ \
  --transfers 4 \
  --progress \
  --log-file /var/log/rclone-backup.log

echo "[$(date)] Off-site sync completed" >> /var/log/backups.log
```
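A sync that silently uploads corrupt or incomplete files is still a failed backup, so it's worth verifying the remote against the local tree periodically. rclone ships a `check` subcommand for this (a sketch; the bucket name matches the sync script above):

```bash
# Verify that off-site copies match local backups (size/hash comparison).
# --one-way only reports files missing or different on the remote, not
# extra remote files (which retention differences will produce).
rclone check /backups s3backup:my-server-backups/ --one-way \
  --log-file /var/log/rclone-verify.log
# A non-zero exit status means a mismatch — wire this into your alerting.
```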
### Recommended Off-Site Storage
| Provider | Cost | Notes |
|---|---|---|
| Backblaze B2 | $0.005/GB/month | Cheapest. 10 GB free |
| Wasabi | $0.007/GB/month | No egress fees |
| AWS S3 Glacier | $0.004/GB/month | Cheapest for archival |
| Hetzner Storage Box | €3.50/month (1 TB) | EU, SFTP/rclone |
| Another VPS | €3.30+/month | Full control |
100 GB of backups costs ~$0.50-0.70/month on Backblaze B2 or Wasabi.
## The Master Backup Script

```bash
#!/bin/bash
# master-backup.sh — runs all backup scripts
# set -e aborts on the first failure, so the success push at the end
# never fires and the monitor alerts on the missed check-in.
set -e
LOG="/var/log/backups.log"

echo "========================================" >> "$LOG"
echo "[$(date)] Starting full backup" >> "$LOG"

# 1. Database backups
/opt/scripts/backup-postgres.sh
/opt/scripts/backup-sqlite.sh

# 2. File backups
/opt/scripts/backup-volumes.sh

# 3. Config backup (Mondays only)
if [ "$(date +%u)" = "1" ]; then
  /opt/scripts/backup-config.sh
fi

# 4. Sync off-site
/opt/scripts/sync-offsite.sh

# 5. Health check — push only if the whole run succeeded
curl -s "https://status.yourdomain.com/api/push/BACKUP_TOKEN?status=up&msg=OK"

echo "[$(date)] Full backup completed" >> "$LOG"
```
### Schedule with Cron

```bash
# Edit crontab
crontab -e

# Daily full backup at 3 AM
0 3 * * * /opt/scripts/master-backup.sh 2>&1 | tee -a /var/log/backups.log

# Hourly database backup for critical services
0 * * * * /opt/scripts/backup-postgres.sh 2>&1 | tee -a /var/log/backups.log
```
## Retention Policy
| Data Type | Local Retention | Off-Site Retention |
|---|---|---|
| Database dumps | 30 days | 90 days |
| File backups | 14 days | 30 days |
| Config backups | 90 days | 1 year |
| Vaultwarden | 90 days | 1 year |
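The local column of this table can be enforced in one place instead of scattering `find` calls across every script. A sketch (`prune_older_than` is a hypothetical helper; paths match the scripts above):

```bash
#!/bin/bash
# apply-retention.sh — enforce the local retention policy in one place.
prune_older_than() {
  local dir="$1" days="$2" pattern="$3"
  [ -d "$dir" ] || return 0  # nothing to prune yet
  # -mtime +N matches files last modified more than N days ago
  find "$dir" -name "$pattern" -mtime +"$days" -delete
}

prune_older_than /backups/postgres 30 "*.sql.gz"
prune_older_than /backups/sqlite   90 "vaultwarden-*.db.gz"
prune_older_than /backups/sqlite   30 "*kuma*.db.gz"
prune_older_than /backups/volumes  14 "*.tar.gz"
prune_older_than /backups/config   90 "configs-*.tar.gz"
```

Off-site retention is best handled on the storage side (bucket lifecycle rules) rather than by deleting through rclone.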
## Testing Restores

Backups are worthless if you can't restore them. Test quarterly:

```bash
# 1. Spin up a throwaway PostgreSQL container
docker run -d --name test-restore -e POSTGRES_PASSWORD=test postgres:16-alpine
sleep 5  # give PostgreSQL a moment to accept connections

# 2. Restore a backup (a single-database dump lands in the default
#    'postgres' database, which is fine for a smoke test)
gunzip -c /backups/postgres/outline-20260308.sql.gz | \
  docker exec -i test-restore psql -U postgres

# 3. Verify the data is actually there
docker exec test-restore psql -U postgres -c "SELECT count(*) FROM documents;"

# 4. Clean up
docker stop test-restore && docker rm test-restore
```
## Disaster Recovery Checklist

If your server dies, recover in this order:

1. Provision a new VPS (same specs or bigger)
2. Install Docker and Caddy
3. Restore config files from the off-site backup
4. Create the Docker volumes
5. Restore databases from the latest dumps
6. Restore file volumes from the latest archives
7. Start the Docker Compose services
8. Update DNS to the new server IP
9. Verify all services
10. Update the backup scripts for the new server
Recovery time objective: 1-2 hours with a tested recovery plan.
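The volume and database steps of the checklist look like this in practice (a sketch; the volume name, archive date, and `/restore` staging directory are examples that must match your own backups):

```bash
#!/bin/bash
# restore-example.sh — sketch of restoring volumes and databases on a
# fresh server. Assumes off-site backups were first pulled down with:
#   rclone copy s3backup:my-server-backups/ /restore/

# Recreate a Docker volume and unpack the latest file archive into it
docker volume create nextcloud_data
docker run --rm -v nextcloud_data:/data -v /restore/volumes:/backup alpine \
  tar xzf /backup/nextcloud-20260308.tar.gz -C /data

# Restore PostgreSQL from the pg_dumpall dump (the dump contains
# CREATE DATABASE statements, so all databases are recreated)
gunzip -c /restore/postgres/all-20260308.sql.gz | \
  docker exec -i shared-postgres psql -U postgres
```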
## Monitoring Your Backups
Use Uptime Kuma push monitors:
- Create a Push monitor for each backup script
- Add a `curl` push to the end of each script (as shown in the master backup script)
- If a backup doesn't push within the expected interval, you get alerted
Alert on:
- Backup script didn't complete
- Off-site sync failed
- Disk space below 20%
- Backup file size is suspiciously small (corruption)
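The last two alerts (low disk space and suspiciously small files) can be automated with a small check script. A sketch, assuming GNU coreutils; `check_latest_backup` and `disk_used_pct` are hypothetical helpers, and the 10 KB threshold is an example you should tune per service:

```bash
#!/bin/bash
# check-backup-health.sh — flag missing or suspiciously small backups.
check_latest_backup() {
  local dir="$1" min_bytes="$2"
  local latest size
  # Newest file in the backup directory
  latest=$(ls -t "$dir" 2>/dev/null | head -n1)
  if [ -z "$latest" ]; then
    echo "ALERT: no backups found in $dir"
    return 1
  fi
  size=$(stat -c%s "$dir/$latest")
  if [ "$size" -lt "$min_bytes" ]; then
    echo "ALERT: $dir/$latest is only $size bytes (possible corruption)"
    return 1
  fi
  echo "OK: $dir/$latest ($size bytes)"
}

# Percentage of the filesystem holding a path that is in use
disk_used_pct() {
  df --output=pcent "$1" | tail -n1 | tr -d ' %'
}

# Example: alert if the newest PostgreSQL dump is under 10 KB
# check_latest_backup /backups/postgres 10240 || \
#   curl -s "https://status.yourdomain.com/api/push/BACKUP_TOKEN?status=down&msg=bad-backup"
```

Run it from cron shortly after the nightly backup window and route failures to the same Uptime Kuma push monitor.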
Find the best self-hosting tools and guides on OSSAlt — complete deployment and backup strategies side by side.