Automated Server Backups with Restic and Rclone 2026
TL;DR
Restic + Rclone is the gold standard for self-hosted server backups in 2026. Restic handles encrypted, deduplicated snapshots — perfect for VPS data, Docker volumes, and config files. Rclone syncs the backup repository to any cloud storage (Backblaze B2, S3, Wasabi, SFTP). Together: automated encrypted offsite backups for ~$3/month in storage costs. Never lose data again.
Key Takeaways
- Restic: BSD 2-Clause, ~26K stars, Go — encrypted, deduplicated, compressed backups
- Rclone: MIT, ~46K stars, Go — syncs to 70+ cloud storage providers
- Deduplication: Restic only stores changed chunks — a 10GB backup with 1GB changed adds only ~1GB of new storage
- Encryption: AES-256 with Poly1305-AES authentication — all data encrypted before leaving your server
- 3-2-1 rule: 3 copies, 2 different media, 1 offsite — Restic + B2 achieves this easily
- Cost: Backblaze B2 at $0.006/GB/month — 100GB = $0.60/month
Why Restic + Rclone?
| Tool | Role | Alternative |
|---|---|---|
| Restic | Create snapshots (dedup, encrypt, verify) | Borg, Kopia, Duplicati |
| Rclone | Sync repo to cloud storage | aws s3 sync, s3cmd |
Restic vs Borg: Both are excellent. Restic is simpler to set up, supports more backends natively, and has better cross-platform support. Borg is faster and has a larger community in the Linux server space. Either works.
Restic directly supports cloud backends (S3, B2, SFTP, etc.) without Rclone — but Rclone gives you 70+ providers with a single config format. The combination gives maximum flexibility.
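If you decide to skip Rclone, Restic's native B2 backend needs only two environment variables. A minimal sketch, assuming the same bucket name used later in this guide and placeholder credentials:

```shell
# Native B2 backend — no Rclone required (credentials are placeholders):
export B2_ACCOUNT_ID="your-b2-key-id"
export B2_ACCOUNT_KEY="your-b2-application-key"
export RESTIC_REPOSITORY="b2:my-server-backups:/"
restic init
```

The rest of the guide uses the `rclone:` backend, but every Restic command shown works identically against a native backend — only `RESTIC_REPOSITORY` changes.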
Part 1: Install Restic and Rclone
# Ubuntu/Debian:
apt-get install -y restic
# macOS:
brew install restic
# Direct download (any Linux) — check https://github.com/restic/restic/releases for the current version:
RESTIC_VER=0.17.3
wget "https://github.com/restic/restic/releases/download/v${RESTIC_VER}/restic_${RESTIC_VER}_linux_amd64.bz2"
bunzip2 "restic_${RESTIC_VER}_linux_amd64.bz2"
chmod +x "restic_${RESTIC_VER}_linux_amd64"
mv "restic_${RESTIC_VER}_linux_amd64" /usr/local/bin/restic
# Install Rclone:
curl https://rclone.org/install.sh | sudo bash
Part 2: Set Up Backblaze B2 (Recommended Cloud Backend)
Backblaze B2 is the cheapest major cloud storage at $0.006/GB/month ($6/TB) — roughly a quarter of the price of AWS S3 Standard.
- Create a Backblaze account
- Create a bucket: my-server-backups
- Create an Application Key with read/write access to that bucket
Configure Rclone for B2
rclone config
Follow the wizard:
- n → New remote
- Name: b2
- Storage: Backblaze B2
- Account: your B2 Account ID
- Key: your Application Key
- Leave other settings as default
Verify:
rclone ls b2:my-server-backups
# Should return nothing (or list any existing files)
Part 3: Initialize Restic Repository
# Set password as env var (or use a password file):
export RESTIC_PASSWORD="your-strong-backup-password-here"
export RESTIC_REPOSITORY="rclone:b2:my-server-backups"
# Initialize repository:
restic init
# Output:
# created restic repository abc12345 at rclone:b2:my-server-backups
# Please note that knowledge of your password is required to access
# the repository. Losing your password means that your data is
# irrecoverably lost!
Save your password somewhere safe — without it, your backups are permanently inaccessible (that's the encryption working as intended).
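Restic repositories also support multiple keys, so you can register a second password as a fallback. A sketch, assuming a repository that is already initialized and reachable:

```shell
# Add a second password to the repository (prompts for the new one):
restic key add
# List all keys registered for the repository:
restic key list
```

Store the second password in a different location than the first (for example, a team password manager) so a single loss doesn't lock you out.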
For local backup repository (fast, for dev):
export RESTIC_REPOSITORY="/backup/local-restic"
restic init
Part 4: Your First Backup
# Backup a directory:
restic backup /opt/docker /etc /home
# With tags (label for filtering snapshots later):
restic backup /opt/docker --tag docker,production
# Backup a specific Docker volume mount path:
restic backup /var/lib/docker/volumes/nextcloud_data/_data \
--tag nextcloud
# Dry run (see what would be backed up):
restic backup /opt/docker --dry-run -v
Exclude Patterns
restic backup /home \
--exclude="*/node_modules" \
--exclude="*/.git" \
--exclude="*/vendor" \
--exclude="*.log" \
--exclude="*/cache/*"
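Longer exclude lists are easier to maintain in a file passed via --exclude-file. A sketch — the file path is an arbitrary choice, not a Restic convention:

```shell
# Keep exclude patterns in a file, one pattern per line:
mkdir -p /etc/restic
cat > /etc/restic/excludes.txt <<'EOF'
*/node_modules
*/.git
*/vendor
*.log
*/cache/*
EOF
restic backup /home --exclude-file=/etc/restic/excludes.txt
```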
Part 5: Automate with a Backup Script
Create /usr/local/bin/backup.sh:
#!/bin/bash
# /usr/local/bin/backup.sh — Automated Restic backup
set -euo pipefail
# Configuration
export RESTIC_REPOSITORY="rclone:b2:my-server-backups"
export RESTIC_PASSWORD_FILE="/root/.restic-password" # More secure than env var
export RCLONE_CONFIG="/root/.config/rclone/rclone.conf"
# Logging
LOG_FILE="/var/log/backup.log"
log() {
    # Compute the timestamp per call, not once at script start:
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
# --- Backup Docker volumes and configs ---
log "Starting backup..."
restic backup \
/opt/docker \
/etc \
/root/.config \
--tag server,$(hostname) \
--exclude="*/node_modules" \
--exclude="*/.git" \
--exclude="*.tmp" \
>> "$LOG_FILE" 2>&1
log "Backup completed."
# --- Forget old snapshots (retention policy) ---
restic forget \
--keep-last 7 \
--keep-daily 14 \
--keep-weekly 8 \
--keep-monthly 6 \
--prune \
>> "$LOG_FILE" 2>&1
log "Pruned old snapshots."
# --- Verify latest snapshot integrity ---
restic check --read-data-subset=5% >> "$LOG_FILE" 2>&1
log "Integrity check passed."
log "Backup finished successfully."
chmod +x /usr/local/bin/backup.sh
# Create password file (more secure than environment variable):
echo "your-strong-backup-password" > /root/.restic-password
chmod 600 /root/.restic-password
Schedule with Cron
crontab -e
# Run backup daily at 3am:
0 3 * * * /usr/local/bin/backup.sh >> /var/log/backup-cron.log 2>&1
# Run backup twice daily (high availability data):
0 3,15 * * * /usr/local/bin/backup.sh >> /var/log/backup-cron.log 2>&1
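If a backup run can ever outlast the interval between cron invocations, wrap the job in flock so runs never overlap. The lock file path is arbitrary:

```shell
# Skip the run if a previous backup still holds the lock (-n = non-blocking):
0 3 * * * flock -n /var/lock/backup.lock /usr/local/bin/backup.sh >> /var/log/backup-cron.log 2>&1
```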
Schedule with Systemd Timer (preferred on systemd systems)
# /etc/systemd/system/backup.service
[Unit]
Description=Restic Backup
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh
User=root
StandardOutput=append:/var/log/backup.log
StandardError=append:/var/log/backup.log
# /etc/systemd/system/backup.timer
[Unit]
Description=Run Restic backup daily
[Timer]
OnCalendar=*-*-* 03:00:00
# Spread load by up to 30 minutes (systemd does not allow inline comments):
RandomizedDelaySec=1800
# Run on next boot if the system was off at the scheduled time:
Persistent=true
[Install]
WantedBy=timers.target
systemctl enable --now backup.timer
systemctl status backup.timer
Part 6: Back Up PostgreSQL and MySQL
PostgreSQL
# Dump and pipe directly into Restic (no temp file):
pg_dump -U myapp myapp | restic backup --stdin \
--stdin-filename postgres-myapp-$(date +%Y%m%d).sql \
--tag postgres,myapp
# All databases:
pg_dumpall -U postgres | restic backup --stdin \
--stdin-filename postgres-all-$(date +%Y%m%d).sql
MySQL / MariaDB
mysqldump -u root -p"${MYSQL_ROOT_PASSWORD}" \
--all-databases --single-transaction \
| restic backup --stdin \
--stdin-filename mysql-all-$(date +%Y%m%d).sql \
--tag mysql
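Stdin backups are restored with restic dump, which streams a single file from a snapshot to stdout — so a dump can go straight back into the database without touching disk. A sketch; the snapshot filename is an example and must match whatever --stdin-filename produced:

```shell
# Find the snapshot holding the dump:
restic snapshots --tag mysql
# Stream the dump out of the repository and into MySQL:
restic dump latest /mysql-all-20260309.sql | mysql -u root -p
```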
Docker-Aware Backup Script
#!/bin/bash
# backup-docker-databases.sh — Dump all DBs then backup
# PostgreSQL containers:
for container in $(docker ps --filter "ancestor=postgres" --format "{{.Names}}"); do
DB_NAME=$(docker exec "$container" env | grep POSTGRES_DB | cut -d= -f2)
DB_USER=$(docker exec "$container" env | grep POSTGRES_USER | cut -d= -f2)
docker exec "$container" pg_dump -U "$DB_USER" "$DB_NAME" | \
restic backup --stdin \
--stdin-filename "${container}-${DB_NAME}.sql" \
--tag postgres,docker,"$container"
done
Part 7: Restore from Backup
# List all snapshots:
restic snapshots
# Output:
# ID Time Host Tags Paths
# abc12345 2026-03-09 03:01:23 myhost server,docker /opt/docker, /etc
# List files in a snapshot:
restic ls abc12345
# Restore entire snapshot to /tmp/restore:
restic restore abc12345 --target /tmp/restore
# Restore latest snapshot:
restic restore latest --target /restore
# Restore specific directory from snapshot:
restic restore abc12345 --target /restore \
--include /opt/docker/nextcloud
# Mount snapshot as filesystem (for browsing):
mkdir /mnt/restic
restic mount /mnt/restic &
ls /mnt/restic/snapshots/latest/opt/docker
# Browse and copy exactly what you need
umount /mnt/restic
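Before restoring, restic diff is useful for pinpointing when a file was deleted or modified between two snapshots. The IDs below are examples:

```shell
# Compare two snapshots by ID:
restic diff abc12345 def67890
# Output marks entries with + (added), - (removed), M (modified)
```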
Part 8: Multi-Destination Backup (3-2-1 Rule)
The 3-2-1 rule: 3 copies, 2 different storage media, 1 offsite. Achieve this with multiple repositories:
#!/bin/bash
# backup-321.sh — 3-2-1 backup strategy
# 1. Local backup (fast, for quick restores):
RESTIC_REPOSITORY="/backup/local" restic backup /opt/docker /etc
# 2. Remote B2 backup (offsite, cheap):
RESTIC_REPOSITORY="rclone:b2:backups-primary" restic backup /opt/docker /etc
# 3. Remote S3 backup (second offsite, different provider):
RESTIC_REPOSITORY="s3:s3.amazonaws.com/my-backup-bucket" restic backup /opt/docker /etc
echo "3-2-1 backup complete."
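Reading the source data three times is wasteful. An alternative sketch using restic copy (available since restic 0.14): back up once to the local repository, then mirror its snapshots to the remote. Password file paths are assumptions for this example:

```shell
# Back up once to the local repo:
restic -r /backup/local backup /opt/docker /etc
# Mirror snapshots from the local repo into the B2 repo:
restic -r "rclone:b2:backups-primary" \
    --password-file /root/.restic-b2-password \
    copy \
    --from-repo /backup/local \
    --from-password-file /root/.restic-local-password
```

Note that copied data only deduplicates well across repositories if the target was initialized with `restic init --copy-chunker-params --from-repo <source>`, so both repositories chunk data identically.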
Forget and Prune per Repository
for repo in "/backup/local" "rclone:b2:backups-primary" "s3:s3.amazonaws.com/my-backup-bucket"; do
RESTIC_REPOSITORY="$repo" restic forget \
--keep-last 7 \
--keep-daily 14 \
--keep-weekly 8 \
--prune
done
Part 9: Monitoring and Alerts
Healthchecks.io Integration
Healthchecks.io monitors cron jobs — send a ping after each successful backup, get alerted if it doesn't run:
# Add to backup.sh after successful completion:
curl -fsS --retry 3 https://hc-ping.com/your-uuid > /dev/null
# Ping on failure:
trap 'curl -fsS --retry 3 https://hc-ping.com/your-uuid/fail > /dev/null' ERR
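Healthchecks.io also accepts a /start ping, which lets it measure run duration and catch jobs that hang rather than fail outright. A sketch of the full pattern for backup.sh — the UUID is a placeholder, and the ERR trap assumes the script runs with set -e as in Part 5:

```shell
HC_URL="https://hc-ping.com/your-uuid"
# Signal job start (enables runtime measurement):
curl -fsS --retry 3 "${HC_URL}/start" > /dev/null
# Ping /fail if any command errors out:
trap 'curl -fsS --retry 3 "${HC_URL}/fail" > /dev/null' ERR
# ... backup, forget, check commands here ...
# Signal success at the very end:
curl -fsS --retry 3 "$HC_URL" > /dev/null
```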
Self-host Healthchecks.io with Docker — it's open source (BSD 3-Clause).
Backup Size Monitoring
# Check repository stats:
restic stats
# Output (restore-size mode, the default):
#   Total File Count: 1234567
#   Total Size: 12.345 GiB
# On-disk, deduplicated size of the repository:
restic stats --mode raw-data
Cost Comparison
| Storage Backend | Cost per 100GB/month | Notes |
|---|---|---|
| Backblaze B2 | $0.60 | Free egress up to 3x monthly storage |
| Wasabi | $0.59 | No egress fees |
| Cloudflare R2 | $1.50 | No egress fees |
| AWS S3 Standard | $2.30 | + egress fees |
| Hetzner Storage Box | €0.38 | SFTP, 1TB for €3.81 |
| Self-hosted (ext HDD) | ~$0.01 | Hardware cost amortized |
Recommendation: Backblaze B2 for primary offsite backup, Hetzner Storage Box for secondary offsite, local SSD for immediate restore speed.
Quick Reference
# Common Restic commands:
restic init # Initialize repository
restic backup /path/to/data # Create snapshot
restic snapshots # List snapshots
restic ls latest # List files in latest snapshot
restic restore latest --target /tmp # Restore to /tmp
restic forget --keep-last 7 --prune # Delete old snapshots
restic check # Verify repository integrity
restic stats # Show repository size/stats
restic mount /mnt/restic # Browse as filesystem
Designing Your Backup Strategy for a Full Self-Hosted Stack
A reliable backup strategy needs to account for every stateful service in your infrastructure, not just the obvious ones. For teams running a self-hosted stack — a PaaS platform, Git hosting, a CRM, analytics, monitoring, and automation tools — the volume of data that needs protection grows quickly. Restic handles this well because it deduplicates across snapshots and across directories, so backing up multiple services together is more efficient than backing them up separately.
The recommended approach for a complete self-hosted stack is to organize your backup script around service categories. Application code lives in your Git repositories and doesn't need to be in your Restic backup (it's already versioned). What needs backing up is state: database dumps, uploaded files, Docker volume data, configuration files, and TLS certificates.
For a typical setup running on Coolify or Dokku, the stateful directories you need to back up include the Docker volume mount paths (usually /var/lib/docker/volumes/), your application configuration in /etc/ subdirectories, and any data directories you've mapped into containers. If you're running Gitea or Forgejo for Git hosting, the repository data directory is critical — it contains every repository, commit history, issue, and wiki for your organization. Losing it without a backup would mean starting over from scratch.
A sensible retention policy for production infrastructure is: keep the last 7 daily snapshots for quick recovery from accidental deletions, 4 weekly snapshots for recovering from issues that took a week to notice, and 6 monthly snapshots for longer-term data recovery needs. This balances storage cost against recovery capability. At Backblaze B2 prices ($0.006/GB/month), retaining 50GB of deduplicated backup data for 6 months costs under $2 per month — substantially cheaper than any managed backup service.
The restic check command is underused by most teams. Running it weekly (or after the daily backup as shown in Part 5's script) validates that your backup repository hasn't become corrupted and that your snapshots are actually recoverable. There is nothing worse than discovering your backups have been failing silently for three months when you actually need to restore. The --read-data-subset=5% flag checks a random 5% of stored data each run, giving you ongoing integrity verification without downloading the entire repository every day.
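The daily 5% subset check can be complemented with a deeper weekly pass on its own schedule. A sketch — the percentage is a suggestion, and the entry assumes RESTIC_REPOSITORY and RESTIC_PASSWORD_FILE are defined at the top of root's crontab:

```shell
# Weekly on Sunday at 5am: verify a larger random slice of the repository
0 5 * * 0 restic check --read-data-subset=20% >> /var/log/backup-check.log 2>&1
```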
For teams self-hosting a Gitea or Forgejo instance, Restic's stdin backup mode is particularly useful. You can pipe a gitea admin dump directly into Restic without writing a temporary file to disk, keeping your backup encrypted from the moment it leaves the application. The same applies to PostgreSQL with pg_dump and MySQL with mysqldump as shown in Part 6.
Disaster Recovery Planning: Beyond Just Running Backups
Having backups and having a working disaster recovery plan are two different things. Most teams that implement Restic and Rclone stop at "backups are running" without ever testing the restore process. This is dangerous — an untested restore procedure is an unreliable restore procedure.
The minimum viable disaster recovery test is to restore a recent snapshot to a temporary directory and verify that the files are present and uncorrupted. Run restic restore latest --target /tmp/restore-test monthly, spot-check a few critical files, and delete the temporary directory. This takes five minutes and confirms your encryption key works, your Rclone connection to B2 is healthy, and the snapshot data is intact.
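The monthly drill can be scripted end to end. This sketch restores the latest snapshot to a temp directory and fails loudly if an expected file is missing — the checked path is an example; pick a file you know every snapshot should contain:

```shell
#!/bin/bash
# restore-test.sh — monthly restore drill (paths assume the setup from Part 5)
set -euo pipefail
export RESTIC_REPOSITORY="rclone:b2:my-server-backups"
export RESTIC_PASSWORD_FILE="/root/.restic-password"

TARGET=$(mktemp -d /tmp/restore-test.XXXXXX)
restic restore latest --target "$TARGET"

# Spot-check a file that should always be present and non-empty:
if [ ! -s "$TARGET/etc/hostname" ]; then
    echo "Restore test FAILED: expected file missing or empty" >&2
    exit 1
fi
echo "Restore test passed."
rm -rf "$TARGET"
```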
For database backups specifically, restoration testing means actually loading the SQL dump into a temporary database and querying it. A corrupt SQL dump will fail at the pg_restore or mysql import step. Running this test quarterly catches issues like encoding problems, truncated dumps from disk-full events, or permission errors that prevent the dump from completing successfully.
Recovery time objective (RTO) and recovery point objective (RPO) are worth defining explicitly for your infrastructure. RTO is how long you can tolerate being offline while restoring. RPO is how much data loss is acceptable. For a team running daily backups at 3 AM, the maximum RPO is 24 hours — any event that occurs between backups results in up to 24 hours of data loss. If that's unacceptable for certain data, run more frequent backups for those specific directories. Critical application databases can be backed up every 6 hours using a separate cron entry calling only the database dump sections of your backup script.
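A tighter RPO for databases takes only a second cron entry that runs the dump steps alone — here assuming the Docker-aware script from Part 6 is saved as /usr/local/bin/backup-docker-databases.sh:

```shell
# Full backup daily at 3am, database dumps every 6 hours:
0 3 * * * /usr/local/bin/backup.sh >> /var/log/backup-cron.log 2>&1
0 */6 * * * /usr/local/bin/backup-docker-databases.sh >> /var/log/backup-db.log 2>&1
```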
Pair your backup monitoring with a broader observability setup. If you're using Grafana or Uptime Kuma for infrastructure monitoring, add a Healthchecks.io check (as shown in Part 9) alongside it. Uptime Kuma can monitor your Healthchecks.io endpoint to alert you if the backup job hasn't pinged it in over 25 hours. Grafana can display backup job success/failure metrics over time. This way, backup failures surface as alerts rather than being discovered during an incident.
One commonly overlooked aspect of the 3-2-1 backup rule is the "1 offsite" requirement. Many teams implement "local backup to external drive + remote backup to the same datacenter." That's two copies on different media but in the same physical location — a flood, fire, or datacenter power event takes out both. Rclone makes it easy to replicate to providers in geographically distinct regions: Backblaze B2 in a US region and Hetzner's storage in Falkenstein (Germany) is a genuinely offsite pair that covers most single-provider failure scenarios.
Finally, document your backup and restore procedure in a runbook stored separately from the servers themselves. GitHub, Notion, or a simple text file in a password manager will do. The runbook should include: the Restic repository URL, how to retrieve the encryption password (from a password manager or vault), the exact commands to restore, and the expected time each step takes. During an actual outage is not the time to be reconstructing this information from memory.
One frequently neglected component of backup strategy is the backup of the backup configuration itself. Your Rclone configuration file (typically at ~/.config/rclone/rclone.conf) contains your storage provider credentials. Your Restic password file or environment variable is required to access any encrypted snapshot. If you lose your server and need to restore from scratch, you need both the backup data and the configuration to access it. Store both in a password manager or secure secrets vault — not on the same server as the backups themselves.
The incremental storage model of Restic means that backup storage costs remain predictable even for rapidly changing data. Restic splits data into variable-size chunks using content-defined chunking (CDC), then deduplicates identical chunks across all snapshots. A 10GB application directory where 500MB of data changes daily will accumulate roughly 500MB of new storage per day — not 10GB. Over a month, that's 15GB of incremental data rather than 300GB of full daily backups. This deduplication efficiency is the main reason Restic is recommended over tools like rsync for backup scenarios where storage cost matters.
For teams running multiple services on the same server — a common configuration when using a platform like Coolify or Dokku to host several applications — the single-repository model (backing up all services into one Restic repository) maximizes deduplication efficiency. Shared libraries, base Docker layers, and common configuration files are stored only once regardless of how many services reference them. Tags make it easy to restore specific services from the combined repository without needing to extract the full dataset.
See all open source backup tools at OSSAlt.com/categories/backup.