
Litestream vs WAL-G vs pgBackRest

OSSAlt Team
litestream · wal-g · pgbackrest · postgres · backup · self-hosting · sqlite · 2026

TL;DR

Litestream continuously replicates SQLite databases to S3 — it's for SQLite, not Postgres. WAL-G archives Postgres WAL (Write-Ahead Log) to S3/GCS/Azure continuously — sub-second RPO (Recovery Point Objective) with minimal overhead. pgBackRest is a full-featured Postgres backup solution with incremental backups, parallel processing, compression, and encryption. For most self-hosted Postgres deployments: WAL-G for continuous point-in-time recovery, pgBackRest when you need advanced features.

Key Takeaways

  • Litestream: Apache 2.0, ~9K stars, Go — SQLite only, real-time S3 streaming
  • WAL-G: Apache 2.0, ~4K stars, Go — Postgres WAL archiving to S3/GCS/Azure/SFTP
  • pgBackRest: MIT, ~3K stars, C — full backup suite (full/diff/incr), parallel, encryption
  • pg_dump: Built into Postgres — logical backups, good for small DBs, not for PITR
  • RPO comparison: WAL-G/pgBackRest ~few seconds; pg_dump depends on schedule (hourly = 1hr RPO)
  • RTO comparison: WAL-G restore ~5–30 min; pgBackRest ~5 min (parallel); pg_dump restore varies

Understanding Postgres Backup Types

Before comparing tools, understand the backup strategies:

| Type | How It Works | RPO | RTO | Use Case |
|------|--------------|-----|-----|----------|
| Logical (pg_dump) | SQL dump of data | Hours (cron) | Minutes–hours | Small DBs, schema migrations |
| Physical (base backup) | Copy data directory | Continuous | Minutes | Medium–large DBs |
| WAL archiving | Stream WAL changes | Seconds | Minutes | Production with PITR |
| Streaming replication | Replica follows primary | Near-zero | Failover time | HA, not backup |

WAL archiving + base backup = Point-In-Time Recovery (PITR): restore to any point in time, not just scheduled backup times.


Litestream: Continuous SQLite Replication

Litestream streams every SQLite transaction to S3 in real time. It's specifically for SQLite — not Postgres. But if you're running SQLite (which is increasingly common for small apps, n8n SQLite mode, Forgejo, etc.), Litestream is the best backup tool available.

Why Litestream for SQLite

The classic SQLite problem: you can't just copy a .db file while the process is writing to it (the copy may be corrupt). Litestream uses SQLite's WAL mode to capture and stream every page change.

# Install Litestream:
wget https://github.com/benbjohnson/litestream/releases/latest/download/litestream-linux-amd64.tar.gz
tar xzf litestream-linux-amd64.tar.gz
mv litestream /usr/local/bin/

Configuration

# /etc/litestream.yml
access-key-id: your-b2-key-id
secret-access-key: your-b2-application-key

dbs:
  - path: /app/data/app.db
    replicas:
      - url: s3://my-bucket/app-db
        endpoint: https://s3.us-west-004.backblazeb2.com

  - path: /app/data/sessions.db
    replicas:
      - url: s3://my-bucket/sessions-db
        endpoint: https://s3.us-west-004.backblazeb2.com
      - path: /backup/local/sessions.db   # Local replica too
# Run as daemon:
litestream replicate -config /etc/litestream.yml

# Or as systemd service:
systemctl enable litestream
systemctl start litestream
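
For the systemd route, a minimal unit sketch (paths assumed from the install steps above; official packages ship their own unit):

```ini
# /etc/systemd/system/litestream.service — hypothetical unit, adjust paths as needed
[Unit]
Description=Litestream replication
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/litestream replicate -config /etc/litestream.yml
Restart=always

[Install]
WantedBy=multi-user.target
```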

Restore from Litestream

# Restore latest:
litestream restore -config /etc/litestream.yml /app/data/app.db

# Restore to specific point in time:
litestream restore -config /etc/litestream.yml \
  -timestamp "2026-03-09T12:00:00Z" \
  /app/data/app.db

# List available restore points:
litestream generations -config /etc/litestream.yml /app/data/app.db

Docker Integration (run alongside your app)

services:
  app:
    image: your-app
    volumes:
      - app_data:/data

  litestream:
    image: litestream/litestream:latest
    command: replicate
    volumes:
      - app_data:/data
      - ./litestream.yml:/etc/litestream.yml
    environment:
      AWS_ACCESS_KEY_ID: your-key-id
      AWS_SECRET_ACCESS_KEY: your-secret-key

volumes:
  app_data:
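
A common alternative to the sidecar is letting Litestream supervise the app itself: restore on boot if the local database is missing, then run the app under `litestream replicate -exec` so replication starts and stops with the process. A container entrypoint sketch (the app path is hypothetical):

```shell
#!/bin/sh
set -e
# Restore only if no local DB exists and a replica is available
litestream restore -if-db-not-exists -if-replica-exists /data/app.db
# Replicate while running the app as a child process
exec litestream replicate -exec "/usr/local/bin/your-app"
```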

WAL-G: Continuous Postgres WAL Archiving

WAL-G archives Postgres WAL files to cloud storage as they're generated. Combined with periodic base backups, this gives you continuous PITR.

How WAL Archiving Works

Postgres writes → WAL files → WAL-G archives to S3 → Base backup periodically

Restore = apply base backup + replay WAL files up to target time.

Docker Setup with WAL-G

services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"

      # WAL-G environment:
      WALG_S3_PREFIX: s3://my-bucket/postgres-backups
      AWS_ACCESS_KEY_ID: "${AWS_ACCESS_KEY_ID}"
      AWS_SECRET_ACCESS_KEY: "${AWS_SECRET_ACCESS_KEY}"
      AWS_ENDPOINT: "https://s3.us-west-004.backblazeb2.com"   # B2 endpoint
      WALG_COMPRESSION_METHOD: brotli     # Smaller archives than the lz4 default, at higher CPU cost
      WALG_DELTA_MAX_STEPS: 6             # Incremental steps before full backup

    volumes:
      - pg_data:/var/lib/postgresql/data
      - ./postgresql.conf:/etc/postgresql/postgresql.conf

volumes:
  pg_data:
# postgresql.conf additions:
archive_mode = on
archive_command = 'wal-g wal-push %p'
archive_timeout = 60         # Archive WAL every 60 seconds even if not full

restore_command = 'wal-g wal-fetch %f %p'
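
After enabling archive_mode, it's worth confirming WAL segments are actually being shipped; Postgres tracks this in the `pg_stat_archiver` view:

```shell
# Check archiver health from inside the container (user/db names from the compose file above)
docker exec postgres psql -U myapp -c \
  "SELECT archived_count, last_archived_wal, failed_count, last_failed_wal FROM pg_stat_archiver;"
```

A non-zero, growing failed_count usually means the archive_command can't reach the bucket (bad credentials or endpoint).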

Take Base Backups

# Schedule in cron (daily at 1am):
0 1 * * * docker exec postgres wal-g backup-push /var/lib/postgresql/data

# Or from host using docker exec:
docker exec postgres wal-g backup-push /var/lib/postgresql/data

List and Manage Backups

# List backups:
docker exec postgres wal-g backup-list

# LABEL                         MODIFIED             WAL SEGMENT   START LSN FINISH LSN  HOSTNAME
# base_000000010000000000000001  2026-03-09T03:01:23Z 000000010...  0/1000000 0/1500000   myhost

# Delete old backups (keep last 7):
docker exec postgres wal-g delete retain FULL 7 --confirm

Point-in-Time Recovery

# Step 1: Stop Postgres
docker compose stop postgres

# Step 2: Backup current data dir (safety net)
cp -r /var/lib/docker/volumes/postgres_data/_data/ /tmp/pg-old-data/

# Step 3: Fetch the base backup
docker exec postgres wal-g backup-fetch /var/lib/postgresql/data LATEST

# Step 4: Create recovery.conf (Postgres 11 and earlier)
# OR postgresql.conf additions (Postgres 12+):
cat >> /var/lib/postgresql/data/postgresql.conf << EOF
restore_command = 'wal-g wal-fetch %f %p'
recovery_target_time = '2026-03-09 12:30:00'    # Target time
recovery_target_action = 'promote'
EOF

# Also create recovery signal:
touch /var/lib/postgresql/data/recovery.signal

# Step 5: Start Postgres
docker compose start postgres
# Postgres applies WAL until target time, then promotes to read-write
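
Once the server is back up, you can confirm recovery finished and the instance promoted to read-write:

```shell
# pg_is_in_recovery() returns false once the server has promoted
docker exec postgres psql -U myapp -c "SELECT pg_is_in_recovery();"
```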

pgBackRest: Full-Featured Postgres Backups

pgBackRest is the most feature-rich Postgres backup solution and is used by large-scale Postgres installations. Features: full, differential, and incremental backups; parallel processing; compression; encryption; S3 storage; and retention policies.

When to Choose pgBackRest over WAL-G

  • Need AES-256 encryption at rest (pgBackRest: built-in; WAL-G: relies on S3 SSE)
  • Parallel backup/restore — critical for large (100GB+) databases
  • Differential and incremental backup types (WAL-G only does delta)
  • Detailed reporting and monitoring integration
  • Multi-repository support (backup to multiple destinations)

Docker Setup

services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
    volumes:
      - pg_data:/var/lib/postgresql/data
      - ./pgbackrest.conf:/etc/pgbackrest/pgbackrest.conf

  pgbackrest:
    image: pgbackrest/pgbackrest:latest
    volumes:
      - pg_data:/var/lib/postgresql/data:ro
      - ./pgbackrest.conf:/etc/pgbackrest/pgbackrest.conf
      - pgbackrest_spool:/var/spool/pgbackrest
    command: ["info"]   # Replace with actual backup command in cron

volumes:
  pg_data:
  pgbackrest_spool:

Configuration

# pgbackrest.conf
[global]
repo1-path=/var/lib/pgbackrest
repo1-s3-bucket=my-backup-bucket
repo1-s3-endpoint=s3.us-east-1.amazonaws.com
repo1-s3-key=your-access-key
repo1-s3-key-secret=your-secret-key
repo1-s3-region=us-east-1
repo1-type=s3
repo1-cipher-type=aes-256-cbc      # Encrypt all backups
repo1-cipher-pass=strong-encryption-key

# Retention:
repo1-retention-full=2             # Keep last 2 full backups
repo1-retention-diff=6             # Keep last 6 differential
repo1-retention-archive=30         # WAL for last 30 days

[myapp]
pg1-path=/var/lib/postgresql/data

Commands

# Initialize repository:
pgbackrest --stanza=myapp stanza-create

# Full backup:
pgbackrest --stanza=myapp --type=full backup

# Incremental backup (after full):
pgbackrest --stanza=myapp --type=incr backup

# List backups:
pgbackrest --stanza=myapp info

# Restore latest:
pgbackrest --stanza=myapp --delta restore

# Restore to point in time:
pgbackrest --stanza=myapp restore \
  --target="2026-03-09 12:00:00" \
  --target-action=promote
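
A typical schedule pairs a weekly full backup with nightly incrementals, relying on the retention settings above to prune old sets. A crontab sketch (times are arbitrary):

```
# Weekly full on Sunday 1am, incremental every other night
0 1 * * 0   pgbackrest --stanza=myapp --type=full backup
0 1 * * 1-6 pgbackrest --stanza=myapp --type=incr backup
```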

Simple pg_dump: When Is It Enough?

For small databases (< 10GB) and moderate RPO requirements (hourly is acceptable), pg_dump is completely sufficient:

#!/bin/bash
# backup-postgres.sh

DB_NAME=myapp
BACKUP_DIR=/backup/postgres

mkdir -p "$BACKUP_DIR"

# Full logical dump:
docker exec postgres pg_dump -U myapp "$DB_NAME" | \
  gzip > "${BACKUP_DIR}/${DB_NAME}-$(date +%Y%m%d-%H%M%S).sql.gz"

# Keep last 14 backups:
ls -t "${BACKUP_DIR}/${DB_NAME}-"*.sql.gz | tail -n +15 | xargs -r rm

# Upload to B2:
rclone copy "${BACKUP_DIR}" b2:my-bucket/postgres-dumps
# Hourly backups:
0 * * * * /usr/local/bin/backup-postgres.sh
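
The retention line is easy to misread: `tail -n +15` prints from the 15th file onward, so the pipeline keeps the 14 newest dumps. A self-contained check with dummy files (temp directory, fake dump names):

```shell
BACKUP_DIR=$(mktemp -d)
DB_NAME=myapp
# Create 20 fake dumps with distinct, ordered modification times
for i in $(seq -w 1 20); do
  touch -d "2026-03-${i}" "${BACKUP_DIR}/${DB_NAME}-202603${i}-000000.sql.gz"
done
# Same retention command as the script above: newest first, delete from the 15th on
ls -t "${BACKUP_DIR}/${DB_NAME}-"*.sql.gz | tail -n +15 | xargs -r rm
ls "${BACKUP_DIR}" | wc -l    # → 14
```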

pg_dump limitations: no PITR (you can only restore to the moment a dump was taken); too slow for databases over ~50GB; and each dump is a consistent snapshot taken when it starts, so anything committed after that point waits for the next scheduled run.


Tool Selection Guide

Use Litestream if:
  → Your application uses SQLite (n8n SQLite, small apps, embedded DBs)
  → You want continuous sub-second RPO for SQLite
  → Dead simple setup — just a sidecar process

Use WAL-G if:
  → Postgres database < 200GB
  → You want PITR (restore to any point in time)
  → S3/B2/GCS storage backend
  → Simple configuration preferred
  → Good enough for most production self-hosted workloads

Use pgBackRest if:
  → Postgres database > 200GB (parallel processing helps)
  → Encryption at rest is required
  → Need differential/incremental backup granularity
  → Managing multiple Postgres instances
  → Enterprise or regulated environment

Use pg_dump if:
  → Database < 10GB
  → Hourly RPO is acceptable
  → Simplest possible solution
  → Schema-only backups for version control
  → Migration between Postgres versions

See all open source database backup tools at OSSAlt.com/categories/backup.
