

How to Migrate from AWS S3 to MinIO in 2026

Step-by-step guide to migrating from AWS S3 to MinIO (or SeaweedFS) on your own server in 2026: data migration with rclone, application code changes, and a cost comparison.

By OSSAlt Team

TL;DR

Migrating from AWS S3 to a self-hosted S3-compatible store (MinIO or SeaweedFS) saves 70-90% on storage costs for most use cases. The migration takes 2-6 hours for most apps: sync data with rclone, swap the endpoint environment variable, set forcePathStyle: true, and redeploy. The hardest part is pre-signed URLs and IAM — handle those separately. Note: If you're starting fresh, use SeaweedFS instead of MinIO — MinIO entered maintenance mode in late 2025.

Key Takeaways

  • rclone handles data migration — sync all S3 buckets to self-hosted in one command
  • Application code changes are minimal — just change endpoint, region, credentials, and add forcePathStyle: true
  • Pre-signed URLs change — update any hardcoded domain expectations in your frontend
  • Cost savings: S3 Standard at $0.023/GB/month → MinIO at server cost only (~$0.002/GB/month)
  • MinIO is in maintenance mode — for new deployments, prefer SeaweedFS; this guide works for both

Cost Comparison

Solution                        Storage Cost       Egress Cost     Notes
AWS S3 Standard                 $0.023/GB/month    $0.09/GB        Plus request costs
Cloudflare R2                   $0.015/GB/month    $0/GB (free)    Best managed alternative
MinIO/SeaweedFS (self-hosted)   ~$0.002/GB/month   $0              Server cost amortized

For 1 TB of storage with 500 GB monthly egress:

  • AWS S3: ~$68/month
  • Cloudflare R2: ~$15/month (no egress)
  • Self-hosted (Hetzner CX42): ~$15/month server, effectively unlimited storage/egress

Self-hosting wins at scale but requires operational overhead. Cloudflare R2 is the best managed alternative if you don't want to run infrastructure.
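A quick back-of-the-envelope check of the 1 TB example above (a sketch using the per-GB list prices from the table; the $15 server is the Hetzner CX42 figure, and S3 request costs are excluded):

```python
# Monthly cost for 1 TB stored + 500 GB egress, using the table's list prices.
STORAGE_GB = 1000   # 1 TB (decimal)
EGRESS_GB = 500

s3_cost = STORAGE_GB * 0.023 + EGRESS_GB * 0.09   # storage + egress; request costs excluded
r2_cost = STORAGE_GB * 0.015                      # R2 egress is free
self_hosted_cost = 15.0                           # flat server cost; storage/egress included

print(f"AWS S3:        ~${s3_cost:.0f}/month")    # ~$68/month
print(f"Cloudflare R2: ~${r2_cost:.0f}/month")    # ~$15/month
print(f"Self-hosted:   ~${self_hosted_cost:.0f}/month")
```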


Part 1: Deploy Your Self-Hosted Store

Option A: MinIO (existing deployments)

# docker-compose.yml for MinIO:
version: '3.8'
services:
  minio:
    image: minio/minio
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: your-access-key
      MINIO_ROOT_PASSWORD: your-secret-key
    volumes:
      - ./data:/data
    command: server /data --console-address ":9001"
    restart: unless-stopped

docker compose up -d

# MinIO API: http://your-server:9000
# MinIO Console: http://your-server:9001

Option B: SeaweedFS (recommended for new deployments)

# docker-compose.yml for SeaweedFS with S3 gateway:
version: '3.8'
services:
  seaweedfs:
    image: chrislusf/seaweedfs:latest
    ports:
      - "8333:8333"   # S3 API
      - "9333:9333"   # Master
      - "8080:8080"   # Volume
    volumes:
      - ./data:/data
    command: server -dir=/data -s3 -s3.port=8333
    restart: unless-stopped

Create Buckets

# Using AWS CLI (works with both MinIO and SeaweedFS):
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
export AWS_DEFAULT_REGION=us-east-1

# Create buckets to match your S3 buckets
# (port 9000 for MinIO; use 8333 for the SeaweedFS S3 gateway):
aws s3 mb s3://my-uploads --endpoint-url http://your-server:9000
aws s3 mb s3://my-backups --endpoint-url http://your-server:9000

Part 2: Migrate Data with rclone

rclone is the standard tool for migrating between S3-compatible stores. It handles multipart uploads, retry logic, and can sync incrementally.

Install rclone

# Linux/macOS:
curl https://rclone.org/install.sh | sudo bash

# Or via package manager:
brew install rclone      # macOS
apt install rclone       # Ubuntu/Debian

Configure Both Remotes

# ~/.config/rclone/rclone.conf

[s3-source]
type = s3
provider = AWS
region = us-east-1
access_key_id = AKIAIOSFODNN7EXAMPLE
secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[self-hosted-dest]
type = s3
provider = Minio          # Works for MinIO, SeaweedFS, and most S3-compatible stores
endpoint = http://your-server:9000
access_key_id = your-access-key
secret_access_key = your-secret-key

Run the Migration

# Dry run first (no data moved — verify what will happen):
rclone sync s3-source:my-uploads self-hosted-dest:my-uploads --dry-run --progress

# Actual sync (copies everything, deletes files in dest that don't exist in source):
rclone sync s3-source:my-uploads self-hosted-dest:my-uploads --progress

# Migrate all buckets at once:
rclone sync s3-source: self-hosted-dest: --progress

# For large datasets, use multiple parallel transfers:
rclone sync s3-source:my-uploads self-hosted-dest:my-uploads \
  --progress \
  --transfers 16 \
  --checkers 32 \
  --buffer-size 256M

Final Sync (Zero-Downtime)

For production migrations, run the initial sync while the app still uses S3, then run a final incremental sync during a brief maintenance window:

# Initial sync (may take hours for large datasets):
rclone sync s3-source:my-uploads self-hosted-dest:my-uploads --progress

# ... wait for initial sync to complete ...

# Put app in maintenance mode or stop writes temporarily

# Final delta sync (should be fast — only changed files):
rclone sync s3-source:my-uploads self-hosted-dest:my-uploads --progress

# Switch app to new endpoint (next section)

# Verify:
rclone check s3-source:my-uploads self-hosted-dest:my-uploads

Part 3: Update Application Code

Node.js / TypeScript (AWS SDK v3)

// Before (AWS S3):
import { S3Client } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  region: process.env.AWS_REGION!,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});

// After (self-hosted — only 3 changes):
import { S3Client } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  region: process.env.S3_REGION ?? 'us-east-1',          // 1. Can be any value
  endpoint: process.env.S3_ENDPOINT!,                      // 2. Add endpoint
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY_ID!,
    secretAccessKey: process.env.S3_SECRET_ACCESS_KEY!,
  },
  forcePathStyle: true,                                     // 3. Required for non-AWS
});

# .env — update these values:
# Before:
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG...

# After:
S3_ENDPOINT=http://your-server:9000
S3_REGION=us-east-1
S3_ACCESS_KEY_ID=your-minio-access-key
S3_SECRET_ACCESS_KEY=your-minio-secret-key
S3_BUCKET=my-uploads

No other code changes required. PutObjectCommand, GetObjectCommand, DeleteObjectCommand, ListObjectsV2Command all work identically.

Python (boto3)

# Before:
import boto3
s3 = boto3.client('s3')

# After:
import os

import boto3
from botocore.client import Config

s3 = boto3.client(
    's3',
    endpoint_url=os.environ['S3_ENDPOINT'],     # Add endpoint
    aws_access_key_id=os.environ['S3_ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['S3_SECRET_ACCESS_KEY'],
    config=Config(
        signature_version='s3v4',
        s3={'addressing_style': 'path'},        # Same idea as forcePathStyle: true
    ),
)

Part 4: Handle Pre-Signed URLs

Pre-signed URLs are the trickiest part of the migration. The URL domain changes from s3.amazonaws.com to your server.

Generate Pre-Signed URLs

import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

// Pre-signed URL generation works identically:
const url = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: 'my-uploads', Key: 'file.pdf' }),
  { expiresIn: 3600 }
);

// The URL will now point to your server:
// Before: https://my-uploads.s3.us-east-1.amazonaws.com/file.pdf?X-Amz-...
// After:  http://your-server:9000/my-uploads/file.pdf?X-Amz-...

CORS Configuration

If your frontend uses pre-signed URLs directly (file uploads, direct downloads), configure CORS on your self-hosted store:

# MinIO: manage bucket access with the mc CLI:
mc alias set myminio http://your-server:9000 access-key secret-key
mc anonymous set download myminio/my-uploads  # For public buckets

# CORS: create a policy file and apply it via the S3 API (for stores that
# implement put-bucket-cors; MinIO can also set CORS globally with the
# MINIO_API_CORS_ALLOW_ORIGIN environment variable):
cat > cors.json << EOF
{
  "CORSRules": [{
    "AllowedOrigins": ["https://yourdomain.com"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }]
}
EOF
aws s3api put-bucket-cors \
  --bucket my-uploads \
  --cors-configuration file://cors.json \
  --endpoint-url http://your-server:9000

Put the Store Behind a Domain + SSL

For production, put MinIO/SeaweedFS behind Nginx or Traefik with SSL instead of exposing port 9000 directly:

# Nginx reverse proxy for S3-compatible store:
server {
    listen 443 ssl;
    server_name storage.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/storage.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/storage.yourdomain.com/privkey.pem;

    # Increase for large file uploads:
    client_max_body_size 5G;

    location / {
        proxy_pass http://localhost:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Then set S3_ENDPOINT=https://storage.yourdomain.com in your app.


Verification Checklist

After switching the app endpoint:

# 1. Upload a test file:
aws s3 cp test.txt s3://my-uploads/test.txt \
  --endpoint-url http://your-server:9000

# 2. Download it back:
aws s3 cp s3://my-uploads/test.txt /tmp/test-download.txt \
  --endpoint-url http://your-server:9000

# 3. List objects:
aws s3 ls s3://my-uploads/ \
  --endpoint-url http://your-server:9000

# 4. Verify object count matches source:
aws s3 ls s3://my-uploads/ --recursive --endpoint-url http://your-server:9000 | wc -l  # self-hosted
aws s3 ls s3://my-uploads/ --recursive | wc -l  # AWS source (compare)

# 5. Test pre-signed URL:
url=$(aws s3 presign s3://my-uploads/test.txt \
  --endpoint-url http://your-server:9000 \
  --expires-in 3600)
curl -o /dev/null -w "%{http_code}" "$url"  # Should return 200

Cost Savings Example

Before (100 GB storage, 200 GB egress/month on AWS S3):

  • Storage: 100 GB × $0.023 = $2.30
  • Egress: 200 GB × $0.09 = $18.00
  • Requests: ~$2.00
  • Total: ~$22/month

After (Hetzner CX22 VPS at €4.35/month):

  • Server (shared with other services): ~$2/month allocated to storage
  • Egress: $0 (included in VPS)
  • Total: ~$2/month — 90% savings

At 1 TB + 1 TB egress, savings reach $100+/month.
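The same arithmetic at the 1 TB scale (again using list prices; S3 request costs add a few dollars more on top):

```python
# S3 vs self-hosted at 1 TB storage + 1 TB monthly egress.
s3_monthly = 1000 * 0.023 + 1000 * 0.09    # $23 storage + $90 egress = $113, before request costs
self_hosted_monthly = 15.0                 # e.g. a Hetzner CX42-class server
savings = s3_monthly - self_hosted_monthly
print(f"~${savings:.0f}/month saved")      # ~$98/month; $100+ once request costs are counted
```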


Compare all open source AWS alternatives at OSSAlt.com.

See open source alternatives to AWS S3 on OSSAlt.

Data Ownership and File Management

Self-hosted file storage gives you control over data residency, access patterns, and cost that no cloud provider can match. The tradeoff is operational responsibility — you own the uptime and durability guarantees you create.

Redundancy at the disk level: For critical data, use RAID or ZFS on your storage node. RAID 1 (mirroring) protects against a single disk failure; ZFS adds checksumming that detects and corrects silent corruption. Neither is a replacement for offsite backups — they protect against disk failures, not against the broader failure modes (fire, theft, software bugs, accidental deletion) that offsite backups handle.

Offsite backups: Use Duplicati for automated encrypted daily backups to Backblaze B2 or Cloudflare R2. The 3-2-1 rule (3 copies, 2 different media, 1 offsite) remains the standard for data you can't afford to lose. B2 storage costs $0.006/GB/month — a 1TB backup repository costs $6/month.

File synchronization across devices: Syncthing provides peer-to-peer file synchronization without a central server. Files sync directly between your devices without routing through your server, making it useful for camera rolls, documents, and any dataset that needs real-time synchronization rather than archive access.

Access control: For team environments, use your server's user/group permissions plus application-level access control. Authentik provides SSO and role-based access that integrates with applications supporting OIDC or LDAP.

Monitoring storage health: Track disk usage trends with Prometheus + Grafana. A disk that fills gradually will fail suddenly if you're not watching — set an alert at 75% capacity to give yourself time to archive old data or expand storage.
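If you want a simple cron-based check alongside (or before) a full Prometheus setup, the 75% rule is a few lines of standard-library Python (the path and threshold here are illustrative):

```python
import shutil

def should_alert(path: str = "/", threshold: float = 0.75) -> bool:
    """True when the filesystem holding `path` is at least `threshold` full."""
    usage = shutil.disk_usage(path)
    used_fraction = (usage.total - usage.free) / usage.total
    return used_fraction >= threshold

# Run from cron and wire the output into your alerting channel:
if should_alert("/", 0.75):
    print("disk above 75% full: archive old data or expand storage")
```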

Network Security and Hardening

Self-hosted services exposed to the internet require baseline hardening. The default Docker networking model exposes container ports directly — without additional configuration, any open port is accessible from anywhere.

Firewall configuration: Use ufw (Uncomplicated Firewall) on Ubuntu/Debian or firewalld on RHEL-based systems. Allow only ports 22 (SSH), 80 (HTTP redirect), and 443 (HTTPS). Block all other inbound ports. Note that Docker writes its own iptables rules, so published container ports bypass ufw entirely; install the ufw-docker helper or bind container ports to 127.0.0.1 so only the reverse proxy can reach them.

SSH hardening: Disable password authentication and root login in /etc/ssh/sshd_config. Use key-based authentication only. Consider changing the default SSH port (22) to a non-standard port to reduce brute-force noise in your logs.

Fail2ban: Install fail2ban to automatically ban IPs that make repeated failed authentication attempts. Configure jails for SSH, Nginx, and any application-level authentication endpoints.

TLS/SSL: Use Let's Encrypt certificates via Certbot or Traefik's automatic ACME integration. Never expose services over HTTP in production. Configure HSTS headers to prevent protocol downgrade attacks. Check your SSL configuration with SSL Labs' server test — aim for an A or A+ rating.

Container isolation: Avoid running containers as root. Add user: "1000:1000" to your docker-compose.yml service definitions where the application supports non-root execution. Use read-only volumes (volumes: - /host/path:/container/path:ro) for configuration files the container only needs to read.

Secrets management: Never put passwords and API keys directly in docker-compose.yml files committed to version control. Use Docker secrets, environment files (.env), or a secrets manager like Vault for sensitive configuration. Add .env to your .gitignore before your first commit.
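The .env pattern is simple enough to sketch; this is a toy loader for illustration only — in practice docker compose, direnv, or python-dotenv handle this for you:

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines; blank lines and '#' comments ignored."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't clobber values already set in the real environment:
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env()
```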

Production Deployment Checklist

Before treating any self-hosted service as production-ready, work through this checklist. Each item represents a class of failure that will eventually affect your service if left unaddressed.

Infrastructure

  • Server OS is running latest security patches (apt upgrade / dnf upgrade)
  • Firewall configured: only ports 22, 80, 443 open
  • SSH key-only authentication (password auth disabled)
  • Docker and Docker Compose are current stable versions
  • Swap space configured (at minimum equal to RAM for <4GB servers)

Application

  • Docker image version pinned (not latest) in docker-compose.yml
  • Data directories backed by named volumes (not bind mounts to ephemeral paths)
  • Environment variables stored in .env file (not hardcoded in compose)
  • Container restart policy set to unless-stopped or always
  • Health check configured in Compose or Dockerfile

Networking

  • SSL certificate issued and auto-renewal configured
  • HTTP requests redirect to HTTPS
  • Domain points to server IP (verify with dig +short your.domain)
  • Reverse proxy (Nginx/Traefik) handles SSL termination

Monitoring and Backup

  • Uptime monitoring configured with alerting
  • Automated daily backup of Docker volumes to remote storage
  • Backup tested with a successful restore drill
  • Log retention configured (no unbounded log accumulation)

Access Control

  • Default admin credentials changed
  • Email confirmation configured if the app supports it
  • User registration disabled if the service is private
  • Authentication middleware added if the service lacks native login

Conclusion and Getting Started

The self-hosting ecosystem has matured dramatically. What required significant Linux expertise in 2015 is now achievable for any developer comfortable with Docker Compose and a basic understanding of DNS. The tools have gotten better, the documentation has improved, and the community has built enough tutorials that most common configurations have been solved publicly.

The operational overhead that remains is real but manageable. A stable self-hosted service — one that is properly monitored, backed up, and kept updated — requires roughly 30-60 minutes of attention per month once the initial deployment is complete. That time investment is justified for services where data ownership, cost savings, or customization requirements make the cloud alternative unsuitable.

Start with one service. Trying to migrate your entire stack to self-hosted infrastructure at once is a recipe for an overwhelming weekend project that doesn't get finished. Pick the service where the cloud alternative is most expensive or where data ownership matters most, run it for 30 days, and then evaluate whether to expand.

Build your operational foundation before adding services. Get monitoring, backup, and SSL configured correctly for your first service before adding a second. These cross-cutting concerns become easier to extend to new services once the pattern is established, and much harder to retrofit to a fleet of services that were deployed without them.

Treat this like a product. Your self-hosted services have users (even if that's just you). Write a runbook. Document the restore procedure. Create a status page. These practices don't take long but they transform self-hosting from a series of experiments into reliable infrastructure you can depend on.

The community around self-hosted software is active and helpful. Reddit's r/selfhosted, the Awesome-Selfhosted GitHub list, and Discord servers for specific applications all have people who have already solved the problem you're encountering. The configuration questions that feel unique usually aren't.

MinIO is a mature object storage server with full S3 API compatibility. Although it entered maintenance mode in late 2025 (which is why this guide recommends SeaweedFS for new deployments), existing MinIO installations remain stable, and the migration path from AWS S3 is well-documented precisely because MinIO has been used for this purpose extensively. Once the migration is complete, your application code requires almost no changes — the same SDK, the same API calls, the same bucket operations — while your data leaves AWS infrastructure and moves under your control. For workloads with high egress costs or data residency requirements, self-hosting often pays back its operational overhead within 2-3 months.

As an alternative to rclone, MinIO's mc mirror command makes the initial migration straightforward — it copies objects in parallel, handles large buckets efficiently, and can resume interrupted transfers. After the initial sync, run mc mirror again with --watch to catch objects created during the migration window. Validate by comparing object counts and total sizes with mc du on both sides. The migration is complete when the counts match and your application points at the new endpoint instead of the AWS S3 endpoint.

The SaaS-to-Self-Hosted Migration Guide (Free PDF)

Step-by-step: infrastructure setup, data migration, backups, and security for 15+ common SaaS replacements. Used by 300+ developers.

Join 300+ self-hosters. Unsubscribe in one click.