How to Migrate from AWS S3 to MinIO: Self-Hosted Object Storage 2026
TL;DR
Migrating from AWS S3 to a self-hosted S3-compatible store (MinIO or SeaweedFS) saves 70-90% on storage costs for most use cases. The migration takes 2-6 hours for most apps: sync data with rclone, swap the endpoint environment variable, set forcePathStyle: true, and redeploy. The hardest part is pre-signed URLs and IAM — handle those separately. Note: If you're starting fresh, use SeaweedFS instead of MinIO — MinIO entered maintenance mode in late 2025.
Key Takeaways
- rclone handles data migration — sync all S3 buckets to self-hosted in one command
- Application code changes are minimal — just change endpoint, region, credentials, and add forcePathStyle: true
- Pre-signed URLs change — update any hardcoded domain expectations in your frontend
- Cost savings: S3 Standard at $0.023/GB/month → MinIO at server cost only (~$0.002/GB/month)
- MinIO is in maintenance mode — for new deployments, prefer SeaweedFS; this guide works for both
Cost Comparison
| Solution | Storage Cost | Egress Cost | Notes |
|---|---|---|---|
| AWS S3 Standard | $0.023/GB/month | $0.09/GB | Plus request costs |
| Cloudflare R2 | $0.015/GB/month | $0/GB (free!) | Best managed alternative |
| MinIO/SeaweedFS (self-hosted) | ~$0.002/GB/month | $0 | Server cost amortized |
For 1 TB of storage with 500 GB monthly egress:
- AWS S3: ~$68/month
- Cloudflare R2: ~$15/month (no egress)
- Self-hosted (Hetzner CX42): ~$15/month server, effectively unlimited storage/egress
Self-hosting wins at scale but requires operational overhead. Cloudflare R2 is the best managed alternative if you don't want to run infrastructure.
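The comparison above is simple arithmetic; a quick sketch of the cost model, using the table's list prices (the plan names and flat $15/month server figure are illustrative assumptions):

```typescript
// Monthly cost model for the three options in the table above.
// "fixed" models a flat server cost for the self-hosted case.
type Pricing = { storagePerGb: number; egressPerGb: number; fixed: number };

const plans: Record<string, Pricing> = {
  s3: { storagePerGb: 0.023, egressPerGb: 0.09, fixed: 0 },
  r2: { storagePerGb: 0.015, egressPerGb: 0, fixed: 0 },
  selfHosted: { storagePerGb: 0, egressPerGb: 0, fixed: 15 }, // Hetzner-class VPS
};

function monthlyCost(p: Pricing, storageGb: number, egressGb: number): number {
  return p.fixed + storageGb * p.storagePerGb + egressGb * p.egressPerGb;
}

// 1 TB storage + 500 GB egress:
for (const [name, p] of Object.entries(plans)) {
  console.log(name, '$' + monthlyCost(p, 1024, 500).toFixed(2));
}
// s3 $68.55, r2 $15.36, selfHosted $15.00
```

The break-even point shifts with egress: the more you serve, the faster self-hosting (or R2's free egress) wins.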
Part 1: Deploy Your Self-Hosted Store
Option A: MinIO (existing deployments)
# docker-compose.yml for MinIO:
version: '3.8'
services:
  minio:
    image: minio/minio
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: your-access-key
      MINIO_ROOT_PASSWORD: your-secret-key
    volumes:
      - ./data:/data
    command: server /data --console-address ":9001"
    restart: unless-stopped
docker compose up -d
# MinIO API: http://your-server:9000
# MinIO Console: http://your-server:9001
Option B: SeaweedFS (recommended for new deployments)
# docker-compose.yml for SeaweedFS with S3 gateway:
version: '3.8'
services:
  seaweedfs:
    image: chrislusf/seaweedfs:latest
    ports:
      - "8333:8333"  # S3 API
      - "9333:9333"  # Master
      - "8080:8080"  # Volume
    volumes:
      - ./data:/data
    command: server -dir=/data -s3 -s3.port=8333
    restart: unless-stopped
Create Buckets
# Using AWS CLI (works with both MinIO and SeaweedFS):
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
export AWS_DEFAULT_REGION=us-east-1
# Create buckets to match your S3 buckets
# (port 9000 for MinIO; use 8333 for SeaweedFS):
aws s3 mb s3://my-uploads --endpoint-url http://your-server:9000
aws s3 mb s3://my-backups --endpoint-url http://your-server:9000
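Bucket names must satisfy S3 naming rules on the new store too, so it's worth validating them before recreating buckets in bulk. A simplified check (hypothetical helper; covers the common rules, not every AWS edge case):

```typescript
// Simplified S3 bucket-name check: 3-63 chars, lowercase letters, digits,
// hyphens and dots; must start and end alphanumeric; no "..".
function isValidBucketName(name: string): boolean {
  return (
    /^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$/.test(name) &&
    !name.includes('..')
  );
}

isValidBucketName('my-uploads'); // true
isValidBucketName('My_Uploads'); // false — uppercase and underscore
```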
Part 2: Migrate Data with rclone
rclone is the standard tool for migrating between S3-compatible stores. It handles multipart uploads, retry logic, and can sync incrementally.
Install rclone
# Linux/macOS:
curl https://rclone.org/install.sh | sudo bash
# Or via package manager:
brew install rclone # macOS
apt install rclone # Ubuntu/Debian
Configure Both Remotes
# ~/.config/rclone/rclone.conf
[s3-source]
type = s3
provider = AWS
region = us-east-1
access_key_id = AKIAIOSFODNN7EXAMPLE
secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[self-hosted-dest]
type = s3
provider = Minio # Works for MinIO, SeaweedFS, and most S3-compatible stores
endpoint = http://your-server:9000
access_key_id = your-access-key
secret_access_key = your-secret-key
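If you script the migration (e.g. in CI), you can render rclone.conf instead of hand-editing it. A sketch with hypothetical helper names; credentials are deliberately left out of the rendered file:

```typescript
// Render an rclone remote section from key/value pairs.
function rcloneRemote(name: string, opts: Record<string, string>): string {
  const body = Object.entries(opts)
    .map(([k, v]) => `${k} = ${v}`)
    .join('\n');
  return `[${name}]\n${body}\n`;
}

const conf =
  rcloneRemote('s3-source', {
    type: 's3',
    provider: 'AWS',
    region: 'us-east-1',
  }) +
  rcloneRemote('self-hosted-dest', {
    type: 's3',
    provider: 'Minio',
    endpoint: 'http://your-server:9000',
  });
// Write `conf` to ~/.config/rclone/rclone.conf. Keys can be supplied at
// runtime via rclone's RCLONE_CONFIG_<REMOTE>_<OPTION> env vars instead.
```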
Run the Migration
# Dry run first (no data moved — verify what will happen):
rclone sync s3-source:my-uploads self-hosted-dest:my-uploads --dry-run --progress
# Actual sync (copies everything, deletes files in dest that don't exist in source):
rclone sync s3-source:my-uploads self-hosted-dest:my-uploads --progress
# Migrate all buckets at once:
rclone sync s3-source: self-hosted-dest: --progress
# For large datasets, use multiple parallel transfers:
rclone sync s3-source:my-uploads self-hosted-dest:my-uploads \
--progress \
--transfers 16 \
--checkers 32 \
--buffer-size 256M
Final Sync (Zero-Downtime)
For production migrations, run the initial sync while the app still uses S3, then run a final incremental sync during a brief maintenance window:
# Initial sync (may take hours for large datasets):
rclone sync s3-source:my-uploads self-hosted-dest:my-uploads --progress
# ... wait for initial sync to complete ...
# Put app in maintenance mode or stop writes temporarily
# Final delta sync (should be fast — only changed files):
rclone sync s3-source:my-uploads self-hosted-dest:my-uploads --progress
# Switch app to new endpoint (next section)
# Verify:
rclone check s3-source:my-uploads self-hosted-dest:my-uploads
Part 3: Update Application Code
Node.js / TypeScript (AWS SDK v3)
// Before (AWS S3):
import { S3Client } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  region: process.env.AWS_REGION!,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});

// After (self-hosted — only 3 changes):
import { S3Client } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  region: process.env.S3_REGION ?? 'us-east-1', // 1. Can be any value
  endpoint: process.env.S3_ENDPOINT!,           // 2. Add endpoint
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY_ID!,
    secretAccessKey: process.env.S3_SECRET_ACCESS_KEY!,
  },
  forcePathStyle: true,                         // 3. Required for non-AWS
});
# .env — update these values:
# Before:
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG...
# After:
S3_ENDPOINT=http://your-server:9000
S3_REGION=us-east-1
S3_ACCESS_KEY_ID=your-minio-access-key
S3_SECRET_ACCESS_KEY=your-minio-secret-key
S3_BUCKET=my-uploads
No other code changes required. PutObjectCommand, GetObjectCommand, DeleteObjectCommand, ListObjectsV2Command all work identically.
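Since the switch is entirely env-driven, it's worth failing fast at startup when a variable is missing rather than at the first upload. A minimal sketch using the variable names from the .env example above:

```typescript
// Report which storage env vars are missing (names match the .env above).
const REQUIRED = ['S3_ENDPOINT', 'S3_ACCESS_KEY_ID', 'S3_SECRET_ACCESS_KEY', 'S3_BUCKET'];

function missingStorageVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED.filter((k) => !env[k]);
}

const missing = missingStorageVars(process.env);
if (missing.length > 0) {
  console.warn(`Storage config incomplete, missing: ${missing.join(', ')}`);
}
```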
Python (boto3)
# Before:
import boto3
s3 = boto3.client('s3')
# After:
import os

import boto3

s3 = boto3.client(
    's3',
    endpoint_url=os.environ['S3_ENDPOINT'],  # Add endpoint
    aws_access_key_id=os.environ['S3_ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['S3_SECRET_ACCESS_KEY'],
    config=boto3.session.Config(signature_version='s3v4'),  # v4 signing, expected by most S3-compatible stores
)
Part 4: Handle Pre-Signed URLs
Pre-signed URLs are the trickiest part of the migration. The URL domain changes from s3.amazonaws.com to your server.
Generate Pre-Signed URLs
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
// Pre-signed URL generation works identically:
const url = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: 'my-uploads', Key: 'file.pdf' }),
  { expiresIn: 3600 }
);
// The URL will now point to your server:
// Before: https://my-uploads.s3.us-east-1.amazonaws.com/file.pdf?X-Amz-...
// After: http://your-server:9000/my-uploads/file.pdf?X-Amz-...
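A quick way to catch stale frontend assumptions is to assert that generated URLs are path-style on your host rather than virtual-hosted on amazonaws.com. A hypothetical helper:

```typescript
// Check that a pre-signed URL is path-style and points at our endpoint,
// not at amazonaws.com.
function pointsAtSelfHosted(presigned: string, endpoint: string, bucket: string): boolean {
  const u = new URL(presigned);
  const e = new URL(endpoint);
  return u.host === e.host && u.pathname.startsWith(`/${bucket}/`);
}

pointsAtSelfHosted(
  'http://your-server:9000/my-uploads/file.pdf?X-Amz-Signature=abc',
  'http://your-server:9000',
  'my-uploads'
); // true — matches the "After" URL shape above
```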
CORS Configuration
If your frontend uses pre-signed URLs directly (file uploads, direct downloads), configure CORS on your self-hosted store:
# MinIO: set CORS via mc CLI:
mc alias set myminio http://your-server:9000 access-key secret-key
mc anonymous set download myminio/my-uploads # For public buckets
# Or create a CORS policy file:
cat > cors.json << EOF
{
  "CORSRules": [{
    "AllowedOrigins": ["https://yourdomain.com"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }]
}
EOF
aws s3api put-bucket-cors \
--bucket my-uploads \
--cors-configuration file://cors.json \
--endpoint-url http://your-server:9000
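If you manage several environments, generating the CORS document from a list of origins avoids hand-editing JSON. A sketch (hypothetical helper; same rule shape as the cors.json above):

```typescript
// Build the CORS document shown above from a list of allowed origins.
function corsConfig(origins: string[]): string {
  return JSON.stringify(
    {
      CORSRules: [
        {
          AllowedOrigins: origins,
          AllowedMethods: ['GET', 'PUT', 'POST', 'DELETE'],
          AllowedHeaders: ['*'],
          MaxAgeSeconds: 3000,
        },
      ],
    },
    null,
    2
  );
}

// e.g. corsConfig(['https://yourdomain.com', 'https://staging.yourdomain.com'])
```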
Put the Store Behind a Domain + SSL
For production, put MinIO/SeaweedFS behind Nginx or Traefik with SSL instead of exposing port 9000 directly:
# Nginx reverse proxy for S3-compatible store:
server {
    listen 443 ssl;
    server_name storage.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/storage.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/storage.yourdomain.com/privkey.pem;

    # Increase for large file uploads:
    client_max_body_size 5G;

    location / {
        proxy_pass http://localhost:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Then set S3_ENDPOINT=https://storage.yourdomain.com in your app.
Verification Checklist
After switching the app endpoint:
# 1. Upload a test file:
aws s3 cp test.txt s3://my-uploads/test.txt \
--endpoint-url http://your-server:9000
# 2. Download it back:
aws s3 cp s3://my-uploads/test.txt /tmp/test-download.txt \
--endpoint-url http://your-server:9000
# 3. List objects:
aws s3 ls s3://my-uploads/ \
--endpoint-url http://your-server:9000
# 4. Verify object count matches source:
aws s3 ls s3://my-uploads/ --recursive --endpoint-url http://your-server:9000 | wc -l  # self-hosted
aws s3 ls s3://my-uploads/ --recursive | wc -l  # AWS source (compare)
# 5. Test pre-signed URL:
url=$(aws s3 presign s3://my-uploads/test.txt \
--endpoint-url http://your-server:9000 \
--expires-in 3600)
curl -o /dev/null -w "%{http_code}" "$url" # Should return 200
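Raw counts can match even when different objects are present, so for a stronger check, diff the actual key listings (e.g. save each `aws s3 ls --recursive` output to a file first). A sketch of the comparison logic:

```typescript
// Given two lists of object keys, return keys present in the source
// but missing from the destination.
function missingKeys(sourceKeys: string[], destKeys: string[]): string[] {
  const dest = new Set(destKeys);
  return sourceKeys.filter((k) => !dest.has(k));
}

missingKeys(['a.txt', 'b.txt', 'c.txt'], ['a.txt', 'c.txt']); // ['b.txt']
```

An empty result in both directions means the stores hold the same key set; re-run `rclone sync` for anything reported missing.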
Cost Savings Example
Before (100 GB storage, 200 GB egress/month on AWS S3):
- Storage: 100 GB × $0.023 = $2.30
- Egress: 200 GB × $0.09 = $18.00
- Requests: ~$2.00
- Total: ~$22/month
After (Hetzner CX22 VPS at €4.35/month):
- Server (shared with other services): ~$2/month allocated to storage
- Egress: $0 (included in VPS)
- Total: ~$2/month — 90% savings
At 1 TB + 1 TB egress, savings reach $100+/month.
Compare all open source AWS alternatives at OSSAlt.com.