How to Migrate from Firebase to Supabase 2026
Firebase locks you into Google's ecosystem with a proprietary NoSQL database (Firestore) and opaque pricing that scales unpredictably. Supabase gives you PostgreSQL — the world's most advanced open source database — plus auth, storage, and real-time subscriptions, with predictable pricing and the option to self-host. The migration takes real effort, but teams consistently report it pays for itself within months.
TL;DR
The core migration is Firestore (NoSQL) → PostgreSQL (relational). This requires schema redesign — you can't just dump documents into tables. Auth migration is medium effort (users need password resets). Storage is straightforward. Client code changes are significant but follow a consistent pattern. Plan 2-4 weeks for a medium-sized production app.
Key Takeaways
- Firestore → PostgreSQL is the hardest part: document nesting becomes JOIN relationships
- Firebase Auth users can be migrated, but passwords cannot — users must reset or use OAuth
- Supabase Storage is S3-compatible; file migration is scripted with rclone or direct API calls
- The Supabase JS client replaces the Firebase SDK with a SQL-like query builder
- Row Level Security (RLS) in Supabase replaces Firebase's Firestore Security Rules
- Self-hosting Supabase on a $40/month VPS eliminates per-read/write costs entirely
What Changes
| Firebase | Supabase | Migration Effort |
|---|---|---|
| Firestore (NoSQL) | PostgreSQL (relational) | 🔴 High — schema redesign |
| Firebase Auth | Supabase Auth | 🟡 Medium — user migration |
| Cloud Storage | Supabase Storage | 🟢 Low — file migration |
| Realtime Database | Supabase Realtime | 🟡 Medium — API changes |
| Cloud Functions | Supabase Edge Functions | 🟡 Medium — rewrite |
| Firebase Hosting | Not included | Use Vercel/Netlify |
| FCM (Push) | Not included | Use OneSignal/ntfy |
| Firebase Analytics | Not included | Use PostHog or Umami |
Why Teams Switch from Firebase
Firebase's billing is based on reads, writes, and deletes — which sounds small until a poorly optimized query fan-out on a product launch hits 50 million reads in an hour. Teams have received surprise bills of thousands of dollars from a single traffic spike with no warning. Supabase's pricing is based on compute and storage, not operation counts. For read-heavy applications, this difference alone drives migration decisions.
There's also the lock-in question. Firestore uses a proprietary query model that doesn't map to any standard. Supabase runs PostgreSQL — you can take your database and self-host it, migrate to any PostgreSQL-compatible cloud, or leverage the entire pg ecosystem: PostGIS, pgvector, Timescale, logical replication, pg_cron. That portability is genuinely valuable for long-term planning. When you outgrow Supabase, you can migrate to a managed Postgres service without rewriting your application.
Another key driver is querying power. Firestore can't do joins. If you want data from two collections together, you're doing N+1 fetches or denormalization — both of which create maintenance headaches. PostgreSQL joins are a solved problem with decades of query optimization behind them.
Step 1: Set Up Supabase
Cloud (fastest to start):
- Sign up at supabase.com
- Create a new project (choose a region close to your users)
- Note your project URL and anon key from Project Settings → API
Self-hosted (best for production control):
git clone https://github.com/supabase/supabase.git
cd supabase/docker
cp .env.example .env
# Edit .env: set POSTGRES_PASSWORD, JWT_SECRET, ANON_KEY, SERVICE_ROLE_KEY
docker compose up -d
Self-hosting gives you unlimited API calls, full data ownership, and zero cold starts. For EU teams with GDPR requirements, self-hosting means data never leaves your infrastructure. The Docker deployment runs Postgres, PostgREST, GoTrue (auth), Realtime, Storage, and Studio as separate containers. A dedicated $40-60/month VPS handles most production workloads comfortably.
Step 2: Design Your PostgreSQL Schema
This is the most important step and where teams spend most migration time. Firestore stores data as nested document trees. PostgreSQL is relational. The translation requires you to make relationships explicit.
Analyze your Firestore structure first:
Before writing any SQL, map out your Firestore collection hierarchy. A typical structure like users/{userId} with subcollections posts/{postId} and posts/{postId}/comments/{commentId} becomes three tables with foreign keys. The implicit joins that Firestore enforces with document paths become explicit SQL REFERENCES constraints.
One rule of thumb: every Firestore collection becomes a PostgreSQL table. Every subcollection also becomes a table with a foreign key to its parent. Firestore arrays of primitives (like tags: ['react', 'typescript']) can become PostgreSQL TEXT[] arrays. Firestore arrays of objects should generally become a separate table with a foreign key.
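As a sketch of that rule of thumb, the flattening can be expressed as a pure function (the input shape and field names here are hypothetical; adapt them to your own export):

```javascript
// Flatten one exported Firestore user document, with nested posts and
// comments subcollections, into row arrays for the users, posts, and
// comments tables. Field names are illustrative, not prescriptive.
function flattenUserDoc(userDoc) {
  const users = [{ id: userDoc.id, email: userDoc.email, name: userDoc.name }];
  const posts = [];
  const comments = [];
  for (const post of userDoc.posts ?? []) {
    // Subcollection becomes a table row with a foreign key to its parent
    posts.push({ id: post.id, user_id: userDoc.id, title: post.title, tags: post.tags ?? [] });
    for (const comment of post.comments ?? []) {
      comments.push({ id: comment.id, post_id: post.id, user_id: comment.authorId, body: comment.body });
    }
  }
  return { users, posts, comments };
}
```

Note that the primitive tags array stays an array (it maps to TEXT[]), while the nested comments objects become rows in their own table.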
Export Firestore data:
// Node.js script to export Firestore collections to JSON files
const admin = require('firebase-admin');
const fs = require('fs');

admin.initializeApp(); // credentials via GOOGLE_APPLICATION_CREDENTIALS
const db = admin.firestore();

async function exportCollection(name) {
  const snapshot = await db.collection(name).get();
  const data = snapshot.docs.map(doc => ({ id: doc.id, ...doc.data() }));
  fs.writeFileSync(`${name}.json`, JSON.stringify(data, null, 2));
}

// Top-level await is not available in CommonJS, so wrap in an async IIFE
(async () => {
  await exportCollection('users');
  await exportCollection('posts');
  await exportCollection('comments');
})();
Design PostgreSQL schema:
-- Convert Firestore documents to relational tables
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
email TEXT UNIQUE NOT NULL,
name TEXT,
avatar_url TEXT,
created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE TABLE posts (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(id) ON DELETE CASCADE,
title TEXT NOT NULL,
content TEXT,
tags TEXT[],
created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE TABLE comments (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
post_id UUID REFERENCES posts(id) ON DELETE CASCADE,
user_id UUID REFERENCES users(id) ON DELETE CASCADE,
body TEXT NOT NULL,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Add indexes for common query patterns
CREATE INDEX idx_posts_user_id ON posts(user_id);
CREATE INDEX idx_comments_post_id ON comments(post_id);
CREATE INDEX idx_posts_created_at ON posts(created_at DESC);
Import data:
// Import exported JSON into Supabase (run with the service role key)
const { createClient } = require('@supabase/supabase-js');
const users = require('./users.json');

const supabase = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_SERVICE_KEY
);

// Firestore timestamps serialize to { _seconds, _nanoseconds } in JSON
const toDate = ts =>
  ts?._seconds != null ? new Date(ts._seconds * 1000) : new Date();

(async () => {
  for (const user of users) {
    const { error } = await supabase.from('users').insert({
      email: user.email,
      name: user.name,
      avatar_url: user.avatarUrl,
      created_at: toDate(user.createdAt),
    });
    if (error) console.error(user.email, error.message);
  }
})();
Run data imports in batches of 500-1000 rows to avoid timeouts. Wrap inserts in transactions and handle constraint violations — you may find data inconsistencies in Firestore that relational constraints won't allow, since NoSQL doesn't enforce referential integrity.
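A minimal batching helper for the import loop above might look like this (chunk() is pure; importInBatches assumes a supabase client created with the service role key):

```javascript
// Split rows into fixed-size batches so each insert request stays small.
function chunk(rows, size) {
  const batches = [];
  for (let i = 0; i < rows.length; i += size) {
    batches.push(rows.slice(i, i + size));
  }
  return batches;
}

// Insert each batch, logging failures instead of aborting: constraint
// violations often reveal Firestore inconsistencies that need cleanup.
async function importInBatches(supabase, table, rows, size = 500) {
  for (const batch of chunk(rows, size)) {
    const { error } = await supabase.from(table).insert(batch);
    if (error) console.error(`Batch failed for ${table}:`, error.message);
  }
}
```

Batch inserts are also much faster than one request per row, since each round-trip to PostgREST carries fixed overhead.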
Step 3: Migrate Authentication
Firebase Auth and Supabase Auth share similar feature sets: email/password, social OAuth, magic links, phone OTP, and MFA. The key challenge is passwords. Firebase hashes passwords with scrypt using parameters that differ from Supabase's bcrypt implementation. You cannot transfer password hashes directly.
Export Firebase users:
# Use Firebase CLI
firebase auth:export users.json --format=json
Import to Supabase:
const { createClient } = require('@supabase/supabase-js');
const users = require('./users.json');

const supabase = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_SERVICE_KEY
);

(async () => {
  for (const user of users.users) {
    const { error } = await supabase.auth.admin.createUser({
      email: user.email,
      email_confirm: true,
      user_metadata: {
        name: user.displayName,
        firebase_uid: user.localId,
      },
    });
    if (error) console.error(user.email, error.message);
  }
})();
Users will need to reset passwords via the forgot-password flow, or you can set up a magic link email campaign to authenticate them without an explicit password reset. If your app supports Google OAuth, users who signed in via Google can simply click "Continue with Google" — no password reset needed, and accounts automatically link by email.
Store the original Firebase UID in user metadata during migration. This lets you cross-reference migrated data (which uses Firebase UIDs in existing records) with new Supabase UUIDs. A mapping table in your database makes this clean.
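One way to build that cross-reference in code, assuming the firebase_uid metadata set during the auth import (other field names are hypothetical):

```javascript
// Build a Firebase-UID to Supabase-UUID lookup from users created via
// supabase.auth.admin.createUser, then rewrite user_id values on records
// exported from Firestore so they point at the new UUIDs.
function buildUidMap(supabaseUsers) {
  const map = new Map();
  for (const u of supabaseUsers) {
    map.set(u.user_metadata.firebase_uid, u.id);
  }
  return map;
}

function remapUserIds(records, uidMap) {
  // Unmatched UIDs become null so foreign key problems surface early
  return records.map(r => ({ ...r, user_id: uidMap.get(r.user_id) ?? null }));
}
```

Run the remap before importing posts and comments, so their user_id columns reference the new Supabase UUIDs rather than stale Firebase UIDs.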
Step 4: Migrate Storage
// Download from Firebase Storage, upload to Supabase Storage
// (assumes admin.initializeApp() has already run)
const { getStorage } = require('firebase-admin/storage');
const { createClient } = require('@supabase/supabase-js');

const bucket = getStorage().bucket();
const supabase = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_SERVICE_KEY
);

(async () => {
  // List and download files under the uploads/ prefix
  const [files] = await bucket.getFiles({ prefix: 'uploads/' });
  for (const file of files) {
    const [content] = await file.download();
    const { error } = await supabase.storage
      .from('uploads')
      .upload(file.name, content, {
        contentType: file.metadata.contentType,
        upsert: true, // safe to re-run if the migration is interrupted
      });
    if (error) console.error(file.name, error.message);
  }
})();
Create your Supabase Storage buckets first in the dashboard. Set them to private or public based on access requirements, then configure storage policies. For large file migrations (gigabytes), use rclone with Supabase's S3-compatible endpoint rather than the JS SDK — rclone handles retries and parallel transfers significantly better.
Step 5: Update Client Code
Before (Firebase):
import { collection, getDocs } from 'firebase/firestore';
const snapshot = await getDocs(collection(db, 'posts'));
const posts = snapshot.docs.map(doc => doc.data());
After (Supabase):
import { createClient } from '@supabase/supabase-js';
const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY);
const { data: posts } = await supabase
.from('posts')
.select('*, users(name, avatar_url)')
.order('created_at', { ascending: false });
The Supabase client uses a SQL-like query builder. The select() call can include joins using PostgREST's relationship syntax — users(name, avatar_url) fetches the related user in one query, something Firestore cannot do. This alone often cuts the number of API calls your frontend makes.
Step 6: Set Up Row Level Security
-- Enable RLS on tables
ALTER TABLE posts ENABLE ROW LEVEL SECURITY;
ALTER TABLE comments ENABLE ROW LEVEL SECURITY;
-- Users can read all posts
CREATE POLICY "Public read" ON posts FOR SELECT USING (true);
-- Users can only edit their own posts
CREATE POLICY "Own posts" ON posts FOR ALL
USING (auth.uid() = user_id);
-- Same pattern for comments
CREATE POLICY "Public comment read" ON comments FOR SELECT USING (true);
CREATE POLICY "Own comment write" ON comments FOR ALL
USING (auth.uid() = user_id);
RLS policies are enforced at the database level — not in application code. They apply to all clients including direct database connections, making them more secure than application-level checks. Firestore Security Rules achieve similar goals with a very different syntax. The mental model is the same: define who can read/write what. The implementation in SQL is more powerful because it can reference any table data via subqueries.
Common Pitfalls
Missing indexes: Firestore auto-indexes common query patterns. PostgreSQL does not. After migration, run EXPLAIN ANALYZE on your main queries and add missing indexes. Unindexed sequential scans on tables over 100K rows will cause noticeable latency.
Timestamp conversion: in memory, Firestore timestamps are Timestamp objects with a .toDate() method, but once exported through JSON.stringify they become plain { _seconds, _nanoseconds } objects. Convert to a JS Date (via .toDate() or new Date(_seconds * 1000)) before inserting into PostgreSQL's TIMESTAMPTZ columns. Failing to convert them results in either null values or insert errors.
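A small converter covering both timestamp shapes (a sketch; adjust field names to your export format):

```javascript
// Convert a Firestore timestamp to an ISO-8601 string, whether it is a
// live Timestamp (has .toDate()) or the serialized { _seconds, _nanoseconds }
// shape produced by JSON.stringify during export.
function toIso(ts) {
  if (ts == null) return null;
  if (typeof ts.toDate === 'function') return ts.toDate().toISOString();
  if (typeof ts._seconds === 'number') return new Date(ts._seconds * 1000).toISOString();
  return new Date(ts).toISOString();
}
```

PostgreSQL accepts ISO-8601 strings directly for TIMESTAMPTZ columns, so the output can be passed straight to an insert.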
User session handling: After migration, existing Firebase JWT tokens are invalid for your Supabase backend. Plan a cutover window where users need to re-authenticate, or build a parallel-run period where both backends accept requests.
Firestore arrays of objects: Don't store them as JSONB unless you're certain you'll never need to filter or query by their contents. Arrays of objects almost always become separate tables in a proper relational model.
Running Firebase and Supabase in Parallel
A parallel-run period reduces the risk of production cutover. For two to four weeks, route a small percentage of new user registrations to Supabase while existing users continue using Firebase. This lets you validate auth flows, query performance, and data integrity in production before committing fully.
The practical implementation: use a feature flag to determine which backend handles each request. New users get Supabase; existing users stay on Firebase until explicitly migrated. Migrate existing users in batches: import their accounts into Supabase Auth, then send password reset emails so each user sets a Supabase password. When all users are migrated, decommission the Firebase connection.
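The routing decision reduces to a few lines (names hypothetical; migratedUids would come from your UID mapping table):

```javascript
// Decide which backend serves a request during the parallel-run period.
// New signups always go to Supabase; existing users stay on Firebase
// until their account appears in the migrated set.
function backendFor(userId, migratedUids, isNewSignup) {
  if (isNewSignup) return 'supabase';
  return migratedUids.has(userId) ? 'supabase' : 'firebase';
}
```

Keeping the decision in one function makes the final cutover a one-line change: return 'supabase' unconditionally, then delete the Firebase code path.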
Post-Migration Checklist
- All Firestore collections exported and imported to PostgreSQL
- Firebase Auth users imported, password reset emails sent
- Storage files migrated to Supabase Storage buckets
- Client SDK replaced and all queries updated
- RLS policies set up and tested for all tables
- Realtime subscriptions switched to Supabase channels
- Edge Functions deployed and tested
- Firebase project downgraded after parallel testing period
Cost Comparison: Firebase vs Supabase Over Time
Firebase's pay-as-you-go pricing makes budgeting difficult. Firestore charges per read, write, and delete operation. A production application with 100,000 monthly active users performing routine data operations can accumulate $100 to $500 per month depending on how many Firestore reads each user session generates. Real-time listeners are especially costly because each live update delivered to any connected client counts as a billed read operation.
Supabase Pro costs $25 per month and includes 50,000 MAU, 8 GB database storage, and 100 GB bandwidth. Unlimited API requests are included — Supabase's billing model does not charge per database operation. For applications with high read volumes, this difference is substantial. An application performing 100 million Firestore reads per month at $0.06 per 100,000 reads pays $60 just for reads, before writes, storage, and egress. On Supabase, those same reads are included in the $25 base plan.
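The arithmetic is easy to sanity-check (simplified to reads only; writes, storage, and egress excluded):

```javascript
// Rough monthly Firestore read cost at the published rate of $0.06 per
// 100,000 document reads. This deliberately ignores writes, deletes,
// storage, and network egress, which add to the real bill.
function firestoreReadCost(monthlyReads, ratePer100k = 0.06) {
  return (monthlyReads / 100_000) * ratePer100k;
}
// firestoreReadCost(100_000_000) is roughly $60/month, matching the
// figure in the text; the same reads cost nothing extra on Supabase.
```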
Self-hosted Supabase reduces the cost further. The full Supabase stack runs on a single VPS. A Hetzner CX22 with 2 vCPUs and 4 GB RAM costs approximately $6 per month and can run the stack for lighter production workloads; the $40-60 per month sizing mentioned earlier adds headroom for tens of thousands of daily active users. At either price point, even a modest Firebase bill justifies the self-hosting operational overhead within the first few months.
The migration work is real — two to four weeks for a medium-sized application — but the ongoing savings make it worthwhile for any application with active and growing usage. Teams migrating from Firebase to Supabase consistently report lower monthly bills and substantially improved query performance after the schema redesign.
The Bottom Line
Firebase to Supabase is a significant migration — especially the Firestore to PostgreSQL conversion. But you gain SQL power, open source infrastructure, predictable pricing, and no vendor lock-in. Plan for 2-4 weeks of migration work for a medium-sized app. The effort is front-loaded; once your schema is designed and data is imported, the client-side changes are mechanical and consistent.
For lighter-weight alternatives worth comparing, see PocketBase vs Supabase for smaller projects, or the Appwrite vs PocketBase breakdown for a broader perspective on self-hosted BaaS options. For the full list of Firebase alternatives including hosted and self-hosted options, see best open source alternatives to Firebase.
Compare BaaS platforms on OSSAlt — database options, auth features, and pricing side by side.