Migration Considerations
Planning for Portability
One of Supabase's strengths is that it's built on standard technologies. This makes migration—both to and from Supabase—more straightforward than proprietary alternatives.
Migrating TO Supabase
From Firebase/Firestore
The biggest challenge: NoSQL to SQL data model transformation.
Firestore (Document):

users/
  user_123/
    name: "Alice"
    posts: [
      {title: "..."},
      {title: "..."}
    ]

PostgreSQL (Relational):

CREATE TABLE users (
  id uuid PRIMARY KEY,
  name text,
  email text
);

CREATE TABLE posts (
  id uuid PRIMARY KEY,
  user_id uuid REFERENCES users(id),
  title text
);
Steps:
- Design relational schema
- Export Firestore data (JSON)
- Transform data to match schema (see the sketch below)
- Import to PostgreSQL
- Update application queries
- Migrate authentication
Tooling: Community migration tools exist, but expect manual work.
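As a minimal sketch of the transform step flagged in the list above, the function below flattens an exported Firestore users collection into rows for the users and posts tables; the export shape is an assumption, so adjust the field access to your actual JSON.

// Hedged sketch: assumes a Firestore export shaped like
// { "users": { "user_123": { "name": ..., "email": ..., "posts": [{ "title": ... }] } } }
import { randomUUID } from 'node:crypto'

function transformExport(exportJson) {
  const users = []
  const posts = []
  for (const doc of Object.values(exportJson.users)) {
    const userId = randomUUID() // new relational primary key
    users.push({ id: userId, name: doc.name, email: doc.email })
    for (const post of doc.posts ?? []) {
      // the embedded array becomes rows linked by a foreign key
      posts.push({ id: randomUUID(), user_id: userId, title: post.title })
    }
  }
  return { users, posts } // ready for COPY or supabase-js inserts
}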
From Other SQL Databases
PostgreSQL compatibility makes this easier:
# From MySQL: older mysqldump versions accept --compatible=postgresql;
# MySQL 8.0 only supports --compatible=ansi, so a conversion tool
# such as pgloader is often the easier route
mysqldump --compatible=postgresql database > dump.sql
# Expect to adjust data types by hand either way

# From another PostgreSQL
pg_dump -h old-host -d olddb > dump.sql
psql -h your-project.supabase.co -U postgres -d postgres < dump.sql
Considerations:
- Data type differences (MySQL vs PostgreSQL)
- Auto-increment vs UUID decisions
- Foreign key constraint order
From Custom Backend
Map your existing API to Supabase patterns:
Custom API          Supabase Equivalent
GET /users/:id    → supabase.from('users').select('*').eq('id', id)
POST /users       → supabase.from('users').insert(data)
PUT /users/:id    → supabase.from('users').update(data).eq('id', id)
DELETE /users/:id → supabase.from('users').delete().eq('id', id)
Steps:
- Export existing database
- Create equivalent schema in Supabase
- Import data
- Implement RLS policies
- Update client code to use the Supabase SDK (see the sketch below)
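A minimal before/after sketch of the client-code step, assuming the old data layer wrapped fetch calls; the endpoint and function names are illustrative:

import { createClient } from '@supabase/supabase-js'

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_ANON_KEY)

// Before: call the custom REST API
async function getUserOld(id) {
  const res = await fetch(`https://api.example.com/users/${id}`)
  if (!res.ok) throw new Error(`HTTP ${res.status}`)
  return res.json()
}

// After: the equivalent supabase-js query; RLS policies now enforce
// the authorization checks the old API performed server-side
async function getUserNew(id) {
  const { data, error } = await supabase
    .from('users')
    .select('*')
    .eq('id', id)
    .single()
  if (error) throw error
  return data
}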
Migrating FROM Supabase
The Portability Promise
Supabase is PostgreSQL—your data is always exportable:
# Full database export
pg_dump -h your-project.supabase.co \
-U postgres \
-d postgres \
--no-owner \
> full_backup.sql
# Specific tables
pg_dump -h your-project.supabase.co \
-U postgres \
-d postgres \
-t public.users \
-t public.posts \
> partial_backup.sql
What Exports Cleanly
- Your tables: Complete data and schema
- Indexes: All custom indexes
- Views: Custom views you created
- Functions: Your stored procedures
- RLS policies: Exportable as SQL
What Needs Attention
- auth.users: Schema is specific to GoTrue
- storage.objects: Metadata only, not files
- Edge Functions: Deno-specific, may need porting
- Realtime config: Channel configuration
Migration to Self-Hosted Supabase
Most seamless option:
# 1. Set up self-hosted Supabase
docker-compose up -d
# 2. Export from hosted
pg_dump -h hosted.supabase.co ... > backup.sql
# 3. Import to self-hosted
psql -h localhost -d postgres < backup.sql
# 4. Export and import storage files
# 5. Update client connection strings
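For step 4, one hedged approach is to copy objects between projects with two supabase-js clients; the bucket name and environment variables are assumptions, and list() only returns one folder level (100 entries by default), so nested or large buckets need extra handling.

import { createClient } from '@supabase/supabase-js'

// Service-role keys are assumed to be in the environment
const hosted = createClient(process.env.HOSTED_URL, process.env.HOSTED_SERVICE_KEY)
const selfHosted = createClient(process.env.SELF_HOSTED_URL, process.env.SELF_HOSTED_SERVICE_KEY)

async function copyBucket(bucket) {
  // list() is per-folder and paginated ({ limit, offset })
  const { data: objects, error } = await hosted.storage.from(bucket).list()
  if (error) throw error
  for (const obj of objects) {
    const { data: blob, error: dlErr } = await hosted.storage.from(bucket).download(obj.name)
    if (dlErr) throw dlErr
    const { error: upErr } = await selfHosted.storage
      .from(bucket)
      .upload(obj.name, blob, { upsert: true })
    if (upErr) throw upErr
  }
}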
Migration to Different Backend
Requires more work:
Supabase            Custom Backend
PostgreSQL Data   → Any database
RLS Policies      → Application middleware
GoTrue Auth       → Auth0/Cognito/Custom
Edge Functions    → AWS Lambda/Cloud Functions
Storage           → S3/Cloud Storage
Realtime          → Custom WebSocket/Pusher
Data Export Strategies
Full Export
# Complete database dump
pg_dump -h your-project.supabase.co \
-U postgres \
-d postgres \
--format=custom \
--file=full_backup.dump
Incremental/Selective Export
-- Export recent data only, using psql's client-side \copy
-- (server-side COPY TO a file isn't available on hosted Supabase)
\copy (SELECT * FROM orders WHERE created_at > '2024-01-01') TO 'recent_orders.csv' WITH CSV HEADER
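Without direct psql access, the same selective export can be paged through supabase-js; the table, filter, and page size mirror the SQL above and are illustrative:

import { createClient } from '@supabase/supabase-js'

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_SERVICE_KEY)

async function exportRecentOrders() {
  const pageSize = 1000
  const rows = []
  for (let from = 0; ; from += pageSize) {
    const { data, error } = await supabase
      .from('orders')
      .select('*')
      .gt('created_at', '2024-01-01')
      .order('id') // stable ordering so pages don't overlap
      .range(from, from + pageSize - 1)
    if (error) throw error
    rows.push(...data)
    if (data.length < pageSize) break // last page
  }
  return rows
}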
Continuous Sync
For gradual migration:
// Webhook or trigger handler to sync changes. The payload shape here
// follows Supabase Database Webhooks: { type, table, schema, record, old_record }
async function syncToNewSystem(payload) {
  const { type, record, old_record: oldRecord } = payload
  switch (type) {
    case 'INSERT':
      await newSystem.create(record)
      break
    case 'UPDATE':
      await newSystem.update(record.id, record)
      break
    case 'DELETE':
      await newSystem.delete(oldRecord.id)
      break
  }
}
Storage Migration
Export Files
// List files in the bucket root. list() is per-folder and paginated,
// so nested buckets need a recursive walk (see the helper below)
const { data: files, error } = await supabase.storage
  .from('bucket')
  .list()
if (error) throw error

// Download each file
for (const file of files) {
  const { data: blob, error: dlError } = await supabase.storage
    .from('bucket')
    .download(file.name)
  if (dlError) throw dlError
  // Save locally or upload to the new storage provider
  await saveFile(file.name, blob)
}
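Because list() returns one folder level at a time, nested buckets need a recursive walk. The sketch below treats entries without an id as folders, which matches commonly observed storage-js behavior but is an assumption worth verifying against your version:

// Hedged sketch: recursively collect every object path in a bucket
async function listAll(bucket, prefix = '') {
  const { data: entries, error } = await supabase.storage.from(bucket).list(prefix)
  if (error) throw error
  const paths = []
  for (const entry of entries) {
    const path = prefix ? `${prefix}/${entry.name}` : entry.name
    if (entry.id === null) {
      paths.push(...(await listAll(bucket, path))) // folder: recurse
    } else {
      paths.push(path) // file
    }
  }
  return paths
}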
Update References
-- If URLs are stored in database, update them
UPDATE posts
SET image_url = REPLACE(
image_url,
'old-project.supabase.co',
'new-storage-url.com'
);
Authentication Migration
Export User Data
-- Export user accounts
SELECT
id,
email,
phone,
created_at,
email_confirmed_at,
raw_user_meta_data
FROM auth.users;
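If you prefer API access over SQL, supabase-js also exposes an admin listing call (service-role key required); the page size below is an arbitrary choice:

import { createClient } from '@supabase/supabase-js'

// Never ship the service-role key to a browser
const admin = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_SERVICE_KEY)

async function exportUsers() {
  const users = []
  for (let page = 1; ; page++) {
    const { data, error } = await admin.auth.admin.listUsers({ page, perPage: 1000 })
    if (error) throw error
    users.push(...data.users)
    if (data.users.length < 1000) break // last page
  }
  return users
}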
Handle Password Migration
Passwords are stored as bcrypt hashes, so there are three options:
- Force password reset: Users set new passwords
- Keep hashes: Works if the new system accepts the same bcrypt format
- Gradual migration: Verify against Supabase on login, migrate the credential on success (see the sketch below)
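A hedged sketch of the gradual option, assuming the new system stores its own hashes and exposes hypothetical findUser/verifyPassword/createUser helpers:

import { createClient } from '@supabase/supabase-js'

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_ANON_KEY)

// Hypothetical login handler: try the new system first, fall back to
// Supabase, and capture the credential into the new system on success
async function login(email, password) {
  const existing = await newSystem.findUser(email)
  if (existing) return newSystem.verifyPassword(existing, password)

  const { data, error } = await supabase.auth.signInWithPassword({ email, password })
  if (error) throw error

  // Credentials verified against the old hash: re-hash in the new system
  await newSystem.createUser({ email, password })
  return data.user
}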
OAuth Identities
-- Export OAuth links
SELECT
user_id,
provider,
identity_data
FROM auth.identities;
Users may need to re-link OAuth accounts depending on new system.
Minimizing Migration Risk
1. Run in Parallel
Phase 1: Write to both systems
Phase 2: Read from new, verify against old
Phase 3: Read/write from new only
Phase 4: Decommission old
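Phase 1 might look like the dual-write sketch below; the newBackend client is hypothetical, and the old system stays authoritative, so new-system failures are logged for reconciliation rather than surfaced:

import { createClient } from '@supabase/supabase-js'

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_ANON_KEY)

// Phase 1: write to both systems; the old one remains the source of truth
async function createOrder(order) {
  const { data, error } = await supabase.from('orders').insert(order).select().single()
  if (error) throw error // authoritative write failed

  try {
    await newBackend.orders.create(data) // hypothetical client
  } catch (err) {
    // Don't fail the request; log for later reconciliation
    console.error('dual-write to new backend failed', err)
  }
  return data
}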
2. Feature Flags
const USE_NEW_BACKEND = process.env.NEW_BACKEND === 'true'

async function getUser(id) {
  if (USE_NEW_BACKEND) {
    return newBackend.getUser(id)
  }
  // Unwrap supabase-js's { data, error } so both branches return a plain user
  const { data, error } = await supabase.from('users').select('*').eq('id', id).single()
  if (error) throw error
  return data
}
3. Gradual Rollout
- Migrate read-only operations first
- Then migrate writes
- Keep rollback capability
Self-Hosting Supabase
If you want Supabase features without the managed service:
Docker Compose
# docker-compose.yml (simplified)
services:
  db:
    image: supabase/postgres
  kong:
    image: kong
  auth:
    image: supabase/gotrue
  rest:
    image: postgrest/postgrest
  realtime:
    image: supabase/realtime
  storage:
    image: supabase/storage-api
  studio:
    image: supabase/studio
Considerations
- Operational burden: You manage everything
- Updates: Manual updates required
- Scaling: Your responsibility
- Cost: Infrastructure + time
When Self-Hosting Makes Sense
- Data sovereignty requirements
- Custom infrastructure needs
- Cost optimization at scale
- Regulatory compliance
Key Takeaways
- PostgreSQL is portable: Standard SQL exports work
- NoSQL migration is harder: Data model transformation needed
- Plan for auth migration: Passwords and OAuth need attention
- Storage requires file transfer: Not just metadata
- Gradual migration reduces risk: Run parallel systems
- Self-hosting is an option: Full control, more responsibility
Module Summary
In this module, you've learned:
- When Supabase is the right choice
- Current limitations and workarounds
- Migration strategies to and from Supabase
Choosing a backend is a significant decision, but Supabase's use of standard technologies ensures you're never locked in. Build with confidence knowing your data is always portable.
The best vendor relationship is one where you could leave but choose to stay. Supabase's commitment to open standards means you're always in control of your data and destiny.

