# Understanding Supabase Limitations

## Every Tool Has Trade-offs

Understanding limitations helps you design around them and set appropriate expectations. Supabase is powerful, but knowing its boundaries prevents frustration.
## Database Limitations

### Connection Limits

PostgreSQL limits the number of simultaneous connections, and the caps vary by tier:
| Tier | Direct Connections | Pooled Connections |
|---|---|---|
| Free | 60 | 200 |
| Pro | 100+ | 400+ |
| Enterprise | Custom | Custom |
Workarounds:
- Use connection pooling (Supavisor)
- Use the REST API instead of direct connections
- Implement connection management in your app
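In practice, switching to the Supavisor pooler is just a different connection string. A rough sketch (hostnames, ports, and the `<project-ref>`/`<region>` placeholders are illustrative; copy the exact strings from your project's dashboard):

```shell
# Direct connection — counts against the small direct-connection limit
DATABASE_URL="postgresql://postgres:<password>@db.<project-ref>.supabase.co:5432/postgres"

# Pooled connection via Supavisor (transaction mode) — better suited to
# serverless functions that open many short-lived connections
DATABASE_URL="postgresql://postgres.<project-ref>:<password>@aws-0-<region>.pooler.supabase.com:6543/postgres"
```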
### Database Size

Each tier has storage limits:
- Free: 500MB database
- Pro: 8GB included, expandable
- Enterprise: Custom
Workarounds:
- Archive old data
- Use external storage for large files
- Consider data retention policies
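A retention policy can be as simple as a scheduled job that deletes rows past a cutoff. A minimal sketch — the `events` table, `created_at` column, and 90-day window are assumptions for illustration:

```javascript
// Compute an ISO timestamp `days` days in the past — rows older than this
// are eligible for deletion under the retention policy.
function retentionCutoff(days, now = Date.now()) {
  return new Date(now - days * 24 * 60 * 60 * 1000).toISOString()
}

// Run this from a cron job or scheduled Edge Function.
// Assumes an `events` table with a `created_at` timestamp column.
async function purgeOldEvents(supabase, retentionDays = 90) {
  const { error } = await supabase
    .from('events')
    .delete()
    .lt('created_at', retentionCutoff(retentionDays))
  if (error) throw error
}
```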
### Compute Resources

Database compute is shared and tier-limited:
- Complex queries can time out
- Heavy aggregations may be slow
- Large imports can be throttled
Workarounds:
- Optimize queries with proper indexes
- Use materialized views for complex aggregations
- Batch large operations
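Batching turns one oversized operation into many small ones, so no single statement monopolizes compute or hits a timeout. A sketch (the batch size of 500 is arbitrary):

```javascript
// Split an array into fixed-size chunks
function chunk(rows, size) {
  const out = []
  for (let i = 0; i < rows.length; i += size) {
    out.push(rows.slice(i, i + size))
  }
  return out
}

// Insert a large dataset one batch at a time
async function batchInsert(supabase, table, rows, size = 500) {
  for (const batch of chunk(rows, size)) {
    const { error } = await supabase.from(table).insert(batch)
    if (error) throw error
  }
}
```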
## Realtime Limitations

### Concurrent Connections

Realtime connections are limited:
| Tier | Concurrent Connections |
|---|---|
| Free | 200 |
| Pro | 500+ |
Workarounds:
- Use connection only when needed
- Disconnect on page/app background
- Consider polling for less critical updates
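One way to free connection slots is to drop the subscription whenever the tab goes to the background. A sketch — `doc` stands in for the browser's `document` and is injected only to keep the logic testable:

```javascript
// Unsubscribe when the tab is hidden, resubscribe on return.
// `channel` is a Supabase Realtime channel; `doc` is the browser `document`
// (or anything with the same addEventListener/visibilityState shape).
function pauseRealtimeWhenHidden(doc, channel) {
  doc.addEventListener('visibilitychange', () => {
    if (doc.visibilityState === 'hidden') {
      channel.unsubscribe() // frees a concurrent-connection slot
    } else {
      channel.subscribe()
    }
  })
}
```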
### Message Rate

Messages per second are capped:
- Postgres Changes: Limited by database
- Broadcast: Higher throughput but still limited
Workarounds:
- Batch updates
- Throttle high-frequency events
- Use selective subscriptions
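Throttling high-frequency events before they reach Realtime keeps you under message-rate caps. A minimal leading-edge throttle (the 100ms interval in the usage comment is arbitrary; the injectable clock exists only for testability):

```javascript
// Wrap fn so it runs at most once per `intervalMs`; extra calls are dropped.
// Returns true when the call went through, false when it was throttled.
function throttle(fn, intervalMs, now = Date.now) {
  let last = -Infinity
  return (...args) => {
    if (now() - last >= intervalMs) {
      last = now()
      fn(...args)
      return true
    }
    return false
  }
}

// e.g. publish cursor positions at most 10x per second:
// const sendCursor = throttle(
//   pos => channel.send({ type: 'broadcast', event: 'cursor', payload: pos }),
//   100
// )
```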
### RLS Overhead

Every Realtime event passes through RLS before delivery (event → RLS check → filtered to subscriber):

- Many subscribers = many RLS evaluations
- Complex RLS = slower event delivery
Workarounds:
- Simplify RLS policies
- Use Broadcast for ephemeral data (no RLS)
- Limit subscriber count per channel
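For ephemeral data like live cursors or typing indicators, Broadcast relays messages between clients without touching the database, so no RLS policy runs per event. A sketch using the supabase-js channel API (`renderCursor` is a hypothetical UI helper):

```javascript
// Share cursor positions over Broadcast — messages are relayed through the
// Realtime server and never hit Postgres or RLS.
function joinCursorRoom(supabase, roomId) {
  const channel = supabase.channel(`room-${roomId}`)

  channel
    .on('broadcast', { event: 'cursor' }, ({ payload }) => {
      renderCursor(payload) // hypothetical UI helper
    })
    .subscribe()

  // Returns a function for publishing this client's cursor position
  return (x, y) =>
    channel.send({ type: 'broadcast', event: 'cursor', payload: { x, y } })
}
```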
## Storage Limitations

### File Size

Maximum upload sizes vary by tier:
- Free: Limited
- Pro: 5GB per file
- Custom: Configurable
Workarounds:
- Implement chunked uploads
- Compress before upload
- Use external storage for very large files
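A rough sketch of a chunked upload: split the file into parts and upload each as its own object. The `.partN` naming and the reassembly step are your own convention, not a Storage feature — Supabase also offers resumable uploads on some plans, which may fit better:

```javascript
// Split a binary payload (Uint8Array) into fixed-size parts
function splitIntoParts(bytes, partSize) {
  const parts = []
  for (let i = 0; i < bytes.length; i += partSize) {
    parts.push(bytes.subarray(i, i + partSize))
  }
  return parts
}

// Upload each part as `<path>.part<N>`; a server-side job can reassemble.
// Assumes `bucket` is an existing Storage bucket.
async function chunkedUpload(supabase, bucket, path, bytes, partSize = 5 * 1024 * 1024) {
  const parts = splitIntoParts(bytes, partSize)
  for (let i = 0; i < parts.length; i++) {
    const { error } = await supabase.storage
      .from(bucket)
      .upload(`${path}.part${i}`, parts[i])
    if (error) throw error
  }
  return parts.length
}
```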
### Transformations

Image transformations have limits:
- Dimensions have maximums
- Some formats not supported
- Processing adds latency
Workarounds:
- Pre-process images before upload
- Cache transformed images
- Use CDN for frequently accessed images
## Edge Functions Limitations

### Execution Time

Functions have timeout limits:
- Default: 30 seconds
- Maximum: Varies by plan
Workarounds:
- Break long tasks into steps
- Use background processing
- Implement async patterns
### Memory

Memory per execution is limited:
- Each invocation has memory constraints
- Large data processing may fail
Workarounds:
- Process data in batches
- Stream large datasets
- Use database for heavy processing
### Cold Starts

First invocation may be slower:

- Cold start: ~200-500ms
- Warm start: ~20-50ms
Workarounds:
- Keep functions warm with periodic pings
- Accept occasional latency
- Use for non-latency-critical operations
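If you do choose to keep a function warm, a periodic ping can be as simple as a cron entry (the URL shape is illustrative — use your project's actual function URL, and weigh the extra invocations against the latency saved):

```shell
# Ping the function every 5 minutes so an instance stays warm
*/5 * * * * curl -s -o /dev/null https://<project-ref>.functions.supabase.co/keep-warm
```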
## Authentication Limitations

### Provider Availability

Not all OAuth providers are supported natively:
- Major providers: Yes (Google, GitHub, etc.)
- Niche providers: May need custom implementation
Workarounds:
- Use custom OAuth implementation
- Edge Functions for custom providers
- SAML for enterprise SSO
### Session Management

Session customization is limited:
- Token expiry has defaults
- Some session patterns not supported
Workarounds:
- Adjust settings where possible
- Implement custom session logic
- Use Edge Functions for complex auth flows
## Query Limitations

### PostgREST Constraints

Some PostgreSQL features aren't available through the REST API:

- Custom SQL functions (need RPC)
- Complex CTEs (need database functions)
- Lateral joins (limited support)
Workarounds:
- Create database functions, call via RPC
- Use direct database connection for complex queries
- Pre-compute complex data with views
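The usual escape hatch: put the complex SQL in a database function and expose it over RPC. A sketch — the `trending_posts` function and the `posts` table's `created_at`/`views` columns are assumptions for illustration:

```sql
-- Complex queries can live in a database function, where full SQL
-- (CTEs, window functions, lateral joins, ...) is available
create or replace function trending_posts(since_days int default 7)
returns setof posts
language sql stable
as $$
  with recent as (
    select *
    from posts
    where created_at > now() - (since_days * interval '1 day')
  )
  select * from recent
  order by views desc
  limit 10;
$$;

-- Then call it through PostgREST from supabase-js:
--   const { data, error } = await supabase.rpc('trending_posts', { since_days: 3 })
```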
### Limited GraphQL Mutations

pg_graphql auto-generates basic CRUD mutations, but beyond that:

- Complex mutations are limited
- Custom resolver logic is not supported
Workarounds:
- Use REST API for mutations
- Create RPC functions
- Use Edge Functions for complex operations
## Operational Limitations

### Backup Control

Backup options vary by tier:

- Free: Limited backups
- Pro: Daily backups; point-in-time recovery available as an add-on
- Custom backup scheduling requires higher tiers
Workarounds:
- Implement your own backup scripts
- Use `pg_dump` regularly
- Consider data replication
### Migration Complexity

Schema changes can be tricky:
- No built-in schema versioning
- RLS policies require careful migration
- Some changes need downtime
Workarounds:
- Use Supabase CLI migrations
- Test migrations in staging
- Plan for zero-downtime deployments
### Multi-Region

Single region per project:
- Higher latency for distant users
- No automatic failover
Workarounds:
- Choose region closest to most users
- Use CDN for static assets
- Consider read replicas (enterprise)
## Working Around Limitations

### Pattern: Hybrid Architecture

```text
Supabase (Core Data)
│
├── Auth & User Data
├── Business Logic
└── Relational Data

External Services (Specialized Needs)
│
├── Search (Algolia/Elasticsearch)
├── Analytics (Mixpanel/Amplitude)
└── Heavy Processing (Cloud Functions)
```
### Pattern: Caching Layer

```javascript
// Cache frequently accessed data in memory with a short TTL
const cache = new Map()

async function getPopularPosts() {
  if (cache.has('popular_posts')) {
    return cache.get('popular_posts')
  }

  const { data } = await supabase
    .from('posts')
    .select('*')
    .order('views', { ascending: false })
    .limit(10)

  cache.set('popular_posts', data)
  setTimeout(() => cache.delete('popular_posts'), 60000) // 1 min TTL
  return data
}
```
### Pattern: Background Processing

```javascript
// Don't block user requests with heavy operations
async function processLargeImport(data) {
  // Queue the job
  await supabase.from('job_queue').insert({
    type: 'import',
    data,
    status: 'pending'
  })

  // Return immediately
  return { status: 'queued' }
}

// Process in background (Edge Function or cron)
async function processQueue() {
  const { data: jobs } = await supabase
    .from('job_queue')
    .select('*')
    .eq('status', 'pending')
    .limit(10)

  for (const job of jobs ?? []) {
    await processJob(job)
    // Mark the job complete so it isn't picked up again
    await supabase
      .from('job_queue')
      .update({ status: 'done' })
      .eq('id', job.id)
  }
}
```
## Key Takeaways
- Limitations exist: Every platform has them
- Plan around them: Design with limits in mind
- Hybrid is valid: Use specialized tools alongside Supabase
- Caching helps: Reduce database load
- Async for heavy work: Don't block user requests
- Higher tiers expand limits: Consider upgrading for scale
## Next Steps
With limitations understood, we'll discuss migration considerations—both migrating to and from Supabase.
Knowing limitations upfront is a feature, not a bug. It lets you design systems that work within constraints rather than hitting walls in production.

