Scaling Realtime Applications
The Scaling Challenge
Realtime features introduce scaling challenges that traditional request-response APIs don't face. Each connected client holds a persistent connection, and a single change may need to be delivered to many recipients at once.
Understanding Realtime Load
Connection Load
Every connected client consumes:
Per Connection:
├── Server memory (~50KB-200KB)
├── File descriptor
├── CPU for message processing
└── Bandwidth for keepalives

1,000 connections = ~200MB memory
10,000 connections = ~2GB memory
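The numbers above can be sanity-checked with a quick estimate. This is a hypothetical helper (not part of any SDK), assuming the 50KB-200KB per-connection range:

```javascript
// Back-of-envelope memory sizing from the per-connection range above.
// memoryEstimateMB is a hypothetical helper, not part of supabase-js.
function memoryEstimateMB(connections, perConnectionKB = 200) {
  return (connections * perConnectionKB) / 1024
}

console.log(memoryEstimateMB(1000))  // ~195MB at the 200KB worst case
console.log(memoryEstimateMB(10000)) // ~1953MB, i.e. roughly 2GB
```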
Message Load
When a change occurs, it must be evaluated and potentially sent to many clients:
Broadcast to 1,000 clients:
├── 1 message received
├── 1,000 RLS evaluations (for Postgres Changes)
├── 1,000 messages sent
└── 1,000 acknowledgments received
Supabase Realtime Limits
Understanding platform limits helps you design appropriately:
| Metric | Free Tier | Pro Tier |
|---|---|---|
| Concurrent connections | 200 | 500+ |
| Messages per second | 100 | 1,000+ |
| Channels per connection | 100 | 100 |
| Message size | 1MB | 1MB |
Limits vary by plan and change over time; check the current Supabase documentation for exact numbers.
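To stay under a per-second message budget on the client, a small token-bucket guard works well. This is a sketch; `createRateLimiter` is a hypothetical helper, not a Supabase API:

```javascript
// Minimal token bucket: allows bursts up to maxPerSecond, refills continuously.
// Hypothetical helper -- not part of supabase-js.
function createRateLimiter(maxPerSecond) {
  let tokens = maxPerSecond
  let lastRefill = Date.now()
  return function tryAcquire() {
    const now = Date.now()
    // Refill proportionally to elapsed time, capped at the burst size
    tokens = Math.min(maxPerSecond, tokens + ((now - lastRefill) / 1000) * maxPerSecond)
    lastRefill = now
    if (tokens >= 1) {
      tokens -= 1
      return true
    }
    return false
  }
}

// Usage: drop (or queue) sends once the budget is exhausted
const canSend = createRateLimiter(100) // e.g. Free tier budget
// if (canSend()) channel.send(...)
```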
Optimization Strategies
1. Minimize Subscriptions
Don't subscribe to everything:
// Bad: Subscribe to all changes
supabase.channel('all-data').on('postgres_changes', {
event: '*',
schema: 'public',
table: '*' // All tables!
}, handler)
// Good: Subscribe only to what you need
supabase.channel('user-posts').on('postgres_changes', {
event: '*',
schema: 'public',
table: 'posts',
filter: `user_id=eq.${userId}` // Only user's posts
}, handler)
2. Use Filters Effectively
Filters are evaluated server-side, reducing network traffic:
// Without filter: Client receives ALL messages, discards most
channel.on('postgres_changes', {
event: 'INSERT',
schema: 'public',
table: 'messages'
}, msg => {
if (msg.new.room_id === roomId) { // Client-side filter
displayMessage(msg)
}
})
// With filter: Only relevant messages sent
channel.on('postgres_changes', {
event: 'INSERT',
schema: 'public',
table: 'messages',
filter: `room_id=eq.${roomId}` // Server-side filter
}, displayMessage)
3. Throttle High-Frequency Updates
For mouse movement, typing, etc.:
// Bad: Send on every mouse move
document.addEventListener('mousemove', (e) => {
channel.send({
type: 'broadcast',
event: 'cursor',
payload: { x: e.clientX, y: e.clientY }
})
})
// Good: Throttle to a reasonable rate
// (throttle from a utility library such as lodash)
const throttledSend = throttle((x, y) => {
channel.send({
type: 'broadcast',
event: 'cursor',
payload: { x, y }
})
}, 50) // Max 20 updates/second
document.addEventListener('mousemove', (e) => {
throttledSend(e.clientX, e.clientY)
})
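If you'd rather not pull in a dependency for `throttle`, a leading-plus-trailing sketch like this is enough (an illustrative implementation, not lodash's exact semantics):

```javascript
// The first call fires immediately; calls inside the wait window collapse
// into one trailing call that uses the latest arguments.
function throttle(fn, waitMs) {
  let last = 0
  let timer = null
  let lastArgs = null
  return (...args) => {
    const now = Date.now()
    const remaining = waitMs - (now - last)
    if (remaining <= 0) {
      last = now
      fn(...args)
    } else {
      lastArgs = args
      if (!timer) {
        timer = setTimeout(() => {
          timer = null
          last = Date.now()
          fn(...lastArgs)
          lastArgs = null
        }, remaining)
      }
    }
  }
}
```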
4. Batch Updates
Combine multiple changes into single messages:
// Bad: Many small broadcasts
items.forEach(item => {
channel.send({
type: 'broadcast',
event: 'item_update',
payload: item
})
})
// Good: Single batched broadcast
channel.send({
type: 'broadcast',
event: 'items_update',
payload: { items }
})
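One way to implement batching in practice is a small accumulator that flushes on an interval. This is a sketch; `createBatcher` is a hypothetical helper:

```javascript
// Collects items and delivers them as a single array via `send`.
function createBatcher(send, intervalMs = 100) {
  let items = []
  let timer = null
  function flush() {
    if (timer) {
      clearTimeout(timer)
      timer = null
    }
    if (items.length === 0) return
    const batch = items
    items = []
    send(batch)
  }
  return {
    add(item) {
      items.push(item)
      if (!timer) timer = setTimeout(flush, intervalMs)
    },
    flush
  }
}

// Usage: replace direct channel.send calls with batcher.add
// const batcher = createBatcher(items =>
//   channel.send({ type: 'broadcast', event: 'items_update', payload: { items } })
// )
```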
5. Disconnect When Not Needed
Clean up subscriptions when leaving pages:
// React example
useEffect(() => {
const channel = supabase.channel('my-channel')
// ... setup subscriptions ...
return () => {
// Cleanup on unmount
supabase.removeChannel(channel)
}
}, [])
Channel Design Patterns
Pattern: Room-Based Channels
Instead of one global channel, create channels per context:
// Bad: Global channel with filters
supabase.channel('all-chats')
.on('postgres_changes', {
table: 'messages',
filter: `room_id=eq.${roomId}`
}, handler)
// Better: Room-specific channels
supabase.channel(`chat:${roomId}`)
.on('postgres_changes', {
table: 'messages',
filter: `room_id=eq.${roomId}`
}, handler)
Benefits:
- Only connected users in the room receive messages
- Server can optimize routing
- Cleaner subscription management
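The lifecycle bookkeeping for room channels can be centralized. Below is a sketch assuming a supabase-js-like client exposing `channel()` and `removeChannel()`; `createRoomManager` itself is a hypothetical helper:

```javascript
// Tracks one channel per room so joining is idempotent and leaving cleans up.
// Hypothetical helper around a supabase-js-like client.
function createRoomManager(client) {
  const channels = new Map()
  return {
    join(roomId, setup) {
      if (channels.has(roomId)) return channels.get(roomId)
      const channel = client.channel(`chat:${roomId}`)
      if (setup) setup(channel) // attach .on handlers before subscribing
      channel.subscribe()
      channels.set(roomId, channel)
      return channel
    },
    leave(roomId) {
      const channel = channels.get(roomId)
      if (!channel) return
      client.removeChannel(channel)
      channels.delete(roomId)
    },
    activeRooms() {
      return [...channels.keys()]
    }
  }
}
```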
Pattern: User-Specific Channels
For notifications or private data:
// User's private notification channel
const userChannel = supabase.channel(`user:${userId}:notifications`)
.on('postgres_changes', {
event: 'INSERT',
schema: 'public',
table: 'notifications',
filter: `user_id=eq.${userId}`
}, handleNotification)
Pattern: Hierarchical Channels
Organize channels by feature and scope:
Channel Naming:
├── app:announcements (global)
├── team:{teamId}:chat (team-specific)
├── doc:{docId}:cursors (document-specific)
└── user:{userId}:inbox (user-specific)
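A pair of tiny helpers keeps the naming convention in one place. These are hypothetical helpers matching the scheme above:

```javascript
// Build and parse names of the form scope:feature or scope:id:feature.
function channelName(scope, id, feature) {
  return id ? `${scope}:${id}:${feature}` : `${scope}:${feature}`
}

function parseChannelName(name) {
  const parts = name.split(':')
  return parts.length === 3
    ? { scope: parts[0], id: parts[1], feature: parts[2] }
    : { scope: parts[0], feature: parts[1] }
}
```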
Handling Disconnections
Network issues happen. Design for resilience:
Automatic Reconnection
The Supabase SDK handles reconnection:
const channel = supabase.channel('my-channel')
channel.subscribe((status) => {
switch (status) {
case 'SUBSCRIBED':
console.log('Connected')
break
case 'CLOSED':
console.log('Disconnected')
break
case 'CHANNEL_ERROR':
console.log('Error occurred')
break
}
})
// SDK automatically attempts to reconnect
Handling Missed Messages
Messages sent while a client is disconnected are not replayed. Backfill from the database on reconnect:
// Track last received timestamp
let lastMessageTime = Date.now()
channel.on('postgres_changes', { ... }, (payload) => {
lastMessageTime = Date.now()
handleMessage(payload)
})
// On reconnect, fetch missed messages
channel.subscribe((status) => {
if (status === 'SUBSCRIBED') {
// Fetch messages since last received
fetchMessagesSince(lastMessageTime)
}
})
async function fetchMessagesSince(timestamp) {
  const { data, error } = await supabase
    .from('messages')
    .select('*')
    .gt('created_at', new Date(timestamp).toISOString())
    .order('created_at')

  if (error) {
    console.error('Backfill failed:', error)
    return
  }
  data.forEach(handleMessage)
}
Performance Monitoring
Track Connection Health
// Round-trip latency check via self-broadcast. Requires the channel to be
// created with self-delivery enabled:
//   supabase.channel('my-channel', { config: { broadcast: { self: true } } })
let latencyMs = 0

// Periodic latency check
setInterval(() => {
  channel.send({
    type: 'broadcast',
    event: 'ping',
    payload: { timestamp: Date.now() }
  })
}, 10000)

// With self: true, the sender receives its own broadcast back
channel.on('broadcast', { event: 'ping' }, ({ payload }) => {
  latencyMs = Date.now() - payload.timestamp
  console.log(`Latency: ${latencyMs}ms`)
})
Monitor Subscription Count
// Track active subscriptions
const activeChannels = new Set()
function subscribe(channelName) {
const channel = supabase.channel(channelName)
activeChannels.add(channelName)
channel.subscribe((status) => {
if (status === 'CLOSED') {
activeChannels.delete(channelName)
}
console.log(`Active channels: ${activeChannels.size}`)
})
return channel
}
When to Use Different Approaches
Realtime vs Polling
| Scenario | Approach | Why |
|---|---|---|
| Chat messages | Realtime | Immediate delivery expected |
| Dashboard updates | Realtime or Polling | Depends on update frequency |
| User list | Presence | Built-in tracking |
| Infrequent data | Polling | Simpler, less overhead |
| High-frequency data | Realtime with throttling | Balance responsiveness and cost |
Postgres Changes vs Broadcast
| Scenario | Approach | Why |
|---|---|---|
| Persistent data | Postgres Changes | Need DB sync |
| Ephemeral state | Broadcast | No storage needed |
| 1-5 updates/sec | Postgres Changes | Reasonable DB load |
| 50+ updates/sec | Broadcast | DB can't keep up |
Architecture Considerations
Fan-Out Limits
When one message goes to many clients:
Message to 10,000 clients:
├── Server must process 10,000 sends
├── Each send requires an RLS check (for Postgres Changes)
└── Network bandwidth = message_size × 10,000

Mitigation:
├── Use Broadcast for ephemeral data (no RLS)
├── Keep RLS policies simple
└── Keep message payloads small
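The cost of a fan-out can be estimated before you hit it. `fanOutCost` is a hypothetical calculator based on the figures above:

```javascript
// Rough per-broadcast cost: sends, RLS checks, and total egress bandwidth.
// Hypothetical helper, not a Supabase API.
function fanOutCost(messageBytes, clientCount, usesRLS) {
  return {
    sends: clientCount,
    rlsEvaluations: usesRLS ? clientCount : 0,
    bandwidthBytes: messageBytes * clientCount
  }
}

// A 1KB Postgres Changes event to 10,000 clients costs roughly 10MB of
// egress plus 10,000 RLS evaluations for a single change:
console.log(fanOutCost(1024, 10000, true))
```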
Connection Pool Management
Server Connection Limits:
├── PostgreSQL: ~100 connections
├── Realtime: thousands of WebSocket connections
└── A Realtime connection does not hold a DB connection

Implication:
├── Realtime scales differently than the API
└── The DB is rarely the bottleneck for Realtime
Key Takeaways
- Connections have cost: Manage subscription lifecycle
- Filter server-side: Use filter parameter, not client filtering
- Throttle high-frequency: Mouse moves, typing indicators
- Design channels wisely: Room-based isolation
- Handle disconnections: Fetch missed messages on reconnect
- Choose appropriate tool: Realtime isn't always the answer
Module Summary
In this module, you've learned:
- How Supabase Realtime architecture works
- Postgres Changes for database sync
- Broadcast for ephemeral messaging
- Presence for online status tracking
- Scaling considerations and optimizations
Realtime makes applications feel alive. Use it wisely, and your users will enjoy instant, responsive experiences. Next, we'll explore Supabase Storage.
Scaling realtime isn't about handling more messages; it's about sending fewer, better-targeted messages. The most scalable realtime system is one where every message matters.