Supabase Storage Architecture
The File Storage Challenge
Modern applications need more than just database storage. User avatars, document uploads, media files—all require a different kind of storage optimized for binary data and efficient delivery.
What is Supabase Storage?
Supabase Storage is an S3-compatible object storage system integrated with your Supabase project. It provides:
- File upload and download
- Access control via RLS policies
- Image transformations
- CDN delivery
- Direct database integration
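All of this is exposed through the supabase-js client. A minimal setup sketch (the project URL and anon key are placeholders):

import { createClient } from '@supabase/supabase-js'

// Placeholder credentials; substitute your project's URL and anon key.
const supabase = createClient('https://your-project.supabase.co', 'public-anon-key')

// Storage operations hang off supabase.storage; from() selects a bucket.
const avatars = supabase.storage.from('avatars')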
Architecture Overview
┌────────────────────────────────────────────────────────┐
│                   Client Application                   │
└────────────────────────────────────────────────────────┘
                            │
                      HTTP/REST API
                            │
                            ▼
┌────────────────────────────────────────────────────────┐
│                   Storage API Server                   │
│                                                        │
│  ┌─────────────────┐   ┌───────────────────────────┐   │
│  │  Authorization  │   │   Image Transformation    │   │
│  │  (RLS Policies) │   │  (resize, crop, format)   │   │
│  └─────────────────┘   └───────────────────────────┘   │
│                                                        │
└────────────────────────────────────────────────────────┘
                            │
                  ┌─────────┴─────────┐
                  │                   │
                  ▼                   ▼
         ┌─────────────────┐ ┌─────────────────┐
         │   PostgreSQL    │ │  S3-Compatible  │
         │  (Metadata in   │ │  Object Store   │
         │ storage schema) │ │ (File Content)  │
         └─────────────────┘ └─────────────────┘
The storage Schema
Supabase stores file metadata in PostgreSQL:
storage.buckets
-- Bucket definitions
CREATE TABLE storage.buckets (
  id text PRIMARY KEY,
  name text UNIQUE,
  owner uuid REFERENCES auth.users(id),
  public boolean DEFAULT false,
  avif_autodetection boolean DEFAULT false,
  file_size_limit bigint,
  allowed_mime_types text[],
  created_at timestamptz DEFAULT now(),
  updated_at timestamptz DEFAULT now()
);
storage.objects
-- File metadata
CREATE TABLE storage.objects (
  id uuid PRIMARY KEY,
  bucket_id text REFERENCES storage.buckets(id),
  name text, -- Path within bucket
  owner uuid REFERENCES auth.users(id),
  metadata jsonb,
  created_at timestamptz DEFAULT now(),
  updated_at timestamptz DEFAULT now(),
  last_accessed_at timestamptz,
  version text
);
-- Unique constraint: one path per bucket
CREATE UNIQUE INDEX ON storage.objects (bucket_id, name);
Key Insight
The actual file bytes are stored in S3-compatible storage, but metadata lives in PostgreSQL. This enables:
- RLS policies for access control (sketched after this list)
- SQL queries on file metadata
- Foreign key relationships
- Triggers on file events
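For example, an INSERT policy can scope uploads to a per-user folder. A minimal sketch, assuming an 'avatars' bucket; storage.foldername() is the path-splitting helper that ships with the storage schema:

-- Allow authenticated users to upload only into a folder named
-- after their own user id inside the 'avatars' bucket.
CREATE POLICY "users upload to own folder"
ON storage.objects FOR INSERT TO authenticated
WITH CHECK (
  bucket_id = 'avatars'
  AND auth.uid()::text = (storage.foldername(name))[1]
);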
Public vs Private Buckets
Public Buckets
Files accessible without authentication:
URL: https://project.supabase.co/storage/v1/object/public/avatars/image.jpg
Access: Anyone with the URL
Use case: Profile pictures, public images, marketing assets
Private Buckets
Files require authentication:
URL: https://project.supabase.co/storage/v1/object/authenticated/documents/report.pdf
Header: Authorization: Bearer <jwt>
Access: Controlled by RLS policies
Use case: User documents, private uploads, sensitive files
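Visibility is chosen when the bucket is created. A sketch with supabase-js (bucket names are placeholders; bucket creation typically happens from the dashboard or a privileged server-side client):

// Public bucket: anyone with the URL can read its files.
await supabase.storage.createBucket('avatars', { public: true })

// Private bucket: reads are gated by RLS policies.
await supabase.storage.createBucket('documents', { public: false })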
File Naming and Organization
Path Structure
Files are organized by path within buckets:
Bucket: avatars
├── user_123/
│   ├── profile.jpg
│   └── cover.jpg
├── user_456/
│   └── profile.png
└── defaults/
    └── placeholder.svg
Full path: avatars/user_123/profile.jpg
Naming Conventions
// Good: Organized, predictable paths
const path = `${userId}/documents/${documentId}/${filename}`
// e.g. 123/documents/456/report.pdf
// Good: Include file type for filtering
const path = `uploads/${category}/${uniqueId}.${extension}`
// uploads/images/abc123.jpg
// Bad: Flat structure becomes unmanageable
const path = filename
// report.pdf (no organization)
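A small helper keeps these conventions consistent. This is a hypothetical sketch; the sanitization rule is an assumption, included because object names become URL segments:

// Hypothetical helper implementing the userId/documents/documentId/filename
// convention above; replaces characters that are awkward in URLs.
function buildDocumentPath(userId, documentId, filename) {
  const safe = filename.replace(/[^\w.-]/g, '_')
  return `${userId}/documents/${documentId}/${safe}`
}

buildDocumentPath('123', '456', 'Q1 report.pdf')
// => '123/documents/456/Q1_report.pdf'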
Handling Duplicate Names
Options for uniqueness:
// Option 1: UUID prefix
const path = `${userId}/${crypto.randomUUID()}_${filename}`
// Option 2: Timestamp
const path = `${userId}/${Date.now()}_${filename}`
// Option 3: Content hash (Web Crypto; runs in modern browsers and Node)
const digest = await crypto.subtle.digest('SHA-256', await file.arrayBuffer())
const hash = Array.from(new Uint8Array(digest), (b) => b.toString(16).padStart(2, '0')).join('')
const path = `${userId}/${hash}.${extension}`
How Uploads Work
Upload Flow
1. Client initiates upload
   └── POST /storage/v1/object/{bucket}/{path}
       ├── Body: File content
       └── Headers: Authorization, Content-Type

2. Storage API authenticates
   ├── Validates JWT
   └── Extracts user info

3. RLS policy check
   ├── Evaluates INSERT policy on storage.objects
   └── Uses auth.uid() and bucket/path info

4. File stored
   ├── Bytes → S3-compatible store
   └── Metadata → PostgreSQL storage.objects

5. Response returned
   ├── Success: File URL/key
   └── Error: Policy violation or storage error
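The same flow as a raw HTTP request, which supabase-js wraps for you. A sketch: the project URL and path are placeholders, and jwt/fileBlob stand in for your session token and file input:

// Step 1 by hand: POST the bytes with auth and content type.
const res = await fetch(
  'https://your-project.supabase.co/storage/v1/object/avatars/user_123/profile.jpg',
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${jwt}`, // validated in step 2
      'Content-Type': 'image/jpeg',
    },
    body: fileBlob,
  }
)
// Steps 3-5 happen server-side; res.ok reflects the policy and storage outcome.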
Upload Options
// Basic upload
const { data, error } = await supabase.storage
  .from('avatars')
  .upload('path/to/file.jpg', fileBlob)

// With options
const { data, error } = await supabase.storage
  .from('avatars')
  .upload('path/to/file.jpg', fileBlob, {
    cacheControl: '3600',      // CDN cache lifetime, in seconds
    contentType: 'image/jpeg',
    upsert: false              // Fail if the file already exists
  })

// Upsert (replace if exists)
const { data, error } = await supabase.storage
  .from('avatars')
  .upload('path/to/file.jpg', fileBlob, {
    upsert: true
  })
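A failed RLS check surfaces as an error on the returned object rather than a thrown exception, so check it explicitly. A sketch:

const { data, error } = await supabase.storage
  .from('avatars')
  .upload('user_123/profile.jpg', fileBlob)

if (error) {
  // Policy violations and storage failures both land here.
  console.error('Upload failed:', error.message)
} else {
  console.log('Stored at:', data.path) // path within the bucket
}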
How Downloads Work
Download Flow
1. Client requests file
   ├── GET /storage/v1/object/public/{bucket}/{path}
   └── or: GET /storage/v1/object/authenticated/{bucket}/{path}

2. For authenticated buckets:
   ├── Validate JWT
   └── Check RLS SELECT policy

3. File retrieved and returned
   └── From S3 store or CDN cache
Download Methods
// Get public URL (for public buckets)
const { data } = supabase.storage
  .from('avatars')
  .getPublicUrl('path/to/file.jpg')
// data.publicUrl: https://project.supabase.co/storage/v1/object/public/avatars/path/to/file.jpg

// Download file (for private buckets)
const { data, error } = await supabase.storage
  .from('documents')
  .download('path/to/file.pdf')
// data: Blob

// Create signed URL (temporary access)
const { data, error } = await supabase.storage
  .from('documents')
  .createSignedUrl('path/to/file.pdf', 3600) // Expires in 1 hour
// data.signedUrl: https://...?token=...
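The image transformation service from the architecture diagram is reachable through the same URL helpers. A sketch (the dimensions are arbitrary, and transformations are available only on hosted plans that include them):

// Public URL for an on-the-fly 200x200 rendition of the original
const { data } = supabase.storage
  .from('avatars')
  .getPublicUrl('user_123/profile.jpg', {
    transform: { width: 200, height: 200, resize: 'cover' },
  })
// data.publicUrl points at the transformed image.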
Metadata and Search
Since metadata is in PostgreSQL, you can query it:
// List files in a folder
const { data, error } = await supabase.storage
  .from('documents')
  .list('user_123/reports', {
    limit: 100,
    offset: 0,
    sortBy: { column: 'created_at', order: 'desc' }
  })

// Result includes metadata
[
  {
    name: 'q1-report.pdf',
    id: 'uuid',
    created_at: '2024-01-15T10:00:00Z',
    metadata: { size: 1024000, mimetype: 'application/pdf' }
  }
]
Querying via SQL
-- Find large files
SELECT name, metadata->>'size' as size
FROM storage.objects
WHERE bucket_id = 'documents'
AND (metadata->>'size')::bigint > 10000000;
-- Count files per user
SELECT owner, COUNT(*) as file_count
FROM storage.objects
WHERE bucket_id = 'uploads'
GROUP BY owner;
CDN and Caching
How CDN Works
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Client │────→│ CDN │────→│ Origin │
│ │ │ (Edge) │ │(Storage)│
└─────────┘ └─────────┘ └─────────┘
First request:
Client → CDN (miss) → Origin → CDN (cache) → Client
Subsequent requests:
Client → CDN (hit) → Client
Cache Control
// Set cache duration on upload
await supabase.storage
  .from('avatars')
  .upload('profile.jpg', file, {
    cacheControl: '31536000', // 1 year
    contentType: 'image/jpeg'
  })
Cache Invalidation
When you update a file:
- Using upsert: true replaces the file
- The CDN may serve the cached version until the TTL expires
- Solution: Use versioned paths or unique filenames
// Versioned path for cache busting
const version = Date.now()
const path = `user_123/avatar_${version}.jpg`
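Uploading under that versioned path, as a sketch (file is a placeholder Blob; the year-long cacheControl is safe precisely because the path changes on every update):

await supabase.storage
  .from('avatars')
  .upload(path, file, { cacheControl: '31536000', contentType: 'image/jpeg' })

// Store the fresh public URL wherever the avatar is referenced, so
// clients stop requesting the old, still-cached path.
const { data } = supabase.storage.from('avatars').getPublicUrl(path)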
Key Takeaways
- Metadata in PostgreSQL: Files have queryable metadata
- RLS protects files: Same security model as tables
- Public vs Private: Choose based on access needs
- CDN improves performance: Automatic edge caching
- Organize with paths: Structure files like folders
- Query with SQL: Metadata is fully queryable
Looking Ahead
Understanding the architecture helps you design effective storage solutions. Next, we'll explore buckets, objects, and access control in detail.
Supabase Storage isn't just "S3 with auth"—it's deeply integrated with PostgreSQL. This integration lets you manage files with the same tools and patterns you use for data.

