Buckets, Objects, and Access Control
Understanding Buckets
A bucket is a container for files. Think of it as a top-level folder with its own access rules and configuration.
Creating Buckets
Via Dashboard
- Go to Storage in Supabase Dashboard
- Click "New Bucket"
- Configure name, visibility, and restrictions
Via SQL
INSERT INTO storage.buckets (id, name, public, file_size_limit, allowed_mime_types)
VALUES (
'user-uploads',
'user-uploads',
false, -- Private bucket
5242880, -- 5MB limit
ARRAY['image/jpeg', 'image/png', 'image/webp', 'application/pdf']
);
Via JavaScript
const { data, error } = await supabase.storage.createBucket('avatars', {
public: true,
fileSizeLimit: 1024 * 1024, // 1MB
allowedMimeTypes: ['image/jpeg', 'image/png', 'image/webp']
})
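To confirm that a bucket was created with the settings you expect, you can fetch it back. A minimal sketch using the supabase-js getBucket() method (the snake_case field names reflect what the Storage API returns; treat them as an assumption if your client version differs):
// Fetch a single bucket to verify its configuration
const { data: bucket, error: fetchError } = await supabase.storage.getBucket('avatars')

if (fetchError) throw fetchError

// These fields mirror the options passed at creation time
console.log(bucket.public, bucket.file_size_limit, bucket.allowed_mime_types)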
Bucket Configuration
Public vs Private
-- Public: Anyone can read without authentication
INSERT INTO storage.buckets (id, name, public)
VALUES ('marketing-assets', 'marketing-assets', true);
-- Private: All access requires authentication + RLS
INSERT INTO storage.buckets (id, name, public)
VALUES ('user-documents', 'user-documents', false);
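How you read a file follows from this flag: public buckets serve objects from a plain URL, while private buckets require an authenticated download or a signed URL. A client-side sketch (bucket and object names are illustrative):
// Public bucket: anyone can build a URL, no auth required
const { data: pub } = supabase.storage
  .from('marketing-assets')
  .getPublicUrl('logos/logo.png')
console.log(pub.publicUrl)

// Private bucket: reads go through auth + RLS via download() or a signed URL
const { data: file, error } = await supabase.storage
  .from('user-documents')
  .download('contracts/2024.pdf')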
File Size Limits
-- Limit file size per bucket
UPDATE storage.buckets
SET file_size_limit = 10485760 -- 10MB
WHERE id = 'uploads';
MIME Type Restrictions
-- Only allow specific file types
UPDATE storage.buckets
SET allowed_mime_types = ARRAY[
'image/jpeg',
'image/png',
'image/gif',
'image/webp'
]
WHERE id = 'avatars';
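The same settings can also be changed from supabase-js rather than SQL. A sketch using updateBucket() (the values shown are illustrative):
// Update bucket configuration without touching SQL
const { data, error } = await supabase.storage.updateBucket('avatars', {
  public: true,
  fileSizeLimit: 2 * 1024 * 1024, // 2MB
  allowedMimeTypes: ['image/jpeg', 'image/png', 'image/gif', 'image/webp']
})

if (error) throw error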
Working with Objects
Objects are files within buckets, identified by their path.
Object Operations
// Upload
const { data, error } = await supabase.storage
.from('bucket-name')
.upload('folder/file.jpg', fileBlob)
// Download
const { data, error } = await supabase.storage
.from('bucket-name')
.download('folder/file.jpg')
// Get URL
const { data } = supabase.storage
.from('bucket-name')
.getPublicUrl('folder/file.jpg')
// Delete
const { error } = await supabase.storage
.from('bucket-name')
.remove(['folder/file.jpg'])
// Move/Rename
const { error } = await supabase.storage
.from('bucket-name')
.move('old/path.jpg', 'new/path.jpg')
// Copy
const { error } = await supabase.storage
.from('bucket-name')
.copy('source/file.jpg', 'dest/file.jpg')
// List
const { data, error } = await supabase.storage
.from('bucket-name')
.list('folder/', {
limit: 100,
offset: 0
})
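upload() also accepts an options object for cache behavior, content type, and overwriting. A sketch with illustrative values:
// Upload with explicit options
const { data, error } = await supabase.storage
  .from('bucket-name')
  .upload('folder/file.jpg', fileBlob, {
    cacheControl: '3600',      // Cache-Control max-age (seconds) on the served file
    contentType: 'image/jpeg', // Explicit MIME type instead of inferring from the blob
    upsert: true               // Overwrite if the object already exists
  })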
Access Control with RLS
Storage uses RLS policies on the storage.objects table for fine-grained access control.
Policy Structure
CREATE POLICY "policy_name"
ON storage.objects
FOR { SELECT | INSERT | UPDATE | DELETE | ALL }
USING (expression) -- For SELECT, UPDATE, DELETE
WITH CHECK (expression); -- For INSERT, UPDATE
Helper Column: bucket_id
-- bucket_id is a column on storage.objects (not a function)
bucket_id = 'avatars' -- Direct comparison against the object's bucket
Helper Function: storage.foldername()
-- Extract folder path components
storage.foldername(name)
-- 'users/123/documents/file.pdf' → ['users', '123', 'documents']
Helper Function: storage.filename()
-- Extract just the filename
storage.filename(name)
-- 'users/123/documents/file.pdf' → 'file.pdf'
Helper Function: storage.extension()
-- Extract file extension
storage.extension(name)
-- 'users/123/documents/file.pdf' → 'pdf'
Common Access Control Patterns
Pattern 1: User's Own Files
Users can only access files in their own folder:
-- Allow users to view their own files
CREATE POLICY "Users can view own files"
ON storage.objects FOR SELECT
USING (
bucket_id = 'user-uploads'
AND auth.uid()::text = (storage.foldername(name))[1]
);
-- Allow users to upload to their own folder
CREATE POLICY "Users can upload to own folder"
ON storage.objects FOR INSERT
WITH CHECK (
bucket_id = 'user-uploads'
AND auth.uid()::text = (storage.foldername(name))[1]
);
-- Allow users to delete their own files
CREATE POLICY "Users can delete own files"
ON storage.objects FOR DELETE
USING (
bucket_id = 'user-uploads'
AND auth.uid()::text = (storage.foldername(name))[1]
);
File structure:
user-uploads/
├── user-uuid-123/
│   └── ... (user 123's files)
├── user-uuid-456/
│   └── ... (user 456's files)
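For these policies to pass, the client must upload under a path that begins with the user's own UUID. A sketch that builds the path from the authenticated user (the file variable is assumed to come from a file input):
// Get the current user, then upload under their own folder
const { data: { user } } = await supabase.auth.getUser()

const { data, error } = await supabase.storage
  .from('user-uploads')
  .upload(`${user.id}/documents/report.pdf`, file)
// The path starts with auth.uid(), so the INSERT policy's WITH CHECK passes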
Pattern 2: Public Read, Owner Write
-- Anyone can view files in avatars bucket
CREATE POLICY "Public avatar access"
ON storage.objects FOR SELECT
USING (bucket_id = 'avatars');
-- Users can only upload their own avatar
CREATE POLICY "Users upload own avatar"
ON storage.objects FOR INSERT
WITH CHECK (
bucket_id = 'avatars'
AND auth.uid()::text = (storage.foldername(name))[1]
);
-- Users can only update their own avatar
CREATE POLICY "Users update own avatar"
ON storage.objects FOR UPDATE
USING (
bucket_id = 'avatars'
AND auth.uid()::text = (storage.foldername(name))[1]
)
WITH CHECK (
bucket_id = 'avatars'
AND auth.uid()::text = (storage.foldername(name))[1]
);
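On the client this usually pairs an upsert upload with getPublicUrl, since the bucket is publicly readable. A sketch (avatarFile and the profile.jpg naming convention are assumptions):
// Replace the user's avatar in place, then read its public URL
const { data: { user } } = await supabase.auth.getUser()

const { error } = await supabase.storage
  .from('avatars')
  .upload(`${user.id}/profile.jpg`, avatarFile, { upsert: true })

const { data } = supabase.storage
  .from('avatars')
  .getPublicUrl(`${user.id}/profile.jpg`)
console.log(data.publicUrl)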
Pattern 3: Team-Based Access
Files accessible by team members:
-- Check if user is team member
CREATE FUNCTION is_team_member(team_uuid text) RETURNS boolean AS $$
SELECT EXISTS (
SELECT 1 FROM team_members
WHERE team_id = team_uuid::uuid
AND user_id = auth.uid()
)
$$ LANGUAGE sql SECURITY DEFINER STABLE;
-- Team members can access team files
CREATE POLICY "Team members access files"
ON storage.objects FOR SELECT
USING (
bucket_id = 'team-files'
AND is_team_member((storage.foldername(name))[1])
);
-- Team members can upload to team folder
CREATE POLICY "Team members upload files"
ON storage.objects FOR INSERT
WITH CHECK (
bucket_id = 'team-files'
AND is_team_member((storage.foldername(name))[1])
);
File structure:
team-files/
├── team-uuid-abc/
│   └── ... (team ABC's files)
├── team-uuid-xyz/
│   └── ... (team XYZ's files)
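From the client, uploads go under the team's UUID, which is exactly the folder segment is_team_member() checks. A short sketch (teamId and file are assumed to come from your application state):
// Upload into the team's folder; the INSERT policy verifies membership
const { data, error } = await supabase.storage
  .from('team-files')
  .upload(`${teamId}/reports/q3-summary.pdf`, file)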
Pattern 4: Document Ownership via Database
Reference a documents table for access control:
-- Documents table tracks file ownership
CREATE TABLE documents (
id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
user_id uuid REFERENCES auth.users(id),
storage_path text NOT NULL, -- Path in storage bucket
title text,
created_at timestamptz DEFAULT now()
);
-- Storage policy checks documents table
CREATE POLICY "Access documents via ownership"
ON storage.objects FOR SELECT
USING (
bucket_id = 'documents'
AND EXISTS (
SELECT 1 FROM public.documents
WHERE storage_path = name
AND user_id = auth.uid()
)
);
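With this pattern the application must keep the stored object and its documents row in sync. One possible ordering, sketched below, uploads first and then records the path (it assumes a matching INSERT policy exists on the documents bucket; rollback on failure is omitted):
// 1. Upload the file under a path we will also record in the documents table
const { data: { user } } = await supabase.auth.getUser()
const storagePath = `${user.id}/${crypto.randomUUID()}.pdf`

const { error: uploadError } = await supabase.storage
  .from('documents')
  .upload(storagePath, file)
if (uploadError) throw uploadError

// 2. Record ownership so the SELECT policy above can find the row
const { error: insertError } = await supabase
  .from('documents')
  .insert({ user_id: user.id, storage_path: storagePath, title: 'Quarterly report' })
if (insertError) throw insertError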
Pattern 5: Signed URL Bypass
For sharing private files temporarily:
// Server-side: Generate signed URL
const { data, error } = await supabase.storage
.from('private-bucket')
.createSignedUrl('path/to/file.pdf', 60 * 60) // 1 hour
// Share the signed URL - works without auth
console.log(data.signedUrl)
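Several files can be signed in one call with createSignedUrls(), which takes an array of paths and a shared expiry in seconds. A short sketch:
// Sign multiple private files at once (expiry in seconds)
const { data, error } = await supabase.storage
  .from('private-bucket')
  .createSignedUrls(['reports/q1.pdf', 'reports/q2.pdf'], 60 * 60)

// Each entry pairs a path with its signed URL
data?.forEach(({ path, signedUrl }) => console.log(path, signedUrl))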
Bucket Organization Strategies
By User
user-content/
├── {user-uuid}/
│   ├── avatar.jpg
│   ├── documents/
│   │   ├── resume.pdf
│   │   └── cover-letter.pdf
│   └── images/
│       └── photo.jpg
By Feature
avatars/        -- User profile pictures
├── {user-uuid}/profile.jpg
documents/      -- User documents
├── {user-uuid}/
│   └── {document-id}.pdf
attachments/    -- Message attachments
├── {message-id}/
│   └── {filename}
By Access Level
public/         -- Public bucket
├── assets/
├── marketing/
private/        -- Authenticated bucket
├── {user-uuid}/
│   └── personal files
restricted/     -- Admin-only bucket
├── reports/
├── exports/
Key Takeaways
- Buckets are containers: With their own settings and visibility
- RLS secures objects: Same patterns as database tables
- Path structure enables policies: Use folders for organization
- Helper functions simplify policies: foldername(), filename(), extension()
- Signed URLs bypass RLS: For temporary access
- Database links enhance control: Reference your tables in policies
Next Steps
With buckets and access control understood, we'll explore image transformations and CDN features for optimal file delivery.
Storage policies follow the same logic as database policies. If you've mastered RLS for tables, you've already mastered it for files—just with different helper functions.

