Supabase Storage: File Upload and Management Guide

Supabase Storage provides S3-compatible object storage with a built-in CDN, image transformations, and Row Level Security integration, letting applications store and serve user uploads, avatars, documents, videos, and any other file type under the same security model as the database. This guide covers creating storage buckets, configuring public and private access, uploading files from JavaScript, downloading and serving files, applying on-the-fly image transformations (resize, crop, quality), writing Row Level Security policies for storage, handling file metadata and MIME types, tracking upload progress, and organizing files with folder structures. Unlike standalone services such as AWS S3 or Cloudinary, which require separate configuration and billing, Supabase Storage integrates with your project's authentication and database, providing unified access control and a simpler development workflow. Before proceeding, be comfortable with authentication and Row Level Security, as both apply directly to storage.
Storage Features
| Feature | Description | Use Case | Free Tier |
|---|---|---|---|
| S3 Compatible | Standard object storage API | Easy migration | 1GB storage |
| CDN Integration | Global content delivery | Fast file serving | 2GB bandwidth |
| Image Transform | On-the-fly resizing | Responsive images | Included |
| RLS Policies | Row-level access control | Secure uploads | Included |
| Public/Private | Bucket-level permissions | Flexible access | Included |
Creating Storage Buckets
Buckets are containers for organizing files, similar to folders or S3 buckets. Create buckets in Supabase Dashboard under Storage. Choose between public buckets (files accessible via URL) and private buckets (authentication required). Public buckets work for user avatars, blog images, and shared assets. Private buckets secure documents, user uploads, and sensitive files.
-- Create bucket via SQL (or use the Dashboard UI)
-- Create public bucket for avatars
insert into storage.buckets (id, name, public)
values ('avatars', 'avatars', true);
-- Create private bucket for documents
insert into storage.buckets (id, name, public)
values ('documents', 'documents', false);
-- Check existing buckets
select * from storage.buckets;
-- Dashboard method:
-- 1. Go to Storage in Supabase Dashboard
-- 2. Click 'New Bucket'
-- 3. Enter bucket name
-- 4. Toggle 'Public bucket' if needed
-- 5. Click 'Create bucket'
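Buckets can also be created programmatically with the JavaScript client. A minimal sketch, assuming the caller has sufficient privileges (typically a server-side client using the service-role key); the fileSizeLimit and allowedMimeTypes constraints shown are optional:
// Create a bucket from JavaScript (server-side; requires sufficient privileges)
const { data, error } = await supabase.storage.createBucket('avatars', {
  public: true,
  fileSizeLimit: 5 * 1024 * 1024, // optional cap in bytes (5 MB here)
  allowedMimeTypes: ['image/*']   // optional MIME allow-list
})
if (error) console.error('Bucket creation failed:', error.message)
Uploading Files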
// Upload file from JavaScript
import { supabase } from './supabaseClient'
// Upload from file input
async function uploadFile(file) {
  const fileName = `${Date.now()}-${file.name}`
  const { data, error } = await supabase.storage
    .from('avatars')
    .upload(fileName, file, {
      cacheControl: '3600',
      upsert: false
    })
  if (error) {
    console.error('Upload error:', error.message)
    return null
  }
  console.log('File uploaded:', data.path)
  return data
}
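A hypothetical wiring of uploadFile to a file input (the #file-input element id is an assumption for illustration):
// Hypothetical: trigger uploadFile when the user picks a file
document.querySelector('#file-input').addEventListener('change', (event) => {
  const file = event.target.files[0]
  if (file) uploadFile(file)
})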
// Upload with progress tracking
// Note: supabase-js's standard upload() does not expose progress events.
// Supabase's documented route for progress is resumable uploads over the
// TUS protocol, e.g. via the tus-js-client package. A sketch, assuming
// projectRef holds your Supabase project ref:
import * as tus from 'tus-js-client'
async function uploadWithProgress(file, onProgress) {
  const { data: { session } } = await supabase.auth.getSession()
  return new Promise((resolve, reject) => {
    const upload = new tus.Upload(file, {
      endpoint: `https://${projectRef}.supabase.co/storage/v1/upload/resumable`,
      headers: { authorization: `Bearer ${session.access_token}` },
      metadata: {
        bucketName: 'documents',
        objectName: `uploads/${Date.now()}-${file.name}`,
        contentType: file.type,
        cacheControl: '3600'
      },
      chunkSize: 6 * 1024 * 1024, // Supabase Storage requires 6MB chunks
      onProgress: (bytesUploaded, bytesTotal) => {
        onProgress((bytesUploaded / bytesTotal) * 100)
      },
      onError: reject,
      onSuccess: () => resolve(upload.url)
    })
    upload.start()
  })
}
// Upload from URL (server-side only)
async function uploadFromUrl(fileUrl, fileName) {
  const response = await fetch(fileUrl)
  const blob = await response.blob()
  const { data, error } = await supabase.storage
    .from('avatars')
    .upload(fileName, blob, {
      contentType: blob.type // set the MIME type explicitly for blob uploads
    })
  return { data, error }
}
// Replace existing file (upsert)
async function replaceFile(file, existingPath) {
  const { data, error } = await supabase.storage
    .from('avatars')
    .upload(existingPath, file, {
      cacheControl: '3600',
      upsert: true // Overwrites the existing file
    })
  return { data, error }
}
Downloading and Serving Files
// Get public URL (public buckets only)
const { data } = supabase.storage
  .from('avatars')
  .getPublicUrl('user-123.jpg')
console.log('Public URL:', data.publicUrl)
// Use in <img> tag directly
// Download file (works for private buckets)
const { data, error } = await supabase.storage
  .from('documents')
  .download('invoice.pdf')
if (data) {
  // data is a Blob; create a temporary object URL to trigger a download
  const url = URL.createObjectURL(data)
  const a = document.createElement('a')
  a.href = url
  a.download = 'invoice.pdf'
  a.click()
}
// Get signed URL (temporary access to private files)
const { data, error } = await supabase.storage
  .from('documents')
  .createSignedUrl('private-doc.pdf', 60) // Valid for 60 seconds
if (data) {
  console.log('Signed URL:', data.signedUrl)
  // Use this URL to display or download the file
}
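When many private files need temporary URLs at once, supabase-js also provides a batch variant; a minimal sketch:
// Batch signed URLs for multiple private files (1 hour expiry)
const { data, error } = await supabase.storage
  .from('documents')
  .createSignedUrls(['invoice.pdf', 'reports/q1.pdf'], 3600)
// data is an array of entries, each pairing a path with its signed URL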
// List files in bucket/folder
const { data, error } = await supabase.storage
  .from('documents')
  .list('user-123/uploads', {
    limit: 100,
    offset: 0,
    sortBy: { column: 'name', order: 'asc' }
  })
console.log('Files:', data)
Image Transformations
// On-the-fly image transformations (CDN-powered)
// Resize image
const { data } = supabase.storage
  .from('avatars')
  .getPublicUrl('profile.jpg', {
    transform: {
      width: 200,
      height: 200
    }
  })
// Result: https://project.supabase.co/storage/v1/render/image/public/avatars/profile.jpg?width=200&height=200
// Multiple transformations
const { data } = supabase.storage
  .from('avatars')
  .getPublicUrl('photo.jpg', {
    transform: {
      width: 800,
      height: 600,
      resize: 'cover', // 'contain' | 'cover' | 'fill'
      quality: 80,     // 20-100, default 80
      format: 'origin' // keep the original format; WebP is otherwise served automatically to supporting browsers
    }
  })
// Responsive images with different sizes
function getResponsiveUrls(path) {
  const sizes = [400, 800, 1200]
  return sizes.map(width => {
    const { data } = supabase.storage
      .from('images')
      .getPublicUrl(path, {
        transform: { width, quality: 85 }
      })
    return { width, url: data.publicUrl }
  })
}
// Use in srcset
const urls = getResponsiveUrls('hero.jpg')
const srcset = urls.map(u => `${u.url} ${u.width}w`).join(', ')
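A hypothetical usage of that srcset string, assuming an existing <img id="hero"> element:
// Apply the generated srcset to an image element (element id assumed)
const img = document.querySelector('#hero')
img.srcset = srcset
img.sizes = '(max-width: 800px) 100vw, 800px'
img.src = urls[urls.length - 1].url // largest size as fallback
Storage RLS Policies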
-- RLS is enabled on storage.objects by default in Supabase,
-- so you only need to add policies for the operations you allow.
-- Users can upload to their own folder
create policy "Users can upload own files"
on storage.objects for insert
with check (
  bucket_id = 'avatars' and
  (storage.foldername(name))[1] = auth.uid()::text
);
-- Users can view their own files
create policy "Users can view own files"
on storage.objects for select
using (
  bucket_id = 'avatars' and
  (storage.foldername(name))[1] = auth.uid()::text
);
-- Users can update their own files
create policy "Users can update own files"
on storage.objects for update
using (
  bucket_id = 'avatars' and
  (storage.foldername(name))[1] = auth.uid()::text
);
-- Users can delete their own files
create policy "Users can delete own files"
on storage.objects for delete
using (
  bucket_id = 'avatars' and
  (storage.foldername(name))[1] = auth.uid()::text
);
-- Public read access for avatars bucket
create policy "Public avatars are viewable"
on storage.objects for select
using ( bucket_id = 'avatars' );
-- File structure example:
-- avatars/
--   user-123/
--     profile.jpg
--     banner.jpg
--   user-456/
--     avatar.png
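For these policies to pass, client-side uploads must target the caller's own folder. A minimal sketch, assuming avatarFile holds a File selected by the user:
// Upload under the authenticated user's ID so the
// (storage.foldername(name))[1] = auth.uid() checks succeed
const { data: { user } } = await supabase.auth.getUser()
const { data, error } = await supabase.storage
  .from('avatars')
  .upload(`${user.id}/profile.jpg`, avatarFile, { upsert: true })
Deleting Files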
// Delete single file
const { data, error } = await supabase.storage
  .from('avatars')
  .remove(['user-123/profile.jpg'])
if (error) {
  console.error('Delete error:', error.message)
} else {
  console.log('File deleted')
}
// Delete multiple files
const filesToDelete = [
  'user-123/old-photo.jpg',
  'user-123/temp.png',
  'user-123/draft.pdf'
]
const { data, error } = await supabase.storage
  .from('documents')
  .remove(filesToDelete)
// Delete all files in folder
const { data: files } = await supabase.storage
  .from('documents')
  .list('user-123/temp')
// Guard against a null listing before building paths
const filePaths = (files ?? []).map(file => `user-123/temp/${file.name}`)
const { error } = await supabase.storage
  .from('documents')
  .remove(filePaths)
Storage Best Practices
- Organize with Folders: Use user IDs or logical groupings in paths (e.g., avatars/user-123/profile.jpg)
- Unique Filenames: Prefix files with timestamps or UUIDs to prevent conflicts and enable caching
- Validate File Types: Check MIME types and file extensions before uploading to block malicious files (see the sketch after this list)
- Limit File Sizes: Enforce maximum file sizes on both client and server to prevent abuse
- Implement RLS Policies: Always set storage RLS policies for private buckets
- Use Image Transformations: Serve optimized images with appropriate sizes and formats
- Set Cache Headers: Use cacheControl parameter for better CDN performance
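A minimal client-side sketch of the type and size checks above; the allow-list and 5 MB cap are assumptions to adapt per application, and bucket-level allowedMimeTypes and fileSizeLimit should back them up on the server:
// Assumed limits for illustration; enforce server-side too
const ALLOWED_TYPES = ['image/jpeg', 'image/png', 'image/webp']
const MAX_SIZE = 5 * 1024 * 1024 // 5 MB

function validateFile(file) {
  if (!ALLOWED_TYPES.includes(file.type)) {
    return { valid: false, reason: `Unsupported type: ${file.type}` }
  }
  if (file.size > MAX_SIZE) {
    return { valid: false, reason: 'File exceeds the 5 MB limit' }
  }
  return { valid: true }
}

// Usage before calling upload (file from an input)
const check = validateFile(file)
if (!check.valid) {
  console.error(check.reason)
}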
Next Steps
- Build Upload UI: Create React image upload components with preview and progress
- Add Real-time Updates: Use Supabase Realtime to notify users of upload completion
- Secure Storage: Implement comprehensive RLS policies for file access
- Build Complete Apps: Integrate storage with Next.js applications
Conclusion
Supabase Storage provides enterprise-grade file management with S3 compatibility, CDN integration, and on-the-fly image transformations, all unified with your authentication and database security model. By using public buckets for shared assets and private buckets with RLS policies for user files, you maintain fine-grained access control that matches your application's security requirements. Image transformations eliminate the need for a separate image processing pipeline, enabling responsive images optimized for any device. The JavaScript client makes file operations straightforward with simple upload, download, and delete methods, while automatic CDN delivery ensures fast file serving worldwide. Always organize files with logical folder structures, implement proper RLS policies for private buckets, validate uploads on both client and server, and leverage image transformations for optimal performance. With storage mastered, you're equipped to build feature-rich applications handling user uploads, avatars, documents, and media. Continue with practical upload implementations and real-time features for production applications.