$ cat /posts/supabase-backup-and-restore-database-management.md
[tags] Supabase

Supabase Backup and Restore: Database Management

drwxr-xr-x  2026-01-26  5 min read  0 views

A backup and restore strategy protects against data loss, enables disaster recovery, supports rollback to previous states, and keeps the business running. Without one, an accidental deletion, database corruption, ransomware attack, or infrastructure failure can mean permanent data loss and business disruption; with one, you recover quickly, preserve data integrity, limit financial losses, and stay compliant with data retention regulations. This guide covers Supabase backup types and retention policies, automated daily backups, manual database exports, Point-in-Time Recovery, restoring databases from backups, backing up Storage files, disaster recovery planning, automated backup verification, and backup cost management. Backup planning becomes essential when you store critical business data, handle user-generated content, run production databases, face compliance requirements, or scale applications that need data protection guarantees. Before proceeding, make sure you understand database basics, migrations, and security practices.

Supabase Backup Types

| Backup Type            | Frequency      | Retention                | Use Case           |
|------------------------|----------------|--------------------------|--------------------|
| Daily Automated        | Every 24 hours | 7-30 days (plan-based)   | Regular recovery   |
| Point-in-Time Recovery | Continuous     | Up to 7 days             | Precise rollback   |
| Manual Export          | On-demand      | Unlimited (self-managed) | Long-term archival |
| Storage Backup         | Manual         | Self-managed             | File recovery      |

Supabase provides automated daily backups on all paid plans, with retention varying by tier: the Free plan has no automated backups, Pro retains 7 daily backups, Team keeps 14, and Enterprise offers custom retention of 30 days or more. Point-in-Time Recovery (PITR), available on Pro plans and above, restores the database to any specific timestamp within the retention window, while manual exports let you download complete database dumps for long-term archival or migration.

Automated Daily Backups

bash · automated_backups.sh
# View automated backups in Supabase Dashboard
# Go to: Project Settings > Database > Backups

# Automated backups include:
# - Full database schema
# - All table data
# - Functions, triggers, policies
# - Extension configurations
# - NOT included: Storage files (separate backup needed)

# Backup retention by plan:
# Free: No automated backups
# Pro: 7 daily backups
# Team: 14 daily backups  
# Enterprise: 30+ days (customizable)

# Restore from automated backup:
# 1. Navigate to Project Settings > Database > Backups
# 2. Select backup from list
# 3. Click "Restore" button
# 4. Confirm restoration (this will overwrite current database)
# 5. Wait for restoration process (5-30 minutes depending on size)
# 6. Verify data integrity after restore

# IMPORTANT: Restoration overwrites current database
# Always export current state before restoring if uncertain

# Check backup status via API
curl -X GET 'https://api.supabase.com/v1/projects/{project-ref}/backups' \
  -H "Authorization: Bearer {access-token}" \
  -H "Content-Type: application/json"

# Response:
# {
#   "backups": [
#     {
#       "id": "backup-id",
#       "created_at": "2026-01-25T00:00:00Z",
#       "status": "completed",
#       "size_mb": 245
#     }
#   ]
# }
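
# A minimal sketch: flag stale backups by parsing the response with jq
# (assumes jq and GNU date are installed and that the newest backup is
# listed first, as in the example response above)
LATEST=$(curl -s 'https://api.supabase.com/v1/projects/{project-ref}/backups' \
  -H "Authorization: Bearer {access-token}" | jq -r '.backups[0].created_at')

AGE=$(( $(date +%s) - $(date -d "$LATEST" +%s) ))
if [ "$AGE" -gt 90000 ]; then    # older than ~25 hours
  echo "WARNING: latest backup is stale: $LATEST"
fi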

Manual Database Export

bash · manual_export.sh
# Export via Supabase CLI
# Install CLI first: npm install -g supabase

# 1. Login to Supabase
supabase login

# 2. Link to your project
supabase link --project-ref your-project-ref

# 3. Export database schema and data
supabase db dump -f backup.sql

# Export only schema (no data) - supabase db dump is schema-only by default
supabase db dump -f schema.sql

# Export only data (no schema)
supabase db dump --data-only -f data.sql

# Export specific tables with pg_dump's -t flag
pg_dump -h db.your-project-ref.supabase.co \
  -U postgres \
  -d postgres \
  -t posts -t comments \
  -f posts_backup.sql

# Export with pg_dump directly
pg_dump -h db.your-project-ref.supabase.co \
  -U postgres \
  -d postgres \
  -f backup_$(date +%Y%m%d).sql

# Export compressed backup
pg_dump -h db.your-project-ref.supabase.co \
  -U postgres \
  -d postgres \
  -F c \
  -f backup_$(date +%Y%m%d).dump

# Export specific schema only
pg_dump -h db.your-project-ref.supabase.co \
  -U postgres \
  -d postgres \
  -n public \
  -f public_schema.sql

# Automated backup script
#!/bin/bash
# backup.sh - Daily backup automation

DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/backups"
PROJECT_REF="your-project-ref"
BACKUP_FILE="${BACKUP_DIR}/supabase_backup_${DATE}.sql"

# Create backup directory if it does not exist
mkdir -p "$BACKUP_DIR"

# Export database (assumes the project was linked via `supabase link`)
supabase db dump -f "$BACKUP_FILE"

if [ $? -eq 0 ]; then
    echo "Backup successful: $BACKUP_FILE"

    # Compress backup
    gzip "$BACKUP_FILE"

    # Upload to cloud storage (optional)
    aws s3 cp "${BACKUP_FILE}.gz" s3://my-backups/supabase/

    # Delete backups older than 30 days
    find "$BACKUP_DIR" -name "supabase_backup_*.sql.gz" -mtime +30 -delete
else
    echo "Backup failed!"
    exit 1
fi

# Schedule with cron
# Add to crontab: crontab -e
# Run daily at 2 AM:
# 0 2 * * * /path/to/backup.sh >> /var/log/supabase-backup.log 2>&1
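
# A few quick integrity checks before trusting a backup file
# (a sketch; file names match the exports above)

# Plain SQL dump: non-empty and contains table definitions?
test -s backup.sql && grep -c 'CREATE TABLE' backup.sql

# Custom-format dump: list its contents without restoring anything
pg_restore --list backup_$(date +%Y%m%d).dump | head

# Compressed archives: verify gzip integrity
gzip -t /backups/supabase_backup_*.sql.gz && echo "Archives OK"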

Point-in-Time Recovery

bash · pitr.sh
# Point-in-Time Recovery (PITR) - Pro plan and above
# Enables restoring database to any specific timestamp

# Enable PITR in Dashboard:
# 1. Go to Project Settings > Database > Backups
# 2. Enable "Point-in-Time Recovery"
# 3. Select retention period (up to 7 days)

# PITR use cases:
# - Undo accidental data deletion
# - Recover from bad migration
# - Investigate data at specific time
# - Rollback after unauthorized changes

# Perform PITR restore:
# 1. Navigate to Project Settings > Database > Backups
# 2. Click "Point-in-Time Recovery" tab
# 3. Select date and time to restore to
# 4. Preview affected tables (if available)
# 5. Confirm restoration
# 6. Monitor progress (can take 10-60 minutes)

# PITR via CLI (Enterprise). Command availability varies by CLI
# version; check `supabase db --help` before relying on it.
supabase db restore \
  --project-ref your-project-ref \
  --recovery-time "2026-01-25 14:30:00+00"

# Example: Recover from accidental deletion
# Scenario: Deleted important posts at 2:45 PM
# Solution: Restore to 2:40 PM (5 minutes before deletion)

# 1. Note current time: 3:00 PM
# 2. Identify deletion time: 2:45 PM
# 3. Choose recovery point: 2:40 PM (safe margin)
# 4. Perform PITR to 2:40 PM
# 5. Verify data is restored
# 6. Export recovered data if needed

# PITR limitations:
# - Only available on Pro+ plans
# - Maximum 7-day retention window
# - Full database restore (cannot select specific tables)
# - Overwrites current database state
# - Storage files not included

# Best practices:
# - Enable PITR immediately on production databases
# - Test recovery process in staging environment
# - Document recovery procedures
# - Monitor PITR storage costs
# - Export critical data before risky operations
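
# A minimal sketch of the last practice above: snapshot data before a
# risky operation (uses flags shown in the manual export section)
SNAPSHOT="pre_migration_$(date +%Y%m%d_%H%M%S).sql"
supabase db dump --data-only -f "$SNAPSHOT"
echo "Snapshot saved: $SNAPSHOT"
# ...run the risky migration, restore from $SNAPSHOT if it goes wrong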

Database Restoration

bash · restore.sh
# Restore from SQL backup file

# Method 1: Supabase CLI reset, then replay a plain SQL dump with psql
# (db reset targets the local dev database; use --linked for the remote one)
supabase db reset --linked
psql -h db.your-project-ref.supabase.co \
  -U postgres \
  -d postgres \
  -f backup.sql

# Method 2: pg_restore for compressed backups
pg_restore -h db.your-project-ref.supabase.co \
  -U postgres \
  -d postgres \
  -c \
  backup.dump

# Method 3: Restore via Supabase dashboard
# 1. Reset database: Project Settings > Database > Reset
# 2. Run migrations: supabase db push
# 3. Import data: Use SQL Editor to run backup.sql

# Restore specific tables only
psql -h db.your-project-ref.supabase.co \
  -U postgres \
  -d postgres \
  -c "TRUNCATE TABLE posts CASCADE;"

psql -h db.your-project-ref.supabase.co \
  -U postgres \
  -d postgres \
  -f posts_backup.sql
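
# Alternative: pull a single table out of a custom-format dump with
# pg_restore's -t flag (a sketch using standard pg_restore options;
# --clean drops and recreates only the selected table)
pg_restore -h db.your-project-ref.supabase.co \
  -U postgres \
  -d postgres \
  --clean \
  -t posts \
  backup.dump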

# Restoration script with verification
#!/bin/bash
# restore.sh - Restore database with verification

BACKUP_FILE="$1"
PROJECT_REF="your-project-ref"
DB_HOST="db.${PROJECT_REF}.supabase.co"

if [ -z "$BACKUP_FILE" ]; then
    echo "Usage: ./restore.sh <backup_file>"
    exit 1
fi

if [ ! -f "$BACKUP_FILE" ]; then
    echo "Backup file not found: $BACKUP_FILE"
    exit 1
fi

echo "Starting restoration from $BACKUP_FILE"

# Create pre-restore backup
echo "Creating safety backup..."
supabase db dump -f pre_restore_backup.sql

# Confirm restoration
read -p "This will overwrite the database. Continue? (yes/no): " confirm
if [ "$confirm" != "yes" ]; then
    echo "Restoration cancelled."
    exit 0
fi

# Perform restoration
echo "Restoring database..."
psql -h "$DB_HOST" -U postgres -d postgres -f "$BACKUP_FILE"

if [ $? -eq 0 ]; then
    echo "Restoration successful!"

    # Verify approximate table row counts (planner statistics;
    # run ANALYZE first for fresher numbers)
    echo "Verifying restoration..."
    psql -h "$DB_HOST" -U postgres -d postgres -c "
        SELECT schemaname, relname AS tablename,
               n_live_tup AS approx_row_count
        FROM pg_stat_user_tables
        WHERE schemaname = 'public'
        ORDER BY relname;
    "
else
    echo "Restoration failed!"
    echo "Restoring from safety backup..."
    psql -h "$DB_HOST" -U postgres -d postgres -f pre_restore_backup.sql
    exit 1
fi

# Restore Edge Functions if needed
echo "Restoring Edge Functions..."
supabase functions deploy --project-ref $PROJECT_REF

echo "Restoration complete!"

Storage Files Backup

typescript · storage_backup.ts
// Backup Supabase Storage files
import { createClient } from '@supabase/supabase-js'
import fs from 'fs'
import path from 'path'

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE_KEY!)

async function backupStorageBucket(bucketName: string, backupDir: string) {
  console.log(`Backing up bucket: ${bucketName}`)
  
  // List files in the bucket root
  // NOTE: list() returns at most `limit` entries and does not recurse
  // into folders; paginate with offset and walk subfolders for large buckets
  const { data: files, error } = await supabase
    .storage
    .from(bucketName)
    .list('', {
      limit: 1000,
      offset: 0,
      sortBy: { column: 'name', order: 'asc' },
    })
  
  if (error) {
    console.error('Error listing files:', error)
    return
  }
  
  // Create backup directory
  const bucketBackupDir = path.join(backupDir, bucketName)
  if (!fs.existsSync(bucketBackupDir)) {
    fs.mkdirSync(bucketBackupDir, { recursive: true })
  }
  
  // Download each file
  for (const file of files) {
    console.log(`Downloading: ${file.name}`)
    
    const { data, error: downloadError } = await supabase
      .storage
      .from(bucketName)
      .download(file.name)
    
    if (downloadError) {
      console.error(`Error downloading ${file.name}:`, downloadError)
      continue
    }
    
    // Save file
    const filePath = path.join(bucketBackupDir, file.name)
    const arrayBuffer = await data.arrayBuffer()
    const buffer = Buffer.from(arrayBuffer)
    fs.writeFileSync(filePath, buffer)
    
    console.log(`Saved: ${filePath}`)
  }
  
  console.log(`Backup complete for bucket: ${bucketName}`)
}

// Backup all buckets
async function backupAllBuckets() {
  const backupDir = `./storage-backups/${new Date().toISOString().split('T')[0]}`
  
  // List all buckets
  const { data: buckets, error } = await supabase.storage.listBuckets()
  
  if (error) {
    console.error('Error listing buckets:', error)
    return
  }
  
  for (const bucket of buckets) {
    await backupStorageBucket(bucket.name, backupDir)
  }
  
  console.log('All storage backups complete!')
}

backupAllBuckets().catch(console.error)

// Restore storage files
async function restoreStorageBucket(bucketName: string, backupDir: string) {
  const bucketBackupDir = path.join(backupDir, bucketName)
  
  if (!fs.existsSync(bucketBackupDir)) {
    console.error(`Backup directory not found: ${bucketBackupDir}`)
    return
  }
  
  const files = fs.readdirSync(bucketBackupDir)
  
  for (const filename of files) {
    const filePath = path.join(bucketBackupDir, filename)
    const fileBuffer = fs.readFileSync(filePath)
    
    console.log(`Uploading: ${filename}`)
    
    const { error } = await supabase
      .storage
      .from(bucketName)
      .upload(filename, fileBuffer, {
        upsert: true,
      })
    
    if (error) {
      console.error(`Error uploading ${filename}:`, error)
    }
  }
  
  console.log(`Restore complete for bucket: ${bucketName}`)
}
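
Downloaded Storage backups are only as safe as the machine holding them. A minimal sketch (assuming the AWS CLI is configured, and using the storage-backups directory produced by the script above; the bucket name is an example) pushes each day's snapshot offsite:

bash · sync_storage_backup.sh
# Archive today's storage backup and copy it to S3
DATE=$(date +%Y-%m-%d)
tar -czf "storage-backup-${DATE}.tar.gz" "./storage-backups/${DATE}"
aws s3 cp "storage-backup-${DATE}.tar.gz" s3://my-backups/supabase-storage/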

Disaster Recovery Plan

markdown · disaster_recovery_plan.md
# Disaster Recovery Plan Template

## 1. Backup Strategy
- **Daily automated backups**: Enabled (7-day retention)
- **Point-in-Time Recovery**: Enabled (7-day window)
- **Manual exports**: Weekly full backups to S3
- **Storage backups**: Daily to cloud storage
- **Backup verification**: Monthly restoration tests

## 2. Recovery Time Objectives (RTO)
- **Critical data**: 1 hour
- **Full database**: 4 hours
- **Storage files**: 2 hours
- **Complete system**: 8 hours

## 3. Recovery Point Objectives (RPO)
- **Database**: 5 minutes (PITR)
- **Storage files**: 24 hours (daily backup)
- **Maximum acceptable data loss**: 1 hour

## 4. Incident Response Procedures

### Data Loss Incident
1. Identify scope of data loss
2. Determine last known good state
3. Choose recovery method:
   - PITR for recent issues (<7 days)
   - Automated backup for specific date
   - Manual export for older data
4. Create pre-restore backup
5. Execute restoration
6. Verify data integrity
7. Document incident

### Database Corruption
1. Immediately prevent further writes
2. Assess corruption extent
3. Restore from latest backup
4. Compare with PITR if available
5. Validate data consistency
6. Resume operations

### Complete System Failure
1. Create new Supabase project
2. Apply latest schema migrations
3. Restore database from backup
4. Restore storage files
5. Update DNS/environment variables
6. Verify all integrations
7. Test critical user flows

## 5. Contact Information
- **Primary DBA**: [email protected]
- **Backup DBA**: [email protected]
- **Supabase Support**: https://supabase.com/dashboard/support
- **On-call rotation**: [link to schedule]

## 6. Backup Locations
- **Primary**: Supabase automated backups
- **Secondary**: AWS S3 (s3://company-backups/supabase/)
- **Tertiary**: Local NAS (optional)

## 7. Testing Schedule
- **Monthly**: Test PITR restoration
- **Quarterly**: Full database restore test
- **Annually**: Complete disaster recovery drill

## 8. Backup Monitoring
```bash
#!/bin/bash
# check-backups.sh - Verify backup health

# Check that a backup file from today exists (matches the naming
# scheme used by backup.sh above)
BACKUP_DIR="/backups"
TODAY=$(date +%Y%m%d)

if ls "${BACKUP_DIR}"/supabase_backup_"${TODAY}"_*.sql.gz >/dev/null 2>&1; then
    echo "Backup status: OK"
else
    echo "WARNING: No backup from today found!"
    # Send alert
    curl -X POST https://hooks.slack.com/... \
      -d '{"text":"Supabase backup failed or outdated"}'
fi
```

## 9. Recovery Validation Checklist
- [ ] All tables present
- [ ] Row counts match expectations
- [ ] Foreign key relationships intact
- [ ] RLS policies active
- [ ] Functions and triggers working
- [ ] Storage files accessible
- [ ] Authentication functional
- [ ] API endpoints responding
- [ ] Real-time subscriptions working
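
A few checklist items can be verified directly; a sketch of quick queries to run against the restored database:

```bash
# RLS enabled on public tables? (relrowsecurity comes from pg_class)
psql -h db.your-project-ref.supabase.co -U postgres -d postgres -c "
  SELECT relname, relrowsecurity
  FROM pg_class
  WHERE relnamespace = 'public'::regnamespace AND relkind = 'r';"

# Did user-defined triggers survive the restore?
psql -h db.your-project-ref.supabase.co -U postgres -d postgres -c "
  SELECT tgname, tgrelid::regclass
  FROM pg_trigger
  WHERE NOT tgisinternal;"
```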

Backup Best Practices

  • Enable Automated Backups: Use the Pro plan or above to get 7+ days of automated daily backups
  • Enable Point-in-Time Recovery: Critical for production databases, allowing precise rollback
  • Test Restorations Regularly: Run monthly restoration tests to verify backup integrity and procedures
  • Store Offsite Backups: Export weekly backups to S3 or other cloud storage for disaster recovery
  • Backup Storage Files: Automated backups exclude Storage; create a separate file backup strategy
  • Document Recovery Procedures: Maintain a detailed disaster recovery plan with RTO and RPO targets
  • Monitor Backup Health: Automate backup verification and alert on failures or missing backups

Critical: The Supabase Free plan has NO automated backups; upgrade to Pro before running production databases. Test restoration procedures before disasters occur. Automated backups do NOT include Storage files, so implement a separate backup strategy. Review security practices.

Common Backup Issues

  • No Backups Available: Free plan lacks automated backups, upgrade to Pro or manually export regularly
  • Restoration Failed: Check database connection limits, ensure sufficient space, verify backup file integrity
  • PITR Unavailable: Only on Pro+ plans, enable in Project Settings > Database > Backups
  • Backup Size Too Large: Clean up old data or archive historical records; bulky tables' data can also be excluded at export time, as sketched below
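
When dumps grow too large, bulky append-only tables can be skipped at export time. A sketch using standard pg_dump flags (the table name is an example):

bash · slim_dump.sh
# Keep the audit_logs schema but skip its (large) row data
pg_dump -h db.your-project-ref.supabase.co \
  -U postgres \
  -d postgres \
  --exclude-table-data=audit_logs \
  -f slim_backup.sql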

Next Steps

  1. Implement Security: Secure backups with security best practices
  2. Test Regularly: Create testing procedures for backup validation
  3. Automate Workflows: Use migrations for schema versioning
  4. Monitor Performance: Track database performance

Conclusion

A solid backup and restore strategy protects against data loss and ensures business continuity: automated daily backups retained for 7-30 days depending on plan tier, Point-in-Time Recovery for precise rollback within a 7-day window, manual exports for long-term archives, tested restoration procedures, and a documented disaster recovery plan. Enable automated backups on Pro plans or above, configure PITR for production databases, export weekly to offsite storage such as S3, back up Storage files separately from the database, test restorations monthly to validate backup integrity, document recovery procedures with RTO and RPO targets, automate backup verification with health checks, and monitor backup costs to tune retention policies. Store backups offsite to avoid single points of failure, and verify that recovery actually works before a disaster forces the issue. Backup planning becomes essential when storing critical business data, handling user-generated content, managing production databases, meeting compliance requirements, or scaling applications. Continue with security practices, testing, and local development.
