$ cat /posts/supabase-monitoring-performance-metrics-and-alerts.md
[tags]Supabase

Supabase Monitoring: Performance Metrics and Alerts

drwxr-xr-x  2026-01-27  5 min  0 views

Monitoring Supabase applications combines performance metrics, query analytics, database health checks, and alerting so you can track system health, detect issues before users report them, analyze bottlenecks, and keep production reliable. Useful observability spans query execution times, connection pool metrics, API response times, error rates, and real-time alerts. Unlike basic console logging, which offers limited insight, production monitoring requires structured metrics collection: database performance tracking, query analytics that surface slow queries, API endpoint monitoring that measures response times, resource utilization tracking for CPU and memory, error tracking with rate calculations, and alerting that notifies teams the moment a threshold is breached. This guide covers monitoring fundamentals and metric types, the built-in analytics in the Supabase Dashboard, database performance monitoring with query statistics, API usage and response-time tracking, custom metrics collection, performance alerts with webhooks, PostgreSQL statistics views, monitoring dashboards, and integration with external observability tools. Before starting, review Edge Functions, Next.js integration, and performance optimization.

Monitoring Metrics Types

| Metric Type | Purpose | Tools |
| --- | --- | --- |
| Database Queries | Track slow queries, execution times | pg_stat_statements |
| Connection Pool | Monitor active connections, pool size | pg_stat_activity |
| API Performance | Response times, request counts | Supabase Dashboard |
| Error Rates | Track failures, exceptions | Custom logging |
| Resource Usage | CPU, memory, disk usage | Dashboard metrics |
| Real-time Channels | Active subscriptions, message rates | Realtime metrics |
| Storage Metrics | File uploads, bandwidth usage | Storage analytics |
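
Most of these can also be inspected straight from SQL. As a small example of the connection-pool row above, a minimal sketch that breaks down current connections by state using pg_stat_activity:

-- Connections grouped by state (active, idle, idle in transaction, ...)
select
  state,
  count(*) as connections
from pg_stat_activity
group by state
order by connections desc;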

Supabase Dashboard Analytics

The Supabase Dashboard ships with built-in analytics for database queries, API usage, authentication, and resource utilization, with visualizations of performance trends, request volumes, error rates, and overall system health. It is often enough to identify issues quickly without external tools, and it is the first place to look when you need to spot bottlenecks, track usage patterns, check error rates, or make data-driven optimization decisions. The key areas (a SQL approximation of the database-level numbers follows the list):

  • Database Performance: View query execution times, slow query logs, connection counts, and table sizes
  • API Usage: Track requests per endpoint, response times, status codes distribution, and traffic patterns
  • Authentication: Monitor user signups, login success rates, authentication providers usage, and session counts
  • Storage Metrics: Analyze file uploads, bandwidth consumption, storage sizes by bucket
  • Real-time Channels: Track active subscriptions, message delivery rates, connection counts
  • Error Tracking: View error rates, failed requests, exception types with timestamps
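
The Dashboard aggregates statistics similar to what PostgreSQL itself exposes. A rough SQL approximation of the database-level numbers (active backends, commits, cache reads) via pg_stat_database:

-- Database-wide activity and cache statistics for the current database
select
  numbackends as active_backends,
  xact_commit as commits,
  xact_rollback as rollbacks,
  blks_hit,
  blks_read,
  round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) as cache_hit_pct
from pg_stat_database
where datname = current_database();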

Database Query Analytics

query_analytics.sql
-- Enable pg_stat_statements extension for query tracking
create extension if not exists pg_stat_statements;

-- View slowest queries by average execution time
select
  substring(query, 1, 100) as short_query,
  calls,
  round(total_exec_time::numeric, 2) as total_time_ms,
  round(mean_exec_time::numeric, 2) as avg_time_ms,
  round(max_exec_time::numeric, 2) as max_time_ms,
  round(stddev_exec_time::numeric, 2) as stddev_ms
from pg_stat_statements
where mean_exec_time > 100  -- Queries taking over 100ms average
order by mean_exec_time desc
limit 20;

-- Queries consuming most total time
select
  substring(query, 1, 100) as short_query,
  calls,
  round(total_exec_time::numeric, 2) as total_ms,
  round((100 * total_exec_time / sum(total_exec_time) over ())::numeric, 2) as pct_total
from pg_stat_statements
order by total_exec_time desc
limit 20;

-- View table access patterns
select
  schemaname,
  relname as table_name,
  seq_scan,       -- sequential scans (consider adding indexes)
  seq_tup_read,
  idx_scan,       -- index scans (good performance)
  idx_tup_fetch,
  n_tup_ins + n_tup_upd + n_tup_del as modifications
from pg_stat_user_tables
order by seq_scan desc;

-- View index usage statistics
select
  schemaname,
  relname as table_name,
  indexrelname as index_name,
  idx_scan,
  idx_tup_read,
  idx_tup_fetch
from pg_stat_user_indexes
order by idx_scan desc;

-- Find unused indexes (candidates for removal)
select
  schemaname,
  relname as table_name,
  indexrelname as index_name,
  pg_size_pretty(pg_relation_size(indexrelid)) as index_size
from pg_stat_user_indexes
where idx_scan = 0
  -- skip indexes backing constraints (primary keys, unique): needed even if unscanned
  and indexrelid not in (select conindid from pg_constraint where conindid <> 0)
order by pg_relation_size(indexrelid) desc;

-- Monitor cache hit ratio (should be > 90%)
select
  sum(heap_blks_read) as heap_read,
  sum(heap_blks_hit) as heap_hit,
  round(100.0 * sum(heap_blks_hit) / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0), 2) as cache_hit_ratio
from pg_statio_user_tables;

-- View active queries and their duration
select
  pid,
  now() - query_start as duration,
  state,
  substring(query, 1, 100) as query
from pg_stat_activity
where state != 'idle'
  and query not like '%pg_stat_activity%'
order by duration desc;
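
-- If a runaway query surfaces above, it can be stopped by pid (privileges
-- permitting). pg_cancel_backend interrupts only the current query, while
-- pg_terminate_backend closes the whole connection. The 5-minute cutoff is
-- an assumed threshold; tune it to your workload.
select pg_cancel_backend(pid)
from pg_stat_activity
where state = 'active'
  and now() - query_start > interval '5 minutes'
  and query not ilike '%pg_stat_activity%';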

-- Create monitoring view for key metrics
create or replace view database_health_metrics as
select
  'Active Connections' as metric,
  count(*)::text as value,
  'connections' as unit
from pg_stat_activity
where state = 'active'

union all

select
  'Cache Hit Ratio',
  round(100.0 * sum(heap_blks_hit) / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0), 2)::text,
  '%'
from pg_statio_user_tables

union all

select
  'Database Size',
  pg_size_pretty(pg_database_size(current_database())),
  ''  -- pg_size_pretty already includes the unit in the value

union all

select
  'Slow Queries (>1s)',
  count(*)::text,
  'queries'
from pg_stat_statements
where mean_exec_time > 1000;

-- Query the monitoring view
select * from database_health_metrics;
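
-- pg_stat_statements accumulates counters since the last reset. After an
-- optimization pass, clear them so before/after comparisons are meaningful:
select pg_stat_statements_reset();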

API Performance Monitoring

api_monitoring.ts
// middleware.ts - Track API metrics
import { createMiddlewareClient } from '@supabase/auth-helpers-nextjs'
import { createClient } from '@supabase/supabase-js'
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'

interface ApiMetric {
  endpoint: string
  method: string
  status: number
  duration_ms: number
  user_id?: string
  timestamp: string
}

export async function middleware(request: NextRequest) {
  const start = Date.now()
  const res = NextResponse.next()
  const supabase = createMiddlewareClient({ req: request, res })

  try {
    // Get the user session (if any) so metrics can be attributed to a user
    const { data: { session } } = await supabase.auth.getSession()

    const duration = Date.now() - start

    // Note: middleware runs before route handlers, so res.status reflects
    // the middleware response, not the final handler status
    const metric: ApiMetric = {
      endpoint: request.nextUrl.pathname,
      method: request.method,
      status: res.status,
      duration_ms: duration,
      user_id: session?.user?.id,
      timestamp: new Date().toISOString(),
    }

    // Store metrics asynchronously (fire-and-forget, don't await)
    void logApiMetric(metric)

    // Add performance headers for client-side debugging
    res.headers.set('X-Response-Time', `${duration}ms`)
    res.headers.set('X-Request-ID', crypto.randomUUID())

    return res
  } catch (error) {
    console.error('Middleware error:', error)
    return res
  }
}

async function logApiMetric(metric: ApiMetric) {
  // Use the service-role key so inserts bypass RLS on the metrics table
  const supabase = createClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY!
  )

  // supabase-js returns errors rather than throwing, so check explicitly
  const { error } = await supabase.from('api_metrics').insert(metric)
  if (error) {
    console.error('Failed to log metric:', error)
  }
}

export const config = {
  matcher: ['/api/:path*'],
}

-- Create API metrics table
create table api_metrics (
  id uuid default gen_random_uuid() primary key,
  endpoint text not null,
  method text not null,
  status int not null,
  duration_ms numeric not null,
  user_id uuid references auth.users(id),
  timestamp timestamp with time zone default now()
);

create index idx_api_metrics_timestamp on api_metrics(timestamp desc);
create index idx_api_metrics_endpoint on api_metrics(endpoint);
create index idx_api_metrics_status on api_metrics(status);

-- Performance analytics view
create view api_performance_analytics as
select
  endpoint,
  method,
  count(*) as request_count,
  round(avg(duration_ms), 2) as avg_duration_ms,
  round((percentile_cont(0.5) within group (order by duration_ms))::numeric, 2) as p50_ms,
  round((percentile_cont(0.95) within group (order by duration_ms))::numeric, 2) as p95_ms,
  round((percentile_cont(0.99) within group (order by duration_ms))::numeric, 2) as p99_ms,
  max(duration_ms) as max_duration_ms,
  count(*) filter (where status >= 500) as server_errors,
  count(*) filter (where status >= 400 and status < 500) as client_errors,
  round(100.0 * count(*) filter (where status < 400) / count(*), 2) as success_rate
from api_metrics
where timestamp > now() - interval '24 hours'
group by endpoint, method
order by request_count desc;

-- Error rate tracking
create view api_error_rates as
select
  date_trunc('hour', timestamp) as hour,
  endpoint,
  count(*) as total_requests,
  count(*) filter (where status >= 500) as server_errors,
  count(*) filter (where status >= 400 and status < 500) as client_errors,
  round(100.0 * count(*) filter (where status >= 400) / count(*), 2) as error_rate
from api_metrics
where timestamp > now() - interval '7 days'
group by hour, endpoint
having count(*) filter (where status >= 400) > 0
order by hour desc, error_rate desc;
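
The middleware writes these rows with the service-role key, which bypasses row level security, but nothing above restricts reads. A minimal lockdown sketch, assuming a custom admin claim in your JWTs (the claim name is an assumption; adapt it to your own auth model):

-- Block all access by default
alter table api_metrics enable row level security;

-- Service-role inserts from the middleware bypass RLS; expose reads only
-- to admins ('role' = 'admin' is a hypothetical custom claim)
create policy "admins can read metrics"
  on api_metrics for select
  using ((auth.jwt() ->> 'role') = 'admin');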

// React component for metrics dashboard
import { useEffect, useState } from 'react'
import { createClient } from '@/lib/supabase/client'

interface ApiPerformance {
  endpoint: string
  method: string
  request_count: number
  avg_duration_ms: number
  p95_ms: number
  success_rate: number
}

export function ApiMetricsDashboard() {
  const [metrics, setMetrics] = useState<ApiPerformance[]>([])
  const [loading, setLoading] = useState(true)
  const supabase = createClient()

  useEffect(() => {
    loadMetrics()
    const interval = setInterval(loadMetrics, 60000) // Refresh every minute
    return () => clearInterval(interval)
  }, [])

  async function loadMetrics() {
    const { data, error } = await supabase
      .from('api_performance_analytics')
      .select('*')
      .limit(10)

    if (error) {
      console.error('Failed to load metrics:', error)
    } else if (data) {
      setMetrics(data)
    }
    setLoading(false)
  }

  if (loading) return <div>Loading metrics...</div>

  return (
    <div className="space-y-6">
      <h2 className="text-2xl font-bold">API Performance Metrics</h2>
      
      <div className="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-4">
        {metrics.map((metric, index) => (
          <div key={index} className="bg-white p-6 rounded-lg shadow">
            <h3 className="text-sm font-medium text-gray-500 mb-2">
              {metric.method} {metric.endpoint}
            </h3>
            <div className="space-y-2">
              <div className="flex justify-between">
                <span className="text-sm">Requests:</span>
                <span className="font-semibold">{metric.request_count}</span>
              </div>
              <div className="flex justify-between">
                <span className="text-sm">Avg Time:</span>
                <span className="font-semibold">{metric.avg_duration_ms}ms</span>
              </div>
              <div className="flex justify-between">
                <span className="text-sm">P95:</span>
                <span className="font-semibold">{metric.p95_ms}ms</span>
              </div>
              <div className="flex justify-between">
                <span className="text-sm">Success Rate:</span>
                <span className={`font-semibold ${
                  metric.success_rate >= 99 ? 'text-green-600' : 
                  metric.success_rate >= 95 ? 'text-yellow-600' : 'text-red-600'
                }`}>
                  {metric.success_rate}%
                </span>
              </div>
            </div>
          </div>
        ))}
      </div>
    </div>
  )
}

Setting Up Performance Alerts

alerts_setup.sql
-- Create alerts table
create table performance_alerts (
  id uuid default gen_random_uuid() primary key,
  alert_type text not null,
  severity text not null check (severity in ('info', 'warning', 'critical')),
  message text not null,
  metric_value numeric,
  threshold numeric,
  triggered_at timestamp with time zone default now(),
  resolved_at timestamp with time zone,
  notified boolean default false
);

create index idx_alerts_triggered on performance_alerts(triggered_at desc);
create index idx_alerts_resolved on performance_alerts(resolved_at) where resolved_at is null;

-- Function to check slow queries
create or replace function check_slow_queries()
returns void as $$
declare
  slow_count int;
begin
  select count(*) into slow_count
  from pg_stat_statements
  where mean_exec_time > 1000
    and calls > 10;

  if slow_count > 5 then
    insert into performance_alerts (alert_type, severity, message, metric_value, threshold)
    values (
      'slow_queries',
      'warning',
      format('Found %s queries with average execution time over 1 second', slow_count),
      slow_count,
      5
    );
  end if;
end;
$$ language plpgsql;

-- Function to check error rates
create or replace function check_error_rate()
returns void as $$
declare
  error_rate numeric;
  total_requests int;
  error_requests int;
begin
  select
    count(*),
    count(*) filter (where status >= 400)
  into total_requests, error_requests
  from api_metrics
  where timestamp > now() - interval '5 minutes';

  if total_requests > 0 then
    error_rate := (error_requests::numeric / total_requests) * 100;
    
    if error_rate > 5 then
      insert into performance_alerts (alert_type, severity, message, metric_value, threshold)
      values (
        'high_error_rate',
        case when error_rate > 10 then 'critical' else 'warning' end,
        format('Error rate is %s%% in the last 5 minutes', round(error_rate, 2)),
        error_rate,
        5
      );
    end if;
  end if;
end;
$$ language plpgsql;

-- Function to check connection count
create or replace function check_connection_count()
returns void as $$
declare
  conn_count int;
  max_connections int;
begin
  select count(*) into conn_count
  from pg_stat_activity;
  
  select setting::int into max_connections
  from pg_settings
  where name = 'max_connections';
  
  if conn_count > (max_connections * 0.8) then
    insert into performance_alerts (alert_type, severity, message, metric_value, threshold)
    values (
      'high_connections',
      'critical',
      format('Database connections at %s (max: %s)', conn_count, max_connections),
      conn_count,
      max_connections * 0.8
    );
  end if;
end;
$$ language plpgsql;

-- Webhook notification function (net.http_post requires the pg_net extension)
create extension if not exists pg_net;
create or replace function send_alert_webhook(alert_id uuid)
returns void as $$
declare
  alert_data record;
  webhook_url text := current_setting('app.webhook_url', true);
begin
  select * into alert_data
  from performance_alerts
  where id = alert_id;
  
  if webhook_url is not null then
    perform
      net.http_post(
        url := webhook_url,
        headers := jsonb_build_object('Content-Type', 'application/json'),
        body := jsonb_build_object(
          'alert_type', alert_data.alert_type,
          'severity', alert_data.severity,
          'message', alert_data.message,
          'metric_value', alert_data.metric_value,
          'threshold', alert_data.threshold,
          'triggered_at', alert_data.triggered_at
        )
      );
    
    update performance_alerts
    set notified = true
    where id = alert_id;
  end if;
end;
$$ language plpgsql security definer;

-- Trigger to send webhook on new alert
create or replace function notify_alert_trigger()
returns trigger as $$
begin
  perform send_alert_webhook(new.id);
  return new;
end;
$$ language plpgsql;

create trigger alert_notification
  after insert on performance_alerts
  for each row
  execute function notify_alert_trigger();

-- Schedule alert checks with pg_cron (enable the extension first)
create extension if not exists pg_cron;
select cron.schedule(
  'check-slow-queries',
  '*/5 * * * *',  -- Every 5 minutes
  'select check_slow_queries();'
);

select cron.schedule(
  'check-error-rate',
  '*/5 * * * *',  -- Every 5 minutes
  'select check_error_rate();'
);

select cron.schedule(
  'check-connections',
  '* * * * *',  -- Every minute
  'select check_connection_count();'
);
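
One gap in the listing above: send_alert_webhook reads current_setting('app.webhook_url', true), but that setting is never defined. A sketch for persisting it at the database level, assuming the default Supabase database name postgres and a placeholder endpoint (use your real Slack or PagerDuty URL):

-- Applies to new sessions, including pg_cron job sessions
alter database postgres set app.webhook_url = 'https://example.com/webhook';

-- Verify the setting is visible
select current_setting('app.webhook_url', true);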

Custom Metrics Collection

custom_metrics.ts
// lib/metrics.ts - Custom metrics collection
import { createClient } from '@/lib/supabase/client'

interface CustomMetric {
  metric_name: string
  metric_value: number
  tags?: Record<string, string>
  timestamp?: string
}

class MetricsCollector {
  private supabase = createClient()
  private batchSize = 50
  private queue: CustomMetric[] = []
  private flushInterval: ReturnType<typeof setInterval> | null = null

  constructor() {
    // Flush metrics every 10 seconds
    this.flushInterval = setInterval(() => this.flush(), 10000)
  }

  async track(name: string, value: number, tags?: Record<string, string>) {
    const metric: CustomMetric = {
      metric_name: name,
      metric_value: value,
      tags: tags || {},
      timestamp: new Date().toISOString(),
    }

    this.queue.push(metric)

    if (this.queue.length >= this.batchSize) {
      await this.flush()
    }
  }

  async increment(name: string, value: number = 1, tags?: Record<string, string>) {
    await this.track(name, value, tags)
  }

  async gauge(name: string, value: number, tags?: Record<string, string>) {
    await this.track(name, value, tags)
  }

  async timing(name: string, durationMs: number, tags?: Record<string, string>) {
    await this.track(name, durationMs, { ...tags, unit: 'ms' })
  }

  private async flush() {
    if (this.queue.length === 0) return

    const metrics = [...this.queue]
    this.queue = []

    // supabase-js returns errors rather than throwing, so check explicitly
    const { error } = await this.supabase.from('custom_metrics').insert(metrics)
    if (error) {
      console.error('Failed to flush metrics:', error)
      // Re-queue failed metrics for the next flush attempt
      this.queue.unshift(...metrics)
    }
  }

  async destroy() {
    if (this.flushInterval) {
      clearInterval(this.flushInterval)
    }
    await this.flush()
  }
}

export const metrics = new MetricsCollector()

-- Create custom metrics table
create table custom_metrics (
  id uuid default gen_random_uuid() primary key,
  metric_name text not null,
  metric_value numeric not null,
  tags jsonb default '{}',
  timestamp timestamp with time zone default now()
);

create index idx_custom_metrics_name on custom_metrics(metric_name);
create index idx_custom_metrics_timestamp on custom_metrics(timestamp desc);
create index idx_custom_metrics_tags on custom_metrics using gin(tags);

-- Aggregated metrics view
create view metrics_summary as
select
  metric_name,
  date_trunc('hour', timestamp) as hour,
  count(*) as data_points,
  round(avg(metric_value), 2) as avg_value,
  round(min(metric_value), 2) as min_value,
  round(max(metric_value), 2) as max_value,
  round((percentile_cont(0.95) within group (order by metric_value))::numeric, 2) as p95_value
from custom_metrics
where timestamp > now() - interval '24 hours'
group by metric_name, hour
order by hour desc, metric_name;
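
-- For counter-style metrics recorded with increment(), hourly sums are more
-- useful than averages. A sketch for the user.signup counter tracked in the
-- usage examples below:
select
  date_trunc('hour', timestamp) as hour,
  sum(metric_value) as signups
from custom_metrics
where metric_name = 'user.signup'
  and timestamp > now() - interval '7 days'
group by hour
order by hour desc;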

// Usage examples
import { metrics } from '@/lib/metrics'

// Track user signup
await metrics.increment('user.signup', 1, {
  method: 'email',
  source: 'landing_page'
})

// Track search performance
const start = Date.now()
const results = await performSearch(query)
const duration = Date.now() - start
await metrics.timing('search.duration', duration, {
  query_length: query.length.toString(),
  result_count: results.length.toString()
})

// Track active users
const activeUsers = await getActiveUserCount()
await metrics.gauge('users.active', activeUsers)

// Track API response time
async function trackApiCall<T>(endpoint: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now()
  try {
    const result = await fn()
    const duration = Date.now() - start
    await metrics.timing('api.response_time', duration, {
      endpoint,
      status: 'success'
    })
    return result
  } catch (error) {
    const duration = Date.now() - start
    await metrics.timing('api.response_time', duration, {
      endpoint,
      status: 'error'
    })
    throw error
  }
}

Monitoring Best Practices

  • Monitor Key Metrics: Track response times, error rates, query performance, and resource usage
  • Set Alert Thresholds: Configure alerts for critical metrics to catch issues before users report them
  • Use Dashboard Analytics: Regularly review Supabase Dashboard metrics to identify trends and anomalies
  • Track Slow Queries: Enable pg_stat_statements to find expensive queries that need optimization
  • Implement API Metrics: Log request metrics to track endpoint performance and error rates
  • Create Custom Metrics: Track business-specific metrics for application-level insights
  • Set Up Webhooks: Send alert notifications to Slack, PagerDuty, or email

Pro Tip: Aim for a cache hit ratio above 90%, which indicates effective caching. Track P95 and P99 percentiles to understand worst-case performance, and set up automated alerts so issues are caught before they affect users. Review performance optimization and security practices.

Common Issues

  • Missing Metrics: Verify pg_stat_statements is enabled, check permissions on the metrics tables, and confirm the middleware is active
  • High Resource Usage: Check connection pool size, review slow queries, analyze table sizes
  • Alert Fatigue: Tune thresholds to avoid false positives, use severity levels, aggregate related alerts
  • Slow Dashboard: Add indexes on timestamp columns and implement data retention policies that archive old metrics (see the sketch below)
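
For the retention point above, a minimal sketch using pg_cron to purge metrics older than 30 days (job names, schedule, and the 30-day window are assumptions to tune):

select cron.schedule(
  'purge-old-api-metrics',
  '0 3 * * *',  -- daily at 03:00
  $$delete from api_metrics where timestamp < now() - interval '30 days'$$
);

select cron.schedule(
  'purge-old-custom-metrics',
  '0 3 * * *',
  $$delete from custom_metrics where timestamp < now() - interval '30 days'$$
);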

Conclusion

Monitoring Supabase applications with performance metrics, query analytics, database health checks, and alerting lets you track system health, detect issues proactively, analyze bottlenecks, and maintain production reliability. The building blocks covered here: the Supabase Dashboard's built-in analytics, pg_stat_statements for slow-query tracking, custom middleware that logs API request metrics, application-level custom metrics for business insights, automated alert checks with webhook notifications, PostgreSQL statistics views for resource analysis, and dashboards that surface real-time metrics, with external observability tools layered on top as needed. Together they pay off in proactive issue detection, performance insight, resource optimization, better error analysis, capacity planning from usage trends, and a more reliable experience for users. Monitor the key indicators, set alert thresholds that catch problems early, review dashboard trends regularly, optimize the expensive queries pg_stat_statements surfaces, measure endpoint performance, track the business metrics that matter, and wire up webhooks so the team hears about incidents immediately. Continue exploring deployment strategies and automation.

// 2026 {Coders Handbook}. EOF.