
Supabase

Build with Supabase as a backend-as-a-service.

Quick Summary
You are a backend specialist who integrates Supabase into projects. Supabase is an
open-source Firebase alternative built on PostgreSQL, providing a database, auth,
real-time subscriptions, storage, and edge functions in one platform.

## Key Points

- Enable RLS on every table that holds user data — no exceptions
- Use the `auth.uid()` function in RLS policies to match the current user
- Use the service role key only on the server — never expose it to the client
- Use `select()` after `insert`/`update`/`upsert` to get the returned data
- Use database functions (RPC) for complex operations that need atomicity
- Use Supabase migrations for schema changes — `supabase db diff`
- Enable real-time only on tables that need it — it adds overhead

## Anti-Patterns

- Exposing the service role key to the client — bypasses all RLS
- Disabling RLS "temporarily" and forgetting to re-enable it
- Not creating RLS policies — tables default to deny-all with RLS enabled
- Using `.single()` without a unique filter — it errors when zero or multiple rows match
- Storing large files in the database instead of Supabase Storage

## Quick Example

```bash
npm install @supabase/supabase-js
```

Supabase Integration

You are a backend specialist who integrates Supabase into projects. Supabase is an open-source Firebase alternative built on PostgreSQL, providing a database, auth, real-time subscriptions, storage, and edge functions in one platform.

Core Philosophy

PostgreSQL is the foundation

Everything in Supabase runs on PostgreSQL. Row-level security, functions, triggers, extensions — you have the full power of Postgres. If you know SQL, you know Supabase.

Row-level security is mandatory

RLS policies are how you secure data in Supabase. Without them, your tables are either fully public or fully locked. Every table that holds user data must have RLS enabled with appropriate policies.

Client talks directly to the database

The Supabase client SDK generates PostgREST queries that hit your database directly (through the API gateway). There's no backend server to write for basic CRUD. This is the speed advantage — and the reason RLS matters so much.

Setup

Install

npm install @supabase/supabase-js

Initialize

import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

Server-side client (with service role)

import { createClient } from '@supabase/supabase-js';

const supabaseAdmin = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY! // Bypasses RLS — server only
);

Key Techniques

Database queries

// Select with filters
const { data, error } = await supabase
  .from('posts')
  .select('id, title, created_at, author:profiles(name, avatar_url)')
  .eq('status', 'published')
  .order('created_at', { ascending: false })
  .limit(20);

// Insert
const { data, error } = await supabase
  .from('posts')
  .insert({ title: 'Hello', content: '...', author_id: userId })
  .select()
  .single();

// Update
const { error } = await supabase
  .from('posts')
  .update({ title: 'Updated title' })
  .eq('id', postId);

// Delete
const { error } = await supabase
  .from('posts')
  .delete()
  .eq('id', postId);

// Upsert
const { data, error } = await supabase
  .from('profiles')
  .upsert({ id: userId, name: 'Alice', updated_at: new Date().toISOString() })
  .select()
  .single();
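Every supabase-js call returns a `{ data, error }` pair rather than throwing, and the anti-patterns below call out ignoring `error` as a common mistake. A small helper can centralize the check — `unwrap` here is a hypothetical name, not part of supabase-js:

```typescript
// Hypothetical helper (not part of supabase-js): turn the { data, error }
// result shape into either a thrown error or a guaranteed non-null value.
type SupabaseResult<T> = { data: T | null; error: { message: string } | null };

function unwrap<T>(result: SupabaseResult<T>): T {
  if (result.error) throw new Error(result.error.message);
  if (result.data === null) throw new Error('Query returned no data');
  return result.data;
}

// Usage sketch:
// const post = unwrap(await supabase.from('posts').select().eq('id', id).single());
```

This trades explicit per-call handling for a single throw site, so it fits code paths that already use try/catch.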

Row-Level Security (RLS)

-- Enable RLS on table
ALTER TABLE posts ENABLE ROW LEVEL SECURITY;

-- Users can read published posts
CREATE POLICY "Public read" ON posts
  FOR SELECT USING (status = 'published');

-- Users can only insert their own posts
CREATE POLICY "Owner insert" ON posts
  FOR INSERT WITH CHECK (auth.uid() = author_id);

-- Users can only update their own posts
CREATE POLICY "Owner update" ON posts
  FOR UPDATE USING (auth.uid() = author_id)
  WITH CHECK (auth.uid() = author_id);

-- Users can only delete their own posts
CREATE POLICY "Owner delete" ON posts
  FOR DELETE USING (auth.uid() = author_id);

-- Service role bypasses all policies (for admin operations)

Authentication

// Email/password signup
const { data, error } = await supabase.auth.signUp({
  email: 'user@example.com',
  password: 'securepassword',
  options: { data: { name: 'Alice' } },
});

// Email/password login
const { data, error } = await supabase.auth.signInWithPassword({
  email: 'user@example.com',
  password: 'securepassword',
});

// OAuth login
const { data, error } = await supabase.auth.signInWithOAuth({
  provider: 'google',
  options: { redirectTo: 'https://yourdomain.com/auth/callback' },
});

// Magic link
const { error } = await supabase.auth.signInWithOtp({
  email: 'user@example.com',
});

// Get current user
const { data: { user } } = await supabase.auth.getUser();

// Sign out
await supabase.auth.signOut();

// Listen to auth changes
supabase.auth.onAuthStateChange((event, session) => {
  if (event === 'SIGNED_IN') console.log('User signed in:', session.user);
  if (event === 'SIGNED_OUT') console.log('User signed out');
});

Real-time subscriptions

// Subscribe to inserts on a table
const channel = supabase
  .channel('posts-changes')
  .on('postgres_changes',
    { event: 'INSERT', schema: 'public', table: 'posts' },
    (payload) => {
      console.log('New post:', payload.new);
    }
  )
  .subscribe();

// Subscribe to all changes for a specific user's data
const channel = supabase
  .channel('my-posts')
  .on('postgres_changes',
    { event: '*', schema: 'public', table: 'posts', filter: `author_id=eq.${userId}` },
    (payload) => {
      console.log('Change:', payload.eventType, payload.new);
    }
  )
  .subscribe();

// Presence (who's online)
const channel = supabase.channel('room-1');
channel.on('presence', { event: 'sync' }, () => {
  const state = channel.presenceState();
  console.log('Online users:', Object.keys(state).length);
});
channel.subscribe(async (status) => {
  if (status === 'SUBSCRIBED') {
    await channel.track({ user_id: userId, online_at: new Date().toISOString() });
  }
});

// Unsubscribe
supabase.removeChannel(channel);
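Subscribing alone is not enough — `postgres_changes` events only fire for tables that have been added to the realtime publication. A minimal setup sketch for the `posts` table used above:

```sql
-- Enable realtime for the posts table.
ALTER PUBLICATION supabase_realtime ADD TABLE posts;

-- Optional: include old row values in UPDATE/DELETE payloads.
ALTER TABLE posts REPLICA IDENTITY FULL;
```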

Storage

// Upload file
const { data, error } = await supabase.storage
  .from('avatars')
  .upload(`${userId}/avatar.png`, file, {
    cacheControl: '3600',
    upsert: true,
    contentType: 'image/png',
  });

// Get public URL
const { data: { publicUrl } } = supabase.storage
  .from('avatars')
  .getPublicUrl(`${userId}/avatar.png`);

// Download file
const { data, error } = await supabase.storage
  .from('documents')
  .download('report.pdf');

// List files
const { data, error } = await supabase.storage
  .from('avatars')
  .list(userId, { limit: 100 });

// Delete file
const { error } = await supabase.storage
  .from('avatars')
  .remove([`${userId}/avatar.png`]);
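Storage objects are rows in `storage.objects`, so access is controlled with RLS policies just like any other table. A sketch of a policy matching the `${userId}/avatar.png` path convention above, letting users upload only into a folder named after their own user id:

```sql
-- storage.foldername(name) splits the object path into folder segments;
-- the first segment must equal the uploader's user id.
CREATE POLICY "Own folder upload" ON storage.objects
  FOR INSERT WITH CHECK (
    bucket_id = 'avatars'
    AND auth.uid()::text = (storage.foldername(name))[1]
  );
```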

Edge Functions

// Deploy: supabase functions deploy my-function
// File: supabase/functions/my-function/index.ts

import { serve } from 'https://deno.land/std@0.168.0/http/server.ts';
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2';

serve(async (req) => {
  const supabase = createClient(
    Deno.env.get('SUPABASE_URL')!,
    Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
  );

  const { data, error } = await supabase.from('posts').select('*').limit(10);

  if (error) {
    return new Response(JSON.stringify({ error: error.message }), {
      status: 500,
      headers: { 'Content-Type': 'application/json' },
    });
  }

  return new Response(JSON.stringify({ data }), {
    headers: { 'Content-Type': 'application/json' },
  });
});

// Invoke from client
const { data, error } = await supabase.functions.invoke('my-function', {
  body: { name: 'Alice' },
});
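Browser calls through `functions.invoke` trigger a CORS preflight, so edge functions invoked from the client need to answer OPTIONS requests and echo CORS headers on every response. A minimal sketch — the exact header list is an assumption; adjust it to whatever headers your client sends:

```typescript
// Minimal CORS handling for a browser-invoked edge function.
const corsHeaders = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Headers': 'authorization, x-client-info, apikey, content-type',
};

// Answer the preflight immediately; return null for real requests.
function handlePreflight(req: Request): Response | null {
  if (req.method === 'OPTIONS') return new Response('ok', { headers: corsHeaders });
  return null;
}

// Include corsHeaders on every JSON response the function returns.
function jsonResponse(body: unknown, status = 200): Response {
  return new Response(JSON.stringify(body), {
    status,
    headers: { ...corsHeaders, 'Content-Type': 'application/json' },
  });
}
```

Inside the handler, call `handlePreflight(req)` first and return its result when non-null, then build all other responses with `jsonResponse`.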

Vector embeddings (pgvector)

-- Enable the extension
CREATE EXTENSION IF NOT EXISTS vector;

-- Create table with vector column
CREATE TABLE documents (
  id BIGSERIAL PRIMARY KEY,
  content TEXT,
  embedding VECTOR(1536)  -- OpenAI embedding dimension
);

-- Create index for similarity search
CREATE INDEX ON documents USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100);

// Similarity search via a database function (RPC)
const { data } = await supabase.rpc('match_documents', {
  query_embedding: embedding, // float array
  match_threshold: 0.78,
  match_count: 10,
});
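The `match_documents` function referenced by the RPC call is one you define yourself. A sketch following the common Supabase pgvector pattern, using the `documents` table above (`<=>` is cosine distance, so `1 - distance` is similarity):

```sql
CREATE OR REPLACE FUNCTION match_documents(
  query_embedding VECTOR(1536),
  match_threshold FLOAT,
  match_count INT
)
RETURNS TABLE (id BIGINT, content TEXT, similarity FLOAT)
LANGUAGE sql STABLE
AS $$
  SELECT
    documents.id,
    documents.content,
    1 - (documents.embedding <=> query_embedding) AS similarity
  FROM documents
  WHERE 1 - (documents.embedding <=> query_embedding) > match_threshold
  ORDER BY documents.embedding <=> query_embedding
  LIMIT match_count;
$$;
```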

Database Design Patterns

User profiles (linked to auth)

CREATE TABLE profiles (
  id UUID REFERENCES auth.users(id) PRIMARY KEY,
  name TEXT,
  avatar_url TEXT,
  plan TEXT DEFAULT 'free',
  created_at TIMESTAMPTZ DEFAULT now(),
  updated_at TIMESTAMPTZ DEFAULT now()
);

-- Auto-create profile on signup
CREATE FUNCTION handle_new_user()
RETURNS TRIGGER AS $$
BEGIN
  INSERT INTO profiles (id, name)
  VALUES (new.id, new.raw_user_meta_data->>'name');
  RETURN new;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE TRIGGER on_auth_user_created
  AFTER INSERT ON auth.users
  FOR EACH ROW EXECUTE FUNCTION handle_new_user();
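The `updated_at` column defaults to `now()` on insert but goes stale on updates. One sketch of a trigger that keeps it fresh (the `moddatetime` extension offers equivalent behavior):

```sql
-- Refresh updated_at on every write to profiles.
CREATE FUNCTION set_updated_at()
RETURNS TRIGGER AS $$
BEGIN
  NEW.updated_at = now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER profiles_updated_at
  BEFORE UPDATE ON profiles
  FOR EACH ROW EXECUTE FUNCTION set_updated_at();
```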

Best Practices

  • Enable RLS on every table that holds user data — no exceptions
  • Use the auth.uid() function in RLS policies to match the current user
  • Use the service role key only on the server — never expose it to the client
  • Use select() after insert/update/upsert to get the returned data
  • Use database functions (RPC) for complex operations that need atomicity
  • Use Supabase migrations for schema changes — supabase db diff
  • Enable real-time only on tables that need it — it adds overhead

Anti-Patterns

  • Exposing the service role key to the client — bypasses all RLS
  • Disabling RLS "temporarily" and forgetting to re-enable it
  • Not creating RLS policies — tables default to deny-all with RLS enabled
  • Using .single() without a unique filter — it errors when zero or multiple rows match (prefer .maybeSingle() when zero rows is expected)
  • Storing large files in the database instead of Supabase Storage
  • Not handling the error return from every Supabase call
  • Subscribing to real-time on high-traffic tables without filters
