
database-deployment

Comprehensive guide to database deployment for web applications, covering managed database services (PlanetScale, Neon, Supabase, Turso), migration strategies, connection pooling, backup and restore procedures, data seeding, and schema management best practices for production environments.


Database Deployment

Managed Database Services

PlanetScale (MySQL-Compatible)

PlanetScale is a serverless MySQL platform built on Vitess with Git-like branching for schema changes.

# Install CLI
brew install planetscale/tap/pscale

# Create database
pscale database create my-app --region us-east

# Create a branch for schema changes
pscale branch create my-app add-users-table

# Connect to branch
pscale connect my-app add-users-table --port 3309

Schema change workflow:

# 1. Create branch
pscale branch create my-app add-orders

# 2. Apply migrations to branch
pscale connect my-app add-orders --port 3309 &
mysql -h 127.0.0.1 -P 3309 < migrations/add_orders.sql

# 3. Create deploy request (like a PR)
pscale deploy-request create my-app add-orders

# 4. Review and deploy
pscale deploy-request deploy my-app 1

Connection string:

DATABASE_URL="mysql://username:password@aws.connect.psdb.cloud/my-app?sslaccept=strict"

Anti-pattern: Using foreign keys on PlanetScale. Vitess doesn't support cross-shard foreign keys. Enforce referential integrity in application code.
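
Since the database won't enforce the constraint, the check moves into application code. A minimal sketch with injected data-access callbacks (`userExists` and `insertOrder` are hypothetical stand-ins for your data layer, e.g. Prisma or mysql2 queries; without a transaction a concurrent delete can still race the check, so treat this as best-effort):

```typescript
// Hypothetical data-layer callbacks; in practice these would run real queries.
type UserExists = (userId: string) => Promise<boolean>;
type InsertOrder = (userId: string, total: number) => Promise<void>;

// Enforce the users -> orders relation in application code:
// verify the parent row exists before writing the child row.
async function createOrderChecked(
  userExists: UserExists,
  insertOrder: InsertOrder,
  userId: string,
  total: number,
): Promise<void> {
  if (!(await userExists(userId))) {
    throw new Error(`referential integrity: user ${userId} does not exist`);
  }
  await insertOrder(userId, total);
}
```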

Neon (Serverless Postgres)

Neon provides serverless Postgres with branching, auto-scaling, and scale-to-zero.

# Create via CLI
neonctl projects create --name my-app

# Create branch
neonctl branches create --name staging --project-id proj_abc123

# Get connection string
neonctl connection-string --project-id proj_abc123

Connection with serverless driver:

import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.DATABASE_URL);

// Works in serverless/edge environments (no persistent connection needed)
const users = await sql`SELECT * FROM users WHERE active = true`;

Standard Postgres driver with connection pooling:

import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,
  ssl: { rejectUnauthorized: true },
});

Key features: Database branching (instant copies for dev/preview), auto-suspend after 5 minutes of inactivity, autoscaling compute.

Supabase (Postgres + Extras)

Supabase provides Postgres with real-time subscriptions, auth, storage, and edge functions.

# CLI setup
npx supabase init
npx supabase start  # Local development

# Link to remote
npx supabase link --project-ref your-project-ref

# Push migrations
npx supabase db push

Migrations workflow:

# Create migration
npx supabase migration new create_users_table

# Edit the generated file
# supabase/migrations/20240101000000_create_users_table.sql

-- supabase/migrations/20240101000000_create_users_table.sql
CREATE TABLE users (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  email TEXT UNIQUE NOT NULL,
  display_name TEXT,
  created_at TIMESTAMPTZ DEFAULT now()
);

-- Enable RLS
ALTER TABLE users ENABLE ROW LEVEL SECURITY;

-- Policy: users can read their own data
CREATE POLICY "Users can read own data"
  ON users FOR SELECT
  USING (auth.uid() = id);

# Apply to remote
npx supabase db push

# Or generate migration from diff
npx supabase db diff -f create_users_table

Turso (SQLite at the Edge)

Turso provides distributed SQLite using libSQL, with replicas at the edge.

# Create database
turso db create my-app

# Create replicas in different regions
turso db replicate my-app cdg
turso db replicate my-app nrt

# Get connection URL
turso db show my-app --url

# Create auth token
turso db tokens create my-app

import { createClient } from '@libsql/client';

const db = createClient({
  url: process.env.TURSO_DATABASE_URL,
  authToken: process.env.TURSO_AUTH_TOKEN,
});

const result = await db.execute({ sql: 'SELECT * FROM users WHERE id = ?', args: [userId] });

Best for: Read-heavy workloads, edge deployments, embedded databases. Writes go to primary, reads can hit local replicas.

Migration Strategies

Forward-Only Migrations

migrations/
  001_create_users.sql
  002_add_email_index.sql
  003_create_orders.sql
  004_add_user_display_name.sql
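
The contract is that each file runs exactly once, in order, and mistakes are corrected by a new forward migration rather than a down script. The core bookkeeping reduces to a pure function; in a real runner, `applied` would be read from a migrations-tracking table (a hypothetical `_migrations` table, for instance):

```typescript
// Given every migration filename and the set already recorded as applied,
// return the pending ones in the order they must run (lexicographic,
// which matches both numeric and timestamp prefixes).
function pendingMigrations(files: string[], applied: Set<string>): string[] {
  return [...files].sort().filter((file) => !applied.has(file));
}
```

The runner then executes each pending file and records it in the tracking table inside the same transaction, so a crash mid-run never double-applies a migration.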

Prisma Migrations

# Generate migration from schema changes
npx prisma migrate dev --name add_orders_table

# Apply in production
npx prisma migrate deploy

# Reset development database
npx prisma migrate reset

// prisma/schema.prisma
model User {
  id        String   @id @default(cuid())
  email     String   @unique
  name      String?
  orders    Order[]
  createdAt DateTime @default(now())
}

model Order {
  id        String   @id @default(cuid())
  userId    String
  user      User     @relation(fields: [userId], references: [id])
  total     Decimal
  status    String   @default("pending")
  createdAt DateTime @default(now())

  @@index([userId])
  @@index([status])
}

Drizzle Migrations

# Generate migration
npx drizzle-kit generate

# Apply migration
npx drizzle-kit migrate

# Push schema directly (development only)
npx drizzle-kit push

// drizzle/schema.ts
import { pgTable, text, timestamp, uuid, decimal } from 'drizzle-orm/pg-core';

export const users = pgTable('users', {
  id: uuid('id').primaryKey().defaultRandom(),
  email: text('email').unique().notNull(),
  name: text('name'),
  createdAt: timestamp('created_at').defaultNow(),
});

export const orders = pgTable('orders', {
  id: uuid('id').primaryKey().defaultRandom(),
  userId: uuid('user_id').references(() => users.id).notNull(),
  total: decimal('total', { precision: 10, scale: 2 }).notNull(),
  status: text('status').default('pending'),
  createdAt: timestamp('created_at').defaultNow(),
});

Safe Migration Rules

  1. Never drop columns/tables in the same deploy that removes the code using them.
  2. Add columns as nullable or with defaults.
  3. Create indexes concurrently (Postgres: CREATE INDEX CONCURRENTLY).
  4. Backfill data in batches, not in the migration itself.
  5. Test migrations against a production-size dataset before deploying.
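
Rule 4 can be sketched generically. `runBatch` is a hypothetical callback that would issue one bounded UPDATE (for example, keying on a range of ids or on rows where the new column is still NULL) and return the affected-row count:

```typescript
// Rule 4: backfill in bounded batches instead of one giant UPDATE.
// runBatch issues a single size-limited UPDATE and reports rows touched;
// the loop stops when a batch comes back short (nothing left to fill).
async function backfillInBatches(
  runBatch: (batchSize: number) => Promise<number>,
  batchSize = 1000,
): Promise<number> {
  let total = 0;
  for (;;) {
    const touched = await runBatch(batchSize);
    total += touched;
    if (touched < batchSize) return total;
  }
}
```

Keeping each batch small keeps row locks short, so the backfill can run alongside normal traffic instead of freezing the table for one long transaction.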

Anti-pattern: Running prisma db push in production. It can drop data. Always use prisma migrate deploy.

Connection Pooling

Why Pooling Matters

Serverless functions create a new database connection per invocation. Without pooling, you quickly exhaust connection limits.

PgBouncer (Self-Hosted or Managed)

# Connection flow:
App → PgBouncer (port 6432) → Postgres (port 5432)

Supabase and Neon include built-in connection pooling:

# Direct connection (persistent servers)
DATABASE_URL="postgresql://user:pass@db.example.com:5432/mydb"

# Pooled connection (serverless)
DATABASE_URL="postgresql://user:pass@db.example.com:6543/mydb?pgbouncer=true"

Prisma with Connection Pooling

datasource db {
  provider  = "postgresql"
  url       = env("DATABASE_URL")        // Pooled URL for queries
  directUrl = env("DIRECT_DATABASE_URL") // Direct URL for migrations
}

Application-Level Pooling

import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,                    // Maximum connections in pool
  idleTimeoutMillis: 30000,   // Close idle connections after 30s
  connectionTimeoutMillis: 5000, // Fail if can't connect in 5s
});

// Use pool.query() for simple queries (auto-acquires and releases)
const result = await pool.query('SELECT * FROM users WHERE id = $1', [id]);

// Use pool.connect() for transactions
const client = await pool.connect();
try {
  await client.query('BEGIN');
  await client.query('INSERT INTO orders ...', [values]);
  await client.query('UPDATE inventory ...', [values]);
  await client.query('COMMIT');
} catch (err) {
  await client.query('ROLLBACK');
  throw err;
} finally {
  client.release();
}

Anti-pattern: Setting max pool size higher than the database allows. Postgres defaults to 100 connections. With 10 serverless instances each using max: 20, you'd demand 200 connections, double the default limit.
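
That arithmetic is worth encoding as a pre-deploy sanity check: total demand is instances times per-instance max, and it must stay at or under the server's limit:

```typescript
// Total connection demand across the whole fleet.
function poolDemand(instances: number, maxPerInstance: number): number {
  return instances * maxPerInstance;
}

// True if that demand fits the database's connection limit
// (Postgres ships with max_connections = 100 by default).
function fitsConnectionLimit(instances: number, maxPerInstance: number, dbLimit = 100): boolean {
  return poolDemand(instances, maxPerInstance) <= dbLimit;
}
```

Remember to leave headroom below the hard limit for migrations, admin sessions, and monitoring connections.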

Backup and Restore

Automated Backups

Most managed services provide automatic daily backups. Verify:

# PlanetScale
pscale backup list my-app main

# Supabase: Automatic daily backups (Pro plan = point-in-time recovery)

# Neon: a branch pins a copy-on-write snapshot, usable as a backup
neonctl branches create --name backup-2024-01-15 --project-id proj_abc123

Manual Backups

# Postgres
pg_dump $DATABASE_URL > backup_$(date +%Y%m%d).sql
pg_dump $DATABASE_URL -Fc > backup_$(date +%Y%m%d).dump  # Custom format (compressed)

# Restore
psql $DATABASE_URL < backup_20240115.sql
pg_restore -d $DATABASE_URL backup_20240115.dump

# MySQL
mysqldump -h host -u user -p database > backup.sql
mysql -h host -u user -p database < backup.sql

CI/CD Backup Before Migration

# GitHub Actions
- name: Backup database before migration
  run: |
    pg_dump $DATABASE_URL -Fc > backup_pre_migration.dump

- name: Upload backup as artifact
  uses: actions/upload-artifact@v4
  with:
    name: db-backup-${{ github.sha }}
    path: backup_pre_migration.dump
    retention-days: 30

- name: Run migrations
  run: npx prisma migrate deploy

Anti-pattern: Not testing restore procedures. A backup you've never restored is not a backup.

Data Seeding

Development Seeds

// prisma/seed.ts
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function seed() {
  // Clear existing data
  await prisma.order.deleteMany();
  await prisma.user.deleteMany();

  // Create test users
  const alice = await prisma.user.create({
    data: {
      email: 'alice@example.com',
      name: 'Alice',
      orders: {
        create: [
          { total: 29.99, status: 'completed' },
          { total: 49.99, status: 'pending' },
        ],
      },
    },
  });

  console.log(`Seeded ${alice.name}`);
}

seed()
  .catch(console.error)
  .finally(() => prisma.$disconnect());

// package.json
{
  "prisma": {
    "seed": "tsx prisma/seed.ts"
  }
}

Production Seeding

-- migrations/seed_initial_data.sql
-- Only run once, as a migration

INSERT INTO roles (name, permissions) VALUES
  ('admin', '{"all": true}'),
  ('editor', '{"posts": ["read", "write"], "users": ["read"]}'),
  ('viewer', '{"posts": ["read"], "users": ["read"]}')
ON CONFLICT (name) DO NOTHING;

INSERT INTO settings (key, value) VALUES
  ('site_name', '"My App"'),
  ('maintenance_mode', 'false')
ON CONFLICT (key) DO NOTHING;

Anti-pattern: Seeding production data with deleteMany(). Only use destructive seeds in development.
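
A cheap guard at the top of the seed script makes this anti-pattern hard to hit. A minimal sketch, assuming the usual NODE_ENV convention:

```typescript
// Refuse to run a destructive seed (deleteMany etc.) outside dev/test.
function assertDestructiveSeedAllowed(env: string | undefined = process.env.NODE_ENV): void {
  if (env !== 'development' && env !== 'test') {
    throw new Error(`refusing destructive seed in NODE_ENV=${env ?? '(unset)'}`);
  }
}
```

Call it as the first line of `seed()` so a misconfigured CI job fails loudly before any deleteMany() runs.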

Schema Management Best Practices

Version Control

project/
  prisma/
    schema.prisma
    migrations/
      20240101_init/
        migration.sql
      20240115_add_orders/
        migration.sql
  drizzle/
    schema.ts
    migrations/
      0001_create_users.sql
      0002_create_orders.sql

Migration Naming Conventions

YYYYMMDDHHMMSS_descriptive_name.sql

20240115120000_create_users_table.sql
20240116090000_add_email_index.sql
20240117140000_add_orders_with_status.sql
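
The timestamp prefix keeps lexicographic order equal to creation order, so migration tools can sort filenames directly. A trivial generator, purely for illustration:

```typescript
// Build a YYYYMMDDHHMMSS_name.sql filename from a Date (UTC).
function migrationFilename(name: string, now: Date = new Date()): string {
  // "2024-01-15T12:00:00.000Z" -> "20240115120000"
  const stamp = now.toISOString().replace(/[-:T]/g, '').slice(0, 14);
  return `${stamp}_${name}.sql`;
}
```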

Schema Drift Detection

# Prisma: Check if schema matches database
npx prisma migrate diff --from-schema-datasource prisma/schema.prisma --to-schema-datamodel prisma/schema.prisma

# Drizzle: Check for drift
npx drizzle-kit check

# Direct SQL comparison
pg_dump --schema-only $PROD_DB > prod_schema.sql
pg_dump --schema-only $LOCAL_DB > local_schema.sql
diff prod_schema.sql local_schema.sql

Multi-Environment Schema Strategy

main branch     → production database
staging branch  → staging database (Neon branch / PlanetScale branch)
PR branches     → ephemeral databases (Neon branch per PR)

# GitHub Actions: Create ephemeral DB for PR
- name: Create Neon branch for PR
  if: github.event_name == 'pull_request'
  run: |
    BRANCH_ID=$(neonctl branches create \
      --project-id ${{ secrets.NEON_PROJECT_ID }} \
      --name pr-${{ github.event.pull_request.number }} \
      --output json | jq -r '.id')
    echo "DATABASE_URL=$(neonctl connection-string --branch-id $BRANCH_ID)" >> $GITHUB_ENV
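
Ephemeral branches should also be deleted when the PR closes, or every merged PR leaks a database branch. A sketch of the companion cleanup step (assumes the workflow additionally triggers on the pull_request closed event and reuses the same NEON_PROJECT_ID secret):

```yaml
# GitHub Actions: Delete the ephemeral Neon branch when the PR closes
- name: Delete Neon branch for PR
  if: github.event_name == 'pull_request' && github.event.action == 'closed'
  run: |
    neonctl branches delete pr-${{ github.event.pull_request.number }} \
      --project-id ${{ secrets.NEON_PROJECT_ID }}
```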

Common Anti-Patterns

  1. No connection pooling in serverless: Exhausts database connections within minutes under load.
  2. Running migrations at app startup: Causes race conditions with multiple instances. Run migrations as a separate CI/CD step.
  3. No backup testing: Backups that can't be restored are worthless.
  4. Storing migrations outside version control: Leads to schema drift between environments.
  5. Using db push in production: Can silently drop columns and data.
  6. Hardcoding connection strings: Use environment variables, scoped per environment.
  7. No indexes on foreign keys: Every JOIN becomes a full table scan.

Database Deployment Checklist

  • Managed database provisioned in correct region (close to app servers)
  • Connection pooling configured for serverless workloads
  • Migrations tested against production-size data
  • Backup strategy verified (automated + tested restore)
  • Environment-specific connection strings configured
  • Indexes created for common query patterns
  • Row Level Security enabled if using Supabase
  • Schema drift detection in CI pipeline
  • Seed data prepared for development and staging

