
Workers D1

Cloudflare D1 serverless SQLite database for Workers, covering schema management, migrations, queries, prepared statements, batch operations, local development, replication, backups, and performance optimization.

Quick Summary
You are an expert in Cloudflare D1, a serverless SQLite database that runs at the edge alongside Cloudflare Workers, providing low-latency SQL queries without managing database infrastructure.

## Key Points

- **SQLite-based**: Full SQLite SQL support. Use any SQLite-compatible schema and queries.
- **Serverless**: No connection pooling, no connection strings, no database servers to manage.
- **Edge read replicas**: Reads are served from the nearest replica for low latency.
- **Binding-based access**: Workers access D1 through a binding, not a connection string.
- **Transaction support**: Full ACID transactions within a single database.
- **"D1_ERROR: no such table"** — Run migrations with `wrangler d1 migrations apply`.
- **Slow queries** — Add indexes, use `EXPLAIN QUERY PLAN`, reduce row scanning with WHERE clauses and LIMIT.
- **Stale reads** — Reads from replicas can lag slightly behind the primary. If you need read-your-own-writes consistency, use D1's Sessions API (e.g. `env.DB.withSession("first-primary")`) to route reads to the primary, or use the data returned by the write itself.
- **"too many SQL variables"** — SQLite limits bound parameters to 100. Chunk large IN clauses.

## Quick Example

```bash
# Create a database
npx wrangler d1 create my-database

# Output includes the database_id — add it to wrangler.toml
```

```toml
[[d1_databases]]
binding = "DB"
database_name = "my-database"
database_id = "xxxx-yyyy-zzzz-aaaa"
```

Workers D1 — Cloudflare Workers

You are an expert in Cloudflare D1, a serverless SQLite database that runs at the edge alongside Cloudflare Workers, providing low-latency SQL queries without managing database infrastructure.

Core Philosophy

Overview

D1 is built on SQLite and provides a familiar SQL interface. It stores data durably, with read replicas distributed across Cloudflare's network for low-latency reads. Writes go to the primary instance and replicate automatically. D1 supports full SQLite syntax including joins, aggregations, indexes, triggers, and transactions.

Key characteristics

  • SQLite-based: Full SQLite SQL support. Use any SQLite-compatible schema and queries.
  • Serverless: No connection pooling, no connection strings, no database servers to manage.
  • Edge read replicas: Reads are served from the nearest replica for low latency.
  • Binding-based access: Workers access D1 through a binding, not a connection string.
  • Transaction support: Full ACID transactions within a single database.

Setup

Create a D1 database

# Create a database
npx wrangler d1 create my-database

# Output includes the database_id — add it to wrangler.toml

Bind in wrangler.toml

[[d1_databases]]
binding = "DB"
database_name = "my-database"
database_id = "xxxx-yyyy-zzzz-aaaa"

TypeScript binding

export interface Env {
  DB: D1Database;
}
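With the binding declared, queries go straight through `env.DB`. The sketch below shows the pattern end to end; the query and names are illustrative, and a small structural stub of the D1 types is declared so the snippet stands alone (in a real project they come from `@cloudflare/workers-types`):

```typescript
// Structural stub of the parts of the D1 API used below; in a real Worker
// these types come from @cloudflare/workers-types.
interface D1Result { results: Record<string, unknown>[] }
interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement;
  all(): Promise<D1Result>;
}
interface D1Database { prepare(query: string): D1PreparedStatement }
interface Env { DB: D1Database }

// List users with a given role — the same pattern a fetch handler would use
// before returning Response.json(results).
async function listUsersByRole(env: Env, role: string): Promise<Record<string, unknown>[]> {
  const { results } = await env.DB
    .prepare("SELECT id, name, email FROM users WHERE role = ? ORDER BY id")
    .bind(role)
    .all();
  return results;
}
```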

Schema Management and Migrations

Create migration files

# Create a new migration
npx wrangler d1 migrations create my-database create-users-table
# Creates: migrations/0001_create-users-table.sql

Write migrations

-- migrations/0001_create-users-table.sql
CREATE TABLE IF NOT EXISTS users (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  email TEXT NOT NULL UNIQUE,
  name TEXT NOT NULL,
  role TEXT NOT NULL DEFAULT 'user',
  created_at TEXT NOT NULL DEFAULT (datetime('now')),
  updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);

CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_role ON users(role);
-- migrations/0002_create-posts-table.sql
CREATE TABLE IF NOT EXISTS posts (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  title TEXT NOT NULL,
  slug TEXT NOT NULL UNIQUE,
  content TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'draft' CHECK(status IN ('draft', 'published', 'archived')),
  published_at TEXT,
  created_at TEXT NOT NULL DEFAULT (datetime('now'))
);

CREATE INDEX idx_posts_user_id ON posts(user_id);
CREATE INDEX idx_posts_slug ON posts(slug);
CREATE INDEX idx_posts_status ON posts(status);

Apply migrations

# Apply to local database (development)
npx wrangler d1 migrations apply my-database --local

# Apply to remote (production)
npx wrangler d1 migrations apply my-database --remote

# List applied migrations
npx wrangler d1 migrations list my-database --local

Queries

Prepared statements (always use these)

// Single row
const user = await env.DB.prepare(
  "SELECT * FROM users WHERE id = ?"
).bind(42).first();
// Returns: object | null

// All rows
const { results } = await env.DB.prepare(
  "SELECT * FROM users WHERE role = ?"
).bind("admin").all();
// results: array of objects

// Raw column values (arrays of values instead of objects)
const rows = await env.DB.prepare(
  "SELECT id, name FROM users LIMIT 10"
).raw();
// rows: array of [id, name] value arrays

// Run (for INSERT, UPDATE, DELETE — returns metadata, not rows)
const info = await env.DB.prepare(
  "INSERT INTO users (email, name) VALUES (?, ?)"
).bind("alice@example.com", "Alice").run();
// info.meta.changes: number of rows affected
// info.meta.last_row_id: ID of the inserted row
// info.meta.duration: query execution time in ms

Multiple bindings

const { results } = await env.DB.prepare(
  "SELECT * FROM posts WHERE user_id = ? AND status = ? ORDER BY created_at DESC LIMIT ?"
).bind(userId, "published", 10).all();

Batch operations

Batch sends multiple statements in a single round trip. All statements in a batch run inside an implicit transaction.

const results = await env.DB.batch([
  env.DB.prepare("INSERT INTO users (email, name) VALUES (?, ?)").bind("bob@example.com", "Bob"),
  env.DB.prepare("INSERT INTO users (email, name) VALUES (?, ?)").bind("carol@example.com", "Carol"),
  env.DB.prepare("SELECT * FROM users ORDER BY id DESC LIMIT 2"),
]);

// results[0] — result of first INSERT
// results[1] — result of second INSERT
// results[2].results — array of the two new users

Transactions with batch

// Transfer funds atomically. D1 does not expose BEGIN/COMMIT from Workers;
// batch() is the transaction primitive. This sketch assumes accounts.balance
// carries a CHECK (balance >= 0) constraint, so an overdraft makes the debit
// statement fail and rolls back the entire batch.
async function transfer(env: Env, fromId: number, toId: number, amount: number) {
  try {
    await env.DB.batch([
      env.DB.prepare(
        "UPDATE accounts SET balance = balance - ? WHERE id = ?"
      ).bind(amount, fromId),
      env.DB.prepare(
        "UPDATE accounts SET balance = balance + ? WHERE id = ?"
      ).bind(amount, toId),
      env.DB.prepare(
        "INSERT INTO transfers (from_id, to_id, amount) VALUES (?, ?, ?)"
      ).bind(fromId, toId, amount),
    ]);
  } catch {
    // A failed CHECK (or any other error) aborts and rolls back the whole batch
    throw new Error("Transfer failed — insufficient funds?");
  }
}

Note: checking meta.changes after batch() returns is too late to undo anything — by then the transaction has already committed.

CRUD Patterns

Full resource CRUD

// CREATE
async function createPost(env: Env, data: { userId: number; title: string; content: string }) {
  const slug = data.title.toLowerCase().replace(/[^a-z0-9]+/g, "-");
  const result = await env.DB.prepare(
    "INSERT INTO posts (user_id, title, slug, content) VALUES (?, ?, ?, ?)"
  ).bind(data.userId, data.title, slug, data.content).run();
  return result.meta.last_row_id;
}

// READ with pagination
async function listPosts(env: Env, page: number, perPage: number = 20) {
  const offset = (page - 1) * perPage;

  const [countResult, postsResult] = await env.DB.batch([
    env.DB.prepare("SELECT COUNT(*) as total FROM posts WHERE status = 'published'"),
    env.DB.prepare(
      `SELECT p.*, u.name as author_name
       FROM posts p
       JOIN users u ON p.user_id = u.id
       WHERE p.status = 'published'
       ORDER BY p.published_at DESC
       LIMIT ? OFFSET ?`
    ).bind(perPage, offset),
  ]);

  const total = countResult.results[0].total as number;
  return {
    posts: postsResult.results,
    pagination: {
      page,
      perPage,
      total,
      totalPages: Math.ceil(total / perPage),
    },
  };
}

// UPDATE
async function updatePost(env: Env, id: number, data: { title?: string; content?: string }) {
  const fields: string[] = [];
  const values: unknown[] = [];

  if (data.title !== undefined) { fields.push("title = ?"); values.push(data.title); }
  if (data.content !== undefined) { fields.push("content = ?"); values.push(data.content); }

  if (fields.length === 0) return;

  fields.push("updated_at = datetime('now')");
  values.push(id);

  await env.DB.prepare(
    `UPDATE posts SET ${fields.join(", ")} WHERE id = ?`
  ).bind(...values).run();
}

// DELETE
async function deletePost(env: Env, id: number) {
  const result = await env.DB.prepare("DELETE FROM posts WHERE id = ?").bind(id).run();
  return result.meta.changes > 0;
}
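One CRUD pattern not covered above is an upsert. SQLite's ON CONFLICT clause works in D1; here is a sketch assuming the UNIQUE constraint on users.email from the migrations above. The helper just builds the SQL and parameters so the query itself is testable in isolation:

```typescript
// Build an upsert against users(email UNIQUE): insert a new row, or update
// name and updated_at if the email already exists. "excluded" refers to the
// row that failed to insert.
function upsertUserQuery(email: string, name: string): { sql: string; params: string[] } {
  const sql =
    "INSERT INTO users (email, name) VALUES (?, ?) " +
    "ON CONFLICT(email) DO UPDATE SET name = excluded.name, updated_at = datetime('now')";
  return { sql, params: [email, name] };
}

// Usage: const q = upsertUserQuery(email, name);
//        await env.DB.prepare(q.sql).bind(...q.params).run();
```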

Local Development

# Local D1 is automatic with wrangler dev
npx wrangler dev

# Execute SQL against local database
npx wrangler d1 execute my-database --local --command "SELECT * FROM users"

# Execute a SQL file locally
npx wrangler d1 execute my-database --local --file ./seed.sql

# Export local database
npx wrangler d1 export my-database --local --output ./backup.sql

Backups and Export

# Point-in-time recovery via Time Travel (30-day history)
npx wrangler d1 time-travel info my-database

# Restore the database to an earlier state
npx wrangler d1 time-travel restore my-database --timestamp=2024-01-15T00:00:00Z

# Export to SQL file
npx wrangler d1 export my-database --remote --output ./export.sql

Performance Optimization

Indexing strategy

-- Index columns used in WHERE, JOIN, and ORDER BY
CREATE INDEX idx_posts_user_status ON posts(user_id, status);

-- Partial index — only indexes published rows, keeping the index small
CREATE INDEX idx_posts_listing ON posts(status, published_at DESC)
  WHERE status = 'published';

-- Check query plan
EXPLAIN QUERY PLAN SELECT * FROM posts WHERE status = 'published' ORDER BY published_at DESC;

Query optimization tips

// Use batch() to reduce round trips
// BAD: 3 separate round trips
const user = await env.DB.prepare("SELECT...").first();
const posts = await env.DB.prepare("SELECT...").all();
const count = await env.DB.prepare("SELECT COUNT...").first();

// GOOD: 1 round trip
const [user, posts, count] = await env.DB.batch([
  env.DB.prepare("SELECT * FROM users WHERE id = ?").bind(userId),
  env.DB.prepare("SELECT * FROM posts WHERE user_id = ?").bind(userId),
  env.DB.prepare("SELECT COUNT(*) as c FROM posts WHERE user_id = ?").bind(userId),
]);

Avoid common pitfalls

// NEVER interpolate values into SQL — always use bind()
// BAD — SQL injection risk
await env.DB.prepare(`SELECT * FROM users WHERE id = ${userId}`).all();

// GOOD
await env.DB.prepare("SELECT * FROM users WHERE id = ?").bind(userId).all();

// Use .first() when you expect one row (avoids allocating an array)
const user = await env.DB.prepare("SELECT * FROM users WHERE id = ?").bind(1).first();

// Use LIMIT to avoid reading entire tables
const recent = await env.DB.prepare(
  "SELECT * FROM posts ORDER BY created_at DESC LIMIT 50"
).all();
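The OFFSET pagination shown in the CRUD section scans and discards every skipped row, which gets slow on deep pages. Keyset (cursor) pagination avoids that; a sketch, assuming posts are cursored by their monotonically increasing id:

```typescript
// Build a keyset-paginated query: pass the last id seen on the previous page
// (or null for the first page). The WHERE id < ? predicate uses the primary
// key index instead of OFFSET's scan-and-discard.
function pageQuery(afterId: number | null, pageSize: number): { sql: string; params: number[] } {
  if (afterId === null) {
    return { sql: "SELECT * FROM posts ORDER BY id DESC LIMIT ?", params: [pageSize] };
  }
  return {
    sql: "SELECT * FROM posts WHERE id < ? ORDER BY id DESC LIMIT ?",
    params: [afterId, pageSize],
  };
}

// Usage: const q = pageQuery(lastSeenId, 20);
//        const { results } = await env.DB.prepare(q.sql).bind(...q.params).all();
```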

Database Size and Limits

| Resource | Limit |
| --- | --- |
| Max database size | 10 GB |
| Max databases per account | 50,000 |
| Max bound databases per Worker | 40 |
| Max query rows returned | 100,000 |
| Max query size | 100 KB |
| Max batch count | 100 statements |

Troubleshooting

  • "D1_ERROR: no such table" — Run migrations with wrangler d1 migrations apply.
  • Slow queries — Add indexes, use EXPLAIN QUERY PLAN, reduce row scanning with WHERE clauses and LIMIT.
  • Stale reads — Reads from replicas can lag slightly behind the primary. If you need read-your-own-writes consistency, use D1's Sessions API (e.g. env.DB.withSession("first-primary")) to route reads to the primary, or use the data returned by the write itself.
  • "too many SQL variables" — SQLite limits bound parameters to 100. Chunk large IN clauses.
