
Cloudflare D1

Cloudflare D1 for running SQLite databases at the edge with SQL query support

Quick Summary
You are an expert in Cloudflare D1 for building edge-first applications with relational SQL databases that run close to users.

## Key Points

- **Engine**: SQLite (with WAL mode)
- **Max database size**: 10 GB (paid), 500 MB (free)
- **Max rows read per query**: 1,000,000
- **Max rows written per query**: 100,000
- **Consistency**: Strong consistency for writes (routed to primary), eventual consistency for reads (served from nearest replica)
- **Pricing**: Based on rows read and rows written
- **Always use parameterized queries** — never interpolate user input into SQL strings. D1's `.bind()` prevents SQL injection.
- **Use `batch()` for transactions** — D1 does not support `BEGIN`/`COMMIT` directly. Batch multiple statements for atomic execution.
- **Create indexes for filtered/sorted columns** — D1 uses SQLite's query planner. Missing indexes cause full table scans.
- **Use `--local` during development** — `wrangler d1 execute --local` and `wrangler dev` use a local SQLite file, avoiding writes to production.
- **Prefer `first()` for single-row queries** — it's more efficient and returns `null` for no match rather than an empty array.
- **Keep schemas normalized** — D1 handles joins efficiently; avoid denormalization patterns carried over from NoSQL habits.

## Quick Example

```toml
# wrangler.toml
[[d1_databases]]
binding = "DB"
database_name = "my-app-db"
database_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```

```bash
# Apply migrations
wrangler d1 migrations apply my-app-db

# Apply to local dev
wrangler d1 migrations apply my-app-db --local
```

Cloudflare D1 — Edge Computing

You are an expert in Cloudflare D1 for building edge-first applications with relational SQL databases that run close to users.

Overview

Cloudflare D1 is a serverless SQLite database built on Cloudflare's global network. It provides full SQL support via SQLite's engine, automatic read replication to edge locations, and seamless integration with Cloudflare Workers. D1 is ideal for applications that need relational queries, joins, and transactions at the edge without managing database infrastructure.

Key characteristics:

  • Engine: SQLite (with WAL mode)
  • Max database size: 10 GB (paid), 500 MB (free)
  • Max rows read per query: 1,000,000
  • Max rows written per query: 100,000
  • Consistency: Strong consistency for writes (routed to primary), eventual consistency for reads (served from nearest replica)
  • Pricing: Based on rows read and rows written

Core Concepts

Binding Configuration

```toml
# wrangler.toml
[[d1_databases]]
binding = "DB"
database_name = "my-app-db"
database_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```

Creating and Managing Databases

```bash
# Create a database
wrangler d1 create my-app-db

# Run SQL directly
wrangler d1 execute my-app-db --command "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT UNIQUE)"

# Run SQL from a file
wrangler d1 execute my-app-db --file schema.sql

# Run against local dev database
wrangler d1 execute my-app-db --local --file seed.sql
```

Basic Queries

```typescript
interface Env {
  DB: D1Database;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // SELECT multiple rows
    const { results } = await env.DB.prepare("SELECT id, name, email FROM users WHERE active = ?")
      .bind(1)
      .all<{ id: number; name: string; email: string }>();

    // SELECT single row
    const user = await env.DB.prepare("SELECT * FROM users WHERE id = ?")
      .bind(42)
      .first<{ id: number; name: string; email: string }>();

    // INSERT
    const insertResult = await env.DB.prepare("INSERT INTO users (name, email) VALUES (?, ?)")
      .bind("Alice", "alice@example.com")
      .run();
    // insertResult.meta.last_row_id, insertResult.meta.changes

    // UPDATE
    const updateResult = await env.DB.prepare("UPDATE users SET name = ? WHERE id = ?")
      .bind("Alice Smith", 42)
      .run();

    // DELETE
    await env.DB.prepare("DELETE FROM users WHERE id = ?").bind(42).run();

    return Response.json(results);
  },
};
```

Batch Queries (Transactions)

D1 executes batched statements as a single transaction:

```typescript
const statements = [
  env.DB.prepare("INSERT INTO orders (user_id, total) VALUES (?, ?)").bind(1, 59.99),
  env.DB.prepare("INSERT INTO order_items (order_id, product_id, qty) VALUES (last_insert_rowid(), ?, ?)").bind(101, 2),
  env.DB.prepare("UPDATE inventory SET stock = stock - ? WHERE product_id = ?").bind(2, 101),
];

const results = await env.DB.batch(statements);
// All succeed or all fail — transactional semantics
```
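If any statement in the batch fails, D1 rolls the whole transaction back and `batch()` rejects, so nothing is partially written. A hedged sketch of handling that (`runBatch` is an illustrative helper; the structural `BatchDB` type stands in for `@cloudflare/workers-types`):

```typescript
// Structural stand-in for D1Database.batch(); the real types come from
// @cloudflare/workers-types.
interface BatchDB {
  batch(statements: unknown[]): Promise<{ success: boolean }[]>;
}

// If any statement fails, D1 rolls the whole batch back and batch()
// rejects, so no partial writes are left behind.
export async function runBatch(db: BatchDB, statements: unknown[]): Promise<boolean> {
  try {
    const results = await db.batch(statements);
    // One D1Result per statement, in submission order.
    return results.every((r) => r.success);
  } catch {
    return false; // transaction rolled back
  }
}
```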

Implementation Patterns

Schema Migrations

Create a migrations/ directory and apply them in order:

```sql
-- migrations/0001_init.sql
CREATE TABLE IF NOT EXISTS users (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  name TEXT NOT NULL,
  email TEXT UNIQUE NOT NULL,
  created_at TEXT DEFAULT (datetime('now')),
  updated_at TEXT DEFAULT (datetime('now'))
);

CREATE TABLE IF NOT EXISTS posts (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  user_id INTEGER NOT NULL REFERENCES users(id),
  title TEXT NOT NULL,
  body TEXT NOT NULL,
  published INTEGER DEFAULT 0,
  created_at TEXT DEFAULT (datetime('now'))
);

CREATE INDEX idx_posts_user_id ON posts(user_id);
CREATE INDEX idx_posts_published ON posts(published);
```

```bash
# Apply migrations
wrangler d1 migrations apply my-app-db

# Apply to local dev
wrangler d1 migrations apply my-app-db --local
```

REST API with D1

```typescript
interface Env {
  DB: D1Database;
}

async function handleGetUsers(env: Env, url: URL): Promise<Response> {
  const page = parseInt(url.searchParams.get("page") ?? "1");
  const limit = Math.min(parseInt(url.searchParams.get("limit") ?? "20"), 100);
  const offset = (page - 1) * limit;

  const [countResult, { results }] = await Promise.all([
    env.DB.prepare("SELECT COUNT(*) as total FROM users").first<{ total: number }>(),
    env.DB.prepare("SELECT id, name, email, created_at FROM users ORDER BY created_at DESC LIMIT ? OFFSET ?")
      .bind(limit, offset)
      .all<{ id: number; name: string; email: string; created_at: string }>(),
  ]);

  return Response.json({
    data: results,
    pagination: {
      page,
      limit,
      total: countResult?.total ?? 0,
      pages: Math.ceil((countResult?.total ?? 0) / limit),
    },
  });
}

async function handleCreateUser(request: Request, env: Env): Promise<Response> {
  const body = await request.json<{ name: string; email: string }>();

  if (!body.name || !body.email) {
    return Response.json({ error: "name and email are required" }, { status: 400 });
  }

  try {
    const result = await env.DB.prepare("INSERT INTO users (name, email) VALUES (?, ?)")
      .bind(body.name, body.email)
      .run();

    return Response.json({ id: result.meta.last_row_id }, { status: 201 });
  } catch (err: any) {
    if (err.message?.includes("UNIQUE constraint failed")) {
      return Response.json({ error: "Email already exists" }, { status: 409 });
    }
    throw err;
  }
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    if (url.pathname === "/api/users" && request.method === "GET") {
      return handleGetUsers(env, url);
    }
    if (url.pathname === "/api/users" && request.method === "POST") {
      return handleCreateUser(request, env);
    }

    return new Response("Not Found", { status: 404 });
  },
};
```
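Note that `LIMIT ? OFFSET ?` still reads and discards every skipped row, which counts against the rows-read limit and billing on deep pages. A keyset (cursor) variant is worth considering for large tables — a sketch, with `listUsersAfter` as an illustrative helper and structural types standing in for `@cloudflare/workers-types` (SQLite supports row-value comparisons like `(a, b) < (?, ?)`):

```typescript
// Keyset pagination sketch: page by a (created_at, id) cursor instead of
// OFFSET, so each page reads only the rows it returns.
interface Stmt {
  bind(...values: unknown[]): Stmt;
  all<T>(): Promise<{ results: T[] }>;
}
interface DB {
  prepare(query: string): Stmt;
}

interface UserRow {
  id: number;
  name: string;
  created_at: string;
}

export async function listUsersAfter(
  db: DB,
  cursor: { created_at: string; id: number } | null,
  limit = 20
): Promise<UserRow[]> {
  // The (created_at, id) tuple keeps ordering stable when timestamps collide.
  const sql = cursor
    ? `SELECT id, name, created_at FROM users
       WHERE (created_at, id) < (?, ?)
       ORDER BY created_at DESC, id DESC LIMIT ?`
    : `SELECT id, name, created_at FROM users
       ORDER BY created_at DESC, id DESC LIMIT ?`;

  const stmt = cursor
    ? db.prepare(sql).bind(cursor.created_at, cursor.id, limit)
    : db.prepare(sql).bind(limit);

  const { results } = await stmt.all<UserRow>();
  return results; // the last row becomes the cursor for the next page
}
```

An index on `(created_at, id)` lets the query planner seek directly to the cursor position.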

Full-Text Search

SQLite FTS5 is available in D1:

```sql
-- Create FTS table
CREATE VIRTUAL TABLE posts_fts USING fts5(title, body, content=posts, content_rowid=id);

-- Populate FTS index
INSERT INTO posts_fts(posts_fts) VALUES('rebuild');

-- Keep FTS in sync with triggers
CREATE TRIGGER posts_ai AFTER INSERT ON posts BEGIN
  INSERT INTO posts_fts(rowid, title, body) VALUES (new.id, new.title, new.body);
END;

CREATE TRIGGER posts_ad AFTER DELETE ON posts BEGIN
  INSERT INTO posts_fts(posts_fts, rowid, title, body) VALUES('delete', old.id, old.title, old.body);
END;

CREATE TRIGGER posts_au AFTER UPDATE ON posts BEGIN
  INSERT INTO posts_fts(posts_fts, rowid, title, body) VALUES('delete', old.id, old.title, old.body);
  INSERT INTO posts_fts(rowid, title, body) VALUES (new.id, new.title, new.body);
END;
```

```typescript
async function searchPosts(env: Env, query: string): Promise<Response> {
  const { results } = await env.DB.prepare(
    `SELECT p.id, p.title, snippet(posts_fts, 1, '<mark>', '</mark>', '...', 32) as excerpt,
            rank
     FROM posts_fts
     JOIN posts p ON p.id = posts_fts.rowid
     WHERE posts_fts MATCH ?
     ORDER BY rank
     LIMIT 20`
  )
    .bind(query)
    .all();

  return Response.json(results);
}
```
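One caveat: FTS5 treats the `MATCH` argument as its own query language (`AND`, `OR`, `NEAR`, quotes, `*`), so raw user input such as `don't` or an unbalanced `"` throws a syntax error. A small helper — `escapeFtsQuery` is an illustrative name, and the quoting rule (double any embedded double quotes) follows FTS5's string syntax — turns user input into literal terms:

```typescript
// Escape user input for FTS5 MATCH: wrap each whitespace-separated token
// in double quotes, doubling embedded double quotes, so FTS5 treats the
// tokens as literal terms rather than query operators.
export function escapeFtsQuery(input: string): string {
  return input
    .trim()
    .split(/\s+/)
    .filter((token) => token.length > 0)
    .map((token) => `"${token.replace(/"/g, '""')}"`)
    .join(" ");
}
```

Use it at the bind site: `.bind(escapeFtsQuery(query))`.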

Using D1 with an ORM (Drizzle)

```typescript
import { drizzle } from "drizzle-orm/d1";
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";
import { eq } from "drizzle-orm";

const users = sqliteTable("users", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  name: text("name").notNull(),
  email: text("email").notNull().unique(),
});

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const db = drizzle(env.DB);

    const allUsers = await db.select().from(users).where(eq(users.name, "Alice"));

    return Response.json(allUsers);
  },
};
```

Best Practices

  • Always use parameterized queries — never interpolate user input into SQL strings. D1's .bind() prevents SQL injection.
  • Use batch() for transactions — D1 does not support BEGIN/COMMIT directly. Batch multiple statements for atomic execution.
  • Create indexes for filtered/sorted columns — D1 uses SQLite's query planner. Missing indexes cause full table scans.
  • Use --local during development — wrangler d1 execute --local and wrangler dev use a local SQLite file, avoiding writes to production.
  • Prefer first() for single-row queries — it's more efficient and returns null for no match rather than an empty array.
  • Keep schemas normalized — D1 handles joins efficiently; avoid denormalization patterns carried over from NoSQL habits.
  • Run migrations in CI/CD — use wrangler d1 migrations apply in your deployment pipeline, not manually.

Common Pitfalls

  • Treating D1 as a NoSQL store — D1 is SQLite. Use proper relational schema design with foreign keys, indexes, and normalized tables.
  • Exceeding row limits in a single query — Queries scanning more than 1M rows or writing more than 100K rows will fail. Use pagination and batch processing.
  • Ignoring eventual consistency for reads — Read replicas may serve slightly stale data. If you need read-after-write consistency, use the Sessions API (env.DB.withSession("first-primary")) so the session's first query is served by the primary.
  • Using SELECT * in production — Always specify column names to avoid pulling unnecessary data and to make your code resilient to schema changes.
  • Not handling UNIQUE constraint errors — Insert operations on columns with UNIQUE constraints throw errors on duplicates. Always catch and handle UNIQUE constraint failed errors.
  • Forgetting that SQLite types are flexible — SQLite uses type affinity, not strict types. A column declared as INTEGER can store text unless you use STRICT tables. Validate data in your application layer.
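The type-affinity pitfall above can also be addressed at the schema level with SQLite STRICT tables, which the engine underlying D1 supports — a sketch (the accounts table is illustrative):

```sql
-- STRICT makes SQLite reject values that do not match the declared
-- column type, instead of silently applying type affinity.
CREATE TABLE accounts (
  id INTEGER PRIMARY KEY,
  email TEXT NOT NULL,
  balance_cents INTEGER NOT NULL DEFAULT 0
) STRICT;

-- INSERT INTO accounts (email, balance_cents) VALUES ('a@b.c', 'lots');
-- would fail: TEXT cannot be stored in an INTEGER column of a STRICT table.
```

Application-layer validation is still worthwhile; STRICT just removes one class of silent corruption.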

Core Philosophy

D1 is a relational database, and it deserves relational thinking. Design proper schemas with foreign keys, indexes, and normalized tables. The fact that D1 runs at the edge does not make it a key-value store or a document database. SQLite's query planner is sophisticated — give it proper indexes and it will serve your queries efficiently. Skip the indexes and every query becomes a full table scan that hits row limits.

Parameterized queries are non-negotiable. D1's .bind() method exists to prevent SQL injection, and there is never a valid reason to interpolate user input into SQL strings. This is not just a security practice — it also enables SQLite's prepared statement caching, improving performance for repeated queries.

Think of D1 as a globally distributed read cache with a single-region write primary. Reads are fast from any edge location because they hit nearby replicas. Writes are routed to the primary and may take a moment to propagate. Design your application to tolerate slightly stale reads, and use strong consistency only when read-after-write accuracy is critical.
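When read-after-write accuracy is critical, D1's Sessions API is the mechanism: a session started with the "first-primary" constraint routes its first query to the primary. A sketch — `renameAndRead` is an illustrative helper, and the structural interfaces stand in for `@cloudflare/workers-types`:

```typescript
// Minimal structural types for this sketch; the real ones come from
// @cloudflare/workers-types.
interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement;
  run(): Promise<unknown>;
  first<T>(): Promise<T | null>;
}
interface D1Session {
  prepare(query: string): D1PreparedStatement;
}
interface D1DatabaseLike {
  withSession(constraint?: "first-primary" | "first-unconstrained"): D1Session;
}

// "first-primary" routes the session's first query to the primary, so a
// read issued right after a write observes that write.
export async function renameAndRead(db: D1DatabaseLike, userId: number) {
  const session = db.withSession("first-primary");

  await session
    .prepare("UPDATE users SET name = ? WHERE id = ?")
    .bind("Alice Smith", userId)
    .run();

  // Within this session, guaranteed to see the update above.
  return session
    .prepare("SELECT id, name FROM users WHERE id = ?")
    .bind(userId)
    .first<{ id: number; name: string }>();
}
```

Unconstrained sessions keep the latency benefit of nearby replicas; reserve "first-primary" for the paths that genuinely need it.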

Anti-Patterns

  • String-interpolating user input into SQL — constructing queries with template literals instead of .bind() opens the door to SQL injection and bypasses prepared statement caching.

  • Using SELECT * in production queries — pulling all columns wastes bandwidth, breaks when the schema changes, and makes it impossible to add columns without updating every query.

  • Treating D1 as a key-value store — storing JSON blobs in a single column instead of using proper normalized tables with foreign keys wastes D1's relational capabilities and makes queries slow and rigid.

  • Running migrations manually — applying schema changes by hand in production is error-prone and unreproducible; use wrangler d1 migrations apply in your deployment pipeline.

  • Ignoring eventual consistency for reads — assuming a write is immediately visible from all edge locations leads to stale-data bugs; use strong consistency when read-after-write accuracy matters.
