Cloudflare Workers KV — Edge Computing

You are an expert in Cloudflare Workers KV for building edge-first applications that leverage globally distributed key-value storage.

Overview

Cloudflare Workers KV is an eventually consistent, globally distributed key-value store designed for high-read, low-write workloads at the edge. Data written to KV propagates to all of Cloudflare's 300+ edge locations, enabling low-latency reads from the nearest PoP (hot keys are served from local cache). KV is ideal for configuration, feature flags, static asset metadata, and cached API responses.

Key characteristics:

  • Consistency model: Eventually consistent (writes propagate globally within ~60 seconds)
  • Key size: 512 bytes max
  • Value size: 25 MB max
  • Metadata: 1024 bytes of JSON metadata per key
  • TTL: Optional expiration (minimum 60 seconds)
  • Operations: get, put, delete, list

Core Concepts

Binding Configuration

# wrangler.toml
[[kv_namespaces]]
binding = "MY_KV"
id = "abc123def456"

# For local development
[[kv_namespaces]]
binding = "MY_KV"
id = "abc123def456"
preview_id = "preview789"

Basic Operations

interface Env {
  MY_KV: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Write a value
    await env.MY_KV.put("user:123", JSON.stringify({ name: "Alice", role: "admin" }));

    // Write with TTL (seconds)
    await env.MY_KV.put("session:abc", "token-data", { expirationTtl: 3600 });

    // Write with exact expiration (Unix timestamp)
    await env.MY_KV.put("promo:summer", "active", {
      expiration: Math.floor(Date.now() / 1000) + 86400,
    });

    // Write with metadata
    await env.MY_KV.put("asset:logo.png", imageBuffer, {
      metadata: { contentType: "image/png", version: 3 },
    });

    // Read as text (default)
    const text = await env.MY_KV.get("user:123");

    // Read as JSON
    const user = await env.MY_KV.get<{ name: string; role: string }>("user:123", "json");

    // Read as ArrayBuffer
    const binary = await env.MY_KV.get("asset:logo.png", "arrayBuffer");

    // Read as ReadableStream
    const stream = await env.MY_KV.get("large-file", "stream");

    // Read with metadata
    const { value, metadata } = await env.MY_KV.getWithMetadata<{ contentType: string }>(
      "asset:logo.png",
      "arrayBuffer"
    );

    // Delete
    await env.MY_KV.delete("session:abc");

    // List keys
    const listed = await env.MY_KV.list({ prefix: "user:", limit: 100 });
    // listed.keys = [{ name: "user:123", expiration?: number, metadata?: unknown }]
    // listed.list_complete = true/false
    // listed.cursor = "..." (for pagination)

    return new Response("OK");
  },
};

Pagination

async function listAllKeys(kv: KVNamespace, prefix: string): Promise<string[]> {
  const keys: string[] = [];
  let cursor: string | undefined;

  do {
    const result = await kv.list({ prefix, cursor, limit: 1000 });
    keys.push(...result.keys.map((k) => k.name));
    cursor = result.list_complete ? undefined : result.cursor;
  } while (cursor);

  return keys;
}

Implementation Patterns

Configuration Store

interface FeatureFlags {
  darkMode: boolean;
  newCheckout: boolean;
  maxUploadSizeMb: number;
}

async function getFeatureFlags(kv: KVNamespace): Promise<FeatureFlags> {
  const flags = await kv.get<FeatureFlags>("config:feature-flags", "json");

  return flags ?? {
    darkMode: false,
    newCheckout: false,
    maxUploadSizeMb: 10,
  };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const flags = await getFeatureFlags(env.MY_KV);

    if (!flags.newCheckout) {
      return fetch("https://legacy-checkout.example.com" + new URL(request.url).pathname);
    }

    return new Response("New checkout", { status: 200 });
  },
};

Cache-Aside Pattern

Use KV as a cache layer in front of an origin API:

interface CacheEntry<T> {
  data: T;
  cachedAt: number;
}

async function cachedFetch<T>(
  kv: KVNamespace,
  cacheKey: string,
  originUrl: string,
  ttlSeconds: number
): Promise<T> {
  // Try cache first
  const cached = await kv.get<CacheEntry<T>>(cacheKey, "json");

  if (cached) {
    return cached.data;
  }

  // Fetch from origin
  const response = await fetch(originUrl);
  if (!response.ok) {
    throw new Error(`Origin returned ${response.status}`);
  }
  const data: T = await response.json();

  // Write to cache (awaited here; could instead be deferred with ctx.waitUntil)
  await kv.put(
    cacheKey,
    JSON.stringify({ data, cachedAt: Date.now() }),
    { expirationTtl: ttlSeconds }
  );

  return data;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const productId = url.searchParams.get("id");

    if (!productId) {
      return new Response("Missing id", { status: 400 });
    }

    const product = await cachedFetch(
      env.MY_KV,
      `cache:product:${productId}`,
      `https://api.example.com/products/${productId}`,
      300 // 5 minute TTL
    );

    return Response.json(product);
  },
};

Static Site Hosting

interface AssetMetadata {
  contentType: string;
  etag: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const path = url.pathname === "/" ? "/index.html" : url.pathname;

    const { value, metadata } = await env.MY_KV.getWithMetadata<AssetMetadata>(
      `site:${path}`,
      "arrayBuffer"
    );

    if (!value || !metadata) {
      return new Response("Not Found", { status: 404 });
    }

    // Handle conditional requests
    if (request.headers.get("if-none-match") === metadata.etag) {
      return new Response(null, { status: 304 });
    }

    return new Response(value, {
      headers: {
        "content-type": metadata.contentType,
        etag: metadata.etag,
        "cache-control": "public, max-age=3600",
      },
    });
  },
};

Bulk Write via the REST API

For migrating data or bulk updates, use the Cloudflare API rather than writing from Workers:

curl -X PUT \
  "https://api.cloudflare.com/client/v4/accounts/{account_id}/storage/kv/namespaces/{namespace_id}/bulk" \
  -H "Authorization: Bearer ${CF_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '[
    {"key": "user:1", "value": "{\"name\":\"Alice\"}", "expiration_ttl": 86400},
    {"key": "user:2", "value": "{\"name\":\"Bob\"}"}
  ]'
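
The same bulk endpoint can also be called from a migration script. A minimal TypeScript sketch, assuming placeholder credentials and illustrative helper names (not part of any SDK):

```typescript
// Entry shape accepted by the KV bulk-write endpoint.
type BulkEntry = { key: string; value: string; expiration_ttl?: number };

// Serialize entries into the JSON array the endpoint expects.
function bulkPayload(entries: BulkEntry[]): string {
  return JSON.stringify(entries);
}

// PUT the batch to the REST API; throws on a non-2xx response.
async function bulkPut(
  accountId: string,
  namespaceId: string,
  apiToken: string,
  entries: BulkEntry[]
): Promise<void> {
  const url =
    `https://api.cloudflare.com/client/v4/accounts/${accountId}` +
    `/storage/kv/namespaces/${namespaceId}/bulk`;
  const res = await fetch(url, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${apiToken}`,
      "Content-Type": "application/json",
    },
    body: bulkPayload(entries),
  });
  if (!res.ok) {
    throw new Error(`Bulk write failed: ${res.status}`);
  }
}
```

A single bulk call accepts up to thousands of entries, which is far cheaper than issuing individual `put` calls from a Worker.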

Best Practices

  • Design for eventual consistency — KV is not suitable for data where two concurrent writes must not conflict (use Durable Objects for that).
  • Use metadata for secondary information — store content types, ETags, or version numbers in metadata rather than encoding them in the value.
  • Set TTLs on ephemeral data — sessions, caches, and rate-limit counters should always have an expiration to avoid unbounded namespace growth.
  • Batch reads with Promise.all — KV supports concurrent reads within a single Worker invocation; fan out reads when you need multiple keys.
  • Key naming convention — use colon-delimited prefixes (user:123, config:flags) to enable efficient list with prefix filtering.
  • Use the cacheTtl option on reads — kv.get(key, { cacheTtl: 60 }) tells the runtime to cache the value in the local edge PoP for up to 60 seconds, reducing KV read units.
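
The batching and cacheTtl points above can be sketched together. This assumes a user: key scheme and uses a structural stand-in for the binding type (the real KVNamespace type comes from @cloudflare/workers-types):

```typescript
// Structural stand-in for the KV binding's read surface.
type KV = {
  get(key: string, opts?: { cacheTtl?: number }): Promise<string | null>;
};

// Build colon-prefixed keys from raw ids.
function userKeys(ids: string[]): string[] {
  return ids.map((id) => `user:${id}`);
}

// Fan out the reads concurrently; cacheTtl lets the local PoP serve
// repeat reads of the same key for up to 60 seconds.
async function getUsers(kv: KV, ids: string[]): Promise<(string | null)[]> {
  return Promise.all(userKeys(ids).map((key) => kv.get(key, { cacheTtl: 60 })));
}
```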

Common Pitfalls

  • Using KV for high-write workloads — KV is optimized for reads. Writes to the same key are limited to roughly one per second, and concurrent writes to the same key resolve last-writer-wins, so updates can be silently overwritten.
  • Expecting immediate read-after-write globally — A write in one region may take up to 60 seconds to propagate. Design your application to tolerate stale reads from other regions.
  • Listing keys as a query mechanism — list is O(n) and scans sequentially. It is not a substitute for a database query. Keep key structures flat and predictable.
  • Storing large blobs without streaming — Values up to 25 MB are supported, but reading them as text or JSON buffers the entire value in memory. Use "stream" type for large values.
  • Forgetting the 1000-key limit per list call — Always implement cursor-based pagination when listing keys.
  • Not using preview_id for dev/staging — Without separate preview namespaces, wrangler dev will read/write to production data.

Core Philosophy

KV is a read-optimized, eventually consistent store. Design for this reality rather than wishing it were a database. KV excels at serving configuration, cached API responses, static assets, and feature flags — data that is read thousands of times for every write. If your access pattern involves frequent writes to the same key or requires immediate read-after-write consistency, KV is the wrong tool; consider Durable Objects or D1.

Think of KV metadata as free structured context. Every key can carry up to 1024 bytes of JSON metadata alongside its value. Use this for content types, ETags, version numbers, and timestamps rather than encoding this information inside the value itself. Metadata is returned with getWithMetadata without parsing the full value, making it efficient for conditional logic.

Key naming is your schema. Use colon-delimited prefixes (user:123, config:flags, cache:product:42) consistently across your application. This convention enables efficient prefix-based listing, makes the namespace self-documenting, and allows administrative tools to reason about the data without understanding application internals.

Anti-Patterns

  • Using KV for high-frequency writes — writes to the same key are limited to roughly one per second, and concurrent writes resolve last-writer-wins; use Durable Objects for counters, locks, or any mutable state with high write frequency.

  • Relying on immediate read-after-write consistency — a value written in one region may take up to 60 seconds to propagate to all edge locations; design the application to tolerate stale reads or route through the primary.

  • Using list as a query mechanism — list scans keys sequentially and is O(n); it is not a substitute for database queries; keep key structures flat and predictable for efficient access.

  • Storing ephemeral data without TTL — sessions, rate-limit counters, and cache entries that never expire cause unbounded namespace growth and make cleanup impossible.

  • Buffering large values without streaming — reading 25MB values as text or JSON loads the entire payload into memory; use the "stream" type for large values to avoid hitting the 128MB memory limit.
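
For the streaming point, a minimal sketch (the key name is illustrative, and the structural type stands in for KVNamespace):

```typescript
// Structural stand-in for the KV binding's streaming read.
type KVStream = {
  get(key: string, type: "stream"): Promise<ReadableStream | null>;
};

// Pipe a large value straight to the client; the Worker never
// buffers the full payload in memory.
async function serveLargeValue(kv: KVStream, key: string): Promise<Response> {
  const stream = await kv.get(key, "stream");
  if (!stream) {
    return new Response("Not Found", { status: 404 });
  }
  return new Response(stream);
}
```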
