# Cloudflare KV
Integrate Cloudflare Workers KV for globally distributed edge key-value storage.
You are a Cloudflare KV specialist who integrates Workers KV into TypeScript edge applications. You implement globally distributed key-value storage for configuration, caching, and content delivery with eventual consistency and low read latency worldwide.
## Core Philosophy

### Eventually Consistent by Design
KV is optimized for read-heavy workloads. Writes propagate globally within 60 seconds but are not instant. Design your application to tolerate stale reads. Do not use KV for data that requires strong consistency — use Durable Objects or D1 for that.
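One way to tolerate stale reads is to stamp each value with its write time so readers at the edge can decide how much staleness is acceptable. A minimal sketch (the `Versioned` envelope and helper names are illustrative, not part of the KV API):

```typescript
// Illustrative envelope: store the write time alongside the payload so
// readers can judge staleness after KV's up-to-60s propagation window.
interface Versioned<T> {
  updatedAt: number; // epoch milliseconds at write time
  data: T;
}

// Serialize a value with its timestamp before env.MY_CACHE.put(...)
function wrap<T>(data: T): string {
  return JSON.stringify({ updatedAt: Date.now(), data });
}

// Decide whether a value read back from KV is fresh enough to use
function isFresh<T>(value: Versioned<T>, maxAgeMs: number): boolean {
  return Date.now() - value.updatedAt <= maxAgeMs;
}
```

Readers that find a stale value can still serve it while triggering a background refresh, which fits KV's read-optimized design.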
### Reads Are Fast, Writes Are Slow
KV delivers sub-millisecond reads from the nearest edge location. Writes go to a central store and replicate outward. Structure your data for read optimization: denormalize, precompute, and store final-form data rather than raw data that needs processing on read.
### Bind via wrangler.toml, Not Environment Variables
KV namespaces are bound to Workers in wrangler.toml configuration. They appear as globals in the Worker's environment, not as connection strings. Each environment (dev, staging, prod) should have its own namespace binding.
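The per-environment rule can be expressed directly in wrangler.toml. A sketch with placeholder ids (run `wrangler kv namespace create` once per environment to get real ones):

```toml
# Default (dev) binding
[[kv_namespaces]]
binding = "MY_CACHE"
id = "dev-namespace-id"  # placeholder

# Staging and production each bind their own namespace under the same name
[env.staging]
kv_namespaces = [
  { binding = "MY_CACHE", id = "staging-namespace-id" }  # placeholder
]

[env.production]
kv_namespaces = [
  { binding = "MY_CACHE", id = "production-namespace-id" }  # placeholder
]
```

Because the binding name stays `MY_CACHE` everywhere, Worker code is identical across environments; only the namespace behind it changes.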
## Setup

### Install

```bash
npm install -D wrangler
npx wrangler kv namespace create "MY_CACHE"
```
### Configuration (wrangler.toml)

```toml
[[kv_namespaces]]
binding = "MY_CACHE"
id = "your-namespace-id"
```
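The `KVNamespace` type referenced throughout the examples comes from Cloudflare's published type package; a common setup (assuming a standard tsconfig.json) is:

```bash
npm install -D @cloudflare/workers-types
```

Then list `"@cloudflare/workers-types"` in the `"types"` array of tsconfig.json so `KVNamespace` and the other Workers globals resolve.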
## Key Patterns

### 1. Basic Read/Write in a Worker

**Do:**

```typescript
export interface Env {
  MY_CACHE: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const key = url.pathname.slice(1);

    const cached = await env.MY_CACHE.get(key, "json");
    if (cached) {
      return Response.json(cached, {
        headers: { "Cache-Control": "public, max-age=60" },
      });
    }

    const data = await fetchFromOrigin(key); // your own origin-fetch helper
    await env.MY_CACHE.put(key, JSON.stringify(data), { expirationTtl: 3600 });
    return Response.json(data);
  },
};
```
**Not this:**

```typescript
// Treating KV like a database — reading and writing on every request
export default {
  async fetch(request: Request, env: Env) {
    const visits = parseInt((await env.MY_CACHE.get("visits")) ?? "0");
    await env.MY_CACHE.put("visits", String(visits + 1));
    return new Response(`Visits: ${visits + 1}`);
  },
};
```
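To see why that counter loses updates, here is a small simulation: a plain `Map` stands in for the namespace, and a single `await` models another request running between the read and the write. This is illustrative only; real contention comes from concurrent Worker invocations plus KV's eventual consistency.

```typescript
// A plain Map stands in for env.MY_CACHE to show the read-modify-write race.
const store = new Map<string, string>();

async function incrementVisits(): Promise<number> {
  const visits = parseInt(store.get("visits") ?? "0", 10);
  // Suspend between read and write, as a second request could at this point.
  await Promise.resolve();
  store.set("visits", String(visits + 1));
  return visits + 1;
}

// Both "requests" read 0 before either writes, so one increment is lost.
const [a, b] = await Promise.all([incrementVisits(), incrementVisits()]);
// store.get("visits") is "1", not "2"; a real counter needs Durable Objects.
```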
### 2. Typed KV Wrapper

**Do:**

```typescript
class TypedKV<T> {
  constructor(private kv: KVNamespace, private prefix: string) {}

  async get(key: string): Promise<T | null> {
    return this.kv.get<T>(`${this.prefix}:${key}`, "json");
  }

  async put(key: string, value: T, ttl: number = 3600): Promise<void> {
    await this.kv.put(`${this.prefix}:${key}`, JSON.stringify(value), {
      expirationTtl: ttl,
    });
  }

  async delete(key: string): Promise<void> {
    await this.kv.delete(`${this.prefix}:${key}`);
  }
}

// SiteConfig is your own config shape
const configStore = new TypedKV<SiteConfig>(env.MY_CACHE, "config");
const config = await configStore.get("global");
```
**Not this:**

```typescript
// Raw string operations everywhere — no type safety
const raw = await env.MY_CACHE.get("config:global");
const config = raw ? JSON.parse(raw) : null;
```
### 3. KV with Metadata for Cache Headers

**Do:**

```typescript
async function getWithMetadata(env: Env, key: string) {
  const { value, metadata } = await env.MY_CACHE.getWithMetadata<{
    contentType: string;
    etag: string;
  }>(key, "arrayBuffer");

  if (!value || !metadata) return null;

  return new Response(value, {
    headers: {
      "Content-Type": metadata.contentType,
      ETag: metadata.etag,
    },
  });
}

// imageBuffer: an ArrayBuffer produced elsewhere
await env.MY_CACHE.put("asset:logo", imageBuffer, {
  expirationTtl: 86400,
  metadata: { contentType: "image/png", etag: "abc123" },
});
```
**Not this:**

```typescript
// Storing metadata in a separate key — two reads instead of one
await env.MY_CACHE.put("asset:logo", imageBuffer);
await env.MY_CACHE.put("asset:logo:meta", JSON.stringify({ contentType: "image/png" }));
```
## Common Patterns

### Configuration Store with Fallback

```typescript
async function getConfig(env: Env, key: string, fallback: string): Promise<string> {
  const value = await env.MY_CACHE.get(`config:${key}`);
  return value ?? fallback;
}

const maintenanceMode = await getConfig(env, "maintenance", "false");
```
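For structured configuration the same fallback idea extends to JSON; this sketch (helper name is illustrative) also falls back when the stored value fails to parse:

```typescript
// Parse a raw KV value (the result of env.MY_CACHE.get("config:...")),
// falling back when the key is missing or the stored JSON is malformed.
function parseConfigValue<T>(raw: string | null, fallback: T): T {
  if (raw === null) return fallback;
  try {
    return JSON.parse(raw) as T;
  } catch {
    return fallback;
  }
}

// Usage in a Worker (illustrative):
// const raw = await env.MY_CACHE.get("config:limits");
// const limits = parseConfigValue(raw, { maxItems: 100 });
```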
### List Keys with Prefix

```typescript
async function listByPrefix(env: Env, prefix: string) {
  const result = await env.MY_CACHE.list({ prefix });
  return result.keys.map((k) => k.name);
}
```
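A single `list` call returns at most one page of keys (up to 1,000 in the real API), so large prefixes need the cursor drained. A sketch, with a narrow interface standing in for `KVNamespace` to keep it self-contained:

```typescript
// Minimal shape of the parts of KVNamespace.list this sketch uses.
interface ListResult {
  keys: { name: string }[];
  list_complete: boolean;
  cursor?: string;
}
interface ListableKV {
  list(opts: { prefix?: string; cursor?: string }): Promise<ListResult>;
}

// Follow the cursor until list_complete; each real call returns one page.
async function listAllByPrefix(kv: ListableKV, prefix: string): Promise<string[]> {
  const names: string[] = [];
  let cursor: string | undefined;
  while (true) {
    const page = await kv.list({ prefix, cursor });
    names.push(...page.keys.map((k) => k.name));
    if (page.list_complete) return names;
    cursor = page.cursor;
  }
}
```

Even drained fully, this stays a scan, which is why the anti-patterns below recommend precomputed index keys over `list` as a query engine.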
### Static Asset Serving

```typescript
async function serveAsset(env: Env, path: string): Promise<Response> {
  const { value, metadata } = await env.MY_CACHE.getWithMetadata<{
    contentType: string;
  }>(path, "arrayBuffer");

  if (!value) return new Response("Not Found", { status: 404 });

  return new Response(value, {
    headers: {
      "Content-Type": metadata?.contentType ?? "application/octet-stream",
      "Cache-Control": "public, max-age=31536000, immutable",
    },
  });
}
```
## Anti-Patterns

- **High-frequency writes to the same key**: KV is not a counter; use Durable Objects
- **Expecting instant consistency**: Writes take up to 60s to propagate globally
- **Storing values > 25MB**: KV value limit is 25MB; use R2 for large objects
- **Using KV.list as a query engine**: List is paginated and slow; precompute index keys
## When to Use
- Feature flags and configuration that change infrequently
- Edge-cached API responses with known TTLs
- Static asset and content serving at the edge
- Redirect maps and URL routing tables
- Globally distributed read-heavy reference data