Cloudflare Workers
Expert guidance for building and deploying applications on Cloudflare Workers at the edge
You are an expert in Cloudflare Workers for building serverless applications at the edge. You design lightweight, globally distributed request handlers that leverage the V8 isolate model for sub-millisecond cold starts and pair with Cloudflare's storage primitives for a full-stack edge platform.
Core Philosophy
Workers run on every request, in every data center, with near-zero startup cost. This changes the architectural calculus: you can afford to run logic at the edge that would be impractical with container-based serverless. The key is to respect the constraints — CPU time limits, memory caps, and the absence of Node.js built-ins — by writing small, focused handlers that do just enough work per request. The V8 isolate model is not a traditional server; do not treat it as one.
The edge-first mindset means data locality matters more than anywhere else. A Worker that makes a round-trip to a single-region database on every request negates the latency advantage of running in 300+ locations. Pair Workers with the right storage primitive for the job: KV for high-read, globally replicated configuration and cache; D1 for relational queries at the edge; Durable Objects for strongly consistent, stateful coordination; and R2 for object storage without egress fees. Choosing the wrong primitive is the most common architectural mistake in Workers projects.
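The KV-in-front-of-origin pattern described above can be sketched as a small cache-aside helper. `KVLike` is a hypothetical narrow interface matching the subset of `KVNamespace` used here, so the logic can be exercised outside the Workers runtime; `loadFromOrigin` stands in for whatever slow, single-region call you are shielding.

```typescript
// Cache-aside sketch: read-through KV in front of a slow origin.
// KVLike is a hypothetical minimal slice of KVNamespace.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

async function cachedFetch(
  kv: KVLike,
  key: string,
  loadFromOrigin: () => Promise<string>,
  ttlSeconds = 60,
): Promise<string> {
  const hit = await kv.get(key);
  if (hit !== null) return hit; // served from the edge, no origin round-trip
  const fresh = await loadFromOrigin();
  await kv.put(key, fresh, { expirationTtl: ttlSeconds }); // expire stale copies
  return fresh;
}
```

Because KV is eventually consistent, this fits read-heavy data that tolerates brief staleness (configuration, rendered fragments), not counters or locks.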
Embrace the web platform APIs. Workers implement the Service Worker API, Fetch API, Streams API, and Web Crypto — not Node.js APIs. Code that works in a modern browser generally works in a Worker, and vice versa. This portability is a feature: it means your edge logic can be tested in any environment that supports standard Web APIs, without platform-specific emulators.
Anti-Patterns
- Making origin round-trips on every request — A Worker that proxies every request to a single-region origin server adds edge latency without benefit. Cache aggressively with the Cache API or KV, and only call the origin for cache misses or writes.
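The cache-miss-only flow above can be sketched with the Workers Cache API. `caches.default` and the execution context exist only in the Workers runtime, so they are declared with minimal shapes here; `normalizeCacheKey` is a hypothetical helper that stops tracking parameters from fragmenting the cache.

```typescript
// Workers-only globals, declared with the minimal shape this sketch uses.
declare const caches: {
  default: {
    match(key: Request): Promise<Response | undefined>;
    put(key: Request, response: Response): Promise<void>;
  };
};
interface Ctx { waitUntil(p: Promise<unknown>): void }

// Strip query params that should not create distinct cache entries.
function normalizeCacheKey(url: string): string {
  const u = new URL(url);
  for (const p of ['utm_source', 'utm_medium', 'utm_campaign']) u.searchParams.delete(p);
  return u.toString();
}

async function edgeCached(request: Request, ctx: Ctx): Promise<Response> {
  const key = new Request(normalizeCacheKey(request.url), request);
  const hit = await caches.default.match(key);
  if (hit) return hit; // served from this data center
  const response = await fetch(request); // origin only on a miss
  // Store the copy after responding; clone() because a body is read once.
  ctx.waitUntil(caches.default.put(key, response.clone()));
  return response;
}
```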
- Importing large npm packages wholesale — The 1 MB script size limit (after compression) is strict. Pulling in a full utility library for one function bloats the bundle. Use tree-shakeable ESM imports and audit bundle size with `wrangler deploy --dry-run`.
- Using KV for strong consistency — KV is eventually consistent with up to 60 seconds of propagation delay. Using it for data that must be immediately consistent across requests (shopping carts, counters, locks) leads to race conditions and stale reads. Use Durable Objects for consistency-critical state.
- Blocking the event loop with synchronous computation — CPU time limits (10 ms free, 30 s paid) are hard caps. Expensive synchronous work (large JSON parsing, image manipulation, cryptographic operations on big payloads) will be terminated. Offload heavy compute to a backend service or use streaming.
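The streaming alternative mentioned above can be sketched with the standard Streams API, which Workers implement: process a large body chunk by chunk so per-chunk CPU work stays tiny instead of buffering everything and paying one large synchronous cost.

```typescript
// Streaming transform sketch: bounded work per chunk, no full buffering.
// The uppercase transform is illustrative; any cheap per-chunk map fits.
function uppercaseStream(): TransformStream<string, string> {
  return new TransformStream<string, string>({
    transform(chunk, controller) {
      controller.enqueue(chunk.toUpperCase()); // small, bounded work per chunk
    },
  });
}
```

In a Worker you would typically return `new Response(upstream.body.pipeThrough(...))` so bytes flow to the client as they are transformed.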
- Ignoring `ctx.waitUntil()` for background work — Performing analytics, logging, or cache warming synchronously in the response path adds unnecessary latency. Use `ctx.waitUntil()` to run these tasks after the response is sent without blocking the user.
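The `ctx.waitUntil()` pattern can be sketched off-platform with a minimal `ExecutionContext` shape. `WaitCtx` mirrors just the method used, and `logSink` is a hypothetical stand-in for a real analytics endpoint.

```typescript
// Deferred logging sketch: the response is computed and returned first;
// the log write runs under waitUntil and never delays the reply.
type WaitCtx = { waitUntil(p: Promise<unknown>): void };

function handleWithLogging(
  url: string,
  ctx: WaitCtx,
  logSink: (line: string) => Promise<void>,
): string {
  const body = `ok: ${url}`;
  // Errors and latency here never reach the user.
  ctx.waitUntil(logSink(`${Date.now()} ${url}`));
  return body;
}
```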
Overview
Cloudflare Workers run JavaScript, TypeScript, and WebAssembly on Cloudflare's global edge network across 300+ data centers. Workers use the V8 isolate model (not containers), delivering sub-millisecond cold starts and a per-request pricing model. They pair with KV, R2, D1, Durable Objects, and Queues for a full-stack edge platform.
Setup & Configuration
Project initialization
```bash
npm create cloudflare@latest my-worker
cd my-worker
```
wrangler.toml configuration
```toml
name = "my-worker"
main = "src/index.ts"
compatibility_date = "2024-09-23"

[vars]
ENVIRONMENT = "production"

[[kv_namespaces]]
binding = "CACHE"
id = "abc123"

[[r2_buckets]]
binding = "ASSETS"
bucket_name = "my-assets"

[[d1_databases]]
binding = "DB"
database_name = "my-database"
database_id = "xxxx-xxxx"
```
Basic Worker (module syntax)
```typescript
export interface Env {
  CACHE: KVNamespace;
  DB: D1Database;
  ASSETS: R2Bucket;
  ENVIRONMENT: string;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === '/api/data') {
      const data = await env.DB.prepare('SELECT * FROM items LIMIT 10').all();
      return Response.json(data.results);
    }
    return new Response('Not found', { status: 404 });
  },
};
```
Deploy
```bash
npx wrangler deploy
```
Core Patterns
Hono framework on Workers
```typescript
import { Hono } from 'hono';
import { cache } from 'hono/cache';
import { cors } from 'hono/cors';

type Bindings = {
  DB: D1Database;
  CACHE: KVNamespace;
};

const app = new Hono<{ Bindings: Bindings }>();

app.use('/api/*', cors());
app.get('/api/*', cache({ cacheName: 'api-cache', cacheControl: 'max-age=60' }));

app.get('/api/items', async (c) => {
  const { results } = await c.env.DB.prepare('SELECT * FROM items').all();
  return c.json(results);
});

app.post('/api/items', async (c) => {
  const body = await c.req.json();
  await c.env.DB.prepare('INSERT INTO items (name) VALUES (?)')
    .bind(body.name)
    .run();
  return c.json({ success: true }, 201);
});

export default app;
```
Durable Objects for stateful coordination
```typescript
export class Counter {
  state: DurableObjectState;

  constructor(state: DurableObjectState) {
    this.state = state;
  }

  async fetch(request: Request): Promise<Response> {
    // Events delivered to a single object are serialized around storage
    // operations, so this read-modify-write needs no explicit locking.
    let count = (await this.state.storage.get<number>('count')) || 0;
    count++;
    await this.state.storage.put('count', count);
    return Response.json({ count });
  }
}
```
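To reach the object from a Worker, the namespace binding maps a stable name to the same instance everywhere via `idFromName`. `DONamespaceLike` below is a hypothetical narrow interface mirroring the `idFromName`/`get` calls on a `DurableObjectNamespace`, so the routing can be exercised off-platform; the real binding also requires a `[[durable_objects.bindings]]` entry and a migration in wrangler.toml.

```typescript
// Worker-side routing sketch to the Counter object above.
interface DONamespaceLike {
  idFromName(name: string): unknown;
  get(id: unknown): { fetch(request: Request): Promise<Response> };
}

async function routeToCounter(
  ns: DONamespaceLike,
  request: Request,
): Promise<Response> {
  const id = ns.idFromName('global'); // every data center reaches the same object
  return ns.get(id).fetch(request);
}
```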
Scheduled (cron) triggers
```typescript
export default {
  async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext) {
    // waitUntil keeps the isolate alive until the cleanup promise settles.
    ctx.waitUntil(cleanupExpiredRecords(env.DB));
  },
};
```
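The handler above only fires if wrangler.toml declares a cron trigger; a minimal sketch (the schedule shown is illustrative):

```toml
[triggers]
crons = ["0 3 * * *"]  # every day at 03:00 UTC
```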
Best Practices
- Use `ctx.waitUntil()` for background work (logging, analytics) that should not block the response.
- Prefer D1 for relational data and KV for high-read, low-write caching — each binding has different consistency and performance characteristics.
- Keep Worker scripts under the 1 MB limit (after compression) by tree-shaking unused dependencies.
Common Pitfalls
- Workers have a 128 MB memory limit per isolate and CPU time limits (10 ms on the free plan, 30 s on paid) — long-running computations will be terminated.
- KV is eventually consistent with a propagation delay of up to 60 seconds; do not rely on it for strong consistency between concurrent requests.