# Cloudflare Workers — Edge Computing

Cloudflare Workers for serverless edge compute using the V8 isolate model.
You are an expert in Cloudflare Workers for building edge-first applications that run on Cloudflare's global network across 300+ data centers.
## Overview
Cloudflare Workers execute JavaScript, TypeScript, or WebAssembly at the edge using V8 isolates rather than containers. They start in under 5ms, have zero cold starts after initial deployment, and run within milliseconds of end users. Workers use the Service Worker API and the Fetch event model.
Key constraints:

- CPU time limit: 10ms (free) / 30s (paid) per invocation
- Memory: 128 MB per isolate
- No Node.js APIs by default (use the `nodejs_compat` compatibility flag for a subset)
- Request/response body size: 100 MB max
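To opt into the Node.js compatibility subset, the flag is set alongside the compatibility date in `wrangler.toml`; a minimal sketch:

```toml
compatibility_date = "2024-09-23"
compatibility_flags = ["nodejs_compat"]
```

With the flag enabled, Node.js built-ins are still imported explicitly (e.g. `import { Buffer } from "node:buffer"`) rather than appearing as globals.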
## Core Concepts

### The Fetch Handler

Every Worker exports a `fetch` handler that receives a `Request` and returns a `Response`:
```typescript
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/api/hello") {
      return Response.json({ message: "Hello from the edge" });
    }
    return new Response("Not Found", { status: 404 });
  },
};
```
### Bindings and Environment

Workers access external services through bindings declared in `wrangler.toml`:
```toml
name = "my-worker"
main = "src/index.ts"
compatibility_date = "2024-09-23"

[vars]
API_KEY = "public-value"

[[kv_namespaces]]
binding = "CACHE"
id = "abc123"

[[d1_databases]]
binding = "DB"
database_name = "my-db"
database_id = "def456"
```
Bindings are injected as the `env` parameter — never use global variables or `process.env`.
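The shape of `Env` mirrors the bindings above. In a real project the type usually comes from `@cloudflare/workers-types` (or is generated by `wrangler types`); the stand-in interfaces below are simplified assumptions so the sketch is self-contained:

```typescript
// Minimal stand-ins for the real KVNamespace / D1Database types from
// @cloudflare/workers-types (simplified assumptions, not the full API).
interface KVNamespaceLike {
  get(key: string): Promise<string | null>;
}
interface D1DatabaseLike {
  prepare(query: string): {
    bind(...values: unknown[]): { first(): Promise<unknown> };
  };
}

// One property per binding declared in wrangler.toml.
interface Env {
  API_KEY: string;        // [vars]
  CACHE: KVNamespaceLike; // [[kv_namespaces]] binding = "CACHE"
  DB: D1DatabaseLike;     // [[d1_databases]]  binding = "DB"
}
```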
### ExecutionContext

`ctx.waitUntil(promise)` allows background work after the response is sent:
```typescript
async fetch(request: Request, env: Env, ctx: ExecutionContext) {
  ctx.waitUntil(logAnalytics(request, env));
  return new Response("OK");
}
```
## Implementation Patterns

### Router Pattern
```typescript
type RouteHandler = (req: Request, env: Env, ctx: ExecutionContext) => Promise<Response>;

const routes: Record<string, Record<string, RouteHandler>> = {
  "/api/users": {
    GET: handleListUsers,
    POST: handleCreateUser,
  },
  "/api/health": {
    GET: async () => Response.json({ status: "ok" }),
  },
};

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);
    const route = routes[url.pathname];
    if (!route) {
      return new Response("Not Found", { status: 404 });
    }
    const handler = route[request.method];
    if (!handler) {
      return new Response("Method Not Allowed", { status: 405 });
    }
    try {
      return await handler(request, env, ctx);
    } catch (err) {
      return new Response("Internal Server Error", { status: 500 });
    }
  },
};
```
### Subrequest Fan-Out

Workers can make up to 1000 subrequests per invocation (paid plan):
```typescript
async function aggregateData(env: Env): Promise<Response> {
  const [users, products, orders] = await Promise.all([
    fetch("https://api.example.com/users").then((r) => r.json()),
    fetch("https://api.example.com/products").then((r) => r.json()),
    fetch("https://api.example.com/orders").then((r) => r.json()),
  ]);
  return Response.json({ users, products, orders });
}
```
### HTMLRewriter for On-the-Fly Transforms
```typescript
async function handleRequest(request: Request): Promise<Response> {
  const response = await fetch(request);
  return new HTMLRewriter()
    .on("title", {
      element(el) {
        el.setInnerContent("Modified Title");
      },
    })
    .on("script[src]", {
      element(el) {
        const src = el.getAttribute("src");
        if (src?.startsWith("http://")) {
          el.setAttribute("src", src.replace("http://", "https://"));
        }
      },
    })
    .transform(response);
}
```
### Scheduled (Cron) Workers
```typescript
export default {
  async scheduled(event: ScheduledEvent, env: Env, ctx: ExecutionContext) {
    ctx.waitUntil(cleanupExpiredSessions(env));
  },
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    return new Response("OK");
  },
};
```

The schedule itself is declared in `wrangler.toml`:

```toml
[triggers]
crons = ["0 * * * *"] # Every hour
```
## Best Practices

- Use `wrangler dev` for local development — it simulates the Workers runtime faithfully, including bindings.
- Keep CPU work minimal — offload heavy computation to Durable Objects, Queues, or origin servers.
- Use `ctx.waitUntil` for fire-and-forget tasks like logging or analytics so they don't block the response.
- Set `compatibility_date` explicitly in `wrangler.toml` and update it intentionally to avoid breaking changes.
- Structure projects with Hono or itty-router for anything beyond a trivial Worker; raw fetch handlers get unwieldy.
- Use secrets via `wrangler secret put` — never commit secrets in `wrangler.toml` vars.
- Pin third-party dependencies carefully — not all npm packages work in the Workers runtime due to missing Node.js APIs.
## Common Pitfalls

- Assuming Node.js globals exist — `Buffer`, `process`, and `__dirname` are not available by default. Enable the `nodejs_compat` flag and import explicitly.
- Blocking the event loop with synchronous work — Workers share an isolate; long-running sync code triggers CPU time limits.
- Exceeding subrequest limits — the free plan allows only 50 subrequests; paid allows 1000. Batch calls accordingly.
- Using `new Date()` for cache keys — time resolution inside Workers can be coarsened; use request-derived keys instead.
- Forgetting that Workers are stateless — global variables may persist across requests within the same isolate but are not guaranteed. Never rely on in-memory state between requests.
- Large response bodies without streaming — buffering entire large responses into memory hits the 128 MB limit. Use `TransformStream` for streaming.
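One way to stay under the subrequest cap is to fan out in chunks. A sketch (the helper name and batch size are assumptions, made generic over the fetch function so it is easy to test):

```typescript
// Run subrequests in batches so a single invocation stays under the plan's
// subrequest limit (50 on free, 1000 on paid). Batches run sequentially;
// requests within a batch run in parallel.
async function fetchInBatches<T>(
  urls: string[],
  fetchOne: (url: string) => Promise<T>,
  batchSize = 40, // assumption: leave headroom under the free-plan limit of 50
): Promise<T[]> {
  const results: T[] = [];
  for (let i = 0; i < urls.length; i += batchSize) {
    const batch = urls.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map(fetchOne))));
  }
  return results;
}
```

In a Worker you would pass `(url) => fetch(url).then((r) => r.json())` as `fetchOne`; the sequential outer loop trades latency for staying within the limit.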
## Core Philosophy
Workers are not small servers — they are functions that run at the edge. Every invocation starts fresh, shares nothing with previous requests in a guaranteed way, and is bounded by strict CPU and memory limits. Design your Worker as a pure function of its inputs (request, bindings, context), not as a stateful service that accumulates knowledge across requests. This stateless model is what makes Workers scale to millions of requests per second across 300+ data centers.
The fetch handler is your API surface. Structure it cleanly from the beginning with proper routing, error handling, and response construction. For anything beyond a trivial endpoint, use a lightweight router (Hono, itty-router) rather than growing a chain of if/else statements. The raw fetch handler is a foundation to build on, not a pattern to scale.
Use `ctx.waitUntil()` strategically. It allows background work (logging, analytics, cache population) to continue after the response is sent, keeping time-to-first-byte fast. But it is not a task queue — if the background work fails, there is no retry. For work that must succeed, process it synchronously before responding, or use Cloudflare Queues for durable async processing.
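Because a rejected promise passed to `waitUntil` is simply dropped, it can help to wrap background tasks with explicit error handling. A sketch (the `background` helper is a hypothetical convenience, not a Workers API):

```typescript
// Wrap fire-and-forget work so failures are at least logged instead of
// silently swallowed. `ctx` is anything with a waitUntil method.
function background(
  ctx: { waitUntil(promise: Promise<unknown>): void },
  task: () => Promise<void>,
): void {
  ctx.waitUntil(
    task().catch((err) => {
      // No retry here: for work that must succeed, use Cloudflare Queues.
      console.error("background task failed:", err);
    }),
  );
}
```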
## Anti-Patterns

- Relying on global variables for cross-request state — global variables may persist within an isolate between requests but are not guaranteed; building features on this assumption creates intermittent, unreproducible bugs.
- Assuming Node.js APIs exist — `Buffer`, `process`, `__dirname`, and `require` are not available by default; enable `nodejs_compat` and import explicitly, or use Web Standard APIs instead.
- Committing secrets in `wrangler.toml` vars — environment variables in `wrangler.toml` are checked into source control; use `wrangler secret put` for sensitive values.
- Making synchronous blocking calls in hot paths — long-running synchronous computation triggers CPU time limits and blocks the isolate, affecting other requests sharing the same V8 instance.
- Not setting `compatibility_date` explicitly — omitting the compatibility date means the Worker gets the latest runtime behavior on every deployment, which may introduce breaking changes without warning.
## Related Skills

- **Cloudflare D1**: running SQLite databases at the edge with SQL query support
- **Cloudflare KV**: Workers KV for globally distributed key-value storage at the edge
- **Deno Deploy**: globally distributed edge applications using the Deno runtime
- **Edge Auth**: authentication and authorization at the edge for securing requests before they reach the origin
- **Edge Caching**: edge caching strategies for optimizing content delivery and reducing origin load
- **Geolocation Routing**: geo-based routing and personalization for delivering localized content at the edge