Workers R2
Cloudflare R2 object storage with S3-compatible API, covering bucket operations, multipart uploads, presigned URLs, public buckets, lifecycle rules, event notifications, and cost optimization compared to S3.
You are an expert in Cloudflare R2, an S3-compatible object storage service with zero egress fees, accessible both through the Workers binding API and standard S3-compatible clients.
Core Philosophy
Overview
R2 provides durable object storage with an S3-compatible API. The key differentiator is zero egress fees — you pay only for storage and operations, never for bandwidth. R2 is ideal for storing user uploads, media files, backups, static assets, and any large binary data that Workers KV's 25 MiB limit cannot accommodate.
R2 vs S3 cost comparison
- Egress: R2 is free. S3 charges $0.09/GB after the first 100 GB.
- Storage: R2 is $0.015/GB/month. S3 Standard is $0.023/GB/month.
- Class A operations (writes): R2 $4.50/million. S3 $5.00/million.
- Class B operations (reads): R2 $0.36/million. S3 $0.40/million.
- Free tier: R2 gives 10 GB storage, 10M reads, 1M writes per month.
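The arithmetic behind the comparison can be sketched as a small calculator. The prices are the ones listed above (verify against current pricing pages); the usage numbers in the example are made up:

```typescript
interface Usage {
  storageGB: number; // average GB stored for the month
  egressGB: number;  // GB served to the internet
  writesM: number;   // Class A operations, in millions
  readsM: number;    // Class B operations, in millions
}

function r2MonthlyUSD(u: Usage): number {
  // R2: $0.015/GB storage, $4.50/M writes, $0.36/M reads, $0 egress
  return u.storageGB * 0.015 + u.writesM * 4.5 + u.readsM * 0.36;
}

function s3MonthlyUSD(u: Usage): number {
  // S3 Standard: $0.023/GB storage, $5/M writes, $0.40/M reads,
  // $0.09/GB egress after the first 100 GB
  const egress = Math.max(0, u.egressGB - 100) * 0.09;
  return u.storageGB * 0.023 + u.writesM * 5 + u.readsM * 0.4 + egress;
}

const usage: Usage = { storageGB: 500, egressGB: 2000, writesM: 1, readsM: 10 };
console.log(r2MonthlyUSD(usage).toFixed(2)); // "15.60"
console.log(s3MonthlyUSD(usage).toFixed(2)); // "191.50" — egress dominates
```

The gap widens with egress: for bandwidth-heavy workloads (media serving, downloads), egress is usually the dominant S3 line item, and it is the one R2 eliminates.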
Setup
Create a bucket
npx wrangler r2 bucket create my-bucket
# List buckets
npx wrangler r2 bucket list
Bind in wrangler.toml
[[r2_buckets]]
binding = "BUCKET"
bucket_name = "my-bucket"
preview_bucket_name = "my-bucket-dev"
TypeScript binding
export interface Env {
BUCKET: R2Bucket;
}
Basic Operations
Upload an object
// Upload from request body
export default {
async fetch(request: Request, env: Env): Promise<Response> {
if (request.method === "PUT") {
const url = new URL(request.url);
const key = url.pathname.slice(1); // Remove leading /
const object = await env.BUCKET.put(key, request.body, {
httpMetadata: {
contentType: request.headers.get("content-type") || "application/octet-stream",
},
customMetadata: {
uploadedBy: request.headers.get("x-user-id") || "anonymous",
uploadedAt: new Date().toISOString(),
},
});
return Response.json({
key: object.key,
size: object.size,
etag: object.etag,
});
    }
    return new Response("Method Not Allowed", { status: 405 });
  },
};
Download an object
async function getObject(env: Env, key: string, request: Request): Promise<Response> {
  const object = await env.BUCKET.get(key);
  if (!object) {
    return new Response("Not Found", { status: 404 });
  }
  const headers = new Headers();
  object.writeHttpMetadata(headers);
  headers.set("etag", object.httpEtag);
  headers.set("cache-control", "public, max-age=31536000, immutable");
  // Support conditional requests (If-None-Match)
  const ifNoneMatch = request.headers.get("if-none-match");
  if (ifNoneMatch === object.httpEtag) {
    return new Response(null, { status: 304, headers });
  }
  // Support range requests ("bytes=start-end" or "bytes=start-")
  const range = parseRange(request.headers.get("range"), object.size);
  if (range) {
    const rangeObject = await env.BUCKET.get(key, { range });
    if (rangeObject) {
      const end = range.offset + range.length - 1;
      headers.set("content-range", `bytes ${range.offset}-${end}/${object.size}`);
      return new Response(rangeObject.body, { status: 206, headers });
    }
  }
  return new Response(object.body, { headers });
}

// Translate a Range header into the { offset, length } shape R2's get() expects
function parseRange(header: string | null, size: number): { offset: number; length: number } | null {
  const match = header && /^bytes=(\d+)-(\d*)$/.exec(header);
  if (!match) return null;
  const start = Number(match[1]);
  const end = match[2] === "" ? size - 1 : Math.min(Number(match[2]), size - 1);
  if (start > end) return null;
  return { offset: start, length: end - start + 1 };
}
Delete objects
// Delete a single object
await env.BUCKET.delete("uploads/photo.jpg");
// Delete multiple objects
await env.BUCKET.delete([
"uploads/photo1.jpg",
"uploads/photo2.jpg",
"uploads/photo3.jpg",
]);
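A common follow-on task is clearing everything under a prefix. R2 has no "delete by prefix" call, so the pattern is: page through `list()` and pass each batch of keys to `delete()` (which accepts up to 1,000 keys per call). A sketch, typed structurally here so it can run outside a Worker — in real code you would pass `env.BUCKET`:

```typescript
type ListResult = { objects: { key: string }[]; truncated: boolean; cursor?: string };

// The list()/delete() subset of the R2 binding that this helper needs
interface BucketLike {
  list(opts: { prefix: string; cursor?: string }): Promise<ListResult>;
  delete(keys: string | string[]): Promise<void>;
}

async function deletePrefix(bucket: BucketLike, prefix: string): Promise<number> {
  let deleted = 0;
  let cursor: string | undefined;
  do {
    const page = await bucket.list({ prefix, cursor });
    const keys = page.objects.map((o) => o.key);
    if (keys.length > 0) {
      await bucket.delete(keys); // batch delete an array of keys
      deleted += keys.length;
    }
    cursor = page.truncated ? page.cursor : undefined;
  } while (cursor);
  return deleted;
}
```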
List objects
async function listObjects(env: Env, prefix: string, cursor?: string) {
const listed = await env.BUCKET.list({
prefix,
limit: 100,
cursor,
delimiter: "/", // Simulate directory listing
include: ["httpMetadata", "customMetadata"],
});
return {
objects: listed.objects.map((obj) => ({
key: obj.key,
size: obj.size,
uploaded: obj.uploaded,
etag: obj.etag,
})),
// "Directories" (common prefixes when using delimiter)
directories: listed.delimitedPrefixes,
truncated: listed.truncated,
cursor: listed.truncated ? listed.cursor : undefined,
};
}
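Since `list()` returns at most 1,000 keys per call, a complete listing must follow the cursor across pages. An async-generator sketch (structurally typed so it runs outside a Worker; pass `env.BUCKET` in real code):

```typescript
type Listed = { objects: { key: string; size: number }[]; truncated: boolean; cursor?: string };

interface Listable {
  list(opts: { prefix: string; cursor?: string }): Promise<Listed>;
}

// Lazily yields every object under a prefix, one page at a time
async function* iterateObjects(bucket: Listable, prefix: string) {
  let cursor: string | undefined;
  do {
    const page = await bucket.list({ prefix, cursor });
    yield* page.objects;
    cursor = page.truncated ? page.cursor : undefined;
  } while (cursor);
}
```

Callers consume it with `for await (const obj of iterateObjects(bucket, "logs/"))`, which keeps memory flat even for very large buckets.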
Head (metadata only)
const head = await env.BUCKET.head("uploads/photo.jpg");
if (head) {
console.log(head.size, head.etag, head.httpMetadata, head.customMetadata);
}
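`head()` also works as a cheap existence check before an expensive write. A sketch of skip-if-present uploads — note that head-then-put is not atomic, so concurrent writers can still race (again typed structurally for testability; pass `env.BUCKET` in a Worker):

```typescript
// The head()/put() subset of the binding this helper needs
interface HeadPutBucket {
  head(key: string): Promise<unknown | null>;
  put(key: string, value: string): Promise<unknown>;
}

async function putIfAbsent(bucket: HeadPutBucket, key: string, value: string): Promise<boolean> {
  if (await bucket.head(key)) return false; // already stored, skip the write
  await bucket.put(key, value);
  return true; // we performed the write
}
```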
Multipart Upload
For large files — a single PUT is capped at 5 GB, and Workers request bodies are limited well below that on most plans — use multipart upload:
async function multipartUpload(env: Env, key: string, request: Request): Promise<Response> {
// Create multipart upload
const multipart = await env.BUCKET.createMultipartUpload(key, {
httpMetadata: { contentType: "application/octet-stream" },
});
try {
const parts: R2UploadedPart[] = [];
let partNumber = 1;
const PART_SIZE = 10 * 1024 * 1024; // 10 MB per part
// Read body in chunks
const reader = request.body!.getReader();
let buffer = new Uint8Array(0);
while (true) {
const { done, value } = await reader.read();
if (value) {
const newBuffer = new Uint8Array(buffer.length + value.length);
newBuffer.set(buffer);
newBuffer.set(value, buffer.length);
buffer = newBuffer;
}
while (buffer.length >= PART_SIZE) {
const chunk = buffer.slice(0, PART_SIZE);
buffer = buffer.slice(PART_SIZE);
const part = await multipart.uploadPart(partNumber, chunk);
parts.push(part);
partNumber++;
}
if (done) {
// Upload remaining data
if (buffer.length > 0) {
const part = await multipart.uploadPart(partNumber, buffer);
parts.push(part);
}
break;
}
}
// Complete the upload
const object = await multipart.complete(parts);
return Response.json({ key: object.key, size: object.size, etag: object.etag });
} catch (err) {
// Abort on failure to clean up parts
await multipart.abort();
throw err;
}
}
Presigned URLs
Use the S3-compatible API to generate presigned URLs for direct client uploads/downloads:
import { AwsClient } from "aws4fetch";
function getR2Client(env: Env): AwsClient {
return new AwsClient({
accessKeyId: env.R2_ACCESS_KEY_ID,
secretAccessKey: env.R2_SECRET_ACCESS_KEY,
});
}
async function generatePresignedUploadUrl(
env: Env,
key: string,
expiresIn: number = 3600
): Promise<string> {
const client = getR2Client(env);
const endpoint = `https://${env.ACCOUNT_ID}.r2.cloudflarestorage.com/${env.BUCKET_NAME}/${key}`;
const url = new URL(endpoint);
url.searchParams.set("X-Amz-Expires", expiresIn.toString());
const signed = await client.sign(
new Request(url, { method: "PUT" }),
{ aws: { signQuery: true } }
);
return signed.url;
}
// API endpoint for generating upload URLs
async function handleUploadRequest(request: Request, env: Env): Promise<Response> {
const { filename, contentType } = await request.json<{ filename: string; contentType: string }>();
  const key = `uploads/${crypto.randomUUID()}/${filename}`;
const uploadUrl = await generatePresignedUploadUrl(env, key);
return Response.json({ uploadUrl, key });
}
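On the client side the flow is two requests: ask the Worker for a presigned URL, then PUT the bytes straight to R2 so the file never passes through the Worker. A sketch, assuming the `handleUploadRequest` handler above is routed at `/upload` (an assumption — adjust to your routing):

```typescript
async function uploadViaPresignedUrl(data: Blob, filename: string, contentType: string): Promise<string> {
  // 1. Ask the Worker to mint a presigned upload URL
  const res = await fetch("/upload", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ filename, contentType }),
  });
  if (!res.ok) throw new Error(`could not get upload URL: ${res.status}`);
  const { uploadUrl, key } = (await res.json()) as { uploadUrl: string; key: string };

  // 2. PUT the bytes directly to R2 — the Worker never proxies the file
  const put = await fetch(uploadUrl, { method: "PUT", body: data });
  if (!put.ok) throw new Error(`upload failed: ${put.status}`);
  return key;
}
```

Offloading the transfer this way keeps large uploads off the Worker entirely, sidestepping request-body size limits and CPU time.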
Public Buckets
Enable public access
# Expose the bucket at its public r2.dev development URL
npx wrangler r2 bucket dev-url enable my-bucket
# For production traffic, connect a custom domain to the bucket instead
Or use a Worker to serve files with custom logic:
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
const key = url.pathname.slice(1);
if (!key) {
return new Response("Not Found", { status: 404 });
}
const object = await env.BUCKET.get(key);
if (!object) {
return new Response("Not Found", { status: 404 });
}
const headers = new Headers();
object.writeHttpMetadata(headers);
headers.set("etag", object.httpEtag);
headers.set("cache-control", "public, max-age=86400");
return new Response(object.body, { headers });
},
};
Lifecycle Rules
Configure automatic deletion of old objects via the Cloudflare dashboard or API:
# Via wrangler (or use the dashboard)
npx wrangler r2 bucket lifecycle set my-bucket --rules '[
{
"id": "delete-temp-files",
"enabled": true,
"conditions": { "prefix": "tmp/" },
"actions": { "deleteAfterDays": 1 }
},
{
"id": "archive-old-logs",
"enabled": true,
"conditions": { "prefix": "logs/" },
"actions": { "deleteAfterDays": 90 }
}
]'
Event Notifications
R2 can trigger Workers on object changes:
# Event notifications are configured with the CLI and delivered to a Queue;
# they are not declared in wrangler.toml:
npx wrangler r2 bucket notification create my-bucket \
  --event-type object-create --event-type object-delete \
  --queue r2-events --prefix uploads/

# The consuming Worker then binds the queue as usual:
# [[queues.consumers]]
# queue = "r2-events"
export default {
  // R2EventNotification is an app-defined interface describing the
  // notification payload (account, bucket, action, object.key, object.size, ...)
  async queue(batch: MessageBatch<R2EventNotification>, env: Env): Promise<void> {
for (const message of batch.messages) {
const event = message.body;
console.log(`Action: ${event.action}, Key: ${event.object.key}, Size: ${event.object.size}`);
if (event.action === "PutObject" && event.object.key.endsWith(".jpg")) {
        // Trigger image processing (processImage is an app-specific helper, not shown here)
await processImage(env, event.object.key);
}
message.ack();
}
},
};
Image Upload API — Full Example
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
if (request.method === "POST" && url.pathname === "/upload") {
const formData = await request.formData();
const file = formData.get("file") as File | null;
if (!file) {
return Response.json({ error: "No file provided" }, { status: 400 });
}
const maxSize = 10 * 1024 * 1024; // 10 MB
if (file.size > maxSize) {
return Response.json({ error: "File too large" }, { status: 413 });
}
const allowedTypes = ["image/jpeg", "image/png", "image/webp", "image/gif"];
if (!allowedTypes.includes(file.type)) {
return Response.json({ error: "Invalid file type" }, { status: 415 });
}
const ext = file.name.split(".").pop();
const key = `images/${crypto.randomUUID()}.${ext}`;
await env.BUCKET.put(key, file.stream(), {
httpMetadata: { contentType: file.type },
customMetadata: { originalName: file.name },
});
return Response.json({ key, url: `https://cdn.example.com/${key}` });
}
return new Response("Not Found", { status: 404 });
},
};
Limits
| Resource | Limit |
|---|---|
| Max object size (single PUT) | 5 GB |
| Max object size (multipart) | 5 TB |
| Min multipart part size | 5 MB (except last part) |
| Max parts per multipart | 10,000 |
| Max buckets per account | 1,000 |
| Max custom metadata per object | 2,048 bytes |
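The size limits above imply a simple pre-flight check before choosing an upload path. A sketch, treating the limits as binary GiB/TiB (an assumption — check the docs for the exact bounds):

```typescript
const SINGLE_PUT_MAX = 5 * 1024 ** 3;  // 5 GB single-PUT ceiling
const MULTIPART_MAX = 5 * 1024 ** 4;   // 5 TB multipart ceiling

function uploadStrategy(sizeBytes: number): "put" | "multipart" | "reject" {
  if (sizeBytes > MULTIPART_MAX) return "reject";    // no R2 path can store this
  if (sizeBytes > SINGLE_PUT_MAX) return "multipart"; // too big for a single PUT
  return "put";
}
```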
Related Skills
Durable Objects
Cloudflare Durable Objects for stateful edge computing, covering constructor patterns, storage API, WebSocket support, alarm handlers, consistency guarantees, and use cases like rate limiting, collaboration, and game state.
Workers AI
Cloudflare Workers AI for running inference at the edge, covering supported models, text generation, embeddings, image generation, speech-to-text, AI bindings, and streaming responses.
Workers D1
Cloudflare D1 serverless SQLite database for Workers, covering schema management, migrations, queries, prepared statements, batch operations, local development, replication, backups, and performance optimization.
Workers Fundamentals
Cloudflare Workers runtime fundamentals including V8 isolates, wrangler CLI, project setup, local development, deployment, environment variables, secrets, and compatibility dates.
Workers KV
Cloudflare Workers KV namespace for globally distributed key-value storage, including read/write patterns, caching strategies, TTL, list operations, metadata, bulk operations, and the eventual consistency model.
Workers Patterns
Production patterns for Cloudflare Workers including queue consumers, cron triggers, email workers, browser rendering, Hyperdrive database connection pooling, Vectorize vector search, and the analytics engine.