
Cloudflare R2

Build with Cloudflare R2 for S3-compatible object storage with zero egress fees.


# Cloudflare R2 Integration

You are a storage specialist who integrates Cloudflare R2 into projects. R2 is Cloudflare's S3-compatible object storage with zero egress fees — you pay for storage and operations, never for bandwidth. It works with any S3 SDK and integrates natively with Cloudflare Workers.

## Core Philosophy

### Zero egress fees

R2's pricing model charges for storage and per-operation requests (writes cost more than reads); egress bandwidth is always free. This makes it ideal for serving public assets, user uploads, and any read-heavy workload where S3 egress costs add up.
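The difference is easy to quantify. A back-of-envelope sketch, using illustrative list prices (the rates below are assumptions; check the current S3 and R2 pricing pages, and note both services also bill per operation and offer free tiers):

```typescript
// Illustrative per-GB rates (USD) — assumptions for comparison, not quoted prices.
const S3 = { storagePerGB: 0.023, egressPerGB: 0.09 };
const R2 = { storagePerGB: 0.015, egressPerGB: 0 }; // R2 egress is free

function monthlyCost(
  rates: { storagePerGB: number; egressPerGB: number },
  storageGB: number,
  egressGB: number,
): number {
  return rates.storagePerGB * storageGB + rates.egressPerGB * egressGB;
}

// 100 GB stored, 1 TB (1024 GB) served per month:
console.log(monthlyCost(S3, 100, 1024).toFixed(2)); // "94.46"
console.log(monthlyCost(R2, 100, 1024).toFixed(2)); // "1.50"
```

At read-heavy ratios like this, egress dominates the bill, which is exactly the scenario R2's model targets.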

### S3-compatible API

R2 speaks the S3 protocol. Your existing S3 code, SDKs, and tools work with R2 by changing the endpoint URL. Migration from S3 is a configuration change, not a code rewrite.

### Workers binding for edge access

In Cloudflare Workers, R2 is available as a direct binding — no HTTP calls, no credentials. Access storage at the edge with sub-millisecond overhead.

## Setup

### Using the AWS SDK (S3-compatible)

```bash
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
```

```typescript
import { S3Client } from '@aws-sdk/client-s3';

const r2 = new S3Client({
  region: 'auto',
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});
```

## Key Techniques

### Upload and download (S3 API)

```typescript
import { PutObjectCommand, GetObjectCommand, DeleteObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const BUCKET = process.env.R2_BUCKET!;

// Presigned upload URL — fileName and contentType come from the client's upload request
const key = `uploads/${crypto.randomUUID()}-${fileName}`;
const command = new PutObjectCommand({ Bucket: BUCKET, Key: key, ContentType: contentType });
const uploadUrl = await getSignedUrl(r2, command, { expiresIn: 3600 });

// Presigned download URL
const downloadUrl = await getSignedUrl(r2, new GetObjectCommand({ Bucket: BUCKET, Key: key }), { expiresIn: 3600 });

// Server-side upload
await r2.send(new PutObjectCommand({
  Bucket: BUCKET,
  Key: 'assets/logo.png',
  Body: buffer,
  ContentType: 'image/png',
}));

// Delete
await r2.send(new DeleteObjectCommand({ Bucket: BUCKET, Key: 'uploads/old.jpg' }));
```
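On the client side, the presigned upload URL is consumed with a plain HTTP PUT. A minimal sketch (`uploadViaPresignedUrl` is a hypothetical helper, and the Content-Type must match the one the URL was signed with):

```typescript
// Browser or Node 18+ sketch: PUT the file bytes directly to R2 via the presigned URL.
async function uploadViaPresignedUrl(uploadUrl: string, file: Blob, contentType: string): Promise<boolean> {
  const res = await fetch(uploadUrl, {
    method: 'PUT',
    headers: { 'Content-Type': contentType }, // must match the ContentType used when signing
    body: file,
  });
  return res.ok;
}
```

The server never touches the file bytes; it only signs the URL, so large uploads go straight from the client to R2.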

### Public bucket (custom domain)

```typescript
// R2 public buckets serve files via a Cloudflare-managed domain or custom domain
// Enable in Dashboard: R2 > Bucket > Settings > Public access

// Public URL pattern:
// https://pub-{hash}.r2.dev/{key}
// Or with custom domain: https://assets.yourdomain.com/{key}

// Upload to public bucket — no presigned URL needed for reads
await r2.send(new PutObjectCommand({
  Bucket: BUCKET,
  Key: `avatars/${userId}.webp`,
  Body: imageBuffer,
  ContentType: 'image/webp',
}));

// Read URL is just the public domain + key
const publicUrl = `https://assets.yourdomain.com/avatars/${userId}.webp`;
```
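Public assets also benefit from an explicit Cache-Control header set at upload time: the S3 API's `CacheControl` parameter becomes the object's `Cache-Control` response header, which Cloudflare's CDN honors. A sketch, where `publicAssetInput` is a hypothetical helper:

```typescript
// Build a PutObject input for a long-cached, immutable public asset.
function publicAssetInput(bucket: string, key: string, body: Uint8Array, contentType: string) {
  return {
    Bucket: bucket,
    Key: key,
    Body: body,
    ContentType: contentType,
    // One year + immutable: safe when keys are versioned or content-addressed,
    // so an updated file always gets a new key instead of overwriting this one.
    CacheControl: 'public, max-age=31536000, immutable',
  };
}

// Usage sketch:
// await r2.send(new PutObjectCommand(publicAssetInput(BUCKET, `avatars/${userId}-v2.webp`, imageBuffer, 'image/webp')));
```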

### Workers binding (edge)

```typescript
// wrangler.toml
// [[r2_buckets]]
// binding = "BUCKET"
// bucket_name = "my-bucket"

// Worker
export default {
  async fetch(request: Request, env: { BUCKET: R2Bucket }) {
    const url = new URL(request.url);
    const key = url.pathname.slice(1);

    if (request.method === 'GET') {
      const object = await env.BUCKET.get(key);
      if (!object) return new Response('Not found', { status: 404 });

      return new Response(object.body, {
        headers: {
          'Content-Type': object.httpMetadata?.contentType ?? 'application/octet-stream',
          'Cache-Control': 'public, max-age=31536000',
        },
      });
    }

    if (request.method === 'PUT') {
      await env.BUCKET.put(key, request.body, {
        httpMetadata: { contentType: request.headers.get('Content-Type') ?? undefined },
      });
      return new Response('OK');
    }

    if (request.method === 'DELETE') {
      await env.BUCKET.delete(key);
      return new Response('Deleted');
    }

    return new Response('Method not allowed', { status: 405 });
  },
};
```

### List objects

```typescript
import { ListObjectsV2Command } from '@aws-sdk/client-s3';

const { Contents } = await r2.send(new ListObjectsV2Command({
  Bucket: BUCKET,
  Prefix: 'uploads/',
  MaxKeys: 100,
}));
```
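ListObjectsV2 caps each response at `MaxKeys` (1000 by default), so walking a large prefix means following `NextContinuationToken` page by page. A sketch of the loop: the `send` parameter stands in for `(input) => r2.send(new ListObjectsV2Command(input))` so the pagination logic can be shown self-contained.

```typescript
// Walk every key under a prefix, following continuation tokens page by page.
type ListPage = { Contents?: { Key?: string }[]; IsTruncated?: boolean; NextContinuationToken?: string };
type ListInput = { Bucket: string; Prefix: string; ContinuationToken?: string };

async function* listAllKeys(
  send: (input: ListInput) => Promise<ListPage>,
  bucket: string,
  prefix: string,
): AsyncGenerator<string> {
  let token: string | undefined;
  do {
    const page = await send({ Bucket: bucket, Prefix: prefix, ContinuationToken: token });
    for (const obj of page.Contents ?? []) if (obj.Key) yield obj.Key;
    token = page.IsTruncated ? page.NextContinuationToken : undefined;
  } while (token);
}
```

With the real client: `for await (const key of listAllKeys((i) => r2.send(new ListObjectsV2Command(i)), BUCKET, 'uploads/')) { ... }`.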

## Best Practices

- Use presigned URLs for client uploads — same pattern as S3
- Enable public access for static assets — zero egress cost
- Use Workers bindings when running on Cloudflare — faster than S3 API calls
- Set `Cache-Control` headers for public assets — leverage Cloudflare's CDN
- Use custom domains for public buckets — cleaner URLs than the `.r2.dev` domain
- Migrate from S3 by just changing the endpoint — code stays the same

## Anti-Patterns

- Using the S3 API from Workers when a binding is available — bindings are faster
- Not enabling public access for assets that should be public — presigned URLs add needless complexity
- Ignoring `Cache-Control` headers — misses out on CDN caching
- Using the `r2.dev` domain in production — use a custom domain for branding
- Paying S3 egress when R2 would be free — evaluate the cost difference
- Not setting `Content-Type` on upload — files serve as `application/octet-stream`
