
# Backblaze B2 — Storage Integration

You are an expert in integrating Backblaze B2 for file and object storage. B2 is Backblaze's cloud object storage service offering S3-compatible API access at roughly one-fifth the cost of AWS S3. It pairs with Cloudflare's Bandwidth Alliance for free egress through Cloudflare CDN.

## Core Philosophy

### Overview

Backblaze B2 provides two APIs: the native B2 API and an S3-compatible API. The S3-compatible API lets you use existing AWS SDK code with minimal changes. B2 is priced at $6/TB/month for storage and $0.01 per 10,000 downloads (Class B transactions), with the first 10 GB of storage free. Egress through Cloudflare is free via the Bandwidth Alliance partnership.
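That pricing model is simple enough to sanity-check in code. A back-of-the-envelope estimator (illustrative only, using the rates quoted above; check current Backblaze pricing before relying on it):

```typescript
const STORAGE_PER_GB_MONTH = 6 / 1000; // $6/TB/month
const CLASS_B_PER_10K = 0.01;          // $0.01 per 10,000 downloads
const FREE_STORAGE_GB = 10;            // first 10 GB of storage is free

// Estimate a monthly bill from stored GB and download (Class B) counts.
// Ignores egress, which is free through Cloudflare per the Bandwidth Alliance.
function estimateMonthlyCost(storedGB: number, downloads: number): number {
  const billableGB = Math.max(0, storedGB - FREE_STORAGE_GB);
  return billableGB * STORAGE_PER_GB_MONTH + (downloads / 10_000) * CLASS_B_PER_10K;
}
```

For example, 1,010 GB stored plus 100,000 downloads in a month comes to roughly $6.10.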

## Setup & Configuration

### Using the S3-compatible API (recommended)

```bash
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
```

```typescript
import { S3Client } from '@aws-sdk/client-s3';

// Find your endpoint in the B2 bucket details — the region is in the endpoint URL,
// e.g. s3.us-west-004.backblazeb2.com
const b2 = new S3Client({
  region: process.env.B2_REGION!, // e.g. 'us-west-004'
  endpoint: `https://s3.${process.env.B2_REGION}.backblazeb2.com`,
  credentials: {
    accessKeyId: process.env.B2_APPLICATION_KEY_ID!,
    secretAccessKey: process.env.B2_APPLICATION_KEY!,
  },
});
```

### Create application keys

1. Log in to the Backblaze dashboard
2. Go to **App Keys** under **Account**
3. Create a new application key scoped to a specific bucket (preferred) or all buckets
4. Save both the `keyID` (access key) and the `applicationKey` (secret key) — the secret is shown only once
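Since the secret is shown only once, a misconfigured environment is a common failure mode. A fail-fast startup check (hypothetical helper; the variable names follow the examples in this skill) catches it before the first request:

```typescript
// Environment variables the examples in this skill rely on.
const REQUIRED_B2_VARS = [
  'B2_REGION',
  'B2_APPLICATION_KEY_ID',
  'B2_APPLICATION_KEY',
  'B2_BUCKET_NAME',
] as const;

// Throw at startup if any B2 credential or setting is missing.
function assertB2Env(env: Record<string, string | undefined> = process.env): void {
  const missing = REQUIRED_B2_VARS.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing B2 configuration: ${missing.join(', ')}`);
  }
}
```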

## Core Patterns

### Upload and download

```typescript
import {
  PutObjectCommand,
  GetObjectCommand,
  DeleteObjectCommand,
} from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const BUCKET = process.env.B2_BUCKET_NAME!;

// Server-side upload; fileName is the original name of the uploaded file
await b2.send(new PutObjectCommand({
  Bucket: BUCKET,
  Key: `uploads/${crypto.randomUUID()}-${fileName}`,
  Body: buffer,
  ContentType: 'image/png',
}));

// Presigned upload URL for client-side uploads
const putCommand = new PutObjectCommand({
  Bucket: BUCKET,
  Key: `uploads/${crypto.randomUUID()}-${fileName}`,
  ContentType: contentType,
});
const uploadUrl = await getSignedUrl(b2, putCommand, { expiresIn: 3600 });

// Presigned download URL
const getCommand = new GetObjectCommand({ Bucket: BUCKET, Key: key });
const downloadUrl = await getSignedUrl(b2, getCommand, { expiresIn: 3600 });

// Delete
await b2.send(new DeleteObjectCommand({ Bucket: BUCKET, Key: key }));
```

### Public bucket with Cloudflare CDN

```typescript
// 1. Set the bucket to "Public" in the B2 dashboard
// 2. The native public URL is:
//    https://f004.backblazeb2.com/file/{bucket-name}/{key}
//    (the fNNN host matches your bucket's region cluster, e.g. f004 for us-west-004)
// 3. For free egress, put Cloudflare in front:
//    - Add a CNAME record pointing to f004.backblazeb2.com
//    - Set up a Cloudflare Transform Rule to rewrite the URL path

// Upload a public asset
await b2.send(new PutObjectCommand({
  Bucket: BUCKET,
  Key: 'assets/hero.webp',
  Body: imageBuffer,
  ContentType: 'image/webp',
  CacheControl: 'public, max-age=31536000, immutable',
}));

// Direct B2 URL
const b2Url = `https://f004.backblazeb2.com/file/${BUCKET}/assets/hero.webp`;

// Cloudflare CDN URL (after CNAME setup)
const cdnUrl = `https://assets.yourdomain.com/assets/hero.webp`;
```

### List objects

```typescript
import { ListObjectsV2Command } from '@aws-sdk/client-s3';

const { Contents } = await b2.send(new ListObjectsV2Command({
  Bucket: BUCKET,
  Prefix: 'uploads/',
  MaxKeys: 1000,
}));
```
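A single `ListObjectsV2Command` call returns at most 1,000 keys; buckets with more objects require following continuation tokens. The loop can be sketched generically (illustrative only, so it is testable without a live bucket):

```typescript
// One page of results, as returned by a paginated listing API.
type Page<T> = { items: T[]; next?: string };

// Keep fetching pages until the service stops returning a continuation token.
async function collectPages<T>(
  fetchPage: (token?: string) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let token: string | undefined;
  do {
    const page = await fetchPage(token);
    all.push(...page.items);
    token = page.next;
  } while (token !== undefined);
  return all;
}
```

With B2 this maps directly onto `ListObjectsV2Command`: pass the token as `ContinuationToken`, and read `Contents` and `NextContinuationToken` from each response.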

### Lifecycle rules

```typescript
import { PutBucketLifecycleConfigurationCommand } from '@aws-sdk/client-s3';

// Auto-delete temporary uploads after 7 days
await b2.send(new PutBucketLifecycleConfigurationCommand({
  Bucket: BUCKET,
  LifecycleConfiguration: {
    Rules: [
      {
        ID: 'delete-temp-uploads',
        Prefix: 'tmp/',
        Status: 'Enabled',
        Expiration: { Days: 7 },
      },
    ],
  },
}));
```

### Multipart upload for large files

```typescript
import {
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from '@aws-sdk/client-s3';

const PART_SIZE = 100 * 1024 * 1024; // 100 MB recommended for B2 (5 MB minimum)
const KEY = 'backups/large-archive.tar.gz';

const { UploadId } = await b2.send(new CreateMultipartUploadCommand({
  Bucket: BUCKET,
  Key: KEY,
  ContentType: 'application/gzip',
}));

// Split the source file (fileBuffer: Buffer) into PART_SIZE chunks
const chunks: Buffer[] = [];
for (let offset = 0; offset < fileBuffer.length; offset += PART_SIZE) {
  chunks.push(fileBuffer.subarray(offset, offset + PART_SIZE));
}

const parts = [];
for (let i = 0; i < chunks.length; i++) {
  const { ETag } = await b2.send(new UploadPartCommand({
    Bucket: BUCKET,
    Key: KEY,
    UploadId,
    PartNumber: i + 1,
    Body: chunks[i],
  }));
  parts.push({ ETag, PartNumber: i + 1 });
}

await b2.send(new CompleteMultipartUploadCommand({
  Bucket: BUCKET,
  Key: KEY,
  UploadId,
  MultipartUpload: { Parts: parts },
}));
```
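The part-size floor is easy to get wrong, so a small planner (hypothetical helper, not from the skill) can validate it up front before any bytes move:

```typescript
const MIN_PART_SIZE = 5 * 1024 * 1024; // B2's minimum multipart part size

// Return how many parts an upload of totalBytes needs at the given part size.
function planParts(totalBytes: number, partSize = 100 * 1024 * 1024): number {
  if (partSize < MIN_PART_SIZE) {
    throw new RangeError('B2 requires multipart parts of at least 5 MB');
  }
  return Math.max(1, Math.ceil(totalBytes / partSize));
}
```

If an upload fails partway, send an `AbortMultipartUploadCommand` for the same `Bucket`, `Key`, and `UploadId`; otherwise the already-uploaded parts linger and accrue storage fees.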

## Best Practices

- Scope application keys to a single bucket — limits blast radius if credentials leak
- Use Cloudflare in front of public buckets — egress through the Bandwidth Alliance is free
- Set lifecycle rules on temporary upload prefixes — avoid accumulating abandoned uploads that cost storage fees

## Common Pitfalls

- Using part sizes below 5 MB for multipart uploads — B2 requires a minimum 5 MB part size (100 MB recommended for performance)
- Forgetting that B2 charges Class B transactions on downloads — frequent small-file reads add up; batch or cache where possible
- Not URL-encoding object keys with special characters — B2's S3-compatible API is stricter than AWS S3 about key encoding
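For the last pitfall: when building direct B2 URLs by hand (presigned URLs from the SDK handle this for you), encode each path segment of the key, not the key as a whole, so the `/` separators survive:

```typescript
// Percent-encode every segment of an object key for use in a URL path.
function encodeKeyForUrl(key: string): string {
  return key.split('/').map(encodeURIComponent).join('/');
}
```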

## Anti-Patterns

Using the service without understanding its pricing model. Cloud services bill differently — per request, per GB, per seat. Deploying without modeling expected costs leads to surprise invoices.

Hardcoding configuration instead of using environment variables. API keys, endpoints, and feature flags change between environments. Hardcoded values break deployments and leak secrets.

Ignoring the service's rate limits and quotas. Every external API has throughput limits. Failing to implement backoff, queuing, or caching results in dropped requests under load.
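One way to sketch the backoff this paragraph calls for is a generic retry wrapper (illustrative only; note the AWS SDK v3 client also accepts `maxAttempts` and `retryMode` options that cover many cases without extra code):

```typescript
// Retry an async operation with exponential backoff and jitter.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 5,
  baseMs = 200,
): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i >= attempts - 1) throw err; // out of attempts: surface the error
      const delay = baseMs * 2 ** i * (0.5 + Math.random() / 2); // jittered backoff
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Usage: `await withRetry(() => b2.send(command))`.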

Treating the service as always available. External services go down. Without circuit breakers, fallbacks, or graceful degradation, a third-party outage becomes your outage.

Coupling your architecture to a single provider's API. Building directly against provider-specific interfaces makes migration painful. Wrap external services in thin adapter layers.
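A thin adapter for this service might look like the following sketch (the interface and in-memory implementation are hypothetical, not part of the skill): application code depends only on the interface, and a B2-backed implementation is the single place that imports the AWS SDK.

```typescript
// The storage operations the application actually needs.
interface ObjectStorage {
  put(key: string, body: Uint8Array, contentType: string): Promise<void>;
  getUrl(key: string, expiresInSeconds: number): Promise<string>;
  remove(key: string): Promise<void>;
}

// In-memory implementation, useful for unit tests and local development.
class InMemoryStorage implements ObjectStorage {
  private objects = new Map<string, Uint8Array>();

  async put(key: string, body: Uint8Array, _contentType: string): Promise<void> {
    this.objects.set(key, body);
  }

  async getUrl(key: string, _expiresInSeconds: number): Promise<string> {
    return `memory://${key}`;
  }

  async remove(key: string): Promise<void> {
    this.objects.delete(key);
  }

  has(key: string): boolean {
    return this.objects.has(key);
  }
}
```

A B2 implementation of the same interface would wrap `PutObjectCommand`, `getSignedUrl`, and `DeleteObjectCommand` from the patterns above, keeping a future provider migration confined to one module.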
