
Tigris

Build with Tigris for globally distributed S3-compatible object storage.

Quick Summary
You are an expert in integrating Tigris for file and object storage. Tigris is
a globally distributed, S3-compatible object storage service that automatically
caches and replicates data close to users. It charges zero egress fees and
works with any S3 SDK.

## Key Points

- Use presigned URLs for client-side uploads — same pattern as S3, works identically
- Set `Cache-Control` headers on upload — Tigris uses them for its global caching layer
- Rely on automatic geo-replication instead of manually creating multi-region setups — Tigris pulls data to the nearest edge on first access
- Use `fly storage create` on Fly.io — credentials and endpoint are configured automatically
- Don't specify a region other than `auto` — Tigris manages regions internally, and setting a specific region causes connection errors
- Always set `ContentType` on upload — otherwise objects default to `application/octet-stream` and browsers download them instead of displaying them
- Don't create separate buckets per region — Tigris buckets are already global; extra buckets add management overhead without benefit

## Quick Example

```bash
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
```

```bash
fly storage create
# Creates a Tigris bucket and sets AWS_ACCESS_KEY_ID,
# AWS_SECRET_ACCESS_KEY, BUCKET_NAME, and AWS_ENDPOINT_URL_S3
# as secrets on your Fly app.
```

Tigris — Storage Integration

You are an expert in integrating Tigris for file and object storage. Tigris is a globally distributed, S3-compatible object storage service that automatically caches and replicates data close to users. It charges zero egress fees and works with any S3 SDK.

Core Philosophy

Overview

Tigris stores objects in a single global namespace and automatically geo-replicates data to regions where it is accessed. There is no need to configure multi-region buckets or CDN layers — Tigris handles caching at the edge transparently. It is S3-compatible, so existing code using the AWS SDK works with a simple endpoint change.

Setup & Configuration

Install the AWS S3 SDK

```bash
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
```

Create the client

```typescript
import { S3Client } from '@aws-sdk/client-s3';

const tigris = new S3Client({
  region: 'auto',
  endpoint: 'https://fly.storage.tigris.dev',
  credentials: {
    accessKeyId: process.env.TIGRIS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.TIGRIS_SECRET_ACCESS_KEY!,
  },
});
```

Fly.io integration

When running on Fly.io, Tigris credentials are injected automatically via fly storage create:

```bash
fly storage create
# Creates a Tigris bucket and sets AWS_ACCESS_KEY_ID,
# AWS_SECRET_ACCESS_KEY, BUCKET_NAME, and AWS_ENDPOINT_URL_S3
# as secrets on your Fly app.
```

```typescript
// On Fly.io the env vars are set automatically
const tigris = new S3Client({
  region: 'auto',
  endpoint: process.env.AWS_ENDPOINT_URL_S3,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});
```

Core Patterns

Upload and download

```typescript
import {
  PutObjectCommand,
  GetObjectCommand,
  DeleteObjectCommand,
} from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const BUCKET = process.env.BUCKET_NAME!;

// Server-side upload (fileName is the original client-supplied file name)
await tigris.send(new PutObjectCommand({
  Bucket: BUCKET,
  Key: `uploads/${crypto.randomUUID()}-${fileName}`,
  Body: buffer,
  ContentType: 'image/png',
}));

// Presigned upload URL for client-side uploads
const putCommand = new PutObjectCommand({
  Bucket: BUCKET,
  Key: `uploads/${crypto.randomUUID()}-${fileName}`,
  ContentType: contentType,
});
const uploadUrl = await getSignedUrl(tigris, putCommand, { expiresIn: 3600 });

// Presigned download URL
const getCommand = new GetObjectCommand({ Bucket: BUCKET, Key: key });
const downloadUrl = await getSignedUrl(tigris, getCommand, { expiresIn: 3600 });

// Delete
await tigris.send(new DeleteObjectCommand({ Bucket: BUCKET, Key: key }));
```
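A client can then upload directly to Tigris using the presigned URL, keeping file bytes off your server. A minimal browser-side sketch — the `/api/upload-url` route and its `{ uploadUrl, key }` response shape are assumptions for illustration, not part of the Tigris API:

```typescript
// Hypothetical browser-side counterpart to the presigned-URL pattern above.
// Assumes a server route /api/upload-url that calls getSignedUrl and
// responds with { uploadUrl, key }.
async function uploadViaPresignedUrl(file: File): Promise<string> {
  const res = await fetch(
    `/api/upload-url?contentType=${encodeURIComponent(file.type)}`
  );
  const { uploadUrl, key } = await res.json();

  // PUT straight to Tigris; the Content-Type must match the one that was signed.
  const put = await fetch(uploadUrl, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  });
  if (!put.ok) throw new Error(`Upload failed: ${put.status}`);
  return key; // persist this key server-side to reference the object later
}
```

Returning the key (rather than the full URL) keeps the client decoupled from bucket naming and lets the server decide later whether to serve the object publicly or via a presigned download URL.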

Public bucket access

```typescript
// Enable public access on the bucket via the Tigris dashboard or CLI.
// Objects are then accessible at:
// https://{bucket}.fly.storage.tigris.dev/{key}

await tigris.send(new PutObjectCommand({
  Bucket: BUCKET,
  Key: `assets/logo.webp`,
  Body: imageBuffer,
  ContentType: 'image/webp',
  CacheControl: 'public, max-age=31536000, immutable',
}));

const publicUrl = `https://${BUCKET}.fly.storage.tigris.dev/assets/logo.webp`;
```

List objects

```typescript
import { ListObjectsV2Command } from '@aws-sdk/client-s3';

const { Contents } = await tigris.send(new ListObjectsV2Command({
  Bucket: BUCKET,
  Prefix: 'uploads/',
  MaxKeys: 100,
}));
```

Conditional reads with metadata

```typescript
// Tigris supports standard S3 conditional headers for cache validation
const result = await tigris.send(new GetObjectCommand({
  Bucket: BUCKET,
  Key: 'data/config.json',
  IfNoneMatch: previousEtag,
}));
```

Best Practices

  • Use presigned URLs for client-side uploads — same pattern as S3, works identically
  • Set Cache-Control headers on upload — Tigris uses them for its global caching layer
  • Rely on automatic geo-replication instead of manually creating multi-region setups — Tigris pulls data to the nearest edge on first access
  • Use fly storage create on Fly.io — credentials and endpoint are configured automatically

Common Pitfalls

  • Specifying a region other than auto — Tigris manages regions internally; setting a specific region causes connection errors
  • Not setting ContentType on upload — objects default to application/octet-stream and browsers will download instead of display
  • Creating separate buckets per region — Tigris buckets are already global; extra buckets add management overhead without benefit

Anti-Patterns

Using the service without understanding its pricing model. Cloud services bill differently — per request, per GB, per seat. Deploying without modeling expected costs leads to surprise invoices.

Hardcoding configuration instead of using environment variables. API keys, endpoints, and feature flags change between environments. Hardcoded values break deployments and leak secrets.

Ignoring the service's rate limits and quotas. Every external API has throughput limits. Failing to implement backoff, queuing, or caching results in dropped requests under load.
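With the S3 SDK used throughout this skill, one concrete mitigation is tuning the client's built-in retry behavior at construction time — a sketch, not a complete rate-limiting strategy:

```typescript
import { S3Client } from '@aws-sdk/client-s3';

// The AWS SDK retries transient and throttling errors with exponential
// backoff; 'adaptive' mode adds client-side rate limiting on top.
const tigris = new S3Client({
  region: 'auto',
  endpoint: process.env.AWS_ENDPOINT_URL_S3,
  maxAttempts: 5,        // initial attempt plus up to 4 retries
  retryMode: 'adaptive',
});
```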

Treating the service as always available. External services go down. Without circuit breakers, fallbacks, or graceful degradation, a third-party outage becomes your outage.

Coupling your architecture to a single provider's API. Building directly against provider-specific interfaces makes migration painful. Wrap external services in thin adapter layers.
