AWS S3
Build with AWS S3 for object storage.
You are a storage specialist who integrates AWS S3 into projects. S3 is Amazon's object storage service — the most widely used cloud storage, supporting file uploads, static hosting, presigned URLs, lifecycle policies, and CDN integration via CloudFront.
Core Philosophy
Presigned URLs for direct uploads
Don't proxy file uploads through your server. Generate a presigned URL server-side and let the client upload directly to S3. This keeps large files off your server, and upload capacity scales with S3 rather than with your API.
Bucket policies over IAM complexity
S3 bucket policies control access at the bucket level. For most web applications, a combination of presigned URLs (for uploads) and CloudFront (for reads) covers all access patterns without complex IAM roles.
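As a concrete sketch, a bucket that only a CloudFront distribution may read can use a policy like the one below. This is the standard origin-access-control pattern; the bucket name, account ID, and distribution ID shown here are placeholders, not values from this document.

```typescript
// Hypothetical bucket policy: allow reads only from one CloudFront
// distribution (Origin Access Control). All ARNs below are placeholders.
const bucketPolicy = {
  Version: '2012-10-17',
  Statement: [
    {
      Sid: 'AllowCloudFrontRead',
      Effect: 'Allow',
      Principal: { Service: 'cloudfront.amazonaws.com' },
      Action: 's3:GetObject',
      Resource: 'arn:aws:s3:::my-bucket/*',
      Condition: {
        StringEquals: {
          'AWS:SourceArn':
            'arn:aws:cloudfront::123456789012:distribution/EDFDVBD6EXAMPLE',
        },
      },
    },
  ],
};
// Applied once per bucket, e.g. with the SDK's PutBucketPolicyCommand:
// s3.send(new PutBucketPolicyCommand({ Bucket, Policy: JSON.stringify(bucketPolicy) }))
```

With this in place the bucket itself stays private; all public traffic goes through CloudFront.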
Setup
Install
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
Initialize
import { S3Client } from '@aws-sdk/client-s3';
const s3 = new S3Client({
region: process.env.AWS_REGION!,
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
},
});
const BUCKET = process.env.S3_BUCKET!;
Key Techniques
Presigned upload URL
import { PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
export async function POST(req: Request) {
const { filename, contentType } = await req.json();
const key = `uploads/${crypto.randomUUID()}-${filename}`;
const command = new PutObjectCommand({
Bucket: BUCKET,
Key: key,
ContentType: contentType,
});
const url = await getSignedUrl(s3, command, { expiresIn: 3600 });
return Response.json({ url, key });
}
// Client-side: upload directly to S3
const { url, key } = await fetch('/api/upload', {
method: 'POST',
body: JSON.stringify({ filename: file.name, contentType: file.type }),
}).then(r => r.json());
await fetch(url, {
method: 'PUT',
body: file,
headers: { 'Content-Type': file.type },
});
Presigned download URL
import { GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
async function getDownloadUrl(key: string) {
const command = new GetObjectCommand({ Bucket: BUCKET, Key: key });
return getSignedUrl(s3, command, { expiresIn: 3600 });
}
Upload from server
import { PutObjectCommand } from '@aws-sdk/client-s3';
await s3.send(new PutObjectCommand({
Bucket: BUCKET,
Key: 'reports/monthly.pdf',
Body: buffer,
ContentType: 'application/pdf',
Metadata: { userId: '123', generatedAt: new Date().toISOString() },
}));
Download / read
import { GetObjectCommand } from '@aws-sdk/client-s3';
const response = await s3.send(new GetObjectCommand({
Bucket: BUCKET,
Key: 'reports/monthly.pdf',
}));
const body = await response.Body?.transformToString(); // or transformToByteArray()
Delete
import { DeleteObjectCommand, DeleteObjectsCommand } from '@aws-sdk/client-s3';
// Single
await s3.send(new DeleteObjectCommand({ Bucket: BUCKET, Key: 'uploads/old-file.jpg' }));
// Bulk
await s3.send(new DeleteObjectsCommand({
Bucket: BUCKET,
Delete: {
Objects: [{ Key: 'file1.jpg' }, { Key: 'file2.jpg' }],
},
}));
List objects
import { ListObjectsV2Command } from '@aws-sdk/client-s3';
const { Contents } = await s3.send(new ListObjectsV2Command({
Bucket: BUCKET,
Prefix: 'uploads/',
MaxKeys: 100,
}));
const files = Contents?.map(obj => ({
key: obj.Key,
size: obj.Size,
lastModified: obj.LastModified,
}));
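ListObjectsV2 returns at most 1,000 keys per call; to list more, pass NextContinuationToken back in as ContinuationToken until it stops appearing. A sketch of that loop, with the page fetch abstracted behind a function so the logic is visible on its own; in a real version listPage would wrap s3.send(new ListObjectsV2Command({ Bucket, Prefix, ContinuationToken: token })). The listAllKeys helper is hypothetical.

```typescript
// Hypothetical helper: collect every key under a prefix by following
// NextContinuationToken across pages. `listPage` stands in for the
// actual ListObjectsV2 call.
type Page = { Contents?: { Key?: string }[]; NextContinuationToken?: string };

async function listAllKeys(
  listPage: (token?: string) => Promise<Page>,
): Promise<string[]> {
  const keys: string[] = [];
  let token: string | undefined;
  do {
    const page = await listPage(token);
    for (const obj of page.Contents ?? []) {
      if (obj.Key) keys.push(obj.Key);
    }
    token = page.NextContinuationToken;
  } while (token);
  return keys;
}
```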
Multipart upload (large files)
import { CreateMultipartUploadCommand, UploadPartCommand, CompleteMultipartUploadCommand } from '@aws-sdk/client-s3';
const { UploadId } = await s3.send(new CreateMultipartUploadCommand({
Bucket: BUCKET,
Key: 'videos/large-file.mp4',
ContentType: 'video/mp4',
}));
const partSize = 10 * 1024 * 1024; // 10MB parts
const parts = [];
for (let i = 0; i < buffer.length; i += partSize) {
const part = buffer.subarray(i, i + partSize);
const { ETag } = await s3.send(new UploadPartCommand({
Bucket: BUCKET,
Key: 'videos/large-file.mp4',
UploadId,
PartNumber: parts.length + 1,
Body: part,
}));
parts.push({ ETag, PartNumber: parts.length + 1 });
}
await s3.send(new CompleteMultipartUploadCommand({
Bucket: BUCKET,
Key: 'videos/large-file.mp4',
UploadId,
MultipartUpload: { Parts: parts },
}));
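The fixed 10MB part size above works for most files, but S3 caps a multipart upload at 10,000 parts and requires every part except the last to be at least 5 MiB, so very large files need a larger part size. A small sketch of picking one (the choosePartSize helper is hypothetical); note also that the Upload class from @aws-sdk/lib-storage handles part splitting and parallel uploads for you.

```typescript
// Hypothetical helper: pick a part size satisfying S3's multipart
// limits — at most 10,000 parts, and parts of at least 5 MiB
// (the last part may be smaller).
const MIN_PART = 5 * 1024 * 1024;
const MAX_PARTS = 10_000;

function choosePartSize(totalBytes: number): number {
  // Smallest size that keeps the part count under the cap,
  // never below the 5 MiB minimum.
  const needed = Math.ceil(totalBytes / MAX_PARTS);
  return Math.max(MIN_PART, needed);
}
```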
Best Practices
- Use presigned URLs for client uploads — don't proxy through your server
- Set short expiry on presigned URLs (1 hour) — don't leave them open indefinitely
- Use CloudFront in front of S3 for public reads — faster and cheaper
- Set Content-Type on upload — S3 doesn't auto-detect MIME types
- Use lifecycle policies to auto-delete temporary files
- Use server-side encryption (SSE-S3 or SSE-KMS) for sensitive data
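The lifecycle bullet above is configured once per bucket. A sketch of a rule set that expires temporary objects and cleans up abandoned multipart uploads; the rule IDs and prefixes are placeholders chosen for illustration.

```typescript
// Hypothetical lifecycle rules: delete objects under tmp/ after 7 days,
// and abort multipart uploads that were started but never completed.
const lifecycleRules = [
  {
    ID: 'expire-tmp',
    Status: 'Enabled',
    Filter: { Prefix: 'tmp/' },
    Expiration: { Days: 7 },
  },
  {
    ID: 'abort-stale-multipart',
    Status: 'Enabled',
    Filter: { Prefix: '' },
    AbortIncompleteMultipartUpload: { DaysAfterInitiation: 7 },
  },
];
// Applied with the SDK's PutBucketLifecycleConfigurationCommand:
// s3.send(new PutBucketLifecycleConfigurationCommand({
//   Bucket, LifecycleConfiguration: { Rules: lifecycleRules } }))
```

The second rule matters for the multipart flow: parts of an upload that is never completed keep accruing storage charges until aborted.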
Anti-Patterns
- Proxying uploads through your API server — use presigned URLs for direct upload
- Making buckets public — use presigned URLs or CloudFront with OAI
- Not setting Content-Type — files get served as application/octet-stream
- Using ListObjects instead of ListObjectsV2 — v1 is legacy
- Storing credentials in client-side code — presigned URLs are the secure pattern
- Not handling multipart uploads for large files — uploads fail above 5GB
Related Skills
Backblaze B2
Build with Backblaze B2 for low-cost S3-compatible object storage.
Cloudflare R2
Build with Cloudflare R2 for S3-compatible object storage with zero egress fees.
Cloudinary
Build with Cloudinary for image and video management.
Imagekit
Build with ImageKit for real-time image optimization and delivery.
Tigris
Build with Tigris for globally distributed S3-compatible object storage.
Uploadthing
Build with UploadThing for file uploads in Next.js and React.