# Cloudflare Pages Deployment

Cloudflare Pages and Workers expertise — edge-first deployments, full-stack apps with Workers functions, KV/D1/R2 bindings, preview URLs, custom domains, and global CDN distribution.
## Core Philosophy
Cloudflare Pages deploys static sites and full-stack applications to Cloudflare's global edge network. Static assets are served from 300+ data centers with automatic CDN caching. Dynamic server-side logic runs on Cloudflare Workers — lightweight V8 isolates that execute at the edge with sub-millisecond cold starts. Pages integrates with Workers bindings (KV, D1, R2, Durable Objects, Queues) to build full-stack applications without managing any servers. The platform targets developers who want globally distributed, fast-by-default deployments with a generous free tier.
## Setup & Configuration

### Project Initialization
```bash
# Install Wrangler (Cloudflare's CLI)
npm install -g wrangler

# Authenticate with Cloudflare
wrangler login

# Create a new Pages project
# Option 1: Connect a Git repository through the Cloudflare dashboard
#   Dashboard → Pages → Create a project → Connect to Git

# Option 2: Direct upload via CLI
wrangler pages project create my-app
wrangler pages deploy ./dist

# Option 3: Use a framework-specific create command
npm create cloudflare@latest my-app -- --framework=next
# Supports: Next.js, Nuxt, SvelteKit, Astro, Remix, Qwik, SolidStart, and more
```
### Wrangler Configuration
```toml
# wrangler.toml — project configuration
name = "my-app"
compatibility_date = "2024-12-01"
pages_build_output_dir = "./dist"

# Workers Functions (server-side code in the /functions directory)
# are automatically mapped to API routes:
#   functions/api/users.ts      → /api/users
#   functions/api/users/[id].ts → /api/users/:id

# Bindings — connect to Cloudflare services
[[kv_namespaces]]
binding = "CACHE"
id = "abc123"

[[d1_databases]]
binding = "DB"
database_name = "my-app-db"
database_id = "def456"

[[r2_buckets]]
binding = "STORAGE"
bucket_name = "my-app-uploads"

[vars]
ENVIRONMENT = "production"
API_VERSION = "v1"
```
### Framework Integration (Next.js Example)
```javascript
// next.config.js — configure for Cloudflare Pages
// Use @cloudflare/next-on-pages for full Next.js support

/** @type {import('next').NextConfig} */
const nextConfig = {
  // No special config needed — @cloudflare/next-on-pages handles the adapter
};

module.exports = nextConfig;

// package.json scripts
// {
//   "build": "npx @cloudflare/next-on-pages",
//   "preview": "wrangler pages dev .vercel/output/static",
//   "deploy": "wrangler pages deploy .vercel/output/static"
// }
```
## Key Techniques

### Functions (Server-Side API Routes)
```typescript
// functions/api/users.ts — API route handler
// File-based routing: this handles GET/POST to /api/users

interface Env {
  DB: D1Database;
  CACHE: KVNamespace;
  API_KEY: string;
}

export const onRequestGet: PagesFunction<Env> = async (context) => {
  const { env, request } = context;
  const url = new URL(request.url);
  const page = parseInt(url.searchParams.get("page") ?? "1", 10);

  // Check KV cache first
  const cacheKey = `users:page:${page}`;
  const cached = await env.CACHE.get(cacheKey, "json");
  if (cached) {
    return Response.json(cached);
  }

  // Query D1 database
  const limit = 20;
  const offset = (page - 1) * limit;
  const { results } = await env.DB.prepare(
    "SELECT id, name, email FROM users ORDER BY created_at DESC LIMIT ? OFFSET ?"
  )
    .bind(limit, offset)
    .all();

  // Cache for 5 minutes
  await env.CACHE.put(cacheKey, JSON.stringify(results), {
    expirationTtl: 300,
  });

  return Response.json(results);
};

export const onRequestPost: PagesFunction<Env> = async (context) => {
  const { env, request } = context;
  const body = await request.json<{ name: string; email: string }>();

  const result = await env.DB.prepare(
    "INSERT INTO users (name, email) VALUES (?, ?) RETURNING id"
  )
    .bind(body.name, body.email)
    .first();

  return Response.json(result, { status: 201 });
};
```
### Dynamic Routes
```typescript
// functions/api/users/[id].ts — dynamic route parameter

interface Env {
  DB: D1Database;
}

export const onRequestGet: PagesFunction<Env> = async (context) => {
  const userId = context.params.id as string;

  const user = await context.env.DB.prepare(
    "SELECT * FROM users WHERE id = ?"
  )
    .bind(userId)
    .first();

  if (!user) {
    return Response.json({ error: "User not found" }, { status: 404 });
  }
  return Response.json(user);
};
```
### D1 Database (SQLite at the Edge)
```bash
# Create a D1 database
wrangler d1 create my-app-db

# Apply migrations
wrangler d1 migrations create my-app-db init
# Edit the generated SQL file, then:
wrangler d1 migrations apply my-app-db          # production
wrangler d1 migrations apply my-app-db --local  # local dev
```

```sql
-- migrations/0001_init.sql
CREATE TABLE users (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  name TEXT NOT NULL,
  email TEXT UNIQUE NOT NULL,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_users_email ON users(email);
```
### R2 Object Storage (File Uploads)
```typescript
// functions/api/upload.ts — handle file uploads with R2

interface Env {
  STORAGE: R2Bucket;
}

export const onRequestPost: PagesFunction<Env> = async (context) => {
  const { env, request } = context;
  const formData = await request.formData();
  const file = formData.get("file") as File;

  if (!file) {
    return Response.json({ error: "No file provided" }, { status: 400 });
  }

  const key = `uploads/${Date.now()}-${file.name}`;
  await env.STORAGE.put(key, file.stream(), {
    httpMetadata: { contentType: file.type },
  });

  return Response.json({ key, url: `/api/files/${key}` }, { status: 201 });
};
```
### Custom Domains and Redirects
Add a custom domain in the Cloudflare dashboard: Pages project → Custom domains → Set up a custom domain. Cloudflare handles DNS and SSL automatically if the domain is already on Cloudflare.

Static redirects and headers live in files placed in the output directory:

```
# _redirects
/old-path  /new-path  301
/blog/*    /posts/:splat  302
/api/*     https://api.example.com/:splat  200
```

```
# _headers
/api/*
  Access-Control-Allow-Origin: *
  Cache-Control: no-cache

/assets/*
  Cache-Control: public, max-age=31536000, immutable
```
### Environment Variables and Secrets
```bash
# Set secrets (encrypted at rest)
wrangler pages secret put API_KEY         # prompts for the value
wrangler pages secret put DATABASE_TOKEN

# Plaintext, environment-specific variables — set in the dashboard:
#   Pages project → Settings → Environment variables
#   Separate values for Production and Preview environments
```

```typescript
// Access variables and secrets in functions via the env parameter (never process.env)
export const onRequest: PagesFunction<Env> = async ({ env }) => {
  const apiKey = env.API_KEY; // correct
  // ...
};
```
### Preview Deployments
Every non-production branch push creates a preview deployment:

- URL format: `<commit-hash>.<project>.pages.dev`
- Branch-specific alias: `<branch-name>.<project>.pages.dev`
- Each preview gets its own unique URL, preview-specific environment variables, and database bindings (which can point to separate preview databases)

This is useful for PR reviews — each push gets a live URL automatically, and preview URLs are shareable and persist until the project is deleted. Control which branches build previews in the dashboard: Pages project → Settings → Build & deployments → Preview branch control.
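Preview and production bindings can also be split in wrangler.toml so previews never touch production data. A sketch — the resource names and IDs below are placeholders, not values from this project:

```toml
# Same binding names in both environments, different underlying resources.
[env.production]
kv_namespaces = [{ binding = "CACHE", id = "prod-kv-id" }]
d1_databases = [{ binding = "DB", database_name = "my-app-db", database_id = "prod-d1-id" }]

[env.preview]
kv_namespaces = [{ binding = "CACHE", id = "preview-kv-id" }]
d1_databases = [{ binding = "DB", database_name = "my-app-db-preview", database_id = "preview-d1-id" }]
```

Because the binding names (`CACHE`, `DB`) are identical, function code is unchanged between environments.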
### Local Development
```bash
# Run Pages locally with full bindings support
wrangler pages dev ./dist

# With a framework dev server (e.g., Vite)
wrangler pages dev -- npm run dev

# Local D1 and KV are simulated automatically
# Use the --local flag for D1 migrations during development
wrangler d1 migrations apply my-app-db --local

# Bindings work identically in local dev and production
```
## Best Practices
- **Use bindings, not external API calls** — access D1, KV, R2, and other Cloudflare services through bindings rather than HTTP; bindings are faster and do not count against subrequest limits.
- **Cache aggressively with KV** — edge-cached KV reads are nearly free; cache database query results and computed responses with appropriate TTLs.
- **Keep functions small** — Workers have a 1MB limit on free plans and 10MB on paid; use code splitting and avoid bundling large libraries.
- **Use D1 for relational data** — D1 is SQLite at the edge with automatic read replication; it handles most application database needs without external database services.
- **Set compatibility_date** — pin `compatibility_date` in wrangler.toml to avoid unexpected behavior changes when Cloudflare updates the Workers runtime.
- **Separate preview and production bindings** — use different D1 databases and KV namespaces for preview vs. production to avoid polluting production data during testing.
- **Use _headers and _redirects files** — static configuration for headers and redirects is faster than handling them in functions and works for static assets too.
- **Structure functions by route** — the `functions/` directory maps directly to URL paths; keep the file structure clean and predictable.
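The "cache aggressively with KV" advice can be factored into a small read-through helper. This is a sketch against a minimal subset of the KV interface — the `KVLike` shape and `MemoryKV` stand-in are illustrative so it runs outside Workers, not part of Cloudflare's API:

```typescript
// Minimal shape of the two KV methods used (subset of Cloudflare's KVNamespace).
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

// cachedJson: return a cached JSON value if present; otherwise compute it,
// store it with a TTL, and return the fresh value.
async function cachedJson<T>(
  kv: KVLike,
  key: string,
  ttlSeconds: number,
  compute: () => Promise<T>
): Promise<T> {
  const hit = await kv.get(key);
  if (hit !== null) return JSON.parse(hit) as T;
  const fresh = await compute();
  await kv.put(key, JSON.stringify(fresh), { expirationTtl: ttlSeconds });
  return fresh;
}

// In-memory stand-in for KV so the helper can be exercised locally (ignores TTLs).
class MemoryKV implements KVLike {
  private store = new Map<string, string>();
  async get(key: string): Promise<string | null> {
    return this.store.get(key) ?? null;
  }
  async put(key: string, value: string): Promise<void> {
    this.store.set(key, value);
  }
}
```

In a function, `env.CACHE` satisfies the `KVLike` shape, so the handler body reduces to `return Response.json(await cachedJson(env.CACHE, cacheKey, 300, queryDb))`.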
## Anti-Patterns
- **Using `process.env` in functions** — Cloudflare Workers do not run Node.js; environment variables come through the `env` parameter in the function context, not `process.env`.
- **Storing large files in KV** — KV values are limited to 25MB and are optimized for reads; use R2 for file storage and KV for metadata and cache.
- **Ignoring CPU time limits** — Workers have a 10ms CPU time limit on free plans (50ms on paid); long-running computations will be terminated. Offload heavy work to Queues or Durable Objects.
- **Deploying without testing bindings locally** — `wrangler pages dev` simulates all bindings locally; skipping local testing leads to runtime errors in production that are hard to debug.
- **Hardcoding Cloudflare account IDs in code** — use wrangler.toml for configuration and secrets for sensitive values; never commit account-specific IDs to source control.
- **Not handling D1 row limits** — D1 returns a maximum of 5MB per query result; paginate large queries and avoid `SELECT *` on tables with many columns.
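The row-limit advice can be made concrete with a small helper that clamps raw page input before it reaches D1. The function name and the limits chosen here are illustrative:

```typescript
// paginationParams: turn a raw ?page= query value into safe LIMIT/OFFSET
// values, clamping the per-page size so each D1 result stays well under
// the 5MB per-query response limit.
function paginationParams(
  pageRaw: string | null,
  perPage = 20,
  maxPerPage = 100
): { limit: number; offset: number } {
  const parsed = parseInt(pageRaw ?? "1", 10);
  const page = Number.isNaN(parsed) || parsed < 1 ? 1 : parsed; // fall back to page 1
  const limit = Math.min(Math.max(1, perPage), maxPerPage);      // clamp page size
  return { limit, offset: (page - 1) * limit };
}
```

In a function, the result feeds straight into a bound query with explicit columns rather than `SELECT *`: `env.DB.prepare("SELECT id, name FROM users LIMIT ? OFFSET ?").bind(limit, offset).all()`.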