Render Deployment
Render platform expertise — web services, static sites, background workers, cron jobs, managed databases, blueprints (IaC), and auto-scaling
Core Philosophy
Render is a unified cloud platform that replaces the patchwork of Heroku, AWS, and Netlify with a single interface. It provides native support for web services, static sites, background workers, cron jobs, and managed databases — all with automatic deploys from Git. Blueprints define your entire infrastructure as code in a single YAML file. The platform prioritizes simplicity and convention over configuration: push code, get a running service with HTTPS, CDN, and auto-deploys. Zero DevOps, production-ready defaults.
Setup
Project Configuration
```yaml
# render.yaml — Blueprint (Infrastructure as Code)
# Defines all services, databases, and environment groups in one file
services:
  - type: web
    name: api
    runtime: node
    region: oregon
    plan: starter
    buildCommand: npm ci && npm run build
    startCommand: node dist/server.js
    healthCheckPath: /health
    envVars:
      - key: NODE_ENV
        value: production
      - key: DATABASE_URL
        fromDatabase:
          name: main-db
          property: connectionString
      - key: REDIS_URL
        fromService:
          name: cache
          type: redis
          property: connectionString

  - type: worker
    name: job-processor
    runtime: node
    buildCommand: npm ci && npm run build
    startCommand: node dist/worker.js
    envVars:
      - key: DATABASE_URL
        fromDatabase:
          name: main-db
          property: connectionString

  - type: cron
    name: daily-cleanup
    runtime: node
    buildCommand: npm ci && npm run build
    startCommand: node dist/cron/cleanup.js
    schedule: "0 3 * * *"

  - type: web
    name: frontend
    runtime: static
    buildCommand: npm ci && npm run build
    staticPublishPath: dist
    routes:
      - type: rewrite
        source: /*
        destination: /index.html

  # Redis instance referenced by REDIS_URL above
  - type: redis
    name: cache
    plan: starter
    ipAllowList: []   # empty list = internal connections only

databases:
  - name: main-db
    plan: starter
    region: oregon
    databaseName: myapp
    user: myapp
```
CLI and Deployment
Render auto-deploys on git push — no CLI is required for the standard workflow. For manual operations, use the Render REST API.

```typescript
// src/scripts/deploy-check.ts — verify deployments via the Render API
const RENDER_API = "https://api.render.com/v1";
const API_KEY = process.env.RENDER_API_KEY!;

interface RenderService {
  id: string;
  name: string;
  type: string;
}

interface RenderDeploy {
  id: string;
  status: string;
  createdAt: string;
  finishedAt: string | null;
}

async function getLatestDeploy(serviceId: string): Promise<RenderDeploy> {
  const response = await fetch(
    `${RENDER_API}/services/${serviceId}/deploys?limit=1`,
    { headers: { Authorization: `Bearer ${API_KEY}` } }
  );
  if (!response.ok) throw new Error(`Render API error: ${response.status}`);
  // the list endpoint wraps each item as { deploy, cursor }
  const deploys = await response.json();
  return deploys[0].deploy;
}

async function triggerDeploy(serviceId: string): Promise<RenderDeploy> {
  const response = await fetch(`${RENDER_API}/services/${serviceId}/deploys`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ clearCache: false }),
  });
  if (!response.ok) throw new Error(`Render API error: ${response.status}`);
  return (await response.json()).deploy;
}
```
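Triggering a deploy returns immediately; to gate CI on the result, poll until the deploy reaches a terminal status. A sketch building on the helpers above — the status strings are an assumption drawn from Render's deploy lifecycle ("live", "build_failed", etc.), so verify them against the API reference:

```typescript
// Terminal deploy statuses (assumed set — check the Render API reference)
const TERMINAL_STATUSES = new Set([
  "live",
  "build_failed",
  "update_failed",
  "canceled",
  "deactivated",
]);

export function isDeployFinished(status: string): boolean {
  return TERMINAL_STATUSES.has(status);
}

// Polls a status getter until a terminal status or the timeout.
// Pass e.g. async () => (await getLatestDeploy(serviceId)).status
export async function waitForDeploy(
  getStatus: () => Promise<string>,
  intervalMs = 10_000,
  timeoutMs = 15 * 60_000
): Promise<string> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const status = await getStatus();
    if (isDeployFinished(status)) return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for deploy to finish");
}
```

Taking a status getter rather than calling the API directly keeps the polling logic testable without network access.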
Key Techniques
Web Services
```typescript
// src/server.ts — Express app configured for Render
import express from "express";
import helmet from "helmet";
import compression from "compression";
import { query } from "./db"; // pool helper defined in src/db.ts below

const app = express();
// Render injects PORT; bind to it (and to 0.0.0.0) to receive traffic
const PORT = parseInt(process.env.PORT ?? "3000", 10);

app.use(helmet());
app.use(compression());
app.use(express.json({ limit: "10mb" }));

// Health check — Render pings this to verify the service is running
app.get("/health", async (_req, res) => {
  try {
    await query("SELECT 1"); // fail the check if the database is unreachable
    res.json({
      status: "healthy",
      service: process.env.RENDER_SERVICE_NAME,
      region: process.env.RENDER_REGION,
      instance: process.env.RENDER_INSTANCE_ID,
    });
  } catch {
    res.status(503).json({ status: "unhealthy" });
  }
});

app.listen(PORT, "0.0.0.0", () => {
  console.log(`API listening on port ${PORT}`);
  console.log(`Service: ${process.env.RENDER_SERVICE_NAME}`);
  console.log(`Environment: ${process.env.NODE_ENV}`);
});

export default app;
```
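The worker example below closes cleanly on SIGTERM; the web service should do the same, since Render sends SIGTERM before stopping instances. A sketch of a reusable drain-and-exit handler — the `exit` function is injectable so the logic is unit-testable, and the wiring names (`server`, `pool`) are illustrative:

```typescript
// Sketch: run shutdown steps in order on SIGTERM, then exit.
export function makeSigtermHandler(
  closers: Array<() => Promise<void> | void>,
  exit: (code: number) => void = process.exit
) {
  let called = false;
  return async () => {
    if (called) return; // ignore repeated signals
    called = true;
    try {
      // e.g. close the HTTP server first, then DB pools and queues
      for (const close of closers) await close();
      exit(0);
    } catch {
      exit(1);
    }
  };
}

// Wiring sketch (server and pool are illustrative names):
// const server = app.listen(PORT, "0.0.0.0");
// process.on("SIGTERM", makeSigtermHandler([
//   () => new Promise<void>((resolve) => server.close(() => resolve())),
//   () => pool.end(),
// ]));
```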
Background Workers
```typescript
// src/worker.ts — background job processor
import { Worker, Job } from "bullmq";
import Redis from "ioredis";
import { sendEmail } from "./email"; // app-specific mailer (not shown)

// BullMQ requires maxRetriesPerRequest: null on its blocking connection
const connection = new Redis(process.env.REDIS_URL!, {
  maxRetriesPerRequest: null,
});

const worker = new Worker(
  "email-queue",
  async (job: Job) => {
    const { to, subject, body } = job.data;
    console.log(`Processing email job ${job.id}: ${to}`);
    await sendEmail(to, subject, body);
    console.log(`Email sent to ${to}`);
  },
  {
    connection,
    concurrency: 5, // up to 5 jobs in parallel per instance
    limiter: { max: 100, duration: 60000 }, // rate limit: 100 jobs/minute
  }
);

worker.on("completed", (job) => {
  console.log(`Job ${job.id} completed`);
});

worker.on("failed", (job, error) => {
  console.error(`Job ${job?.id} failed:`, error.message);
});

// Graceful shutdown — Render sends SIGTERM before stopping the service
process.on("SIGTERM", async () => {
  console.log("SIGTERM received, closing worker...");
  await worker.close(); // waits for in-flight jobs to finish
  process.exit(0);
});

console.log("Worker started, waiting for jobs...");
```
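The worker above consumes `email-queue`, but nothing here enqueues jobs. A producer-side sketch — the options object mirrors BullMQ's `JobsOptions` shape as I understand it (an assumption; verify the field names against your BullMQ version):

```typescript
// Sketch: default retry policy for email jobs — 3 attempts with
// exponential backoff starting at 5 s. Field names follow BullMQ's
// JobsOptions (assumed).
export function emailJobOptions() {
  return {
    attempts: 3,
    backoff: { type: "exponential" as const, delay: 5000 },
    removeOnComplete: 1000, // keep only the most recent finished jobs
    removeOnFail: 5000,
  };
}

// Producer sketch — runs in the web service, not the worker:
// import { Queue } from "bullmq";
// const queue = new Queue("email-queue", { connection });
// await queue.add("welcome-email", { to, subject, body }, emailJobOptions());
```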
Cron Jobs
```typescript
// src/cron/cleanup.ts — runs on a schedule defined in render.yaml
import prisma from "../db";
import { deleteFromStorage } from "../storage"; // app-specific object-storage client (not shown)

async function cleanup() {
  const cutoffDate = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000); // 30 days ago

  // Delete expired sessions
  const deletedSessions = await prisma.session.deleteMany({
    where: { expiresAt: { lt: new Date() } },
  });
  console.log(`Deleted ${deletedSessions.count} expired sessions`);

  // Archive old audit logs
  const archivedLogs = await prisma.auditLog.updateMany({
    where: { createdAt: { lt: cutoffDate }, archived: false },
    data: { archived: true },
  });
  console.log(`Archived ${archivedLogs.count} old audit logs`);

  // Clean up orphaned uploads
  const orphanedFiles = await prisma.upload.findMany({
    where: { attachedTo: null, createdAt: { lt: cutoffDate } },
  });
  for (const file of orphanedFiles) {
    await deleteFromStorage(file.storageKey);
    await prisma.upload.delete({ where: { id: file.id } });
  }
  console.log(`Cleaned up ${orphanedFiles.length} orphaned files`);
}

cleanup()
  .then(() => {
    console.log("Cleanup complete");
    process.exit(0);
  })
  .catch((err) => {
    console.error("Cleanup failed:", err);
    process.exit(1);
  });
```
Static Sites with SPA Routing
```yaml
# render.yaml — static site configuration
services:
  - type: web
    name: frontend
    runtime: static
    buildCommand: npm ci && npm run build
    staticPublishPath: dist
    headers:
      - path: /*
        name: X-Frame-Options
        value: DENY
      - path: /assets/*
        name: Cache-Control
        value: public, max-age=31536000, immutable
    routes:
      - type: rewrite
        source: /*
        destination: /index.html
    envVars:
      - key: VITE_API_URL
        value: https://api.example.com
```

```typescript
// vite.config.ts — build configuration for Render
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  build: {
    outDir: "dist",
    sourcemap: true,
    rollupOptions: {
      output: {
        // split vendor code into its own long-cached chunk
        manualChunks: {
          vendor: ["react", "react-dom", "react-router-dom"],
        },
      },
    },
  },
});
```
Managed Databases
```typescript
// src/db.ts — connect to Render managed Postgres
import { Pool, PoolConfig } from "pg";

const poolConfig: PoolConfig = {
  connectionString: process.env.DATABASE_URL,
  max: 20, // per-instance cap; multiply by instance count against the plan's limit
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 5000,
  // Render managed Postgres requires SSL in production
  ssl:
    process.env.NODE_ENV === "production"
      ? { rejectUnauthorized: false }
      : false,
};

const pool = new Pool(poolConfig);

pool.on("error", (err) => {
  console.error("Unexpected pool error:", err);
});

export async function query<T>(sql: string, params?: unknown[]): Promise<T[]> {
  const client = await pool.connect();
  try {
    const result = await client.query(sql, params);
    return result.rows as T[];
  } finally {
    client.release();
  }
}

export async function transaction<T>(
  fn: (query: (sql: string, params?: unknown[]) => Promise<unknown>) => Promise<T>
): Promise<T> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    const result = await fn((sql, params) => client.query(sql, params));
    await client.query("COMMIT");
    return result;
  } catch (error) {
    await client.query("ROLLBACK");
    throw error;
  } finally {
    client.release();
  }
}
```
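Concurrent transactions can be aborted by Postgres with a transient error — SQLSTATE `40001` (serialization_failure) or `40P01` (deadlock_detected) — and the correct response is to retry. A retry wrapper that composes with the `transaction` helper above (a sketch; tune the retry count for your workload):

```typescript
// Sketch: retry a transaction when Postgres aborts it with a transient error.
// 40001 = serialization_failure, 40P01 = deadlock_detected.
export async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 3
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const code = (err as { code?: string }).code;
      const retryable = code === "40001" || code === "40P01";
      if (!retryable || attempt >= retries) throw err;
    }
  }
}

// Usage with the transaction() helper above (table/column names illustrative):
// await withRetry(() => transaction(async (q) => {
//   await q("UPDATE accounts SET balance = balance - $1 WHERE id = $2", [amount, from]);
//   await q("UPDATE accounts SET balance = balance + $1 WHERE id = $2", [amount, to]);
// }));
```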
Auto-Scaling
```yaml
# render.yaml — auto-scaling configuration
services:
  - type: web
    name: api
    runtime: node
    plan: standard
    scaling:
      minInstances: 2
      maxInstances: 10
      targetMemoryPercent: 70   # scale out before instances saturate
      targetCPUPercent: 70
    buildCommand: npm ci && npm run build
    startCommand: node dist/server.js
```

```typescript
// src/middleware/metrics.ts — expose metrics for scaling decisions
import { Request, Response, NextFunction } from "express";

let requestCount = 0;
let totalResponseTime = 0;

export function metricsMiddleware(req: Request, res: Response, next: NextFunction) {
  const start = Date.now();
  requestCount++;
  res.on("finish", () => {
    totalResponseTime += Date.now() - start;
  });
  next();
}

export function getMetrics() {
  const avgResponseTime = requestCount > 0 ? totalResponseTime / requestCount : 0;
  return {
    requestCount,
    avgResponseTime: Math.round(avgResponseTime),
    memoryUsage: process.memoryUsage(),
    uptime: process.uptime(),
  };
}
```
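One caveat with the counters above: they grow for the life of the process, so the average converges on the lifetime mean and stops reflecting current load. A rolling-window variant (a sketch, with an injectable clock so the window logic is testable without real time passing):

```typescript
// Sketch: rolling-window latency metrics over the last windowMs milliseconds.
export class RollingMetrics {
  private samples: Array<{ t: number; ms: number }> = [];

  constructor(
    private windowMs = 60_000,
    private now: () => number = Date.now
  ) {}

  record(ms: number): void {
    const t = this.now();
    this.samples.push({ t, ms });
    const cutoff = t - this.windowMs;
    // drop samples that fell out of the window
    while (this.samples.length > 0 && this.samples[0].t < cutoff) {
      this.samples.shift();
    }
  }

  average(): number {
    if (this.samples.length === 0) return 0;
    return this.samples.reduce((sum, s) => sum + s.ms, 0) / this.samples.length;
  }
}

// Wiring sketch: call metrics.record(Date.now() - start) inside the
// middleware's "finish" handler instead of incrementing cumulative counters.
```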
Blueprints for Full-Stack Architecture
```yaml
# render.yaml — complete full-stack blueprint
envVarGroups:
  - name: shared-config
    envVars:
      - key: JWT_SECRET
        generateValue: true   # Render generates a random value on first deploy
      - key: APP_NAME
        value: MyApp

services:
  - type: web
    name: api
    runtime: node
    repo: https://github.com/org/repo
    rootDir: apps/api
    buildCommand: npm ci && npm run build
    startCommand: node dist/server.js
    healthCheckPath: /health
    scaling:
      minInstances: 2
      maxInstances: 8
    envVars:
      - fromGroup: shared-config
      - key: DATABASE_URL
        fromDatabase:
          name: main-db
          property: connectionString

  - type: web
    name: frontend
    runtime: static
    repo: https://github.com/org/repo
    rootDir: apps/web
    buildCommand: npm ci && npm run build
    staticPublishPath: dist

  - type: worker
    name: queue-worker
    runtime: node
    repo: https://github.com/org/repo
    rootDir: apps/worker
    buildCommand: npm ci && npm run build
    startCommand: node dist/worker.js

databases:
  - name: main-db
    plan: standard
    databaseName: myapp
```
Best Practices
- **Use Blueprints (`render.yaml`)** — define all infrastructure as code for reproducible environments, team onboarding, and disaster recovery.
- **Set health check paths** — Render uses health checks to determine deploy success and route traffic. Always implement `/health` endpoints.
- **Use environment variable groups** — share common config (JWT secrets, API keys) across services via `envVarGroups` instead of duplicating values.
- **Handle SIGTERM gracefully** — Render sends SIGTERM before stopping services. Close database connections, finish in-flight requests, and drain queues.
- **Use `generateValue: true` for secrets** — let Render generate random values for JWT secrets and API keys in Blueprints.
- **Configure auto-scaling thresholds** — set CPU and memory targets at 70% to leave headroom for traffic spikes before new instances spin up.
- **Enable branch deploys for staging** — configure a separate service that deploys from a `staging` branch for pre-production testing.
- **Use Render Disks sparingly** — prefer object storage for files. Disks are tied to a single instance and do not replicate across scaled instances.
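The staging recommendation above can be expressed directly in a Blueprint: the `branch` field pins a service to a Git branch. A sketch with illustrative names:

```yaml
# render.yaml — separate staging service deploying from the staging branch
services:
  - type: web
    name: api-staging
    runtime: node
    branch: staging          # deploy from this branch instead of the default
    plan: starter            # smaller plan than production
    buildCommand: npm ci && npm run build
    startCommand: node dist/server.js
    healthCheckPath: /health
    envVars:
      - key: NODE_ENV
        value: staging
```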
Anti-Patterns
- **Not using SSL for database connections** — Render managed Postgres requires SSL in production. Connections without `ssl: true` will fail.
- **Using Render Disks with auto-scaling** — disks are per-instance. If you scale to multiple instances, each gets its own disk. Use S3 or similar for shared storage.
- **Ignoring zero-downtime deploy settings** — without health checks, Render cannot verify the new deploy works before routing traffic. Set `healthCheckPath`.
- **Running migrations in the start command** — with multiple instances, all run migrations simultaneously. Use a pre-deploy command or a separate one-off job.
- **Hardcoding internal service URLs** — use `fromService` references in Blueprints to automatically wire service URLs. Hardcoded URLs break across environments.
- **Skipping graceful shutdown** — not handling SIGTERM causes in-flight requests to drop and database connections to leak during deploys.
- **Over-provisioning plans** — start with the Starter plan and scale up. Render makes it easy to change plans without downtime.
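The migrations anti-pattern above has a Blueprint-level fix: Render's pre-deploy command runs once per deploy, before new instances start, which makes it the right place for migrations. A sketch — the migrate command depends on your ORM, and pre-deploy commands may require a paid instance type:

```yaml
# render.yaml — run migrations once per deploy, not once per instance
services:
  - type: web
    name: api
    runtime: node
    buildCommand: npm ci && npm run build
    preDeployCommand: npx prisma migrate deploy   # runs before instances restart
    startCommand: node dist/server.js
    healthCheckPath: /health
```

If the pre-deploy command fails, the deploy is aborted and the previous version keeps serving traffic.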