# Railway Deployment
Railway platform expertise — instant deployment, managed databases, environment variables, cron jobs, private networking, monorepo support, and templates
## Core Philosophy

Railway provides instant infrastructure with zero configuration. Push code and Railway detects the language, installs dependencies, builds, and deploys automatically. Databases are one click away — Postgres, MySQL, Redis, and MongoDB run as first-class services in the same project. Environment variables flow between services automatically via reference variables. The platform targets developers who want production-grade infrastructure without writing infrastructure code. Think of it as the fastest path from `git push` to a running production service.
## Setup

### Project Initialization

```bash
# Install the Railway CLI
npm i -g @railway/cli
railway login

# Create a new project
railway init

# Link the current directory to an existing Railway project
railway link

# Deploy from the local directory
railway up
```
```toml
# railway.toml — project configuration
[build]
builder = "nixpacks"
buildCommand = "npm run build"

[deploy]
startCommand = "npm start"
healthcheckPath = "/health"
healthcheckTimeout = 120
restartPolicyType = "on_failure"
restartPolicyMaxRetries = 5
```
### Nixpacks Configuration

Railway uses Nixpacks to auto-detect and build most stacks. Customize the build with `nixpacks.toml`:

```toml
# nixpacks.toml — customize the build
[phases.setup]
nixPkgs = ["...", "openssl"]  # "..." keeps the auto-detected defaults

[phases.install]
cmds = ["npm ci"]

[phases.build]
cmds = ["npm run build", "npx prisma generate"]

[start]
cmd = "npm start"
```

Alternatively, use a Dockerfile — Railway auto-detects it, and a Dockerfile takes precedence over Nixpacks when present.
### Environment Variables

```bash
# Set variables via CLI
railway variables set DATABASE_URL="postgres://..."
railway variables set NODE_ENV=production
```

Reference variables link services together. In the Railway dashboard, use the reference syntax:

```
DATABASE_URL = ${{Postgres.DATABASE_URL}}
REDIS_URL = ${{Redis.REDIS_URL}}
INTERNAL_API = ${{api-service.RAILWAY_PRIVATE_DOMAIN}}:3000
```
```ts
// src/config.ts — typed configuration
interface Config {
  port: number;
  databaseUrl: string;
  redisUrl: string;
  environment: string;
  railwayEnvironment: string;
}

export function getConfig(): Config {
  return {
    port: parseInt(process.env.PORT ?? "3000", 10),
    databaseUrl: process.env.DATABASE_URL!,
    redisUrl: process.env.REDIS_URL!,
    environment: process.env.NODE_ENV ?? "development",
    railwayEnvironment: process.env.RAILWAY_ENVIRONMENT_NAME ?? "local",
  };
}
```
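The non-null assertions in `getConfig` let a missing variable surface only when it is first used. A fail-fast variant validates at boot instead — the sketch below is illustrative (`requireEnv` is a hypothetical helper, not part of Railway or this codebase):

```ts
// Hypothetical fail-fast reader: throw at startup rather than let
// `undefined` propagate into the running app.
export function requireEnv(
  name: string,
  env: Record<string, string | undefined> = process.env,
): string {
  const value = env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage inside getConfig(), e.g.:
//   databaseUrl: requireEnv("DATABASE_URL"),
```

Crashing at boot is preferable here: Railway's restart policy and health check will flag the deploy instead of serving traffic with a broken config.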
## Key Techniques

### Database Services

```bash
# Add databases from the Railway dashboard or CLI
railway add --plugin postgres
railway add --plugin redis
railway add --plugin mysql
```

Railway auto-injects connection variables:

- Postgres: `DATABASE_URL`, `PGHOST`, `PGPORT`, `PGUSER`, `PGPASSWORD`, `PGDATABASE`
- Redis: `REDIS_URL`, `REDISHOST`, `REDISPORT`, `REDISPASSWORD`
```ts
// src/db.ts — Prisma with Railway Postgres
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient({
  datasources: {
    db: { url: process.env.DATABASE_URL },
  },
  log: process.env.NODE_ENV === "development" ? ["query"] : ["error"],
});

export async function healthCheck(): Promise<boolean> {
  try {
    await prisma.$queryRaw`SELECT 1`;
    return true;
  } catch {
    return false;
  }
}

export default prisma;
```
```ts
// src/cache.ts — Redis integration
import { createClient } from "redis";

const redis = createClient({ url: process.env.REDIS_URL });
redis.on("error", (err) => console.error("Redis error:", err));

export async function connectRedis() {
  if (!redis.isOpen) await redis.connect();
  return redis;
}

export async function cacheGet<T>(key: string): Promise<T | null> {
  const client = await connectRedis();
  const value = await client.get(key);
  return value ? JSON.parse(value) : null;
}

export async function cacheSet(key: string, value: unknown, ttlSeconds = 300) {
  const client = await connectRedis();
  await client.set(key, JSON.stringify(value), { EX: ttlSeconds });
}
```
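A common companion to these helpers is a read-through wrapper: check the cache, fall back to a loader, store the result. The sketch below keeps the storage functions injectable so it is testable without Redis — in this codebase the `getter`/`setter` would be `cacheGet`/`cacheSet`; all names are illustrative:

```ts
// Read-through cache sketch: return the cached value on a hit,
// otherwise compute via `loader`, store it, and return it.
export async function cacheGetOrSet<T>(
  key: string,
  loader: () => Promise<T>,
  getter: (key: string) => Promise<T | null>,
  setter: (key: string, value: T) => Promise<void>,
): Promise<T> {
  const cached = await getter(key);
  if (cached !== null) return cached; // cache hit — skip the loader
  const fresh = await loader();       // cache miss — compute and store
  await setter(key, fresh);
  return fresh;
}
```

Wired to the Redis helpers, this keeps TTL policy in one place instead of scattering get/set pairs through route handlers.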
### Cron Jobs

```toml
# railway.toml — configure a cron service
[deploy]
startCommand = "npm run cron"
cronSchedule = "0 */6 * * *"
```
```ts
// src/cron.ts — the cron job entry point
import prisma from "./db";
// fetchExternalAPI is assumed to be defined elsewhere in the project.
import { fetchExternalAPI } from "./sync";

async function runSyncJob() {
  console.log(`[${new Date().toISOString()}] Starting sync job`);
  const staleRecords = await prisma.record.findMany({
    where: { updatedAt: { lt: new Date(Date.now() - 24 * 60 * 60 * 1000) } },
  });
  for (const record of staleRecords) {
    const freshData = await fetchExternalAPI(record.externalId);
    await prisma.record.update({
      where: { id: record.id },
      data: { ...freshData, updatedAt: new Date() },
    });
  }
  console.log(`[${new Date().toISOString()}] Synced ${staleRecords.length} records`);
  process.exit(0); // Exit so Railway stops the cron container
}

runSyncJob().catch((err) => {
  console.error("Cron job failed:", err);
  process.exit(1);
});
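Since a cron container bills until its process exits, a hung external API call can quietly keep the job alive. A hard deadline wrapper is cheap insurance — this is a sketch, and `withDeadline` is an illustrative name, not a Railway API:

```ts
// Race a job against a deadline; reject (and let the caller exit
// non-zero) if the job hangs past `ms` milliseconds.
export async function withDeadline<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Job exceeded ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([work, deadline]);
  } finally {
    if (timer !== undefined) clearTimeout(timer); // don't keep the process alive
  }
}

// Usage in cron.ts (10-minute budget):
// withDeadline(runSyncJob(), 10 * 60 * 1000).catch(() => process.exit(1));
```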
### Private Networking

Railway services in the same project communicate over a private network. Each service gets a `RAILWAY_PRIVATE_DOMAIN` (e.g., `api-service.railway.internal`).

```ts
// src/services/auth-client.ts
const AUTH_SERVICE_URL =
  process.env.AUTH_SERVICE_INTERNAL_URL ??
  `http://${process.env.AUTH_RAILWAY_PRIVATE_DOMAIN}:3001`;

export async function validateSession(sessionId: string) {
  const response = await fetch(`${AUTH_SERVICE_URL}/validate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sessionId }),
  });
  if (!response.ok) {
    throw new Error(`Auth validation failed: ${response.status}`);
  }
  return response.json() as Promise<{ userId: string; roles: string[] }>;
}
```
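The URL fallback above is worth factoring out once several services talk to each other: on the private network there is no edge proxy, so you must target the port the sibling actually listens on, and traffic is plain `http://`. A small resolver sketch (variable names are illustrative — wire them up per service):

```ts
// Resolve a sibling service's base URL: prefer an explicit override
// variable, otherwise build one from its private domain and port.
export function internalBaseUrl(
  env: Record<string, string | undefined>,
  overrideVar: string,
  domainVar: string,
  port: number,
): string {
  const override = env[overrideVar];
  if (override) return override;
  const domain = env[domainVar];
  if (!domain) {
    throw new Error(`Neither ${overrideVar} nor ${domainVar} is set`);
  }
  // Private networking uses plain HTTP — no TLS terminator in front.
  return `http://${domain}:${port}`;
}

// Usage (hypothetical variable names):
// const AUTH_URL = internalBaseUrl(process.env, "AUTH_SERVICE_INTERNAL_URL", "AUTH_RAILWAY_PRIVATE_DOMAIN", 3001);
```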
### Monorepo Deployment

Railway supports monorepos — configure the root directory per service.

```toml
# railway.toml (in apps/api/)
[build]
builder = "nixpacks"
buildCommand = "cd ../.. && npm run build --workspace=apps/api"

[deploy]
startCommand = "node dist/server.js"
```

Or set the root directory in the Railway dashboard: Service Settings → Root Directory → `apps/api`.

`package.json` at the monorepo root (using npm workspaces):

```json
{
  "workspaces": ["apps/*", "packages/*"],
  "scripts": {
    "build": "turbo build",
    "build:api": "turbo build --filter=api"
  }
}
```

Each Railway service points to a different workspace directory:

- Service `api` → Root Directory: `apps/api`
- Service `web` → Root Directory: `apps/web`
- Service `worker` → Root Directory: `apps/worker`
### Health Checks and Observability

```ts
// src/health.ts — health check endpoint
import { Router } from "express";
import prisma from "./db";
import { connectRedis } from "./cache";

const health = Router();

health.get("/health", async (_req, res) => {
  const checks: Record<string, boolean> = {};
  try {
    await prisma.$queryRaw`SELECT 1`;
    checks.database = true;
  } catch {
    checks.database = false;
  }
  try {
    const redis = await connectRedis();
    await redis.ping();
    checks.redis = true;
  } catch {
    checks.redis = false;
  }
  const allHealthy = Object.values(checks).every(Boolean);
  res.status(allHealthy ? 200 : 503).json({
    status: allHealthy ? "healthy" : "degraded",
    checks,
    environment: process.env.RAILWAY_ENVIRONMENT_NAME,
    service: process.env.RAILWAY_SERVICE_NAME,
    deployId: process.env.RAILWAY_DEPLOYMENT_ID,
    uptime: process.uptime(),
  });
});

export default health;
```
### Templates and Multi-Service Setup

`railway.json` — template definition for one-click deploys:

```json
{
  "$schema": "https://railway.app/railway.schema.json",
  "build": {
    "builder": "NIXPACKS",
    "buildCommand": "npm run build"
  },
  "deploy": {
    "startCommand": "npm start",
    "healthcheckPath": "/health",
    "restartPolicyType": "ON_FAILURE"
  }
}
```

A typical multi-service architecture in a single Railway project:

1. API Service — Express/Fastify app
2. Worker Service — background job processor
3. Postgres — managed database
4. Redis — cache and job queue

Link them with reference variables:

```
API:    DATABASE_URL=${{Postgres.DATABASE_URL}}  REDIS_URL=${{Redis.REDIS_URL}}
Worker: DATABASE_URL=${{Postgres.DATABASE_URL}}  REDIS_URL=${{Redis.REDIS_URL}}
```
### Deployment Environments

Railway supports multiple environments (staging, production, etc.); each environment gets isolated databases, variables, and deployments.

```bash
railway environment create staging
railway environment use staging
```

Environment-specific variable overrides are set in the dashboard:

- Production: `API_URL=https://api.example.com`
- Staging: `API_URL=https://staging-api.example.com`

```ts
// src/server.ts — environment-aware startup
import express from "express";
import { getConfig } from "./config";
import health from "./health";

const app = express();
const config = getConfig();

app.use(health);
app.use(express.json());

// Log environment info on startup
app.listen(config.port, () => {
  console.log(`Server started on port ${config.port}`);
  console.log(`Environment: ${config.railwayEnvironment}`);
  console.log(`Node env: ${config.environment}`);
});
```
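On redeploys, the platform stops the old container by signalling it before it is killed, so draining in-flight requests on SIGTERM avoids dropped connections during zero-downtime rollouts. A sketch with an injectable `exit` so the handler is testable (all names are illustrative):

```ts
import type { Server } from "node:http";

// Close the listener, run cleanup (DB disconnects etc.), then exit.
export function makeShutdownHandler(
  server: Pick<Server, "close">,
  cleanup: () => Promise<void>,
  exit: (code: number) => void = (code) => process.exit(code),
): () => Promise<void> {
  return async () => {
    // Stop accepting new connections; wait for in-flight ones to finish.
    await new Promise<void>((resolve, reject) =>
      server.close((err) => (err ? reject(err) : resolve())),
    );
    await cleanup();
    exit(0);
  };
}

// Wiring (hypothetical — adapt to your server.ts):
// const server = app.listen(config.port);
// process.on("SIGTERM", makeShutdownHandler(server, () => prisma.$disconnect()));
```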
## Best Practices

- **Use reference variables** — link services with `${{ServiceName.VARIABLE}}` syntax instead of hardcoding connection strings. This keeps environments isolated.
- **Configure health checks** — set `healthcheckPath` in `railway.toml` so Railway can detect failed deployments and roll back automatically.
- **Use separate services for workers** — run background jobs as a separate Railway service sharing the same database, not as threads in your API process.
- **Exit cron jobs explicitly** — call `process.exit(0)` when done so Railway stops the container and does not charge for idle time.
- **Leverage Nixpacks defaults** — Railway auto-detects Node.js, Python, Go, and more. Only add a Dockerfile when Nixpacks cannot handle your needs.
- **Use Railway environments for staging** — create a staging environment with its own databases rather than using feature flags for pre-production testing.
- **Set restart policies** — configure `restartPolicyType: "on_failure"` with max retries to handle transient crashes gracefully.
- **Use private networking** — communicate between services over internal DNS instead of public URLs for lower latency and no egress costs.
## Anti-Patterns

- **Hardcoding database URLs** — connection strings change between environments. Always use reference variables or environment-specific variables.
- **Running migrations at startup** — if you scale to multiple replicas, they all run migrations simultaneously. Use a separate one-off command or a deploy hook.
- **Ignoring the `PORT` variable** — Railway assigns a dynamic port via the `PORT` environment variable. Hardcoding `3000` will fail in production.
- **Using the filesystem for persistence** — Railway containers are ephemeral. Store files in object storage (S3, R2) or use Railway volumes.
- **Not configuring restart policies** — without restart policies, a crashed service stays down until the next deploy. Set `on_failure` with retries.
- **Over-provisioning resources** — Railway charges by usage. Start small and scale up based on metrics rather than guessing capacity needs.
- **Deploying without health checks** — without health checks, Railway cannot distinguish a successful deploy from one that crashes on startup, so broken deploys go live.
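For the migrations-at-startup anti-pattern, one concrete alternative is a deploy hook. The fragment below is a sketch: it assumes your Railway setup supports a pre-deploy command, and the `preDeployCommand` key name should be verified against the current Railway config-as-code docs before relying on it.

```toml
# railway.toml — run migrations once per deploy, before the new
# version goes live, instead of in every replica's start command.
# NOTE: key name is an assumption — confirm against Railway docs.
[deploy]
preDeployCommand = "npx prisma migrate deploy"
startCommand = "npm start"
```

Alternatively, run migrations as a one-off with the service's environment via `railway run npx prisma migrate deploy` from the CLI.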
## Related Skills

- **AWS Lightsail** — a simplified way to launch virtual private servers (VPS), containers, databases, and more. Ideal for developers and small businesses needing easy-to-use, cost-effective cloud resources without deep AWS expertise.
- **Cloudflare Pages Deployment** — Cloudflare Pages and Workers expertise: edge-first deployments, full-stack apps with Workers functions, KV/D1/R2 bindings, preview URLs, custom domains, and global CDN distribution.
- **Coolify Deployment** — Coolify self-hosted PaaS expertise: Docker-based deployments, Git integration, automatic SSL, database provisioning, server management, and a Heroku/Netlify alternative on your own hardware.
- **DigitalOcean App Platform** — a fully managed PaaS for quickly building, deploying, and scaling web applications, static sites, APIs, and background services. It integrates with other DigitalOcean services like Managed Databases and Spaces, suiting developers who want a streamlined, opinionated deployment experience in the DO ecosystem.
- **Fly.io Deployment** — Fly.io platform expertise: container deployment, global edge distribution, Dockerfiles, volumes, secrets, scaling, PostgreSQL, and multi-region patterns.
- **Google Cloud Run** — a fully managed serverless platform for containerized applications. Stateless containers scale automatically from zero to thousands of instances based on request load, paying only for resources consumed; a fit for microservices, web APIs, and event-driven functions that need custom runtimes.