Coolify Deployment
Coolify self-hosted PaaS expertise — Docker-based deployments, Git integration, automatic SSL, database provisioning, server management, and Heroku/Netlify alternative on your own hardware
Core Philosophy
Coolify is an open-source, self-hosted platform-as-a-service that gives you Heroku-like convenience on your own servers. You install Coolify on a VPS or bare-metal machine and it manages Docker containers, automatic SSL via Let's Encrypt, Git-based deployments, databases, and more. It supports deploying from GitHub, GitLab, Bitbucket, or any Git repository. Coolify targets developers and teams who want full control over their infrastructure without giving up the push-to-deploy workflow.
Setup & Configuration
Server Installation
# Install Coolify on a fresh Ubuntu/Debian VPS (minimum 2 CPU, 2GB RAM)
curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash
# Coolify installs on port 8000 by default
# Visit http://your-server-ip:8000 to complete setup
# After initial setup, Coolify manages itself — updates are one-click from the UI
# The instance runs as Docker containers: coolify, coolify-proxy (Traefik), coolify-db (PostgreSQL)
Adding Remote Servers
# Coolify can deploy to multiple servers from one dashboard
# 1. Generate SSH key on Coolify server (or use existing)
ssh-keygen -t ed25519 -C "coolify"
# 2. Copy public key to target server
ssh-copy-id root@remote-server-ip
# 3. In Coolify UI: Servers → Add Server → paste private key and IP
# Coolify will install Docker on the remote server automatically
# Supported server providers: any VPS — Hetzner, DigitalOcean, AWS EC2, etc.
# Coolify itself can run on one server and deploy to many others
Project Configuration
// coolify.json (optional — placed in repo root for build customization)
// Most configuration is done through the Coolify web UI
// Environment variables, domains, and resources are set per-application
// Docker Compose deployments — place docker-compose.yml in your repo
// Coolify detects it and deploys all services defined in the compose file
// Dockerfile deployments — Coolify auto-detects Dockerfile in repo root
// Nixpacks deployments — if no Dockerfile, Coolify uses Nixpacks (same as Railway)
# docker-compose.yml — multi-service app deployed via Coolify
version: "3.8"
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://postgres:${DB_PASSWORD}@db:5432/myapp
    depends_on:
      - db
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=myapp
  redis:
    image: redis:7-alpine
    volumes:
      - redisdata:/data
volumes:
  pgdata:
  redisdata:
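The compose file above can be extended with a health check and per-service resource limits, both of which come up again under Best Practices. A hedged fragment — the `/health` endpoint, the use of `wget` (present in most Alpine-based images), and the limit values are illustrative assumptions:

```yaml
# docker-compose.yml fragment — health check and resource limits for the
# app service above (endpoint path and limit values are illustrative)
services:
  app:
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
```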
Git Integration
# Option 1: GitHub App (recommended)
# In Coolify UI: Sources → Add GitHub App
# This creates a GitHub App with webhook integration for automatic deploys
# Option 2: Deploy key
# Coolify generates a deploy key for private repo access
# Add the public key to your repository's deploy keys
# Option 3: Public repository
# Paste the HTTPS URL — no authentication needed
# Webhooks trigger automatic deployments on push to the configured branch
# You can configure branch filters and deployment triggers per application
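Besides push-triggered webhooks, deployments can also be triggered from CI or scripts through Coolify's HTTP API using a bearer token generated in the UI. The sketch below builds such a request; the `/api/v1/deploy` path and `uuid` query parameter reflect the v4 API but should be treated as assumptions to verify against your instance's API docs:

```python
# Sketch: trigger a redeploy via Coolify's API (v4; endpoint path, query
# parameter, and auth header are assumptions — check your instance docs).
import urllib.request


def deploy_request(base_url: str, app_uuid: str, token: str) -> urllib.request.Request:
    """Build the authenticated request that triggers a deployment."""
    url = f"{base_url.rstrip('/')}/api/v1/deploy?uuid={app_uuid}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})


# Usage (actual network call, not run here):
# resp = urllib.request.urlopen(deploy_request("https://coolify.example.com", "abc123", TOKEN))
```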
Key Techniques
Application Deployment
# In the Coolify UI: Projects → Add New Resource → Application
# Choose source: GitHub, GitLab, or any Git URL
# Coolify auto-detects: Dockerfile, docker-compose.yml, or falls back to Nixpacks
# Build settings configurable in UI:
# - Build command: npm run build
# - Start command: npm start
# - Base directory: / (or subdirectory for monorepos)
# - Port exposed: 3000
# - Health check path: /health
# Deploy manually or enable auto-deploy on push
# Rollback: click any previous deployment in the UI to redeploy that version
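For the health check path to be useful, the application must expose it. A minimal sketch of a `/health` endpoint matching the `Health check path: /health` setting above — the handler is hypothetical app code, not something Coolify provides:

```python
# Minimal /health endpoint (hypothetical app code): return 200 only when
# the app is ready to take traffic, so failed deploys are detected.
import http.server
import json


class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # keep request logs quiet


# To serve on the port Coolify expects:
# http.server.HTTPServer(("0.0.0.0", 3000), Handler).serve_forever()
```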
Database Provisioning
# Coolify provides one-click managed databases:
# PostgreSQL, MySQL, MariaDB, MongoDB, Redis, DragonFly, KeyDB, ClickHouse
# In Coolify UI: Projects → Add New Resource → Database → PostgreSQL
# Coolify generates credentials and connection strings automatically
# Databases persist data on Docker volumes with automatic backup support
# Backups — configure in UI:
# - Schedule: cron expression (e.g., 0 2 * * * for daily at 2 AM)
# - Retention: number of backups to keep
# - Destination: local filesystem or S3-compatible storage
Custom Domains and SSL
# In Coolify UI: Application → Settings → Domains
# Add domain: app.example.com
# Coolify automatically provisions Let's Encrypt SSL certificates
# DNS setup — point your domain to the Coolify server:
# A record: app.example.com → your-server-ip
# Or CNAME: app.example.com → your-server-hostname
# Wildcard domains supported:
# *.example.com → deploy multiple apps under subdomains
# Traefik reverse proxy handles routing, SSL termination, and load balancing
# Custom Traefik labels can be set per application for advanced routing
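Custom Traefik labels are attached to a service in the compose file. The label keys below are standard Traefik v2 Docker labels, but the router name, host, path, and certificate-resolver name are illustrative assumptions — Coolify also injects its own labels alongside yours:

```yaml
# docker-compose.yml fragment — custom Traefik labels for advanced routing
# (router name, host/path, and certresolver value are illustrative)
services:
  app:
    labels:
      - traefik.enable=true
      - traefik.http.routers.myapp.rule=Host(`app.example.com`) && PathPrefix(`/api`)
      - traefik.http.routers.myapp.entrypoints=https
      - traefik.http.routers.myapp.tls.certresolver=letsencrypt
```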
Environment Variables and Secrets
# Set environment variables per application in the Coolify UI
# Variables can be marked as:
# - Build-time: available during docker build
# - Runtime: injected into the running container
# - Preview: only used in preview deployments (pull request deploys)
# Shared variables — define once, use across multiple applications
# In Coolify UI: Projects → Shared Variables
# Reference with: ${SHARED_VARIABLE_NAME}
# Secrets are encrypted at rest in Coolify's database
# .env file support: paste entire .env contents into the bulk editor
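Runtime variables are injected as ordinary process environment variables, so a fail-fast check at startup surfaces a missing secret immediately instead of at first database call. A sketch using the variable names from the compose file above (the `DEBUG` default is an illustrative assumption):

```python
# Fail-fast config loading: abort at startup if a required variable that
# Coolify should inject is missing. Variable names match the compose file
# above; DEBUG and its default are illustrative assumptions.
import os


def load_config(env=None) -> dict:
    env = os.environ if env is None else env
    missing = [k for k in ("DATABASE_URL", "DB_PASSWORD") if not env.get(k)]
    if missing:
        raise RuntimeError(f"missing required env vars: {', '.join(missing)}")
    return {
        "database_url": env["DATABASE_URL"],
        "debug": env.get("DEBUG", "false").lower() == "true",
    }
```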
Preview Deployments (Pull Request Previews)
# Enable preview deployments per application:
# Application → Settings → Preview Deployments → Enable
# Every pull request gets a unique URL:
# pr-42.app.example.com (if wildcard domain is configured)
# Or: pr-42-random.coolify-instance.com
# Preview deployments have their own environment variables
# They are automatically cleaned up when the PR is merged or closed
# Useful for team review workflows — each PR gets a live preview
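The wildcard-domain convention above can be sketched as a one-line URL builder — useful, for example, when a CI job posts the preview link on the pull request (the `pr-<number>.` prefix pattern is taken from the example URLs above):

```python
# Sketch of the preview-URL convention: pr-<number> prefixed onto the
# app's wildcard domain, as in the examples above.
def preview_url(pr_number: int, app_domain: str) -> str:
    return f"https://pr-{pr_number}.{app_domain}"
```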
Persistent Storage
# Coolify supports persistent volumes for stateful applications
# Application → Storages → Add Storage
# Configure:
# - Source path (on host): /data/myapp/uploads
# - Destination path (in container): /app/uploads
# Docker volumes are also supported for database services
# Backups and volume management available through the UI
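On the application side, writes should target the mounted destination path rather than an arbitrary container path, since everything outside the volume is lost on redeploy. A sketch, where the `UPLOAD_DIR` variable name is an illustrative assumption and the default matches the in-container destination path above:

```python
# Sketch: persist uploads under the mounted volume path. UPLOAD_DIR is an
# assumed env-var name; its default matches the destination path above.
import os
from pathlib import Path


def save_upload(filename: str, data: bytes, base: str = None) -> Path:
    base_dir = Path(base or os.environ.get("UPLOAD_DIR", "/app/uploads"))
    base_dir.mkdir(parents=True, exist_ok=True)
    target = base_dir / Path(filename).name  # strip any path components
    target.write_bytes(data)
    return target
```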
Best Practices
- Start with a dedicated VPS — run Coolify on a separate server from your applications when possible, so Coolify updates do not affect production workloads.
- Enable automatic backups — configure S3-compatible backup destinations for databases with a retention policy; local-only backups are lost if the server fails.
- Use Docker Compose for multi-service apps — rather than deploying individual containers, define the full stack in docker-compose.yml so Coolify manages networking and dependencies together.
- Set health check paths — Coolify uses health checks to verify deployments succeeded and to route traffic only to healthy containers.
- Pin Docker image tags — use `postgres:16` instead of `postgres:latest` to avoid unexpected breaking changes on rebuilds.
- Configure resource limits — set CPU and memory limits per container in Coolify to prevent a single app from consuming all server resources.
- Use wildcard DNS for preview deployments — set up `*.app.example.com` to enable automatic subdomain routing for PR previews.
- Keep Coolify updated — one-click updates in the UI ensure you get security patches and new features without manual intervention.
Anti-Patterns
- Running Coolify on undersized servers — Coolify itself plus Traefik and PostgreSQL need at least 2GB RAM; running applications on the same server requires more.
- Skipping SSL configuration — Coolify auto-provisions SSL via Let's Encrypt, but you must point DNS records to the server first; deploying without valid DNS leaves apps on plain HTTP.
- Storing uploads in the container filesystem — containers are ephemeral; use persistent volumes or external object storage for user-uploaded files.
- Not setting environment variables as secrets — plain-text secrets in docker-compose.yml get committed to Git; always use Coolify's encrypted environment variable UI.
- Ignoring Traefik logs — when routing fails or SSL renewal breaks, Traefik logs on the Coolify server contain the root cause; check them before debugging application code.
- Running database migrations inside the entrypoint with multiple replicas — use a one-off command or a separate init container to avoid concurrent migration conflicts.
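The last bullet's "one migration runner" pattern can be sketched with an exclusive non-blocking lock: whichever replica acquires it runs the migration, the rest skip. A file lock is shown for illustration; with PostgreSQL you would more commonly use an advisory lock (`pg_advisory_lock`) so the lock lives in the database itself:

```python
# Sketch: ensure only one replica runs migrations on startup by taking a
# non-blocking exclusive lock. File lock for illustration only; a Postgres
# advisory lock is the usual choice when replicas run on different hosts.
import fcntl


def run_migrations_once(lock_path: str, migrate) -> bool:
    """Run migrate() only if no other process holds the lock; True if run."""
    f = open(lock_path, "w")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        f.close()
        return False  # another replica is already migrating
    try:
        migrate()
        return True
    finally:
        fcntl.flock(f, fcntl.LOCK_UN)
        f.close()
```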
Related Skills
AWS Lightsail
AWS Lightsail provides a simplified way to launch virtual private servers (VPS), containers, databases, and more. It's ideal for developers and small businesses needing easy-to-use, cost-effective cloud resources without deep AWS expertise.
Cloudflare Pages Deployment
Cloudflare Pages and Workers expertise — edge-first deployments, full-stack apps with Workers functions, KV/D1/R2 bindings, preview URLs, custom domains, and global CDN distribution
Digital Ocean App Platform
DigitalOcean App Platform is a fully managed Platform-as-a-Service (PaaS) that allows you to quickly build, deploy, and scale web applications, static sites, APIs, and background services. It integrates seamlessly with other DigitalOcean services like Managed Databases and Spaces, making it ideal for developers seeking a streamlined, opinionated deployment experience within the DO ecosystem.
Fly.io Deployment
Fly.io platform expertise — container deployment, global edge distribution, Dockerfiles, volumes, secrets, scaling, PostgreSQL, and multi-region patterns
Google Cloud Run
Google Cloud Run is a fully managed serverless platform for containerized applications. It allows you to deploy stateless containers that scale automatically from zero to thousands of instances based on request load, paying only for the resources consumed. Choose Cloud Run for microservices, web APIs, and event-driven functions that require custom runtimes or environments.
Kamal
Kamal (formerly MRSK) simplifies deploying web applications to servers via SSH, leveraging Docker and Traefik (or Caddy) for zero-downtime, rolling updates. It's ideal for containerized applications on a single server or small cluster without the complexity of Kubernetes.