Jenkins
Jenkins declarative and scripted pipelines including shared libraries, agents, and plugin-based CI/CD workflows
You are an expert in Jenkins for continuous integration and deployment.
Overview
Jenkins is a self-hosted, open-source automation server. Pipelines are defined in Jenkinsfile using either Declarative or Scripted syntax (both Groovy-based). Jenkins uses a controller-agent architecture where the controller orchestrates builds and agents execute them. Its extensive plugin ecosystem (1800+ plugins) provides integrations with virtually every tool and platform.
Setup & Configuration
Pipelines are defined in a Jenkinsfile at the repository root. Jenkins discovers this file through Multibranch Pipeline or Organization Folder job types.
Basic Declarative Pipeline:
```groovy
pipeline {
    agent any

    environment {
        NODE_ENV = 'production'
        DEPLOY_CREDS = credentials('deploy-credentials-id')
    }

    options {
        timeout(time: 30, unit: 'MINUTES')
        disableConcurrentBuilds()
        buildDiscarder(logRotator(numToKeepStr: '10'))
    }

    stages {
        stage('Build') {
            steps {
                sh 'npm ci'
                sh 'npm run build'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
            post {
                always {
                    junit 'test-results/*.xml'
                }
            }
        }
        stage('Deploy') {
            when {
                branch 'main'
            }
            steps {
                sh './deploy.sh'
            }
        }
    }

    post {
        failure {
            mail to: 'team@example.com',
                 subject: "Failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "Check: ${env.BUILD_URL}"
        }
    }
}
```
Core Patterns
Parallel Stages
Run independent stages concurrently:
```groovy
stage('Test') {
    parallel {
        stage('Unit Tests') {
            agent { label 'linux' }
            steps {
                sh 'npm run test:unit'
            }
        }
        stage('Integration Tests') {
            agent { label 'linux' }
            steps {
                sh 'npm run test:integration'
            }
        }
        stage('Lint') {
            agent { label 'linux' }
            steps {
                sh 'npm run lint'
            }
        }
    }
}
```
Docker-Based Agents
Run stages inside containers:
```groovy
pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'node:20-alpine'
                    args '-v $HOME/.npm:/root/.npm'
                }
            }
            steps {
                sh 'npm ci && npm run build'
            }
        }
        stage('Build Image') {
            agent { label 'docker' }
            steps {
                script {
                    def image = docker.build("myapp:${env.BUILD_NUMBER}")
                    docker.withRegistry('https://registry.example.com', 'registry-creds') {
                        image.push()
                        image.push('latest')
                    }
                }
            }
        }
    }
}
```
Shared Libraries
Create reusable pipeline code in a shared library (vars/deployPipeline.groovy):
```groovy
// vars/deployPipeline.groovy
def call(Map config) {
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    // Parentheses make the Elvis expression the sh argument,
                    // rather than letting Groovy parse "sh config.buildCommand" first
                    sh(config.buildCommand ?: 'npm ci && npm run build')
                }
            }
            stage('Deploy') {
                when { branch 'main' }
                steps {
                    sh "deploy.sh ${config.environment}"
                }
            }
        }
    }
}
```
Use in a Jenkinsfile:
```groovy
@Library('my-shared-lib') _

deployPipeline(
    buildCommand: 'make build',
    environment: 'production'
)
```
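Shared Libraries can also expose small custom steps rather than whole pipeline templates: any `call` function in a `vars/` file becomes a step callable from importing Jenkinsfiles. A minimal sketch (the `notifyBuild` name is illustrative, not a standard step):

```groovy
// vars/notifyBuild.groovy — hypothetical custom step provided by the library
def call(String status = 'STARTED') {
    // env and currentBuild are available to library steps via the step context
    def summary = "${status}: ${env.JOB_NAME} #${env.BUILD_NUMBER} (${env.BUILD_URL})"
    echo summary
    // A real implementation might forward this to a notifier plugin here,
    // e.g. the Slack plugin's slackSend step, if it is installed.
}
```

A Jenkinsfile that imports the library can then simply call `notifyBuild('SUCCESS')`. Library versions can be pinned per Jenkinsfile with `@Library('my-shared-lib@v1.2') _`, which is worth doing for the same reasons as pinning plugin versions.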
Input and Approval Gates
```groovy
stage('Deploy to Production') {
    steps {
        timeout(time: 15, unit: 'MINUTES') {
            input message: 'Deploy to production?',
                  ok: 'Deploy',
                  submitter: 'admin,release-managers'
        }
        sh './deploy.sh production'
    }
}
```
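Because `input` pauses the build until a human responds, placing it inside a stage that holds an agent ties up an executor for the whole wait. One way to avoid that is `agent none` at the pipeline level with an agent-less approval stage, so the executor is only acquired after approval. A sketch (the `deploy-server` label is an assumption):

```groovy
pipeline {
    agent none
    stages {
        stage('Approval') {
            // No agent here: the input waits without occupying an executor
            steps {
                timeout(time: 15, unit: 'MINUTES') {
                    input message: 'Deploy to production?', ok: 'Deploy'
                }
            }
        }
        stage('Deploy') {
            agent { label 'deploy-server' }   // acquired only after approval
            steps {
                sh './deploy.sh production'
            }
        }
    }
}
```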
Credentials Handling
```groovy
stage('Deploy') {
    steps {
        withCredentials([
            usernamePassword(
                credentialsId: 'docker-hub',
                usernameVariable: 'DOCKER_USER',
                passwordVariable: 'DOCKER_PASS'
            ),
            string(
                credentialsId: 'api-token',
                variable: 'API_TOKEN'
            ),
            file(
                credentialsId: 'kubeconfig',
                variable: 'KUBECONFIG'
            )
        ]) {
            // Single quotes: the shell, not Groovy, expands the secrets, so they
            // are never interpolated into the build log; --password-stdin keeps
            // the password out of the process list, unlike `docker login -p`.
            sh 'echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin'
            sh 'kubectl apply -f manifests/'
        }
    }
}
```
Stash and Unstash for Sharing Files
```groovy
stage('Build') {
    steps {
        sh 'npm run build'
        stash includes: 'dist/**', name: 'build-output'
    }
}
stage('Deploy') {
    agent { label 'deploy-server' }
    steps {
        unstash 'build-output'
        sh './deploy.sh'
    }
}
```
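`stash` is meant for passing relatively small file sets between stages of the same build; for artifacts that should outlive the build, `archiveArtifacts` attaches them to the build record instead. A sketch of archiving the same `dist/` output in a `post` block:

```groovy
post {
    success {
        // Keep the built bundle with the build record; fingerprinting lets
        // Jenkins trace which builds produced and used this artifact
        archiveArtifacts artifacts: 'dist/**', fingerprint: true
    }
}
```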
Core Philosophy
Jenkins is the Swiss Army knife of CI/CD — it can do almost anything, which is both its greatest strength and its most dangerous quality. The plugin ecosystem and Scripted Pipeline's Groovy flexibility mean there is always a way to solve a problem, but the path of least resistance often leads to unmaintainable, imperative build scripts that only their author understands. The discipline required with Jenkins is to constrain yourself deliberately: use Declarative Pipeline syntax, extract shared logic into Shared Libraries, and treat the Jenkinsfile as code that must be readable by the entire team, not just the person who wrote it.
The controller-agent architecture is Jenkins's operational foundation. The controller should orchestrate — schedule builds, serve the UI, manage plugins — but never execute build workloads directly. Running builds on the controller creates a single point of failure where a misbehaving build can crash the entire CI system, and where build processes have access to Jenkins's own credentials and configuration. Agents should be ephemeral where possible (containers, cloud VMs that scale to zero) and purpose-tagged so jobs route to agents with the right capabilities. This separation is not optional; it is a security and reliability requirement.
Shared Libraries are how Jenkins scales across teams without configuration drift. When every repository's Jenkinsfile contains its own deployment logic, a security fix or process change requires updating dozens of files across dozens of repositories. A Shared Library centralizes this logic into a single, versioned, tested codebase that Jenkinsfiles consume with a one-line import. The library becomes the organization's CI/CD standard — opinionated about how builds, tests, and deploys work — while individual Jenkinsfiles specify only what is unique to their project.
Anti-Patterns
- Building on the controller. Running jobs directly on the Jenkins controller instead of delegating to agents means build failures can destabilize the entire CI system, and build processes have unnecessary access to Jenkins internals. Always use `agent none` at the pipeline level and assign agents per stage.
- Scripted Pipeline spaghetti. Using Scripted Pipeline syntax for its flexibility without structure leads to hundreds of lines of imperative Groovy that is impossible to lint, test, or review. Use Declarative Pipeline as the default and drop into `script` blocks only for logic that genuinely cannot be expressed declaratively.
- Snowflake Jenkins instances. Configuring jobs, plugins, and credentials through the Jenkins UI means the configuration is not version-controlled, not reproducible, and lost if the controller fails. Use Configuration as Code (JCasC), Jenkinsfiles in source control, and Shared Libraries to make the Jenkins instance rebuildable from scratch.
- Input steps holding agents. Using `input` inside a `node` block keeps an agent occupied while waiting for human approval, which can block other builds for hours. Run `input` steps outside of `node` blocks or use `timeout` to prevent agents from being held indefinitely.
- Plugin sprawl without governance. Installing plugins freely without testing compatibility or tracking versions leads to plugin conflicts, security vulnerabilities, and upgrade breakage. Maintain a curated plugin list, test upgrades on a staging Jenkins instance first, and pin plugin versions.
Best Practices
- Use Declarative Pipeline syntax over Scripted for readability and linting support.
- Store `Jenkinsfile` in source control; avoid configuring pipelines in the Jenkins UI.
- Use Shared Libraries for common patterns across teams to reduce duplication.
- Run builds inside Docker containers for reproducible, isolated environments.
- Use `credentials()` and `withCredentials` for secret management; never hardcode secrets.
- Set `timeout` and `disableConcurrentBuilds` in `options` to prevent runaway and conflicting builds.
- Use `post` blocks for cleanup, notifications, and archiving regardless of build outcome.
- Configure `buildDiscarder` to keep storage under control.
- Pin plugin versions and test upgrades in a staging Jenkins instance first.
- Use `agent none` at the pipeline level and assign agents per stage to optimize resource usage.
Common Pitfalls
- Scripted Pipeline's flexibility leads to unmaintainable Groovy spaghetti; prefer Declarative with `script` blocks only when necessary.
- Jenkins controller running builds directly instead of delegating to agents causes performance and security issues.
- Plugin version conflicts after updates can break pipelines; always back up before upgrading.
- `Jenkinsfile` changes on branches are not picked up until the branch is scanned; trigger a scan or wait for the polling interval.
- `sh` steps run in the shared workspace directory; concurrent builds on the same agent can interfere unless workspaces are isolated.
- The CPS (Continuation Passing Style) transform in Scripted Pipelines causes unexpected serialization errors with certain Groovy constructs (closures, non-serializable objects).
- `input` steps hold an executor/agent while waiting; wrap them in a `node`-less context or use `timeout` to avoid blocking agents indefinitely.
- Not cleaning workspaces between builds leads to stale artifacts and flaky results.
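Workspace cleanup can be automated in a `post` block so it runs regardless of outcome, using either the built-in `deleteDir()` step or `cleanWs()` from the Workspace Cleanup plugin. A sketch:

```groovy
post {
    always {
        // Wipe the workspace after every build; cleanWs() requires the
        // Workspace Cleanup plugin, deleteDir() is a core alternative
        cleanWs()
    }
}
```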