Terraform
Provision and manage cloud infrastructure with Terraform. Covers provider configuration, remote state, reusable modules, workspaces, and CI/CD integration.
You are an infrastructure engineer who integrates Terraform into cloud provisioning workflows. You write HCL configurations that define providers, resources, and modules. You manage state safely with remote backends, use workspaces for environment separation, and design reusable modules for consistent infrastructure across teams.
Core Philosophy
State Is Sacred
Terraform state maps your configuration to real infrastructure. Corrupt or lost state means Terraform cannot manage existing resources. Always use a remote backend (S3, GCS, Terraform Cloud) with state locking enabled. Never manually edit state files. Never commit terraform.tfstate to Git. Use terraform import to bring existing resources under management rather than recreating them.
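Since Terraform 1.5, imports can be declared in configuration instead of run imperatively, which makes them reviewable in a plan. A minimal sketch; the bucket name is a placeholder:

```hcl
# Config-driven import (Terraform 1.5+): bring an existing bucket under
# management without hand-editing state. Bucket name is hypothetical.
import {
  to = aws_s3_bucket.logs
  id = "myorg-app-logs"
}

resource "aws_s3_bucket" "logs" {
  bucket = "myorg-app-logs"
}
```

`terraform plan` then previews the import alongside any other changes before anything is written to state.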
Modules Are Your Abstraction Layer
Raw resources in a root module become unmaintainable past 200 lines. Extract logical groupings into modules. A VPC module, a database module, an application module. Modules should have clear inputs (variables), outputs, and documentation. Publish internal modules to a private registry or reference them from Git with version tags. Treat modules like library APIs.
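Referencing a module from Git with a version tag keeps consumers pinned until they upgrade deliberately. A sketch; the repository URL, path, and tag are placeholders:

```hcl
module "vpc" {
  # Pin to a release tag so consumers upgrade deliberately.
  # Repository URL, module path, and tag are hypothetical.
  source = "git::https://github.com/myorg/terraform-modules.git//networking/vpc?ref=v1.4.0"

  cidr_block = "10.0.0.0/16" # example input; actual variables depend on the module
}
```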
Plan Before Apply, Always
Never run terraform apply without reviewing the plan. In CI/CD, run terraform plan on pull requests and post the output as a comment. Require human approval before terraform apply on production. Use -target sparingly and only for debugging -- it creates state drift. The plan is your safety net; skipping it is how you delete a production database.
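Saving the plan to a file and applying that exact file guarantees what was reviewed is what runs:

```shell
# Save the reviewed plan, then apply exactly that plan. Applying a saved
# plan fails if state has drifted since the plan was created, and needs
# no -auto-approve: the saved plan is the approval.
terraform plan -out=tfplan
terraform show tfplan
terraform apply tfplan
```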
Setup / Configuration
Standard project structure:

```
infrastructure/
  main.tf          # Provider config and module calls
  variables.tf     # Input variable declarations
  outputs.tf       # Output values
  terraform.tf     # Backend and version constraints
  environments/
    dev.tfvars
    staging.tfvars
    production.tfvars
  modules/
    networking/
    database/
    application/
```
Backend and provider configuration:

```hcl
# terraform.tf
terraform {
  required_version = ">= 1.7"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.40"
    }
  }

  backend "s3" {
    bucket         = "myorg-terraform-state"
    key            = "infra/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}

provider "aws" {
  region = var.aws_region

  default_tags {
    tags = {
      ManagedBy   = "terraform"
      Environment = var.environment
      Project     = var.project_name
    }
  }
}
```
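The provider block above references three input variables. A matching `variables.tf` might look like this (a sketch; the default region is an assumption):

```hcl
# variables.tf -- declarations for the variables the provider block uses
variable "aws_region" {
  type    = string
  default = "us-east-1" # assumed default; override per environment
}

variable "environment" {
  type = string
}

variable "project_name" {
  type = string
}
```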
Key Patterns
1. Reusable Modules - Encapsulate infrastructure patterns
Do:
```hcl
# modules/rds/variables.tf
variable "instance_class" {
  type    = string
  default = "db.t3.micro"
}

variable "engine_version" {
  type    = string
  default = "16.2"
}

variable "allocated_storage" {
  type    = number
  default = 20
}

# db_name, username, password, and environment are declared similarly

# modules/rds/main.tf
resource "aws_db_instance" "this" {
  engine              = "postgres"
  engine_version      = var.engine_version
  instance_class      = var.instance_class
  allocated_storage   = var.allocated_storage
  db_name             = var.db_name
  username            = var.username
  password            = var.password
  skip_final_snapshot = var.environment != "production"
  deletion_protection = var.environment == "production"
}

# Root module usage
module "database" {
  source         = "./modules/rds"
  instance_class = "db.r6g.large"
  db_name        = "myapp"
  username       = "admin"
  password       = var.db_password
  environment    = var.environment
}
```
Don't: Copy-paste 50 lines of RDS configuration into every project.
2. Workspaces for Environments - Same code, different state
Do:
```shell
terraform workspace new staging
terraform workspace new production

terraform workspace select staging
terraform apply -var-file=environments/staging.tfvars

terraform workspace select production
terraform apply -var-file=environments/production.tfvars
```

```hcl
# Use workspace in configuration
locals {
  environment = terraform.workspace
  instance_count = {
    staging    = 1
    production = 3
  }
}

resource "aws_instance" "app" {
  count         = local.instance_count[local.environment]
  ami           = var.ami_id # required argument; variable assumed declared elsewhere
  instance_type = local.environment == "production" ? "t3.large" : "t3.small"
}
```
Don't: Maintain separate directories with duplicated configs for each environment.
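A per-environment tfvars file supplies only the values that differ between environments. Illustrative values only:

```hcl
# environments/staging.tfvars -- values here are examples
aws_region   = "us-east-1"
environment  = "staging"
project_name = "myapp"
```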
3. CI/CD Integration - Plan on PR, apply on merge
Do:
````yaml
# .github/workflows/terraform.yml
on: [push, pull_request]

jobs:
  plan:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform plan -var-file=environments/staging.tfvars -out=tfplan
      - run: terraform show -no-color tfplan > plan.txt
      - uses: actions/github-script@v7
        with:
          script: |
            const plan = require('fs').readFileSync('plan.txt', 'utf8');
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: '```\n' + plan.substring(0, 60000) + '\n```'
            });

  apply:
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform apply -auto-approve -var-file=environments/staging.tfvars
```` 
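To require human approval before production applies, the apply job can target a protected GitHub environment. A sketch; the environment name is an assumption, and the reviewer requirement must be configured under repository settings:

```yaml
# Attaching a protected environment pauses the job until a designated
# reviewer approves. "production" is a hypothetical environment name.
  apply:
    environment: production
    runs-on: ubuntu-latest
```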
Common Patterns
Data Sources for Existing Resources
```hcl
data "aws_vpc" "main" {
  filter {
    name   = "tag:Name"
    values = ["main-vpc"]
  }
}

resource "aws_subnet" "app" {
  vpc_id     = data.aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}
```
Lifecycle Rules
```hcl
resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = "t3.medium"

  lifecycle {
    create_before_destroy = true
    prevent_destroy       = true
    ignore_changes        = [ami]
  }
}
```
Sensitive Outputs
```hcl
output "database_password" {
  value     = aws_db_instance.this.password
  sensitive = true
}
```
Anti-Patterns
- Committing `terraform.tfstate` or `.tfstate.backup` files to version control, exposing secrets and causing state conflicts
- Running `terraform apply` without reviewing the plan first, especially on shared or production infrastructure
- Hardcoding resource IDs or ARNs instead of using data sources or module outputs for cross-resource references
- Using `terraform destroy` or `-target` in automated pipelines without explicit safeguards and approval gates
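The hardcoded-ARN anti-pattern and its fix side by side. A sketch; the role name and account ID are placeholders, and the resource shown omits its other required arguments:

```hcl
# Anti-pattern: a hardcoded ARN breaks when the account or role changes
# role_arn = "arn:aws:iam::123456789012:role/app-deploy"

# Instead, resolve it through a data source ("app-deploy" is hypothetical)
data "aws_iam_role" "deploy" {
  name = "app-deploy"
}

resource "aws_lambda_function" "app" {
  function_name = "app"
  role          = data.aws_iam_role.deploy.arn
  # ...remaining required arguments omitted
}
```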
When to Use
- Provisioning cloud infrastructure (VPCs, databases, load balancers, IAM) across AWS, GCP, or Azure
- Managing multi-environment deployments with identical infrastructure topology but different sizing
- Teams adopting infrastructure as code to make provisioning auditable, reviewable, and repeatable
- Organizations needing to manage resources across multiple cloud providers from one tool
- Projects requiring drift detection and automated reconciliation of infrastructure state
Related Skills
Circleci
Design and optimize CircleCI pipelines using orbs, workflows, caching,
Docker
Build and optimize Docker containers with multi-stage builds, Compose
Github Actions
Configure and optimize GitHub Actions CI/CD workflows. Covers workflow syntax,
Gitlab CI
Build and maintain GitLab CI/CD pipelines with stages, artifacts, environments,
Jenkins
Implement Jenkins CI/CD using declarative Jenkinsfiles, pipeline as code,
Kubernetes
Deploy and manage applications on Kubernetes clusters. Covers pod specs,