
Terraform Basics

Terraform fundamentals including providers, resources, data sources, and core workflow



You are an expert in Terraform fundamentals for infrastructure as code.

Overview

Terraform is a declarative infrastructure as code tool by HashiCorp that uses HCL (HashiCorp Configuration Language) to define and provision infrastructure across cloud providers and services. The core workflow is Write -> Plan -> Apply, and Terraform maintains a state file that maps real-world resources to your configuration.

Core Concepts

Providers

Providers are plugins that interact with APIs of cloud platforms, SaaS tools, and other services. Each provider supplies a set of resource types and data sources.

terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.5"
    }
  }
}

provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      ManagedBy   = "terraform"
      Environment = var.environment
    }
  }
}
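The provider block above references var.environment, and later examples use var.project and var.availability_zones. A minimal sketch of matching variable declarations (the names match the examples, but the defaults here are illustrative, not part of the original):

```hcl
variable "environment" {
  description = "Deployment environment, e.g. dev, staging, prod"
  type        = string
}

variable "project" {
  description = "Project name used as a prefix for resource names"
  type        = string
  default     = "demo"
}

variable "availability_zones" {
  description = "AZs to spread subnets across"
  type        = list(string)
  default     = ["us-east-1a", "us-east-1b", "us-east-1c"]
}
```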

Resources

Resources are the most important element of the Terraform language: each resource block declares one piece of infrastructure, such as a VPC, subnet, or instance.

resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "${var.project}-vpc"
  }
}

resource "aws_subnet" "public" {
  count = length(var.availability_zones)

  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  availability_zone = var.availability_zones[count.index]

  map_public_ip_on_launch = true

  tags = {
    Name = "${var.project}-public-${count.index}"
  }
}

Data Sources

Data sources let Terraform fetch information defined outside your configuration, such as resources managed elsewhere or values exposed by other Terraform configurations.

data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

data "aws_caller_identity" "current" {}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  tags = {
    Name    = "web-server"
    Account = data.aws_caller_identity.current.account_id
  }
}

Terraform Block and Backend

terraform {
  required_version = ">= 1.5.0"

  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}

Implementation Patterns

Resource Dependencies

Terraform automatically infers dependencies from references. Use depends_on only when there is a hidden dependency that Terraform cannot detect.

resource "aws_iam_role_policy" "example" {
  name   = "example-policy"
  role   = aws_iam_role.example.id
  policy = data.aws_iam_policy_document.example.json
}

resource "aws_instance" "example" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  # Hidden dependency: software on this instance needs the role's
  # permissions at boot, which Terraform cannot infer from references
  depends_on = [aws_iam_role_policy.example]
}

Lifecycle Rules

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  lifecycle {
    create_before_destroy = true
    prevent_destroy       = true
    ignore_changes        = [ami, tags["UpdatedAt"]]
  }
}

Count vs for_each

# count — use for identical resources differing only by index
resource "aws_subnet" "public" {
  count             = 3
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet("10.0.0.0/16", 8, count.index)
  availability_zone = data.aws_availability_zones.available.names[count.index]
}

# for_each — use when each instance has a meaningful key
resource "aws_iam_user" "team" {
  for_each = toset(["alice", "bob", "carol"])
  name     = each.key
}
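Referencing these instances differs between the two forms: count produces a list addressable by index or splat, while for_each produces a map keyed by each.key. A sketch of outputs for the examples above (the output names are illustrative):

```hcl
output "public_subnet_ids" {
  # count: a list, addressable by index or with the [*] splat
  value = aws_subnet.public[*].id
}

output "team_user_arns" {
  # for_each: a map keyed by the set element ("alice", "bob", "carol")
  value = { for name, user in aws_iam_user.team : name => user.arn }
}
```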

The Core Workflow

# Initialize the working directory (download providers, set up backend)
terraform init

# Preview changes without applying
terraform plan -out=tfplan

# Apply the saved plan
terraform apply tfplan

# Destroy all managed infrastructure
terraform destroy

Best Practices

  • Pin provider versions using ~> constraints to avoid breaking changes.
  • Use for_each over count when resources have meaningful identifiers; reordering a list with count can cause unnecessary recreation.
  • Store state remotely with locking from day one, even for small projects.
  • Separate configuration into files by concern: main.tf, variables.tf, outputs.tf, providers.tf, versions.tf.
  • Use terraform fmt and terraform validate in CI to enforce consistent style.
  • Tag all resources using the provider's default_tags when available.
  • Never hard-code credentials in configuration; rely on environment variables or instance profiles.
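For the last point, marking a variable as sensitive helps keep secrets out of plan and apply output; a minimal sketch (the variable name is illustrative, and the value would come from TF_VAR_db_password or a secrets manager, never a committed .tfvars file):

```hcl
variable "db_password" {
  description = "Database master password; supply via TF_VAR_db_password"
  type        = string
  sensitive   = true # redacted in plan/apply output
}
```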

Core Philosophy

Terraform embodies the principle that infrastructure should be defined declaratively, versioned like application code, and applied through a predictable, reviewable workflow. Rather than scripting imperative steps to create servers and networks, you describe the desired end state and let Terraform figure out the operations needed to get there. This shift from "how" to "what" makes infrastructure reproducible, auditable, and safe to change.

The Write-Plan-Apply workflow is sacred. Every change should be planned before it is applied, and every plan should be reviewed before it is approved. This discipline catches destructive changes, prevents drift, and builds confidence that what you see in the plan is what will happen in production. Skipping the plan review is the single most common source of infrastructure incidents in Terraform-managed environments.

Terraform is not a configuration management tool and should not be used as one. It excels at provisioning and lifecycle management of cloud resources, but tasks like installing packages on a VM or configuring application settings belong to tools like Ansible, cloud-init, or baked AMIs. Keeping Terraform focused on what it does best leads to simpler, more maintainable configurations.

Anti-Patterns

  • ClickOps alongside Terraform. Making manual changes in the cloud console while Terraform manages the same resources creates drift that causes confusing plan output and can lead to Terraform reverting your manual changes on the next apply. All changes to Terraform-managed resources should go through Terraform.

  • Monolithic root module. Putting every resource for an entire organization into a single Terraform configuration creates a massive blast radius, slow plans, and painful state files. Split infrastructure into focused components (networking, compute, data) with separate state files.

  • Hard-coded values everywhere. Embedding account IDs, region names, AMI IDs, and IP ranges directly in resource blocks makes configurations brittle and impossible to reuse across environments. Use variables, data sources, and locals to parameterize everything that varies.

  • Ignoring the dependency graph. Overusing depends_on or, worse, assuming resources will be created in the order they appear in the file indicates a misunderstanding of how Terraform works. Trust the implicit dependency graph built from resource references and only use explicit dependencies for truly hidden relationships.

  • Treating state as disposable. Deleting or losing the state file does not delete infrastructure; it orphans it. Terraform loses track of what it manages, and you must import every resource manually to regain control. Protect state with remote backends, versioning, and access controls from day one.

Common Pitfalls

  • Forgetting terraform init after adding a provider or module. Terraform will fail with a confusing error about missing plugins.
  • Using count with a list that may be reordered. Removing an item from the middle of the list causes all subsequent resources to be destroyed and recreated. Use for_each with a set or map instead.
  • Modifying state manually. Use terraform state mv or terraform import rather than editing the state file directly.
  • Ignoring plan output. Always review the plan before applying. A resource showing destroy then create (forces replacement) may cause downtime.
  • Circular dependencies. If two resources reference each other, Terraform cannot determine the order. Break the cycle by extracting shared values into variables or data sources.
  • Large blast radius. Putting all infrastructure in a single state file means one bad apply can affect everything. Split into smaller, focused state files per component or environment.
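When migrating from count to for_each (the second pitfall above), a moved block (Terraform 1.1+) renames existing instances in state instead of destroying and recreating them; the addresses below are illustrative:

```hcl
moved {
  from = aws_subnet.public[0]
  to   = aws_subnet.public["us-east-1a"]
}
```

After the refactor is applied everywhere, the moved block can be removed.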
