Terraform on AWS — Better Than CloudFormation?

· 6 min read
Goel Academy
DevOps & Cloud Learning Hub

Every DevOps engineer on AWS eventually faces this question: Terraform or CloudFormation? Both define infrastructure as code. Both create the same resources. But they think about the problem differently, and that difference changes how your team works, how you handle state, and how portable your skills become. After running both in production for years, here's an honest comparison — not a fanboy argument.

The Head-to-Head Comparison

Let's get the comparison table out of the way first, because this is what everyone wants to see:

| Feature | Terraform | CloudFormation |
| --- | --- | --- |
| Language | HCL (HashiCorp Configuration Language) | JSON / YAML |
| State Management | Explicit state file (you manage it) | Managed by AWS automatically |
| Multi-Cloud | Yes (AWS, Azure, GCP, 3,000+ providers) | AWS only |
| Drift Detection | terraform plan (manual) | Drift detection (built-in) |
| Rollback | No automatic rollback | Automatic rollback on failure |
| Preview Changes | terraform plan (excellent) | Change sets (decent) |
| Secret Handling | State file contains secrets (encrypt!) | Parameter Store / Secrets Manager integration |
| Community Modules | Terraform Registry (massive) | AWS Solutions Library (smaller) |
| Learning Curve | Moderate (HCL is intuitive) | Moderate (YAML is verbose) |
| Cost | Free (open source) / Paid (Terraform Cloud) | Free |
| Import Existing Resources | terraform import | Resource import (newer) |
| IDE Support | Excellent (VS Code, JetBrains) | Good (cfn-lint, VS Code) |

Neither is universally better. But Terraform wins on developer experience and multi-cloud, while CloudFormation wins on AWS-native integration and state management simplicity.

AWS Provider Setup

Every Terraform project for AWS starts with the provider configuration:

# versions.tf
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# provider.tf
provider "aws" {
  region = var.aws_region

  default_tags {
    tags = {
      ManagedBy   = "terraform"
      Project     = var.project_name
      Environment = var.environment
    }
  }
}

# For multi-region resources (like CloudFront + ACM)
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

The default_tags block is a game-changer — every resource you create automatically gets tagged. No more untagged resources floating in your account.

Authentication follows the standard AWS credential chain. In production, use IAM roles (EC2 instance profiles, ECS task roles, or OIDC for CI/CD):

# Local development — use AWS profiles
export AWS_PROFILE=production
terraform plan

# CI/CD — use OIDC (GitHub Actions example)
# No static credentials stored anywhere
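The OIDC approach replaces long-lived access keys with short-lived credentials issued per job. A minimal sketch of the IAM side in Terraform — the role name, org/repo path, and branch condition are placeholders for illustration, not values from this article:

```hcl
# Federate GitHub Actions into AWS via OIDC (sketch — myorg/myrepo is a placeholder)
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"]
}

resource "aws_iam_role" "ci" {
  name = "github-actions-terraform"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRoleWithWebIdentity"
      Principal = { Federated = aws_iam_openid_connect_provider.github.arn }
      Condition = {
        # Only this repo's main branch may assume the role
        StringLike   = { "token.actions.githubusercontent.com:sub" = "repo:myorg/myrepo:ref:refs/heads/main" }
        StringEquals = { "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com" }
      }
    }]
  })
}
```

The workflow then requests a token with `id-token: write` permissions and assumes this role; no secrets ever live in the repository.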

Common AWS Resources in Terraform

Here's a real-world pattern for a VPC with public and private subnets, an EC2 instance, an RDS database, and an S3 bucket:

# vpc.tf — Network foundation
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = { Name = "${var.project_name}-vpc" }
}

resource "aws_subnet" "private" {
  count             = length(var.availability_zones)
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  availability_zone = var.availability_zones[count.index]

  tags = { Name = "${var.project_name}-private-${count.index + 1}" }
}

resource "aws_subnet" "public" {
  count                   = length(var.availability_zones)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 100)
  availability_zone       = var.availability_zones[count.index]
  map_public_ip_on_launch = true

  tags = { Name = "${var.project_name}-public-${count.index + 1}" }
}
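The cidrsubnet() arithmetic is worth spelling out: adding 8 new bits to the /16 yields /24 subnets, and the third argument picks which /24 — the +100 offset keeps public subnets well clear of the private ones. A quick sketch, assuming the 10.0.0.0/16 VPC above:

```hcl
# cidrsubnet(prefix, newbits, netnum): carve the netnum-th (16 + 8 =) /24 out of the /16
locals {
  first_private = cidrsubnet("10.0.0.0/16", 8, 0)   # "10.0.0.0/24"
  first_public  = cidrsubnet("10.0.0.0/16", 8, 100) # "10.0.100.0/24"
}
```

You can verify these interactively with terraform console before committing to a layout.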

# ec2.tf — Application server
resource "aws_instance" "app" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = var.instance_type
  subnet_id              = aws_subnet.private[0].id
  vpc_security_group_ids = [aws_security_group.app.id]
  iam_instance_profile   = aws_iam_instance_profile.app.name

  root_block_device {
    volume_size = 30
    volume_type = "gp3"
    encrypted   = true
  }

  user_data = templatefile("${path.module}/scripts/init.sh", {
    db_endpoint = aws_db_instance.main.endpoint
    s3_bucket   = aws_s3_bucket.assets.id
  })

  tags = { Name = "${var.project_name}-app" }
}

# Automatically find the latest Amazon Linux 2023 AMI
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}

# rds.tf — Database
resource "aws_db_instance" "main" {
  identifier     = "${var.project_name}-db"
  engine         = "postgres"
  engine_version = "16.1"
  instance_class = "db.t3.medium"

  allocated_storage     = 50
  max_allocated_storage = 200
  storage_encrypted     = true

  db_name  = var.db_name
  username = var.db_username
  password = var.db_password # Better: use aws_secretsmanager_secret

  db_subnet_group_name   = aws_db_subnet_group.main.name
  vpc_security_group_ids = [aws_security_group.db.id]

  backup_retention_period = 7
  multi_az                = var.environment == "production"
  skip_final_snapshot     = var.environment != "production"

  tags = { Name = "${var.project_name}-db" }
}
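Following the comment on the password line, here's a hedged sketch of reading the credential from Secrets Manager instead of a plain variable — the secret name is an assumption, not something from this article:

```hcl
# Read a secret created outside this configuration (name is a placeholder)
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "production/db-password"
}

# Then in aws_db_instance:
#   password = data.aws_secretsmanager_secret_version.db.secret_string
```

One caveat: the resolved value still lands in the state file, which is one more reason to encrypt your backend. Newer AWS provider versions also support manage_master_user_password = true on aws_db_instance, which lets RDS generate and rotate the secret itself so the password never touches your configuration at all.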

Remote State with S3 + DynamoDB

Local state files are dangerous. Someone runs terraform apply from their laptop, the state file is only on their machine, and now nobody else can manage the infrastructure. Remote state fixes this:

# backend.tf
terraform {
  backend "s3" {
    bucket         = "mycompany-terraform-state"
    key            = "production/infrastructure/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"
  }
}

Create the backend resources first (chicken-and-egg problem — use a one-time script):

# bootstrap.sh — Run once to create the state backend
aws s3api create-bucket \
--bucket mycompany-terraform-state \
--region us-east-1

aws s3api put-bucket-versioning \
--bucket mycompany-terraform-state \
--versioning-configuration Status=Enabled

aws s3api put-bucket-encryption \
--bucket mycompany-terraform-state \
--server-side-encryption-configuration \
'{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms"}}]}'

aws dynamodb create-table \
--table-name terraform-state-lock \
--attribute-definitions AttributeName=LockID,AttributeType=S \
--key-schema AttributeName=LockID,KeyType=HASH \
--billing-mode PAY_PER_REQUEST

The DynamoDB table provides locking — if two engineers run terraform apply simultaneously, one gets a lock error instead of corrupting the state. Non-negotiable for team environments.
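If you're on a recent Terraform release, there's a simpler option: the s3 backend can lock using the bucket itself. This sketch assumes Terraform 1.10 or newer, where the use_lockfile setting was introduced:

```hcl
# Terraform 1.10+ — lock state with an S3 lock file; no DynamoDB table needed
terraform {
  backend "s3" {
    bucket       = "mycompany-terraform-state"
    key          = "production/infrastructure/terraform.tfstate"
    region       = "us-east-1"
    encrypt      = true
    use_lockfile = true
  }
}
```

If you're pinned to an older Terraform version, stick with the DynamoDB table — either way, locking itself is the non-negotiable part.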

AWS-Specific Patterns and Data Sources

Terraform's data sources let you reference existing AWS resources without managing them:

# Look up the current AWS account and region
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

# Reference an existing Route 53 zone
data "aws_route53_zone" "main" {
  name = "example.com"
}

# Look up an existing VPC by tag (needed for the subnet filter below;
# the Name value is an example)
data "aws_vpc" "main" {
  tags = { Name = "main" }
}

# Find subnets by tags
data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.main.id]
  }
  filter {
    name   = "tag:Tier"
    values = ["private"]
  }
}

# Use them in resources
resource "aws_route53_record" "app" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = "app.${data.aws_route53_zone.main.name}"
  type    = "A"

  alias {
    name                   = aws_lb.app.dns_name
    zone_id                = aws_lb.app.zone_id
    evaluate_target_health = true
  }
}
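Data sources only read existing resources. To actually adopt one into management, Terraform 1.5+ (the minimum version pinned earlier) also supports declarative import blocks — the bucket name below is a placeholder:

```hcl
# Bring an existing bucket under Terraform management on the next apply
import {
  to = aws_s3_bucket.assets
  id = "mycompany-assets"
}

resource "aws_s3_bucket" "assets" {
  bucket = "mycompany-assets"
}
```

Running terraform plan -generate-config-out=generated.tf can even write the matching resource block for you, which beats the older one-resource-at-a-time terraform import workflow.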

Reusable Modules for AWS

Modules are Terraform's answer to code reuse. The community-maintained AWS modules on the Terraform Registry save hundreds of hours:

# Use the community VPC module instead of writing 200 lines
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.5.0"

  name = "${var.project_name}-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = var.environment != "production"
  enable_dns_hostnames = true

  tags = {
    Environment = var.environment
  }
}

# Community EKS module
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.0.0"

  cluster_name    = "${var.project_name}-cluster"
  cluster_version = "1.29"
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.private_subnets

  eks_managed_node_groups = {
    general = {
      instance_types = ["t3.large"]
      min_size       = 2
      max_size       = 10
      desired_size   = 3
    }
  }
}
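Modules pass results around through outputs — that's how the EKS module consumed module.vpc.vpc_id above. A short sketch of surfacing a couple of values from the root module (the output names here are illustrative):

```hcl
# outputs.tf — expose module results for other configurations and humans
output "vpc_id" {
  value = module.vpc.vpc_id
}

output "cluster_endpoint" {
  value       = module.eks.cluster_endpoint
  description = "EKS API server endpoint"
}
```

Root-level outputs like these are also what terraform_remote_state data sources read, so they're the seam between independently managed stacks.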

When CloudFormation Still Wins

Terraform isn't always the right choice. CloudFormation has real advantages in specific scenarios:

  1. StackSets — Deploy the same template across 50 AWS accounts and 10 regions simultaneously. Terraform can do this with workspaces and CI/CD, but StackSets are built for it.

  2. AWS Service Catalog — If your organization uses Service Catalog for self-service infrastructure, it only supports CloudFormation templates natively.

  3. No state file management — CloudFormation tracks state automatically. No S3 bucket, no DynamoDB table, no state corruption risk.

  4. Same-day support — When AWS launches a new service or feature, CloudFormation support typically arrives at or near launch. Terraform provider support usually follows within days to weeks.

  5. CDK (Cloud Development Kit) — If your team prefers writing infrastructure in TypeScript, Python, or Java, CDK generates CloudFormation under the hood. It's genuinely excellent.

# CDK is worth considering as a CloudFormation alternative
npm install -g aws-cdk
cdk init app --language typescript

# CDK compiles to CloudFormation — best of both worlds
cdk synth # generates CloudFormation template
cdk deploy # deploys via CloudFormation

So, is Terraform better than CloudFormation? For most teams, yes — HCL is more readable than YAML, terraform plan is better than change sets, and multi-cloud portability matters even if you're AWS-only today. But if you're deeply invested in the AWS ecosystem, using Service Catalog, or deploying across dozens of accounts with StackSets, CloudFormation (especially with CDK) is the pragmatic choice. The best tool is the one your team will actually use consistently.