# Terraform Modules — Reusable Infrastructure Components
You have been writing Terraform for a while now. Your main.tf started at 50 lines, grew to 200, then 600, and now nobody wants to touch it. Worse, every new project copies the same VPC, subnet, and security group blocks with minor tweaks. Modules solve both problems: they let you package infrastructure into self-contained, versioned, shareable components — like functions in a programming language.
## What Is a Module?
A module is just a directory with .tf files. Seriously, that is it. Every Terraform configuration you have ever written is already a module — the root module. When you call another module from your root, that called module is a child module.
```hcl
# This is your root module (main.tf in your working directory)
# It calls a child module located at ./modules/vpc
module "vpc" {
  source      = "./modules/vpc"
  vpc_cidr    = "10.0.0.0/16"
  environment = "production"
}
```
The distinction matters because root modules are what you run terraform apply against. Child modules are building blocks consumed by root modules. You never run terraform apply inside a child module directory.
## Module Structure
A well-structured module follows a consistent file layout:
```
modules/vpc/
  main.tf       # Resources — the core logic
  variables.tf  # Input variables — what the caller provides
  outputs.tf    # Output values — what the module exposes
  versions.tf   # Required provider versions
  README.md     # Documentation (optional but encouraged)
```
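The versions.tf file declares what the module itself requires. A typical sketch (the version numbers are illustrative, not a recommendation):

```hcl
# modules/vpc/versions.tf
terraform {
  required_version = ">= 1.3"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}
```

Declaring provider requirements in the module, rather than relying on whatever the caller happens to have, surfaces incompatibilities at init time instead of at apply time.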
Here is a minimal module that creates an S3 bucket:
```hcl
# modules/s3-bucket/variables.tf
variable "bucket_name" {
  type        = string
  description = "Name of the S3 bucket"
}

variable "environment" {
  type        = string
  description = "Deployment environment"
  default     = "dev"
}

variable "enable_versioning" {
  type        = bool
  description = "Enable object versioning"
  default     = true
}
```
```hcl
# modules/s3-bucket/main.tf
resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name

  tags = {
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}

resource "aws_s3_bucket_versioning" "this" {
  bucket = aws_s3_bucket.this.id

  versioning_configuration {
    status = var.enable_versioning ? "Enabled" : "Suspended"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  bucket = aws_s3_bucket.this.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
```
```hcl
# modules/s3-bucket/outputs.tf
output "bucket_id" {
  value       = aws_s3_bucket.this.id
  description = "The name of the bucket"
}

output "bucket_arn" {
  value       = aws_s3_bucket.this.arn
  description = "ARN of the S3 bucket"
}

output "bucket_domain_name" {
  value       = aws_s3_bucket.this.bucket_domain_name
  description = "The domain name of the bucket"
}
```
## Calling Modules — Source Types
The source argument tells Terraform where to find the module. You have several options:
```hcl
# Local path
module "vpc" {
  source = "./modules/vpc"
}

# Git repository (HTTPS)
module "vpc" {
  source = "git::https://github.com/your-org/terraform-modules.git//modules/vpc?ref=v1.2.0"
}

# Git repository (SSH)
module "vpc" {
  source = "git::ssh://git@github.com/your-org/terraform-modules.git//modules/vpc?ref=v1.2.0"
}

# Terraform Registry
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.5.0"
}

# S3 bucket
module "vpc" {
  source = "s3::https://my-bucket.s3.amazonaws.com/modules/vpc.zip"
}
```
After changing source, you must run terraform init to download the module.
## Terraform Registry Modules
The Terraform Registry hosts thousands of community and verified modules. Verified modules have a blue checkmark and are maintained by HashiCorp partners.
```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.5.0"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = true

  tags = {
    Terraform   = "true"
    Environment = "production"
  }
}
```
The registry module above replaces 150+ lines of hand-written VPC configuration. That is the power of modules.
## Module Versioning
Always pin module versions in production. Without a version constraint, Terraform downloads the latest release, so a routine terraform init on a teammate's machine or in CI can silently pull in breaking changes.
```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0" # Any 5.x version (5.0, 5.1, 5.5, etc.)
}

module "s3" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = ">= 3.0, < 4.0" # At least 3.0 but less than 4.0
}
```
For Git-based modules, pin with a tag ref:
```hcl
module "vpc" {
  source = "git::https://github.com/org/modules.git//vpc?ref=v2.1.0"
}
```
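The ref does not have to be a tag: any git reference works, and pinning to a full commit SHA is the strictest option, since tags can be re-pointed but a commit cannot. The SHA below is a placeholder, not a real commit:

```hcl
# Pin to an exact commit for maximum immutability
# (placeholder SHA — substitute one from your own module repository)
module "vpc" {
  source = "git::https://github.com/org/modules.git//vpc?ref=51d462976d84fdea54b47d80dcabbf680badcdb8"
}
```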
## Using Module Outputs
Modules expose values through outputs. Use them to wire modules together:
```hcl
module "vpc" {
  source = "./modules/vpc"
  cidr   = "10.0.0.0/16"
}

module "app_server" {
  source    = "./modules/ec2"
  subnet_id = module.vpc.private_subnet_ids[0] # Output from VPC module
  sg_id     = module.vpc.default_sg_id         # Another output
}

output "app_ip" {
  value = module.app_server.private_ip
}
```
This is module composition — small modules snap together like building blocks.
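Composition also scales horizontally: since Terraform 0.13, a module block accepts for_each, so one call can stamp out several instances. A sketch using the s3-bucket module from earlier (the bucket naming scheme is illustrative):

```hcl
# One module call, one bucket per environment
module "bucket" {
  for_each = toset(["dev", "staging", "production"])

  source      = "./modules/s3-bucket"
  bucket_name = "myapp-assets-${each.key}" # illustrative naming scheme
  environment = each.key
}

# Instances are addressed by their for_each key
output "prod_bucket_arn" {
  value = module.bucket["production"].bucket_arn
}
```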
## Building a Reusable VPC Module
Let us build a real module that creates a VPC with public and private subnets across multiple availability zones:
```hcl
# modules/vpc/variables.tf
variable "vpc_cidr" {
  type        = string
  description = "CIDR block for the VPC"
}

variable "environment" {
  type        = string
  description = "Environment name (dev, staging, production)"
}

variable "availability_zones" {
  type        = list(string)
  description = "List of AZs to use"
}

variable "public_subnet_cidrs" {
  type        = list(string)
  description = "CIDR blocks for public subnets"
}

variable "private_subnet_cidrs" {
  type        = list(string)
  description = "CIDR blocks for private subnets"
}
```
```hcl
# modules/vpc/main.tf
resource "aws_vpc" "this" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name        = "${var.environment}-vpc"
    Environment = var.environment
  }
}

resource "aws_subnet" "public" {
  count = length(var.public_subnet_cidrs)

  vpc_id                  = aws_vpc.this.id
  cidr_block              = var.public_subnet_cidrs[count.index]
  availability_zone       = var.availability_zones[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "${var.environment}-public-${var.availability_zones[count.index]}"
  }
}

resource "aws_subnet" "private" {
  count = length(var.private_subnet_cidrs)

  vpc_id            = aws_vpc.this.id
  cidr_block        = var.private_subnet_cidrs[count.index]
  availability_zone = var.availability_zones[count.index]

  tags = {
    Name = "${var.environment}-private-${var.availability_zones[count.index]}"
  }
}

resource "aws_internet_gateway" "this" {
  vpc_id = aws_vpc.this.id

  tags = {
    Name = "${var.environment}-igw"
  }
}
```
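One caveat before moving on: the module so far attaches an internet gateway but defines no route tables, so the public subnets still have no route to it. A minimal sketch of the missing routing, following the same naming conventions as the rest of the module:

```hcl
# modules/vpc/main.tf (continued): route public subnets through the IGW
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.this.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.this.id
  }

  tags = {
    Name = "${var.environment}-public-rt"
  }
}

resource "aws_route_table_association" "public" {
  count          = length(var.public_subnet_cidrs)
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}
```

A production module would also need NAT gateways and private route tables for outbound traffic from the private subnets, which are left out here for brevity.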
```hcl
# modules/vpc/outputs.tf
output "vpc_id" {
  value       = aws_vpc.this.id
  description = "ID of the VPC"
}

output "public_subnet_ids" {
  value       = aws_subnet.public[*].id
  description = "List of public subnet IDs"
}

output "private_subnet_ids" {
  value       = aws_subnet.private[*].id
  description = "List of private subnet IDs"
}
```
Now call it from your root module:
```hcl
# main.tf (root module)
module "vpc" {
  source = "./modules/vpc"

  vpc_cidr             = "10.0.0.0/16"
  environment          = "production"
  availability_zones   = ["us-east-1a", "us-east-1b"]
  public_subnet_cidrs  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnet_cidrs = ["10.0.10.0/24", "10.0.20.0/24"]
}
```
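Hardcoding every subnet CIDR gets tedious as the AZ count grows. The caller can instead derive them from the VPC block with the built-in cidrsubnet() function. A sketch (the locals are an assumption for illustration, not part of the module):

```hcl
locals {
  vpc_cidr = "10.0.0.0/16"
  az_count = 2
}

module "vpc" {
  source = "./modules/vpc"

  vpc_cidr           = local.vpc_cidr
  environment        = "production"
  availability_zones = ["us-east-1a", "us-east-1b"]

  # cidrsubnet(prefix, newbits, netnum): a /16 plus 8 new bits yields /24 subnets
  public_subnet_cidrs  = [for i in range(local.az_count) : cidrsubnet(local.vpc_cidr, 8, i + 1)]  # 10.0.1.0/24, 10.0.2.0/24
  private_subnet_cidrs = [for i in range(local.az_count) : cidrsubnet(local.vpc_cidr, 8, i + 10)] # 10.0.10.0/24, 10.0.11.0/24
}
```

Bumping az_count (and the AZ list) then grows the subnet lists automatically, with no risk of fat-fingering an overlapping CIDR.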
## When to Modularize vs Keep Flat
Not everything needs to be a module. Here is a quick decision guide:
| Scenario | Recommendation |
|---|---|
| Code used in one project only | Keep flat — modularize later if needed |
| Same pattern across 2+ projects | Create a local module |
| Shared across teams | Publish to private registry or Git |
| Complex resource with 5+ related resources | Module for encapsulation |
| Single resource with no logic | Keep flat — a module adds overhead |
| Prototype or proof of concept | Keep flat — do not over-engineer early |
The rule of thumb: if you catch yourself copying a block of .tf code between projects, it is time for a module.
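When the time comes to extract an existing flat resource into a module, moved blocks (Terraform 1.1+) let you refactor without destroying and recreating anything. The addresses below are hypothetical:

```hcl
# Tell Terraform that the flat resource now lives inside a module
# (hypothetical addresses; adjust to your own configuration)
moved {
  from = aws_s3_bucket.logs
  to   = module.logs_bucket.aws_s3_bucket.this
}
```

On the next plan, Terraform treats this as a rename in state rather than a delete-and-create, which is what makes gradual modularization of live infrastructure safe.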
## Wrapping Up
Modules are the single most impactful Terraform feature for team productivity. They reduce duplication, enforce standards, and make your infrastructure composable. Start with local modules for your own projects, graduate to Git-sourced modules shared across your team, and lean on the Terraform Registry for battle-tested community patterns.
Next up, we will tackle Terraform workspaces — managing multiple environments (dev, staging, production) from a single codebase without duplicating your configuration.
