Terraform Remote State with S3 and DynamoDB — Team Collaboration
The moment a second person runs terraform apply on the same project, local state files become a disaster. One person's laptop has the truth, the other has a stale copy, and the next apply either duplicates resources or deletes them. Remote state fixes this by storing your state in a shared location with locking, so two people can never write to it simultaneously. For AWS teams, the S3 + DynamoDB pattern is the industry standard.
Why Remote State Matters
Local state has three fatal flaws for teams:
- No sharing — the state file lives on one person's machine. If they are on vacation, nobody else can safely run terraform plan.
- No locking — if two engineers run apply at the same time, they can corrupt the state or create duplicate resources.
- No versioning — if someone accidentally deletes the state file, you lose track of all managed resources. Recovery is painful.
Remote state solves all three: S3 provides shared storage with versioning, and DynamoDB provides locking.
Step 1 — Bootstrap the Backend Infrastructure
You need an S3 bucket and a DynamoDB table before you can configure the backend. This is the classic chicken-and-egg problem — you need infrastructure to manage your infrastructure. The solution is a small bootstrap configuration:
# bootstrap/main.tf — Run this ONCE manually
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-company-terraform-state"

  lifecycle {
    prevent_destroy = true
  }

  tags = {
    Name    = "Terraform State"
    Purpose = "Remote state storage"
  }
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-state-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }

  tags = {
    Name    = "Terraform State Locks"
    Purpose = "State locking"
  }
}
Run this with local state first: terraform init && terraform apply. Once the bucket and table exist, you can configure all your other projects to use them.
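Engineers and CI jobs only need a narrow set of permissions on these two resources. A sketch of a least-privilege IAM policy document, assuming the bucket and table names above (the exact ARNs and the `projects/*` key prefix are illustrative):

```hcl
# Illustrative least-privilege access to the state bucket and lock table
data "aws_iam_policy_document" "terraform_state_access" {
  statement {
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::my-company-terraform-state"]
  }

  statement {
    actions   = ["s3:GetObject", "s3:PutObject"]
    resources = ["arn:aws:s3:::my-company-terraform-state/projects/*"]
  }

  statement {
    actions   = ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"]
    resources = ["arn:aws:dynamodb:us-east-1:*:table/terraform-state-locks"]
  }
}
```

Scoping the S3 object permissions to a key prefix also lets you give each team write access to its own projects only.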
Step 2 — Configure the Backend
In your actual project, add a backend block inside the terraform block:
# backend.tf
terraform {
  backend "s3" {
    bucket         = "my-company-terraform-state"
    key            = "projects/web-app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locks"
    encrypt        = true
  }
}
Then initialize:
terraform init
# Initializing the backend...
# Successfully configured the backend "s3"!
Every plan and apply now reads state from S3 and writes it back there. The DynamoDB table prevents concurrent writes.
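Because each project sets its own key, a single bucket can hold state for the whole organization. With the key above plus a hypothetical networking project, the bucket layout looks like:

```
my-company-terraform-state/
└── projects/
    ├── networking/
    │   └── terraform.tfstate
    └── web-app/
        └── terraform.tfstate
```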
Step 3 — State Locking in Action
When someone runs terraform apply, Terraform creates a lock in DynamoDB:
# Person A runs apply
terraform apply
# Acquiring state lock. This may take a few moments...
# Lock acquired!

# Person B tries to run apply at the same time
terraform apply
# Error: Error acquiring the state lock
#
# Error message: ConditionalCheckFailedException: The conditional request failed
# Lock Info:
#   ID:        a1b2c3d4-e5f6-7890-abcd-ef1234567890
#   Path:      my-company-terraform-state/projects/web-app/terraform.tfstate
#   Operation: OperationTypeApply
#   Who:       person-a@laptop
#   Version:   1.6.0
#   Created:   2025-06-21 10:30:00.000000 +0000 UTC
Person B sees exactly who holds the lock and when it was acquired. No silent corruption, no duplicate resources.
If a lock gets stuck (for example, because someone's laptop crashed during apply), you can force-unlock:
terraform force-unlock a1b2c3d4-e5f6-7890-abcd-ef1234567890
Use this carefully — only when you are certain no operation is actually running.
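Under the hood, the lock is just an item in the DynamoDB table: Terraform writes it with a conditional put (fail if an item with that LockID already exists — hence the ConditionalCheckFailedException above) and deletes it when the operation finishes. An illustrative item, with values matching the error output above (the exact attribute layout can vary by Terraform version):

```json
{
  "LockID": "my-company-terraform-state/projects/web-app/terraform.tfstate",
  "Info": {
    "ID": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "Operation": "OperationTypeApply",
    "Who": "person-a@laptop",
    "Version": "1.6.0",
    "Created": "2025-06-21T10:30:00Z",
    "Path": "my-company-terraform-state/projects/web-app/terraform.tfstate"
  }
}
```

force-unlock simply deletes this item, which is why it must never be run while the holder's apply is still in flight.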
Cross-Project State References
Sometimes Project B needs values from Project A's state — like a VPC ID created by the networking team. The terraform_remote_state data source makes this possible:
# In the networking project (Project A)
# outputs.tf
output "vpc_id" {
  value = aws_vpc.main.id
}

output "private_subnet_ids" {
  value = aws_subnet.private[*].id
}

# In the application project (Project B)
# data.tf
data "terraform_remote_state" "network" {
  backend = "s3"

  config = {
    bucket = "my-company-terraform-state"
    key    = "projects/networking/terraform.tfstate"
    region = "us-east-1"
  }
}

# Use the outputs from the networking project
resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = "t3.micro"
  subnet_id     = data.terraform_remote_state.network.outputs.private_subnet_ids[0]

  tags = {
    Name  = "app-server"
    VpcId = data.terraform_remote_state.network.outputs.vpc_id
  }
}
This creates a read-only dependency. The application team cannot modify the network — they can only read its outputs.
Partial Backend Configuration
You might not want to hardcode the bucket name in your .tf files, especially if different teams use different state buckets. Partial configuration lets you pass backend settings at init time:
# backend.tf — partial configuration
terraform {
  backend "s3" {
    key = "projects/web-app/terraform.tfstate"
    # bucket, region, and dynamodb_table provided at init
  }
}
# Provide the remaining values at init
terraform init \
-backend-config="bucket=my-company-terraform-state" \
-backend-config="region=us-east-1" \
-backend-config="dynamodb_table=terraform-state-locks" \
-backend-config="encrypt=true"
Or use a backend config file:
# backend-config/production.hcl
bucket         = "prod-terraform-state"
region         = "us-east-1"
dynamodb_table = "prod-terraform-locks"
encrypt        = true

# Initialize with the file
terraform init -backend-config=backend-config/production.hcl
This pattern works well in CI/CD pipelines where you pass environment-specific backend configuration.
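For example, a hypothetical GitHub Actions step, assuming the backend-config/ layout above and AWS credentials stored as repository secrets (workflow path and secret names are assumptions):

```yaml
# .github/workflows/deploy.yml — illustrative fragment
- name: Terraform init (production backend)
  run: terraform init -backend-config=backend-config/production.hcl
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```

The same workflow can point staging jobs at backend-config/staging.hcl without any change to the .tf files.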
Migrating from Local to Remote State
If you already have a project with local state and want to migrate to S3:
# 1. Add the backend block to your configuration
# (add the terraform { backend "s3" { ... } } block)
# 2. Run init — Terraform detects the change
terraform init
# Initializing the backend...
# Do you want to copy existing state to the new backend?
# Enter a value: yes
# Successfully configured the backend "s3"!
Terraform copies your local state to S3 automatically. After confirming it works, you can safely delete the local .tfstate file.
Alternative Backends
S3 is the most common backend for AWS teams, but other clouds have their equivalents:
# Azure Storage backend
terraform {
  backend "azurerm" {
    resource_group_name  = "terraform-state-rg"
    storage_account_name = "mycompanytfstate"
    container_name       = "tfstate"
    key                  = "production.terraform.tfstate"
  }
}

# Google Cloud Storage backend
terraform {
  backend "gcs" {
    bucket = "my-company-terraform-state"
    prefix = "projects/web-app"
  }
}
Both Azure Storage and GCS support state locking natively — no separate lock table needed.
Best Practices
| Practice | Why |
|---|---|
| Enable S3 versioning | Recover from accidental state corruption |
| Enable encryption | State contains sensitive values (passwords, keys) |
| Block public access | State should never be publicly readable |
| Use DynamoDB locking | Prevents concurrent state modifications |
| One state per project | Keeps blast radius small |
| Use prevent_destroy | Protects the state bucket from accidental deletion |
| Pin the state key per project | Avoid state collisions |
Wrapping Up
Remote state transforms Terraform from a single-player tool into a team collaboration platform. S3 gives you durable, versioned, encrypted storage. DynamoDB gives you locking. Together, they ensure that no matter how many engineers are running Terraform, the state remains consistent and safe.
Next, we will explore Terraform on Azure — provisioning resource groups, virtual networks, and virtual machines with the AzureRM provider.
