Terraform Import — Bring Existing Infrastructure Under Control

· 6 min read
Goel Academy
DevOps & Cloud Learning Hub

Most teams do not start with Terraform on day one. They have existing EC2 instances, S3 buckets, Azure VNets, and GCP projects that were created manually through the console or with scripts. Terraform import lets you bring those resources under Terraform management without recreating them — no downtime, no data loss.

Why You Need Import

When you adopt Terraform for an existing environment, you face a problem: Terraform does not know about resources it did not create. If you write a resource "aws_instance" "web" block for a server that already exists and run apply, Terraform will try to create a duplicate. Import solves this by telling Terraform "this resource block maps to that real-world object" and writing the mapping into the state file.

Common scenarios where import is essential:

  • Brownfield adoption — migrating existing infrastructure to Terraform
  • Console changes — someone created a resource manually and you need to codify it
  • Acquisitions — inheriting infrastructure from another team or company
  • Disaster recovery — reconstructing state after a state file loss

The Classic CLI Import Command

The original terraform import command has been available since Terraform 0.7. It takes a resource address and a resource ID:

# Import an existing AWS EC2 instance
terraform import aws_instance.web i-0abc123def456789

# Import an S3 bucket
terraform import aws_s3_bucket.data my-data-bucket-prod

# Import an AWS VPC
terraform import aws_vpc.main vpc-0a1b2c3d4e5f67890

Before running import, you must write the resource block in your configuration. Terraform does not generate code for you (at least not with the classic command):

# main.tf — write this BEFORE running import
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
  }
}
# Now import the existing instance into this block
terraform import aws_instance.web i-0abc123def456789

# aws_instance.web: Importing from ID "i-0abc123def456789"...
# aws_instance.web: Import prepared!
# aws_instance.web: Refreshing state...
# Import successful!

After import, run terraform plan to see if your configuration matches the actual resource. You will almost certainly need to add missing attributes until the plan shows no changes.

Import Blocks — The Terraform 1.5+ Way

Terraform 1.5 introduced import blocks, which are declarative and far more practical than the CLI command. You define the import in your configuration and Terraform handles it during plan and apply:

# imports.tf
import {
  to = aws_instance.web
  id = "i-0abc123def456789"
}

import {
  to = aws_s3_bucket.data
  id = "my-data-bucket-prod"
}

import {
  to = aws_vpc.main
  id = "vpc-0a1b2c3d4e5f67890"
}

Now you can run terraform plan and see exactly what Terraform will import — before it touches the state:

terraform plan

# aws_instance.web: Preparing import... [id=i-0abc123def456789]
# aws_instance.web: Refreshing state... [id=i-0abc123def456789]
# aws_s3_bucket.data: Preparing import... [id=my-data-bucket-prod]
# aws_s3_bucket.data: Refreshing state... [id=my-data-bucket-prod]
# aws_vpc.main: Preparing import... [id=vpc-0a1b2c3d4e5f67890]
# aws_vpc.main: Refreshing state... [id=vpc-0a1b2c3d4e5f67890]
#
# Plan: 3 to import, 0 to add, 0 to change, 0 to destroy.

The advantage over the CLI command is clear: import blocks are reviewable in pull requests, work in CI/CD pipelines, and can be planned before applying.
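Terraform 1.7 extended import blocks further with for_each support, letting one block drive many imports from a map. A minimal sketch, assuming a matching resource "aws_instance" "servers" block that iterates over the same keys (the instance IDs and resource name here are placeholders):

# imports.tf — bulk import with for_each (Terraform 1.7+)
locals {
  instance_ids = {
    web = "i-0abc123def456789"
    api = "i-0def456abc789012"
  }
}

import {
  for_each = local.instance_ids
  to       = aws_instance.servers[each.key]
  id       = each.value
}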

Generating Configuration Automatically

Writing resource blocks by hand for dozens of existing resources is tedious. Terraform 1.5+ can generate the configuration for you:

# Generate config for all import blocks into a file
terraform plan -generate-config-out=generated.tf

This reads the real-world state of each imported resource and writes a complete resource block. The generated code is verbose — it includes every attribute, even defaults — but it gives you a working starting point that you can clean up.

# generated.tf (auto-generated — clean up afterward)
resource "aws_instance" "web" {
  ami                         = "ami-0abcdef1234567890"
  instance_type               = "t3.micro"
  subnet_id                   = "subnet-0123456789abcdef0"
  vpc_security_group_ids      = ["sg-0a1b2c3d4e5f67890"]
  associate_public_ip_address = true
  key_name                    = "my-key-pair"

  root_block_device {
    volume_size = 20
    volume_type = "gp3"
    encrypted   = true
  }

  tags = {
    Name        = "web-server"
    Environment = "production"
  }
}

Importing Azure Resources

Azure resource IDs are long paths that include the subscription, resource group, and resource name:

# Import an Azure resource group
terraform import azurerm_resource_group.main \
  /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg

# Import an Azure virtual network
terraform import azurerm_virtual_network.main \
  /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/my-vnet

Or with import blocks:

import {
  to = azurerm_resource_group.main
  id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg"
}

import {
  to = azurerm_virtual_network.main
  id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/my-vnet"
}

Finding the correct ID format is often the hardest part. Check the provider documentation for each resource type; nearly every resource page has an "Import" section at the bottom showing the expected ID format.

Bulk Import with Terraformer

When you have hundreds of resources to import, doing them one by one is not realistic. Terraformer, a community-maintained CLI hosted in the GoogleCloudPlatform GitHub organization, can scan your cloud account and generate both the Terraform code and the state file:

# Install Terraformer
brew install terraformer

# Import all EC2 instances and VPCs from AWS
terraformer import aws \
  --resources=ec2_instance,vpc \
  --regions=us-east-1

# Import all resources from an Azure resource group
terraformer import azure \
  --resources=resource_group,virtual_network,network_security_group \
  --resource-group=my-rg

Terraformer generates a directory structure with separate .tf files and a terraform.tfstate for each resource type. You then merge these into your project and refactor. The generated code is functional but not pretty — treat it as a starting point, not the final product.

Import Workflow Best Practices

1. Inventory existing resources. Know what you are importing before you start.
2. Write import blocks. They are declarative, reviewable, and plannable.
3. Generate config with -generate-config-out. This saves time on attribute matching.
4. Run terraform plan. Verify a zero diff before applying.
5. Clean up the generated code. Remove defaults, add variables, organize files.
6. Remove the import blocks. They are one-time-use; delete them after a successful import.
7. Commit and review. Treat import PRs like any other infrastructure change.

Common Gotchas

State-only import. The classic terraform import command only modifies state — it does not generate configuration. If you forget to write the resource block first, you end up with state entries that have no matching code, and the next plan will try to destroy them.

Missing attributes. After import, your configuration may not include every attribute the real resource has. Run terraform plan repeatedly, adding attributes until the plan shows no changes.

Resource ID formats vary. AWS uses simple IDs (i-0abc123), Azure uses long paths (/subscriptions/.../resourceGroups/...), and GCP uses project-based paths (projects/my-project/zones/us-central1-a/instances/my-vm). Always check the provider documentation.
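For instance, importing a GCP Compute Engine instance with an import block uses the project-scoped path (the project, zone, and instance names below are placeholders):

import {
  to = google_compute_instance.vm
  id = "projects/my-project/zones/us-central1-a/instances/my-vm"
}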

Import does not import dependencies. Importing an EC2 instance does not automatically import its security group, subnet, or key pair. You need to import each resource individually.

Count and for_each resources. When importing into resources that use count or for_each, use the indexed address:

terraform import 'aws_instance.web[0]' i-0abc123
terraform import 'aws_instance.web["api"]' i-0def456
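The same indexed addresses work in import blocks, where quoting is handled by HCL rather than your shell (the IDs here are illustrative):

import {
  to = aws_instance.web[0]
  id = "i-0abc123"
}

import {
  to = aws_instance.web["api"]
  id = "i-0def456"
}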

Wrapping Up

Terraform import bridges the gap between existing infrastructure and infrastructure as code. Whether you use the classic CLI command, the newer import blocks with config generation, or Terraformer for bulk imports, the goal is the same — get every resource into state so that terraform plan is the single source of truth. Start with the most critical resources, import incrementally, and always verify with plan before moving on.

Next, we will look at Terraform provisioners — what they are, when they are necessary, and why HashiCorp recommends avoiding them in most cases.