# ECS vs Fargate vs EKS — Running Containers on AWS
You've Dockerized your app, and it runs perfectly on your laptop. Now you need to run it in production — with load balancing, auto scaling, rolling deployments, and health checks. AWS gives you three ways to do this: ECS, Fargate, and EKS. Choosing wrong means either over-engineering a simple app or under-engineering a complex one. Let's break down exactly when to use each.
## Container Orchestration — Why You Need It
Running a single container is easy. Running 50 containers across 10 servers with zero-downtime deployments, automatic restarts, service discovery, and load balancing — that's orchestration. Without it, you're writing bash scripts and praying.
AWS offers three orchestration services:
- ECS (Elastic Container Service) — AWS-native orchestrator, simpler than Kubernetes
- Fargate — Serverless compute engine for ECS (and EKS), no EC2 to manage
- EKS (Elastic Kubernetes Service) — Managed Kubernetes, full K8s API compatibility
## ECS Core Concepts
ECS has four building blocks: clusters, task definitions, tasks, and services.
A cluster is a logical grouping of resources. A task definition is a blueprint (like a docker-compose file). A task is a running instance of that blueprint. A service ensures N tasks are always running and wires them to a load balancer.
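The hierarchy maps directly onto CLI commands. A quick way to see it in an existing account (read-only commands; the cluster name "production" and service name "web-service" are hypothetical placeholders):

```bash
# Clusters contain services; services keep N tasks running from a task definition
aws ecs list-clusters
aws ecs list-services --cluster production
aws ecs describe-services --cluster production --services web-service \
  --query 'services[0].{taskDef:taskDefinition,running:runningCount,desired:desiredCount}'
aws ecs list-tasks --cluster production --service-name web-service
```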
## Pushing Your Image to ECR
Before running containers, push your Docker image to AWS's container registry:
```bash
# Create a repository
aws ecr create-repository \
  --repository-name my-web-app \
  --image-scanning-configuration scanOnPush=true

# Authenticate Docker with ECR
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin \
  123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build, tag, and push
docker build -t my-web-app:latest .
docker tag my-web-app:latest \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest
docker push \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest
```
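Because scanOnPush is enabled, ECR scans the image as soon as it lands. You can confirm the push and pull the scan summary from the CLI (the `--query` projections here just trim the output):

```bash
# Confirm the image arrived
aws ecr describe-images \
  --repository-name my-web-app \
  --query 'imageDetails[].{tags:imageTags,pushedAt:imagePushedAt,sizeBytes:imageSizeInBytes}'

# Vulnerability counts by severity for the latest tag
aws ecr describe-image-scan-findings \
  --repository-name my-web-app \
  --image-id imageTag=latest \
  --query 'imageScanFindings.findingSeverityCounts'
```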
## Writing a Task Definition
The task definition tells ECS everything about your container: image, CPU, memory, ports, environment variables, and logging:
```json
{
  "family": "web-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "512",
  "memory": "1024",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "taskRoleArn": "arn:aws:iam::123456789012:role/ecsTaskRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest",
      "essential": true,
      "portMappings": [
        {
          "containerPort": 8080,
          "protocol": "tcp"
        }
      ],
      "environment": [
        { "name": "NODE_ENV", "value": "production" },
        { "name": "PORT", "value": "8080" }
      ],
      "secrets": [
        {
          "name": "DB_PASSWORD",
          "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/prod/db-password"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/web-app",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "web"
        }
      },
      "healthCheck": {
        "command": ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"],
        "interval": 30,
        "timeout": 5,
        "retries": 3,
        "startPeriod": 60
      }
    }
  ]
}
```
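Two things worth noting before you register this: the healthCheck shells out to curl, so curl must actually exist inside your image, and a malformed JSON file gets you a cryptic CLI error. A quick local sanity check (assumes python3 is on your PATH):

```bash
# Fail fast on malformed JSON before it ever reaches the ECS API
python3 -m json.tool task-definition.json > /dev/null \
  && echo "task-definition.json parses cleanly" \
  || echo "task-definition.json is invalid or missing"
```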
```bash
# Register the task definition
aws ecs register-task-definition \
  --cli-input-json file://task-definition.json

# List registered task definitions
aws ecs list-task-definitions \
  --family-prefix web-app \
  --sort DESC --output table
```
## Creating a Fargate Service with ALB
Now wire it all together — a Fargate service behind an Application Load Balancer:
```bash
# Create the cluster
aws ecs create-cluster --cluster-name production

# Create the service
aws ecs create-service \
  --cluster production \
  --service-name web-service \
  --task-definition web-app:1 \
  --desired-count 3 \
  --launch-type FARGATE \
  --network-configuration '{
    "awsvpcConfiguration": {
      "subnets": ["subnet-abc123", "subnet-def456"],
      "securityGroups": ["sg-789xyz"],
      "assignPublicIp": "ENABLED"
    }
  }' \
  --load-balancers '[
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-tg/abc123",
      "containerName": "web",
      "containerPort": 8080
    }
  ]' \
  --deployment-configuration '{
    "maximumPercent": 200,
    "minimumHealthyPercent": 100,
    "deploymentCircuitBreaker": {
      "enable": true,
      "rollback": true
    }
  }'
```
The `deploymentCircuitBreaker` automatically rolls back if the new version fails health checks. No more stuck deployments at 2 AM.
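Shipping a new version is then a two-step loop: register a new task definition revision, then point the service at it. A sketch, assuming the revision just registered is web-app:2 (ECS auto-increments the revision number):

```bash
# Register the new revision (ECS bumps web-app:1 to web-app:2)
aws ecs register-task-definition \
  --cli-input-json file://task-definition.json

# Point the service at it; ECS replaces tasks per the deployment
# configuration, and the circuit breaker reverts to web-app:1 if
# the new tasks fail their health checks
aws ecs update-service \
  --cluster production \
  --service web-service \
  --task-definition web-app:2

# Block until the rollout settles (success or rollback)
aws ecs wait services-stable \
  --cluster production \
  --services web-service
```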
## ECS Auto Scaling
Scale your service based on CPU, memory, or custom metrics:
```bash
# Register the service as a scalable target
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --resource-id service/production/web-service \
  --scalable-dimension ecs:service:DesiredCount \
  --min-capacity 2 \
  --max-capacity 20

# Target tracking policy — maintain 60% average CPU
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --resource-id service/production/web-service \
  --scalable-dimension ecs:service:DesiredCount \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{
    "TargetValue": 60.0,
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
    },
    "ScaleInCooldown": 300,
    "ScaleOutCooldown": 60
  }'
```
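To confirm the policy is attached and watch it act, describe the scaling activities; each scale-out and scale-in event is recorded with its cause (read-only commands, names from the example above):

```bash
# Policies attached to the service
aws application-autoscaling describe-scaling-policies \
  --service-namespace ecs \
  --resource-id service/production/web-service

# Recent scaling events and why they happened
aws application-autoscaling describe-scaling-activities \
  --service-namespace ecs \
  --resource-id service/production/web-service \
  --max-results 10
```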
## ECS vs Fargate vs EKS — The Comparison
| Feature | ECS on EC2 | ECS on Fargate | EKS |
|---|---|---|---|
| Complexity | Medium | Low | High |
| Server management | You manage EC2 | None (serverless) | You manage nodes (or use Fargate) |
| Pricing | EC2 instance cost | Per vCPU + memory/second | $0.10/hr cluster + compute |
| Scaling | Manual + ASG | Automatic with tasks | Cluster Autoscaler / Karpenter |
| GPU support | Yes | Limited | Yes |
| Max task size | Instance-limited | 16 vCPU, 120 GB | Node-limited |
| Startup time | Fast (pre-warmed) | 30-60s cold start | Fast (pre-warmed nodes) |
| Ecosystem | AWS-native | AWS-native | Full Kubernetes ecosystem |
| Service mesh | App Mesh / Cloud Map | App Mesh / Cloud Map | Istio, Linkerd, App Mesh |
| CI/CD | CodePipeline, custom | CodePipeline, custom | ArgoCD, Flux, Helm |
| Learning curve | Low-Medium | Low | High |
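To make the pricing row concrete: Fargate bills per vCPU-second and per GB-second of memory. Using the us-east-1 x86 on-demand rates at the time of writing (roughly $0.04048 per vCPU-hour and $0.004445 per GB-hour; check the current price list, as these are assumptions), the 0.5 vCPU / 1 GB task defined earlier costs about:

```bash
# Approximate monthly cost of one always-on 0.5 vCPU / 1 GB Fargate task
# (rates below are assumed from the us-east-1 on-demand price list)
awk -v vcpu=0.5 -v mem_gb=1 -v hours=730 \
  'BEGIN { printf "~$%.2f/month\n", (vcpu * 0.04048 + mem_gb * 0.004445) * hours }'
# prints ~$18.02/month
```

Multiply by the desired count for the service: the three-task service above runs about $54/month before the ALB and data transfer.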
## When to Use Which
**Choose Fargate when:**
- You want simplicity — just define CPU/memory and run
- Workloads are bursty or unpredictable
- You don't want to patch or manage EC2 instances
- Your containers fit within Fargate's limits (up to 16 vCPU / 120 GB per task)
**Choose ECS on EC2 when:**
- You need GPU instances or large workloads
- You want maximum control over the host OS
- Steady-state workloads where reserved instances save money
- You need Docker-in-Docker or host-level access
**Choose EKS when:**
- Your team already knows Kubernetes
- You need portability across clouds (GKE, AKS)
- You want the Kubernetes ecosystem (Helm, operators, service mesh)
- Complex microservice architectures with 50+ services
## EKS Quick Start (for Comparison)
```bash
# Create an EKS cluster (takes ~15 minutes)
eksctl create cluster \
  --name production \
  --region us-east-1 \
  --version 1.29 \
  --nodegroup-name workers \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 2 \
  --nodes-max 10 \
  --managed

# Deploy a workload using kubectl
kubectl create deployment web-app \
  --image=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest \
  --replicas=3

kubectl expose deployment web-app \
  --port=80 --target-port=8080 \
  --type=LoadBalancer
```
Notice the difference: EKS uses standard Kubernetes commands (kubectl), while ECS uses AWS-specific APIs. If you ever move to Google Cloud or Azure, your K8s manifests travel with you. Your ECS task definitions don't.
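To see what "manifests travel with you" means in practice, here is the same deployment written declaratively. This is a minimal sketch using the names from the kubectl commands above; the resulting YAML applies unchanged to EKS, GKE, or AKS:

```bash
# Write a minimal Deployment manifest (same app, declarative form)
cat > web-app.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest
          ports:
            - containerPort: 8080
EOF

# Apply it to whatever cluster your kubeconfig points at:
# kubectl apply -f web-app.yaml
```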
## What's Next?
Your containers are running, but how do users reach them? Next, we'll explore Route 53 — AWS's DNS service — and learn how to route traffic with weighted, latency-based, and failover routing policies to build truly resilient architectures.
This is Part 10 of our AWS series. Start with Fargate, graduate to EKS when the complexity is justified — not before.
