Azure Container Apps — Serverless Containers Without the Kubernetes Complexity
You have a container image ready to deploy. You want it to scale automatically, handle HTTPS traffic, and cost nothing when idle. You do not want to manage node pools, upgrade Kubernetes versions, or configure ingress controllers. Azure Container Apps gives you the serverless container experience — Kubernetes under the hood, but you never touch the cluster. Deploy your image, define scaling rules, and let Azure handle everything else.
What Are Azure Container Apps?
Azure Container Apps is a fully managed serverless container platform built on Kubernetes and KEDA (Kubernetes Event-Driven Autoscaling). You bring containers, Azure provides the infrastructure.
Key characteristics:
- Scale to zero — No traffic, no charge. Containers spin down completely.
- Built-in HTTPS — Every app gets an HTTPS endpoint with automatic TLS.
- Dapr integration — Service-to-service calls, state management, pub/sub built in.
- Revisions — Immutable snapshots of your app for blue-green and canary deployments.
- KEDA scaling — Scale on HTTP requests, queue depth, CPU, memory, or custom metrics.
You do not manage nodes, control planes, or Kubernetes manifests. But you still get the power of containers with fine-grained scaling.
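For a first taste before the full setup below, `az containerapp up` collapses resource group, environment, and app creation into one command. A quick-start sketch — the names are placeholders, and the image is Microsoft's public quickstart container:

```shell
# Creates the resource group, environment, and app in one shot,
# then prints the app's public HTTPS URL
az containerapp up \
  --name hello-api \
  --resource-group rg-quickstart \
  --location eastus \
  --image mcr.microsoft.com/k8se/quickstart:latest \
  --ingress external \
  --target-port 80
```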
Environment Setup
A Container Apps environment is the shared boundary for a group of container apps. Apps in the same environment share a virtual network and Log Analytics workspace.
# Create a resource group
az group create \
--name rg-containerapps \
--location eastus
# Create a Log Analytics workspace
az monitor log-analytics workspace create \
--resource-group rg-containerapps \
--workspace-name law-containerapps \
--location eastus
# Get workspace credentials
LOG_ANALYTICS_WORKSPACE_ID=$(az monitor log-analytics workspace show \
--resource-group rg-containerapps \
--workspace-name law-containerapps \
--query customerId --output tsv)
LOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys \
--resource-group rg-containerapps \
--workspace-name law-containerapps \
--query primarySharedKey --output tsv)
# Create the Container Apps environment
az containerapp env create \
--name cae-prod \
--resource-group rg-containerapps \
--location eastus \
--logs-workspace-id $LOG_ANALYTICS_WORKSPACE_ID \
--logs-workspace-key $LOG_ANALYTICS_KEY
For workload isolation, you can create multiple environments. Apps in different environments are fully isolated — different VNets, different Log Analytics workspaces.
Deploying from Azure Container Registry
# Create an ACR and build an image
az acr create \
--resource-group rg-containerapps \
--name acrcontainerapps2025 \
--sku Basic
az acr build \
--registry acrcontainerapps2025 \
--image myapi:v1.0 \
--file Dockerfile .
# Deploy a container app from ACR
az containerapp create \
--name api-service \
--resource-group rg-containerapps \
--environment cae-prod \
--image acrcontainerapps2025.azurecr.io/myapi:v1.0 \
--registry-server acrcontainerapps2025.azurecr.io \
--registry-identity system \
--target-port 8080 \
--ingress external \
--cpu 0.5 \
--memory 1.0Gi \
--min-replicas 1 \
--max-replicas 10 \
--env-vars "APP_ENV=production" "LOG_LEVEL=info"
Key flags explained:
- --ingress external — Expose the app publicly via HTTPS. Use internal for apps only reachable within the environment.
- --registry-identity system — Use the system-assigned managed identity to authenticate to ACR (no passwords).
- --min-replicas 1 — Keep at least one instance running. Set to 0 for scale-to-zero.
- --target-port 8080 — The port your container listens on.
Scaling Rules
Container Apps supports multiple scaling triggers. The platform evaluates all rules and scales to the highest replica count needed.
HTTP Scaling
# Scale based on concurrent HTTP requests
az containerapp update \
--name api-service \
--resource-group rg-containerapps \
--min-replicas 0 \
--max-replicas 30 \
--scale-rule-name http-scaling \
--scale-rule-type http \
--scale-rule-http-concurrency 50
This adds roughly one replica for every 50 concurrent requests. At zero concurrent requests, the app scales to zero (if min-replicas is 0).
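The arithmetic behind that rule can be sketched as a small bash function — a simplification of what KEDA's HTTP scaler does (desired replicas = ceiling of concurrent requests over the concurrency target, clamped to the min/max bounds):

```shell
# Sketch of the HTTP scaling decision, not the actual KEDA implementation:
# replicas = ceil(concurrent / concurrency_target), clamped to [min, max]
concurrency_target=50
min_replicas=0
max_replicas=30

desired_replicas() {
  local concurrent=$1
  # Integer ceiling division
  local replicas=$(( (concurrent + concurrency_target - 1) / concurrency_target ))
  (( replicas < min_replicas )) && replicas=$min_replicas
  (( replicas > max_replicas )) && replicas=$max_replicas
  echo "$replicas"
}

desired_replicas 0      # -> 0 (scale to zero)
desired_replicas 120    # -> 3 (ceil of 120/50)
desired_replicas 5000   # -> 30 (capped at max-replicas)
```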
KEDA-Based Scaling
KEDA opens up scaling on nearly any event source — Azure Queue Storage, Service Bus, Kafka, Cron, and dozens more.
# Scale based on Azure Service Bus queue depth
az containerapp update \
--name order-processor \
--resource-group rg-containerapps \
--min-replicas 0 \
--max-replicas 20 \
--scale-rule-name queue-scaling \
--scale-rule-type azure-servicebus \
--scale-rule-metadata \
queueName=orders \
namespace=sb-prod-2025 \
messageCount=5 \
--scale-rule-auth \
connection=servicebus-connection-string
Custom Scaling
# Scale based on CPU utilization
az containerapp update \
--name compute-worker \
--resource-group rg-containerapps \
--scale-rule-name cpu-scaling \
--scale-rule-type cpu \
--scale-rule-metadata \
type=Utilization \
value=70
| Scaling Trigger | Use Case | Scale to Zero |
|---|---|---|
| HTTP | Web APIs, frontends | Yes |
| Azure Queue Storage | Background job processing | Yes |
| Azure Service Bus | Message-driven microservices | Yes |
| Cron | Scheduled tasks | Yes (between schedules) |
| CPU / Memory | Compute-intensive workloads | No (needs min-replicas 1) |
| Custom (KEDA) | Kafka, Redis, PostgreSQL, etc. | Depends on scaler |
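As one example of the cron row above, a cron rule can hold a fixed replica count during business hours and drop to zero outside them. A sketch using KEDA's cron scaler metadata (timezone, start, end, desiredReplicas); the schedule and app name are illustrative:

```shell
# Run 3 replicas from 8am to 6pm Eastern, scale to zero otherwise
az containerapp update \
  --name api-service \
  --resource-group rg-containerapps \
  --min-replicas 0 \
  --max-replicas 5 \
  --scale-rule-name business-hours \
  --scale-rule-type cron \
  --scale-rule-metadata \
    timezone=America/New_York \
    start="0 8 * * *" \
    end="0 18 * * *" \
    desiredReplicas=3
```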
Dapr Integration
Dapr (Distributed Application Runtime) provides building blocks for microservices — service invocation, state management, pub/sub, and secrets. Container Apps has Dapr built in as a sidecar.
# Enable Dapr on a container app
az containerapp dapr enable \
--name api-service \
--resource-group rg-containerapps \
--dapr-app-id api-service \
--dapr-app-port 8080 \
--dapr-app-protocol http
With Dapr enabled, your application can call other services by name without knowing their URLs:
# Service A calling Service B through Dapr
# Instead of: http://order-service.internal:8080/api/orders
# You call: http://localhost:3500/v1.0/invoke/order-service/method/api/orders
curl http://localhost:3500/v1.0/invoke/order-service/method/api/orders
Dapr handles service discovery, retries, mTLS encryption, and distributed tracing automatically. You write plain HTTP calls; Dapr absorbs the distributed-systems complexity.
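Dapr's pub/sub and state building blocks are configured as components at the environment level. A hypothetical sketch, assuming a Service Bus topic broker and a connection string you supply — note that Container Apps uses its own component schema (componentType/version/metadata), not the upstream Dapr Kubernetes manifest:

```shell
# pubsub.yaml — Container Apps Dapr component schema
cat > pubsub.yaml <<'EOF'
componentType: pubsub.azure.servicebus.topics
version: v1
secrets:
  - name: sb-connection
    value: "<service-bus-connection-string>"
metadata:
  - name: connectionString
    secretRef: sb-connection
scopes:
  - api-service
  - order-service
EOF

# Register the component in the environment; scoped apps see it automatically
az containerapp env dapr-component set \
  --name cae-prod \
  --resource-group rg-containerapps \
  --dapr-component-name orders-pubsub \
  --yaml pubsub.yaml
```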
Revisions and Traffic Splitting
Every deployment creates a new revision — an immutable snapshot of your container app configuration. You control how traffic flows between revisions.
# Deploy a new version (creates a new revision)
az containerapp update \
--name api-service \
--resource-group rg-containerapps \
--image acrcontainerapps2025.azurecr.io/myapi:v2.0
# Enable multi-revision mode
az containerapp revision set-mode \
--name api-service \
--resource-group rg-containerapps \
--mode multiple
# Split traffic: 80% to stable, 20% to canary
az containerapp ingress traffic set \
--name api-service \
--resource-group rg-containerapps \
--revision-weight api-service--v1=80 api-service--v2=20
# Promote canary to 100%
az containerapp ingress traffic set \
--name api-service \
--resource-group rg-containerapps \
--revision-weight api-service--v2=100
This gives you canary deployments out of the box. Route 5% of traffic to the new revision, monitor error rates and latency, then gradually increase or roll back instantly.
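If the canary misbehaves, rolling back is just another traffic command. A sketch, assuming the same revision names as above:

```shell
# Send all traffic back to the stable revision
az containerapp ingress traffic set \
  --name api-service \
  --resource-group rg-containerapps \
  --revision-weight api-service--v1=100

# Optionally deactivate the bad revision so it stops consuming resources
az containerapp revision deactivate \
  --name api-service \
  --resource-group rg-containerapps \
  --revision api-service--v2
```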
Secrets Management
Container Apps has built-in secret management that integrates with Azure Key Vault.
# Add secrets to a container app
az containerapp secret set \
--name api-service \
--resource-group rg-containerapps \
--secrets \
db-password=S3cur3P@ssw0rd \
api-key=keyvaultref:https://kv-prod-2025.vault.azure.net/secrets/stripe-api-key,identityref:system
# Reference secrets as environment variables
az containerapp update \
--name api-service \
--resource-group rg-containerapps \
--set-env-vars \
"DB_PASSWORD=secretref:db-password" \
"STRIPE_KEY=secretref:api-key"
The keyvaultref: prefix pulls a secret directly from Azure Key Vault, with identityref: naming the managed identity used to read it. Changes in Key Vault are picked up when a new revision is created.
Custom Domains
# Add a custom domain
az containerapp hostname add \
--name api-service \
--resource-group rg-containerapps \
--hostname api.myapp.com
# Bind a managed certificate (free, auto-renewed)
az containerapp hostname bind \
--name api-service \
--resource-group rg-containerapps \
--hostname api.myapp.com \
--environment cae-prod \
--validation-method CNAME
Container Apps provides free managed TLS certificates that auto-renew. You just need to set up the DNS CNAME record.
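Before the bind succeeds, Azure validates domain ownership via DNS. If your zone lives in Azure DNS, the two records might look like this — the zone resource group, zone name, and default-domain placeholder are all illustrative:

```shell
# CNAME pointing the custom domain at the app's default FQDN
az network dns record-set cname set-record \
  --resource-group rg-dns \
  --zone-name myapp.com \
  --record-set-name api \
  --cname api-service.<default-domain-of-cae-prod>

# TXT record (asuid.<subdomain>) carrying the environment's verification ID
VERIFICATION_ID=$(az containerapp env show \
  --name cae-prod \
  --resource-group rg-containerapps \
  --query properties.customDomainConfiguration.customDomainVerificationId \
  --output tsv)
az network dns record-set txt add-record \
  --resource-group rg-dns \
  --zone-name myapp.com \
  --record-set-name asuid.api \
  --value $VERIFICATION_ID
```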
Container Apps Jobs
Not every workload is a long-running service. Container Apps Jobs run containers on-demand or on a schedule, then exit.
# Create a scheduled job (runs every hour)
az containerapp job create \
--name data-cleanup-job \
--resource-group rg-containerapps \
--environment cae-prod \
--image acrcontainerapps2025.azurecr.io/cleanup:v1.0 \
--registry-server acrcontainerapps2025.azurecr.io \
--registry-identity system \
--trigger-type Schedule \
--cron-expression "0 * * * *" \
--replica-timeout 1800 \
--cpu 1.0 \
--memory 2.0Gi
# Create an event-driven job (triggered by queue messages)
az containerapp job create \
--name order-processor-job \
--resource-group rg-containerapps \
--environment cae-prod \
--image acrcontainerapps2025.azurecr.io/processor:v1.0 \
--registry-server acrcontainerapps2025.azurecr.io \
--registry-identity system \
--trigger-type Event \
--min-executions 0 \
--max-executions 10 \
--scale-rule-name queue-trigger \
--scale-rule-type azure-queue \
--scale-rule-metadata queueName=orders queueLength=5 \
--scale-rule-auth connection=queue-connection
Jobs are ideal for data processing, report generation, database migrations, and any workload that runs to completion.
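Scheduled and event-driven jobs can also be triggered by hand, which is handy for testing a job before trusting its schedule:

```shell
# Kick off an on-demand execution of the cleanup job
az containerapp job start \
  --name data-cleanup-job \
  --resource-group rg-containerapps

# Inspect recent executions and their status
az containerapp job execution list \
  --name data-cleanup-job \
  --resource-group rg-containerapps \
  --output table
```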
Container Apps vs AKS vs App Service vs Container Instances
| Feature | Container Apps | AKS | App Service | Container Instances |
|---|---|---|---|---|
| Complexity | Low | High | Low | Very Low |
| Scale to zero | Yes | No (min 1 node) | No | No (billed only while running) |
| Kubernetes access | No | Full kubectl | No | No |
| Custom networking | VNet injection | Full CNI control | VNet integration | VNet delegation |
| Scaling | KEDA + HTTP | HPA + Cluster Autoscaler | Built-in rules | Manual |
| Dapr | Built-in | Manual install | Not available | Not available |
| Jobs / Batch | Container Apps Jobs | Kubernetes Jobs | WebJobs | Excellent fit |
| Cost model | Per vCPU-second (scale to zero) | Per node VM (always on) | Per plan tier | Per second (no minimum) |
| Best for | Microservices, APIs, event-driven | Complex orchestration, full K8s control | Traditional web apps, .NET/Node.js | Short-lived tasks, sidecar containers |
When to Use Each
- Container Apps — You want containers without Kubernetes overhead. Microservices, APIs, event-driven processors. You need scale-to-zero and Dapr.
- AKS — You need full Kubernetes control. Custom operators, service mesh, specific K8s features, large teams with K8s expertise.
- App Service — You have a traditional web application. No containers, just code. Or you want container support with the simplest deployment model.
- Container Instances — Quick, short-lived containers. CI/CD build agents, data processing, one-off scripts. No orchestration needed.
Wrapping Up
Azure Container Apps hits the sweet spot between the simplicity of App Service and the power of AKS. You get containers, automatic scaling, Dapr for microservices patterns, and revisions for safe deployments — all without writing a single Kubernetes manifest. Start with a single container app, add scaling rules based on your traffic patterns, and use traffic splitting for safe rollouts. If you eventually need full Kubernetes control, migrating to AKS is straightforward since Container Apps runs on Kubernetes under the hood. But most teams find they never need to make that move.
Next up: We will explore Azure Cost Management — understanding your Azure bill, setting budgets, and finding optimization opportunities before your cloud spend surprises you.
