
Azure Performance — CDN, Redis Cache, and Front Door Acceleration

7 min read
Goel Academy
DevOps & Cloud Learning Hub

Your application works. It passes all tests. Then it goes live and users in Singapore see three-second load times, your database CPU sits at 85% from repeated identical queries, and your static assets are served from a single region. Performance is not a feature you bolt on later — it is an architecture decision you make early. Azure offers a full stack of acceleration services, from edge CDN caching to in-memory data stores to global load balancing. Here is how to use each one and when they overlap.

Azure CDN — Profiles and Endpoints

Azure CDN caches static content at edge locations (Points of Presence) worldwide. When a user in Tokyo requests your JavaScript bundle, it comes from a nearby POP instead of crossing the Pacific to your East US origin.

# Create a CDN profile (Microsoft Standard tier)
az cdn profile create \
--name cdn-goelacademy \
--resource-group rg-cdn \
--sku Standard_Microsoft \
--location global

# Create an endpoint pointing to your origin
az cdn endpoint create \
--name goelacademy-assets \
--profile-name cdn-goelacademy \
--resource-group rg-cdn \
--origin "goelacademy.azurewebsites.net" \
--origin-host-header "goelacademy.azurewebsites.net" \
--enable-compression true \
--content-types-to-compress "text/html" "text/css" "application/javascript" "application/json" "image/svg+xml"

# Purge cached content after deployment
az cdn endpoint purge \
--name goelacademy-assets \
--profile-name cdn-goelacademy \
--resource-group rg-cdn \
--content-paths "/*"

CDN Caching Rules

| Rule Type | Purpose | Example |
|---|---|---|
| Global caching | Default TTL for all content | Cache for 7 days |
| Custom caching | Override by path/extension | /api/* = no cache, *.js = 365 days |
| Query string | Handle URL parameters | Cache every unique query string separately |

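Note that "cache every unique query string" can fragment the cache when clients send the same parameters in different orders. One mitigation is to normalize URLs when generating links on the application side; a minimal sketch (a hypothetical helper, not part of any Azure API):

```python
from urllib.parse import urlparse, parse_qsl, urlencode

def normalized_cache_key(url: str) -> str:
    """Sort query parameters so logically identical URLs map to one cache entry.

    Illustrative helper: with per-query-string caching, /page?a=1&b=2 and
    /page?b=2&a=1 would otherwise be stored as two separate cached objects.
    """
    parts = urlparse(url)
    query = urlencode(sorted(parse_qsl(parts.query)))
    return parts.path + ("?" + query if query else "")
```

Generating links in a canonical parameter order keeps the hit ratio up without changing the CDN configuration at all.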
# Set a custom caching rule: cache images for 30 days
az cdn endpoint rule add \
--name goelacademy-assets \
--profile-name cdn-goelacademy \
--resource-group rg-cdn \
--order 1 \
--rule-name "CacheImages" \
--action-name "CacheExpiration" \
--cache-behavior "Override" \
--cache-duration "30.00:00:00" \
--match-variable "UrlFileExtension" \
--operator "Equal" \
--match-values "jpg" "png" "webp" "gif" "svg"
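After adding rules and purging, it is worth verifying behavior by requesting an asset twice and inspecting the X-Cache response header the CDN attaches (the Microsoft tier typically reports values such as TCP_HIT and TCP_MISS). A small classifier sketch; the exact header values are an assumption to confirm against your own endpoint:

```python
def cdn_cache_status(headers: dict) -> str:
    """Classify a CDN response as hit/miss from its X-Cache header.

    TCP_HIT / TCP_MISS are the values the Microsoft CDN tier typically
    reports; treat the exact strings as an assumption to verify.
    """
    value = headers.get("x-cache", headers.get("X-Cache", "")).upper()
    if "HIT" in value:
        return "hit"
    if "MISS" in value:
        return "miss"
    return "unknown"
```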

Azure Cache for Redis

Redis eliminates redundant database calls by storing frequently accessed data in memory. Azure manages clustering, patching, failover, and backups.

| Tier | Memory | Replicas | Features | Use Case |
|---|---|---|---|---|
| Basic | 250 MB - 53 GB | 0 | None | Dev/Test |
| Standard | 250 MB - 53 GB | 1 | Replication, SLA | Small production |
| Premium | 6 GB - 120 GB | Up to 3 | Clustering, persistence, VNet | Enterprise |
| Enterprise | 12 GB - 2 TB | Up to 3 | RediSearch, RedisBloom, Active Geo | High performance |

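Tier selection usually starts with a back-of-envelope memory estimate: key count times average value size, plus overhead for key names, metadata, and fragmentation. A rough sketch (the 1.5x overhead factor is an assumption; measure the real figure with Redis's INFO memory command):

```python
def estimate_cache_gb(num_keys: int, avg_value_bytes: int, overhead: float = 1.5) -> float:
    """Rough memory estimate for Redis tier selection.

    The overhead multiplier is a working assumption, not a measured value.
    """
    return num_keys * avg_value_bytes * overhead / (1024 ** 3)

# 10 million products at ~2 KB each lands near 29 GB, pointing at Premium
needed_gb = estimate_cache_gb(10_000_000, 2048)
```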
# Create a Premium Redis cache with clustering
az redis create \
--name redis-webapp-prod \
--resource-group rg-cache \
--location eastus \
--sku Premium \
--vm-size P1 \
--shard-count 2 \
--minimum-tls-version 1.2

# Get the connection string
az redis list-keys \
--name redis-webapp-prod \
--resource-group rg-cache

Cache-Aside Pattern

The most common pattern. The application checks the cache first, falls back to the database on a miss, and populates the cache for the next request.

import redis
import json
import psycopg2

r = redis.Redis(
    host='redis-webapp-prod.redis.cache.windows.net',
    port=6380,
    password='<access-key>',
    ssl=True,
    decode_responses=True
)

def get_product(product_id: str) -> dict | None:
    cache_key = f"product:{product_id}"

    # 1. Check cache first
    cached = r.get(cache_key)
    if cached:
        return json.loads(cached)  # Cache hit

    # 2. Cache miss — query database
    conn = psycopg2.connect(dsn="your-connection-string")
    cur = conn.cursor()
    cur.execute("SELECT id, name, price, stock FROM products WHERE id = %s", (product_id,))
    row = cur.fetchone()
    conn.close()

    if row:
        product = {"id": row[0], "name": row[1], "price": float(row[2]), "stock": row[3]}
        # 3. Populate cache with 1-hour TTL
        r.setex(cache_key, 3600, json.dumps(product))
        return product

    return None
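One caveat with a fixed 3600-second TTL: keys populated in the same burst also expire in the same burst, and the resulting wave of simultaneous cache misses can hammer the database (a cache stampede). A common mitigation, sketched here as a general technique rather than anything Azure-specific, is to add jitter to the TTL:

```python
import random

def ttl_with_jitter(base_seconds: int = 3600, spread: float = 0.1) -> int:
    """Return a TTL randomized by +/- spread so keys created together
    expire at different times. The 10% spread is an illustrative default."""
    return random.randint(int(base_seconds * (1 - spread)),
                          int(base_seconds * (1 + spread)))

# Drop-in for the fixed TTL:
# r.setex(cache_key, ttl_with_jitter(), json.dumps(product))
```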

Write-Through Pattern

Every write updates the database and then refreshes the cache in the same code path, keeping the two in sync at the cost of a little extra write latency.

def update_product_price(product_id: str, new_price: float):
    # Write to database
    conn = psycopg2.connect(dsn="your-connection-string")
    cur = conn.cursor()
    cur.execute("UPDATE products SET price = %s WHERE id = %s", (new_price, product_id))
    conn.commit()
    conn.close()

    # Update cache immediately
    cache_key = f"product:{product_id}"
    cached = r.get(cache_key)
    if cached:
        product = json.loads(cached)
        product["price"] = new_price
        r.setex(cache_key, 3600, json.dumps(product))

Azure Front Door — Global Load Balancing + CDN + WAF

Azure Front Door is the premium all-in-one service: global HTTP load balancing, CDN acceleration, SSL offloading, and WAF protection in a single resource.

# Create a Front Door profile (Premium tier for WAF)
az afd profile create \
--profile-name afd-goelacademy \
--resource-group rg-frontdoor \
--sku Premium_AzureFrontDoor

# Add an origin group with health probing
az afd origin-group create \
--profile-name afd-goelacademy \
--resource-group rg-frontdoor \
--origin-group-name og-webapp \
--probe-path "/health" \
--probe-protocol Https \
--probe-request-type HEAD \
--probe-interval-in-seconds 30 \
--sample-size 4 \
--successful-samples-required 3
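The probe configuration above sends a HEAD request to /health every 30 seconds and marks an origin healthy when 3 of the last 4 samples succeed, so the endpoint must answer quickly with a 2xx. Sketched as a framework-agnostic routing function (the handler and its rules are illustrative, not an Azure API):

```python
def health_probe_response(method: str, path: str) -> int:
    """Status code a minimal /health endpoint would return.

    A production check would also verify downstream dependencies
    (database, cache) before reporting 200.
    """
    if path != "/health":
        return 404
    if method not in ("HEAD", "GET"):
        return 405
    return 200
```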

# Add origins in multiple regions
az afd origin create \
--profile-name afd-goelacademy \
--resource-group rg-frontdoor \
--origin-group-name og-webapp \
--origin-name origin-eastus \
--host-name "webapp-eastus.azurewebsites.net" \
--origin-host-header "webapp-eastus.azurewebsites.net" \
--http-port 80 --https-port 443 \
--priority 1 --weight 1000

az afd origin create \
--profile-name afd-goelacademy \
--resource-group rg-frontdoor \
--origin-group-name og-webapp \
--origin-name origin-westeurope \
--host-name "webapp-westeurope.azurewebsites.net" \
--origin-host-header "webapp-westeurope.azurewebsites.net" \
--http-port 80 --https-port 443 \
--priority 1 --weight 1000

# Create an endpoint and route
az afd endpoint create \
--profile-name afd-goelacademy \
--resource-group rg-frontdoor \
--endpoint-name ep-goelacademy \
--enabled-state Enabled

az afd route create \
--profile-name afd-goelacademy \
--resource-group rg-frontdoor \
--endpoint-name ep-goelacademy \
--route-name default-route \
--origin-group og-webapp \
--supported-protocols Https \
--https-redirect Enabled \
--forwarding-protocol HttpsOnly \
--patterns-to-match "/*"

Proximity Placement Groups and Ultra Disks

For latency-sensitive workloads, a Proximity Placement Group co-locates VMs physically close together within the same datacenter, minimizing network latency between them (Azure does not guarantee the same rack).

# Create a Proximity Placement Group
az ppg create \
--name ppg-hpc-cluster \
--resource-group rg-hpc \
--location eastus \
--type Standard

# Create a VM inside the PPG (zonal, with Ultra Disk capability enabled)
az vm create \
--name vm-hpc-01 \
--resource-group rg-hpc \
--ppg ppg-hpc-cluster \
--size Standard_E16s_v5 \
--image Ubuntu2204 \
--zone 1 \
--ultra-ssd-enabled true \
--admin-username azureadmin \
--generate-ssh-keys

# Create an Ultra Disk for high IOPS workloads (Ultra Disks are zonal)
az disk create \
--name disk-ultra-data01 \
--resource-group rg-hpc \
--location eastus \
--zone 1 \
--sku UltraSSD_LRS \
--size-gb 256 \
--disk-iops-read-write 50000 \
--disk-mbps-read-write 1000

az vm disk attach \
--vm-name vm-hpc-01 \
--resource-group rg-hpc \
--name disk-ultra-data01

Disk tier performance comparison:

| Disk Type | Max IOPS | Max Throughput | Latency | Best For |
|---|---|---|---|---|
| Standard HDD | 2,000 | 500 MB/s | ~10 ms | Archives, backups |
| Standard SSD | 6,000 | 750 MB/s | ~5 ms | Web servers, dev |
| Premium SSD v2 | 80,000 | 1,200 MB/s | <1 ms | Databases, analytics |
| Ultra Disk | 400,000 | 4,000 MB/s | <0.5 ms | SAP HANA, HPC, real-time analytics |
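The two provisioned limits interact through I/O size: effective throughput is IOPS times block size. A quick sanity check shows why small-block workloads tend to be IOPS-bound rather than throughput-bound:

```python
def max_throughput_mbps(iops: int, block_size_kb: int) -> float:
    """Throughput (MB/s) achievable at a given IOPS and I/O size."""
    return iops * block_size_kb / 1024

# For a disk provisioned at 50,000 IOPS and 1,000 MB/s (as created above):
# at 8 KB I/O it can push at most ~391 MB/s, so the IOPS limit binds first;
# the two limits cross at roughly 20 KB per I/O.
eight_kb_cap = max_throughput_mbps(50_000, 8)
```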

Azure SQL Performance Tuning

-- Enable Query Store (auto-enabled on Azure SQL)
ALTER DATABASE [webapp-db] SET QUERY_STORE = ON;

-- Find the top 10 most expensive queries by CPU
SELECT TOP 10
q.query_id,
qt.query_sql_text,
rs.avg_cpu_time / 1000.0 AS avg_cpu_ms,
rs.avg_duration / 1000.0 AS avg_duration_ms,
rs.count_executions,
rs.avg_logical_io_reads
FROM sys.query_store_query_text qt
JOIN sys.query_store_query q ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
ORDER BY rs.avg_cpu_time DESC;

-- List missing indexes suggested by the engine (review before creating them)
SELECT
mid.statement AS table_name,
mid.equality_columns,
mid.inequality_columns,
mid.included_columns,
migs.avg_user_impact AS improvement_pct,
migs.user_seeks + migs.user_scans AS total_operations
FROM sys.dm_db_missing_index_details mid
JOIN sys.dm_db_missing_index_groups mig ON mid.index_handle = mig.index_handle
JOIN sys.dm_db_missing_index_group_stats migs ON mig.index_group_handle = migs.group_handle
ORDER BY migs.avg_user_impact DESC;

Azure Load Testing

Validate performance before production with managed load testing.

# Create a load test resource
az load create \
--name lt-webapp-perf \
--resource-group rg-testing \
--location eastus

# Upload and run a JMeter test plan
az load test create \
--load-test-resource lt-webapp-perf \
--resource-group rg-testing \
--test-id "homepage-load-test" \
--display-name "Homepage 1000 users" \
--test-plan "tests/homepage.jmx" \
--engine-instances 4

# View results
az load test-run list \
--load-test-resource lt-webapp-perf \
--resource-group rg-testing \
--test-id "homepage-load-test" \
--query "[].{Run:testRunId, Status:status, VUsers:virtualUsers, AvgResponseTime:testRunStatistics.avgResponseTime}"
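When reading results, the average response time queried above is a blunt instrument: a handful of slow requests can hide behind a healthy-looking mean. Percentiles tell the real story, which is why p95/p99 are the usual pass/fail criteria. A nearest-rank sketch with invented sample data:

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# One slow outlier drags the mean to ~207 ms, while p50 stays at 130 ms
# and p99 exposes the 900 ms request.
latencies = [120, 130, 125, 140, 135, 128, 900, 132, 127, 131]
```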

Performance optimization is a continuous process. Start with the biggest wins — CDN for static content, Redis for database query caching, Front Door for global routing — then measure, test, and iterate. The tools exist. The hard part is knowing which lever to pull for your specific bottleneck, and that only comes from monitoring real traffic patterns.