
Azure DevOps Pipelines — CI/CD That Just Works

6 min read
Goel Academy
DevOps & Cloud Learning Hub

Your team pushes code to a repository, someone manually builds it, zips it up, and deploys it by dragging files into a portal. On Fridays. That workflow is a ticking time bomb. Azure DevOps Pipelines replaces the entire manual chain with automated, repeatable, auditable deployments — from git push to production with approvals, testing, and rollback built in.

Azure DevOps Services Overview

Azure DevOps is a suite of five integrated services:

| Service | Purpose | Comparable To |
| --- | --- | --- |
| Azure Repos | Git repositories | GitHub, GitLab |
| Azure Boards | Work tracking, sprints, backlogs | Jira, Linear |
| Azure Pipelines | CI/CD automation | GitHub Actions, Jenkins |
| Azure Artifacts | Package management (npm, NuGet, pip) | Artifactory, GitHub Packages |
| Azure Test Plans | Manual and exploratory testing | TestRail |

You can use each service independently. Most teams start with Pipelines and Repos, then adopt the rest as needed.

YAML Pipeline Anatomy

Every pipeline lives in your repository as a YAML file. This is infrastructure as code for your build and deployment process.

# azure-pipelines.yml
trigger:
  branches:
    include:
      - main
      - release/*
  paths:
    exclude:
      - docs/*
      - README.md

pool:
  vmImage: 'ubuntu-latest'

variables:
  buildConfiguration: 'Release'
  nodeVersion: '20.x'

stages:
  - stage: Build
    displayName: 'Build & Test'
    jobs:
      - job: BuildApp
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: '$(nodeVersion)'
            displayName: 'Install Node.js'

          - script: |
              npm ci
              npm run build
              npm test -- --coverage
            displayName: 'Install, Build, Test'

          - task: PublishBuildArtifacts@1
            inputs:
              PathtoPublish: '$(Build.ArtifactStagingDirectory)'
              ArtifactName: 'drop'
            displayName: 'Publish Artifacts'

The key sections:

  • trigger — What branch or path changes start the pipeline
  • pool — Which agent runs the jobs
  • variables — Reusable values across the pipeline
  • stages — Logical groupings (Build, Test, Deploy)
  • jobs — Units of work within a stage
  • steps — Individual tasks within a job
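Variable expansion deserves a second look, because the two common syntaxes resolve at different times: macro syntax $(var) is substituted at runtime, while template expressions ${{ }} are substituted at compile time, when the YAML is first expanded. A minimal sketch (the variable name is illustrative):

```yaml
variables:
  buildConfiguration: 'Release'

steps:
  # Macro syntax: substituted at runtime, just before the task executes
  - script: echo "Building in $(buildConfiguration) mode"

  # Template expression: substituted at compile time, during YAML expansion
  - script: echo "Building in ${{ variables.buildConfiguration }} mode"
```

The practical difference: template expressions can control pipeline structure (which stages exist), while macro syntax can pick up values that only exist once the run has started.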

Agent Pools

| Pool Type | Managed By | Cost | Best For |
| --- | --- | --- | --- |
| Microsoft-hosted | Microsoft | Free tier (1,800 min/month for private projects), then paid | Standard builds, no special dependencies |
| Self-hosted | You | Your infrastructure cost | Private network access, custom tools, GPU |

Microsoft-hosted agents come pre-loaded with common tools (Node.js, Python, .NET, Docker, Terraform, kubectl). Each job gets a fresh VM, so there is no state leakage between builds.

# Use a specific Microsoft-hosted image
pool:
  vmImage: 'ubuntu-22.04'

# Or use a self-hosted agent pool
pool:
  name: 'SelfHostedLinux'
  demands:
    - docker
    - kubectl

Variable Groups and Secrets

Variable groups store shared configuration that multiple pipelines can reference. Secrets get encrypted and masked in logs.
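The az pipelines and az devops commands used below come from the Azure DevOps extension for the Azure CLI. If you have not set it up yet, a quick one-time setup (assuming the Azure CLI itself is installed; substitute your own organization and project):

```shell
# Add the Azure DevOps extension to the Azure CLI
az extension add --name azure-devops

# Set defaults so --organization/--project can be omitted on later commands
az devops configure --defaults \
  organization=https://dev.azure.com/myorg \
  project=MyProject
```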

# Create a variable group via CLI
az pipelines variable-group create \
  --name "Production-Config" \
  --variables \
    APP_ENV=production \
    API_URL=https://api.myapp.com \
  --organization https://dev.azure.com/myorg \
  --project MyProject

# Add a secret variable
az pipelines variable-group variable create \
  --group-id 1 \
  --name "DB_PASSWORD" \
  --value "s3cretP@ssw0rd" \
  --secret true \
  --organization https://dev.azure.com/myorg \
  --project MyProject

Reference variable groups in your pipeline:

variables:
  - group: Production-Config
  - name: localVar
    value: 'some-value'
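One gotcha worth knowing: secret variables (like DB_PASSWORD above) are not automatically exposed to scripts as environment variables. Each step that needs a secret must map it explicitly:

```yaml
steps:
  - script: ./deploy.sh
    displayName: 'Deploy with secret'
    env:
      DB_PASSWORD: $(DB_PASSWORD)  # secrets must be mapped into env explicitly
```

This is deliberate: it limits each secret's exposure to exactly the steps that declare it.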

Pipeline Templates

Templates eliminate duplication. Define a reusable build step once, reference it everywhere.

# templates/build-node-app.yml
parameters:
  - name: nodeVersion
    default: '20.x'
  - name: workingDirectory
    default: '.'

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: '${{ parameters.nodeVersion }}'

  - script: |
      cd ${{ parameters.workingDirectory }}
      npm ci
      npm run build
      npm run test
    displayName: 'Build and Test'

Use the template in any pipeline:

# azure-pipelines.yml
stages:
  - stage: Build
    jobs:
      - job: BuildFrontend
        steps:
          - template: templates/build-node-app.yml
            parameters:
              nodeVersion: '20.x'
              workingDirectory: 'apps/frontend'

      - job: BuildBackend
        steps:
          - template: templates/build-node-app.yml
            parameters:
              nodeVersion: '18.x'
              workingDirectory: 'apps/api'

Multi-Stage Pipeline: Build to Production

Here is a real-world pipeline with four stages, approvals, and environment deployments:

trigger:
  branches:
    include:
      - main

stages:
  # Stage 1: Build
  - stage: Build
    displayName: 'Build & Unit Test'
    jobs:
      - job: Build
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - script: |
              npm ci
              npm run build
              npm run test:unit
            displayName: 'Build and Test'

          - task: PublishBuildArtifacts@1
            inputs:
              PathtoPublish: 'dist'
              ArtifactName: 'app'

  # Stage 2: Integration Test
  - stage: IntegrationTest
    displayName: 'Integration Tests'
    dependsOn: Build
    jobs:
      - job: Test
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - task: DownloadBuildArtifacts@0
            inputs:
              artifactName: 'app'

          - script: npm run test:integration
            displayName: 'Run Integration Tests'

  # Stage 3: Staging
  - stage: Staging
    displayName: 'Deploy to Staging'
    dependsOn: IntegrationTest
    jobs:
      - deployment: DeployStaging
        environment: 'staging'
        pool:
          vmImage: 'ubuntu-latest'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  inputs:
                    azureSubscription: 'Azure-Production-SC'
                    appName: 'myapp-staging'
                    package: '$(Pipeline.Workspace)/app'

  # Stage 4: Production (requires manual approval)
  - stage: Production
    displayName: 'Deploy to Production'
    dependsOn: Staging
    condition: succeeded()
    jobs:
      - deployment: DeployProduction
        environment: 'production'
        pool:
          vmImage: 'ubuntu-latest'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  inputs:
                    azureSubscription: 'Azure-Production-SC'
                    appName: 'myapp-production'
                    package: '$(Pipeline.Workspace)/app'

Approvals and Gates

Approvals are configured on the environment, not in the YAML. This separation means the pipeline author cannot bypass approvals.

# Create environments with the Azure DevOps CLI
az devops invoke \
  --area distributedtask \
  --resource environments \
  --route-parameters project=MyProject \
  --http-method POST \
  --in-file environment.json \
  --api-version 7.0
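The environment.json payload referenced above can be minimal — a sketch of a body the Environments REST API accepts (a name plus an optional description; adjust the values to your own environment names):

```json
{
  "name": "production",
  "description": "Production deployment environment"
}
```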

Configure approvals in the Azure DevOps portal:

  1. Go to Pipelines > Environments > production
  2. Click the three dots > Approvals and checks
  3. Add Approvals — select required approvers
  4. Add Business Hours gate — deployments only during work hours
  5. Add Invoke REST API gate — check a health endpoint before deploying

Service Connections

Service connections authenticate your pipeline with external services like Azure subscriptions, Docker registries, and Kubernetes clusters.

# Create an Azure Resource Manager service connection
az devops service-endpoint azurerm create \
  --azure-rm-service-principal-id $SP_APP_ID \
  --azure-rm-subscription-id $SUBSCRIPTION_ID \
  --azure-rm-subscription-name "Production Subscription" \
  --azure-rm-tenant-id $TENANT_ID \
  --name "Azure-Production-SC" \
  --organization https://dev.azure.com/myorg \
  --project MyProject
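Once created, a service connection is referenced by name from pipeline tasks. For example, the AzureCLI@2 task can run arbitrary az commands authenticated through it — a sketch, with an illustrative inline script:

```yaml
steps:
  - task: AzureCLI@2
    displayName: 'Run az commands via service connection'
    inputs:
      azureSubscription: 'Azure-Production-SC'  # service connection name
      scriptType: 'bash'
      scriptLocation: 'inlineScript'
      inlineScript: |
        az account show --output table
        az webapp list --query "[].name" --output tsv
```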

Azure DevOps Pipelines vs GitHub Actions

| Feature | Azure DevOps Pipelines | GitHub Actions |
| --- | --- | --- |
| YAML location | azure-pipelines.yml (root) | .github/workflows/*.yml |
| Stages | Native multi-stage | Jobs with needs |
| Environments | Built-in with approvals | Environment protection rules |
| Self-hosted runners | Agent pools | Self-hosted runners |
| Artifacts | Azure Artifacts (NuGet, npm, pip) | GitHub Packages |
| Marketplace | Tasks (curated) | Actions (community) |
| Free tier | 1,800 min/month, 1 parallel job (private); free for public projects | 2,000 min/month (private repos); free for public repos |
| Best for | Enterprise Azure shops, complex pipelines | Open-source, GitHub-native workflows |

Both are excellent. If your code lives in Azure Repos and you use Azure Boards for planning, Azure DevOps Pipelines integrates seamlessly. If your code is on GitHub, GitHub Actions is the natural fit.
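To make the "Jobs with needs" row concrete, here is a rough GitHub Actions equivalent of a two-stage build-then-deploy flow — a sketch, with illustrative workflow and job names:

```yaml
# .github/workflows/deploy.yml
name: deploy
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build

  deploy:
    needs: build             # GitHub Actions' answer to dependsOn
    runs-on: ubuntu-latest
    environment: production  # protection rules (approvals) attach here
    steps:
      - run: echo "deploy step goes here"
```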

Wrapping Up

Azure DevOps Pipelines gives you a battle-tested CI/CD platform that scales from a single developer to enterprise teams running hundreds of deployments per day. Start with a simple build pipeline, add stages for testing and deployment, and layer in approvals and gates for production safety. Store your pipeline YAML in the same repository as your code — when someone asks "how do we deploy this?" the answer is always in the repo.


Next up: We will explore Azure Monitor — collecting logs, metrics, and alerts to keep your infrastructure healthy and observable.