Docker in CI/CD — Build, Test, and Push Images Automatically

· 7 min read
Goel Academy
DevOps & Cloud Learning Hub

You are building Docker images on your laptop and pushing them to production with docker push. It works until it does not — someone forgets to run tests, pushes a debug build, or tags latest over a stable release. CI/CD pipelines eliminate these human errors by making every build reproducible, tested, and traceable to a specific commit.

Why Docker in CI/CD

Manual Docker workflows break in predictable ways:

  • "It works on my machine." Your laptop has cached layers and local files that the build depends on without anyone realizing it.
  • Forgotten tests. You skip tests "just this once" before pushing. The image is broken in production.
  • Tag confusion. Two people push to latest an hour apart. Which version is running?
  • No audit trail. Who built this image? From which commit? With which Dockerfile?

A CI/CD pipeline solves all of these. Every image is built from a clean environment, tested automatically, tagged with a git SHA, and pushed only if all checks pass.
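The traceability part is simple to sketch. Here is a minimal shell step, the kind any CI system can run, that derives a unique tag from the current commit (the image name `myapp` and the fallback SHA are placeholders):

```shell
# Derive a unique, traceable image tag from the current commit.
# Falls back to a placeholder SHA when run outside a git repository.
SHORT_SHA=$(git rev-parse --short=7 HEAD 2>/dev/null || echo "abc1234")
IMAGE="myapp:${SHORT_SHA}"
echo "$IMAGE"    # e.g. myapp:abc1234

# In a real pipeline you would then run:
# docker build -t "$IMAGE" .
```

Because the tag is derived from the commit, two builds of the same commit produce the same tag, and no two commits ever collide.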

GitHub Actions: Complete Workflow

Here is a production-ready workflow that builds, tests, scans, and pushes a Docker image on every push to main and on every pull request.

# .github/workflows/docker.yml
name: Docker Build and Push

on:
  push:
    branches: [main]
    tags: ['v*']
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to Container Registry
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels)
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          # format=long makes the SHA tag match github.sha in the scan step below
          tags: |
            type=sha,prefix=,format=long
            type=ref,event=branch
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=raw,value=latest,enable={{is_default_branch}}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          # On pull requests the image is not pushed, so load it into the
          # local Docker daemon to make it available for the scan step
          load: ${{ github.event_name == 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
          format: table
          exit-code: 1
          severity: CRITICAL,HIGH
This workflow does six things: checks out code, sets up BuildKit for advanced features, authenticates to GitHub Container Registry, generates smart tags, builds and pushes the image with layer caching, and scans for vulnerabilities.

GitLab CI with Docker

GitLab CI has native Docker support with Docker-in-Docker (DinD) or the Kaniko builder.

# .gitlab-ci.yml
stages:
  - build
  - test
  - scan
  - push

variables:
  DOCKER_IMAGE: $CI_REGISTRY_IMAGE
  DOCKER_TAG: $CI_COMMIT_SHORT_SHA

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build --cache-from $DOCKER_IMAGE:latest -t $DOCKER_IMAGE:$DOCKER_TAG .
    - docker push $DOCKER_IMAGE:$DOCKER_TAG
  only:
    - main
    - merge_requests

test:
  stage: test
  image: docker:24
  services:
    - docker:24-dind
  before_script:
    # Log in before pulling — the project registry requires authentication
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker pull $DOCKER_IMAGE:$DOCKER_TAG
    - docker run --rm $DOCKER_IMAGE:$DOCKER_TAG npm test

scan:
  stage: scan
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  variables:
    # Trivy needs registry credentials to pull the image
    TRIVY_USERNAME: $CI_REGISTRY_USER
    TRIVY_PASSWORD: $CI_REGISTRY_PASSWORD
  script:
    - trivy image --exit-code 1 --severity CRITICAL $DOCKER_IMAGE:$DOCKER_TAG

push-latest:
  stage: push
  image: docker:24
  services:
    - docker:24-dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker pull $DOCKER_IMAGE:$DOCKER_TAG
    - docker tag $DOCKER_IMAGE:$DOCKER_TAG $DOCKER_IMAGE:latest
    - docker push $DOCKER_IMAGE:latest
  only:
    - main

Jenkins with Docker Agent

Jenkins can use Docker containers as build agents, eliminating the need to install build tools on Jenkins nodes.

// Jenkinsfile
pipeline {
    agent {
        docker {
            image 'docker:24'
            args '-v /var/run/docker.sock:/var/run/docker.sock'
        }
    }

    environment {
        REGISTRY   = 'your-registry.com'
        IMAGE_NAME = 'myapp'
        IMAGE_TAG  = "${env.GIT_COMMIT[0..7]}"  // first 8 characters of the commit SHA
    }

    stages {
        stage('Build') {
            steps {
                sh "docker build -t ${REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG} ."
            }
        }

        stage('Test') {
            steps {
                sh "docker run --rm ${REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG} npm test"
            }
        }

        stage('Scan') {
            steps {
                sh "docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy image --exit-code 1 --severity HIGH,CRITICAL ${REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}"
            }
        }

        stage('Push') {
            when { branch 'main' }
            steps {
                withCredentials([usernamePassword(
                    credentialsId: 'docker-registry',
                    usernameVariable: 'DOCKER_USER',
                    passwordVariable: 'DOCKER_PASS'
                )]) {
                    // Single quotes so the shell, not Groovy, expands the
                    // credentials — Groovy interpolation would leak the secret
                    sh 'echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin $REGISTRY'
                    sh "docker push ${REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}"
                    sh "docker tag ${REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG} ${REGISTRY}/${IMAGE_NAME}:latest"
                    sh "docker push ${REGISTRY}/${IMAGE_NAME}:latest"
                }
            }
        }
    }
}

Docker Layer Caching in CI

CI environments start fresh every run. Without caching, every layer rebuilds from scratch — downloading dependencies, installing packages, compiling code. With layer caching, a build that takes many minutes cold can finish in well under a minute when only application code has changed.

GitHub Actions Cache

# Using the GitHub Actions cache backend (recommended)
- name: Build and push
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: myapp:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max

Registry-Based Cache

# Cache layers in the registry itself
- name: Build and push
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: myregistry/myapp:latest
    cache-from: type=registry,ref=myregistry/myapp:buildcache
    cache-to: type=registry,ref=myregistry/myapp:buildcache,mode=max

BuildKit Inline Cache

# Build with inline cache metadata
docker build --build-arg BUILDKIT_INLINE_CACHE=1 -t myapp:latest .
docker push myapp:latest

# Next build uses the pushed image as cache source
docker build --cache-from myapp:latest -t myapp:latest .

Multi-Platform Builds with buildx

Build images for multiple architectures (AMD64, ARM64) from a single CI pipeline. Essential if you deploy to AWS Graviton, Apple Silicon Macs, or Raspberry Pi.

# GitHub Actions multi-platform build
- name: Set up QEMU
  uses: docker/setup-qemu-action@v3

- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3

- name: Build and push multi-platform
  uses: docker/build-push-action@v5
  with:
    context: .
    platforms: linux/amd64,linux/arm64
    push: true
    tags: myregistry/myapp:latest

# Manual multi-platform build
docker buildx create --name multibuilder --use
docker buildx build --platform linux/amd64,linux/arm64 \
  -t myregistry/myapp:latest --push .

Automated Tagging Strategies

Good tagging makes it possible to trace any running container back to its source code.

# Tag with git SHA (always unique, traceable)
docker build -t myapp:abc1234 .

# Tag with semver (for releases)
docker build -t myapp:v2.1.0 -t myapp:v2.1 -t myapp:v2 .

# Tag with branch name (for staging/preview)
docker build -t myapp:feature-login .

# Tag with date (for nightly builds)
docker build -t myapp:nightly-2025-07-05 .

| Tag Strategy | Use Case                         | Example            |
|--------------|----------------------------------|--------------------|
| Git SHA      | Every build, exact traceability  | myapp:a1b2c3d      |
| Semver       | Releases                         | myapp:v2.1.0       |
| Branch name  | Staging/preview environments     | myapp:develop      |
| latest       | Most recent main build           | myapp:latest       |
| Date-based   | Nightly builds                   | myapp:2025-07-05   |

Never rely solely on latest. It gives you no way to roll back or know what is running.
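When cutting a release, the floating aliases in the table above can be derived from the full version with plain shell parameter expansion — a minimal sketch with an illustrative version value:

```shell
# Expand a semver release tag into its floating aliases
VERSION="v2.1.0"
MINOR="${VERSION%.*}"   # strips ".0" -> v2.1
MAJOR="${MINOR%.*}"     # strips ".1" -> v2
echo "$VERSION $MINOR $MAJOR"   # → v2.1.0 v2.1 v2

# In a real pipeline, apply all three to the same image:
# docker tag myapp:$VERSION myapp:$MINOR
# docker tag myapp:$VERSION myapp:$MAJOR
```

All three tags point at the same image, so consumers can pin as loosely or tightly as they like while the full version stays unambiguous.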

Pushing to Different Registries

# Docker Hub
docker login -u myuser
docker push myuser/myapp:v1.0

# AWS ECR
aws ecr get-login-password --region us-east-1 | \
docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:v1.0

# Azure ACR
az acr login --name myregistry
docker push myregistry.azurecr.io/myapp:v1.0

# GitHub Container Registry (GHCR)
echo $GITHUB_TOKEN | docker login ghcr.io -u USERNAME --password-stdin
docker push ghcr.io/myorg/myapp:v1.0

Integration Testing with docker-compose

Use Compose in CI to spin up the full application stack, run integration tests, and tear everything down.

# docker-compose.test.yml
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: testpass
      POSTGRES_DB: testdb
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 5s
      retries: 5

  api:
    build: .
    depends_on:
      db:
        condition: service_healthy
    environment:
      DATABASE_URL: postgresql://postgres:testpass@db:5432/testdb
      NODE_ENV: test

  test-runner:
    build:
      context: .
      target: tester
    depends_on:
      api:
        condition: service_started
    environment:
      API_URL: http://api:3000
    command: npm run test:integration

# Run in CI
docker compose -f docker-compose.test.yml up --build --abort-on-container-exit --exit-code-from test-runner

# Clean up
docker compose -f docker-compose.test.yml down -v

The --exit-code-from test-runner flag makes the compose command return the exit code of the test runner container, so CI fails if tests fail.

Wrapping Up

Docker in CI/CD transforms container builds from a manual, error-prone process into an automated, auditable pipeline. Start with GitHub Actions if you are on GitHub — the docker/build-push-action handles 90% of use cases. Add layer caching to keep builds fast, Trivy scanning to catch vulnerabilities, and git SHA tagging for traceability. Use docker-compose for integration tests that spin up the full stack.

In the next post, we will cover Docker Logging — from docker logs basics to centralized logging with the ELK stack and Loki, including logging drivers, structured logs, and log rotation.