BuildKit — The Next-Generation Docker Build Engine
Docker's legacy builder processes Dockerfile instructions sequentially, one layer at a time, even when stages have no dependency on each other. BuildKit changes this fundamentally — it analyzes the build graph, parallelizes independent stages, provides smarter caching, and adds features that were simply impossible before: secret mounts that never leak into image layers, SSH forwarding for private repos, and building images for architectures your machine does not even run.
What Is BuildKit
BuildKit is a replacement for Docker's legacy build engine. It was developed as a separate project (moby/buildkit); it has long been the default builder in Docker Desktop and became the default in Docker Engine with version 23.0. On Linux servers running older Docker versions, you may need to enable it explicitly.
Key improvements over the legacy builder:
- Parallel execution of independent build stages.
- Better cache management with cache mounts and external cache sources.
- Secret mounts that keep credentials out of image layers.
- SSH forwarding for accessing private Git repos during builds.
- Multi-platform builds from a single command.
- Improved output with progress tracking and build duration.
Enabling BuildKit
# Option 1: Environment variable (per command)
DOCKER_BUILDKIT=1 docker build -t myapp .
# Option 2: Export for the session
export DOCKER_BUILDKIT=1
docker build -t myapp .
# Option 3: Enable permanently in Docker daemon config
# /etc/docker/daemon.json
{
  "features": {
    "buildkit": true
  }
}
# Restart Docker after editing daemon.json
sudo systemctl restart docker
# Verify BuildKit is active — look for the buildkit progress output
docker build -t test .
# [+] Building 12.5s (8/8) FINISHED docker:default
# => [internal] load build definition from Dockerfile 0.0s
# => [internal] load .dockerignore 0.0s
# => [internal] load metadata for docker.io/library/node 0.5s
If you see [+] Building with the arrow-style progress, BuildKit is active. The legacy builder shows Step 1/8 : style output.
Parallelized Build Stages
The biggest performance win comes from parallel execution. BuildKit builds a dependency graph and runs independent stages simultaneously.
# These two stages have NO dependency on each other
# BuildKit runs them IN PARALLEL
FROM node:20-alpine AS frontend
WORKDIR /app/frontend
COPY frontend/package*.json ./
RUN npm ci
COPY frontend/ ./
RUN npm run build
FROM python:3.12-slim AS backend
WORKDIR /app/backend
COPY backend/requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY backend/ ./
# Final stage depends on both — waits for both to finish
FROM python:3.12-slim
WORKDIR /app
COPY --from=backend /app/backend ./
COPY --from=frontend /app/frontend/dist ./static
CMD ["python", "main.py"]
With the legacy builder, the frontend stage must finish completely before the backend stage starts. With BuildKit, both stages build at the same time, which can cut total build time roughly in half when the stages take similar time.
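You can watch the parallelism happen. The default fancy progress display collapses finished steps; plain progress output prints every step's log lines with timestamps, so the interleaved frontend and backend stages are visible (assumes the two-stage Dockerfile above):

```shell
# Plain progress prints each step's output as it happens; with BuildKit
# you will see frontend and backend step logs interleaved, not one
# stage strictly after the other.
docker build --progress=plain -t myapp .
```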
Cache Mounts
Cache mounts persist a directory across builds without including it in the final image. This is transformative for package managers that download the internet on every build.
# WITHOUT cache mount — downloads all packages every time the lockfile changes
RUN pip install -r requirements.txt
# WITH cache mount — package cache persists between builds
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
# Node.js — cache npm/yarn downloads
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci
COPY . .
RUN npm run build
# Go — cache module downloads and build cache
FROM golang:1.22-alpine
WORKDIR /app
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    go mod download
COPY . .
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    go build -o /app/server .
# apt packages — cache the download directory
# Debian/Ubuntu images ship an apt hook (docker-clean) that deletes
# downloaded .deb files after install; remove it or the cache mount
# stays empty. sharing=locked serializes concurrent apt runs.
FROM ubuntu:24.04
RUN rm -f /etc/apt/apt.conf.d/docker-clean
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt/lists,sharing=locked \
    apt-get update && apt-get install -y curl git
Cache mounts are stored on the builder host. They survive between builds but are not included in the image layers, so they do not increase image size.
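Because the caches live on the builder host, they consume disk there. You can inspect how much space the build cache is using and reclaim it with the standard buildx commands:

```shell
# Show per-record build cache disk usage, including the caches
# created by --mount=type=cache
docker buildx du --verbose

# Remove all build cache, cache mounts included; the next build
# re-downloads packages from scratch
docker builder prune --all
```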
Secret Mounts
Secret mounts let you use credentials during the build without them ever appearing in the image history or layers. This solves the problem of needing private registry access, API keys, or tokens during RUN commands.
# syntax=docker/dockerfile:1
# Use a secret during the build
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc \
    npm ci
# Access secret as an environment variable
RUN --mount=type=secret,id=api_key \
    API_KEY=$(cat /run/secrets/api_key) && \
    curl -H "Authorization: Bearer $API_KEY" https://api.example.com/data > config.json
# Pass secrets at build time
docker build --secret id=npmrc,src=$HOME/.npmrc -t myapp .
docker build --secret id=api_key,src=./api_key.txt -t myapp .
Unlike ARG or ENV, secrets are never stored in any image layer. Running docker history on the built image reveals no trace of the secret.
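You can check this yourself after building (using the `myapp` tag from the commands above). A value passed via ARG would appear in plain text in the recorded build commands; a secret mount leaves nothing behind:

```shell
# Print every layer-creating command recorded in the image metadata.
# With --mount=type=secret, the secret value appears nowhere in this
# output, whereas an ARG value would be visible here in plain text.
docker history --no-trunc myapp
```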
SSH Forwarding for Private Repos
Need to clone a private Git repository during the build? SSH forwarding uses your host's SSH agent without copying keys into the image.
# syntax=docker/dockerfile:1
FROM golang:1.22-alpine
RUN apk add --no-cache git openssh-client
# Configure Git to use SSH for private repos
RUN mkdir -p /root/.ssh && \
    ssh-keyscan github.com >> /root/.ssh/known_hosts
# Clone private repo using forwarded SSH agent
RUN --mount=type=ssh \
    git clone git@github.com:myorg/private-lib.git /app/lib
# Or for Go private modules (quoted so the shell does not expand the *)
RUN --mount=type=ssh \
    GOPRIVATE="github.com/myorg/*" go mod download
# Build with SSH agent forwarding
docker build --ssh default -t myapp .
# Or specify a specific SSH key
docker build --ssh default=$HOME/.ssh/id_ed25519 -t myapp .
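`--ssh default` forwards your running SSH agent, so an agent must be active with the key loaded before you build. A typical setup, reusing the key path from above:

```shell
# Start an agent and load the key before building
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# Verify the key is loaded, then build with agent forwarding
ssh-add -l
docker build --ssh default -t myapp .
```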
Multi-Platform Builds with buildx
docker buildx extends BuildKit to build images for multiple CPU architectures from a single command. Build an ARM image on your x86 laptop — no cross-compilation setup needed.
# Create a buildx builder instance
docker buildx create --name multiarch --driver docker-container --use
# Bootstrap the builder
docker buildx inspect multiarch --bootstrap
# Build for multiple platforms and push to a registry
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag myregistry.com/myapp:latest \
  --push .
# Build for a single foreign architecture (e.g., ARM on an x86 host)
docker buildx build \
  --platform linux/arm64 \
  --tag myapp:arm64 \
  --load .
# List supported platforms
docker buildx ls
# NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
# multiarch docker-container
# multiarch0 unix:///var/run/docker.sock running v0.12.4 linux/amd64, linux/arm64, linux/arm/v7
Multi-platform builds use QEMU emulation under the hood. The build is slower than native, but the result is a manifest list from which Docker automatically pulls the right architecture on each host.
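If a foreign-architecture build fails with an "exec format error", the host is missing the QEMU binfmt handlers. Docker Desktop ships them preinstalled; on a plain Linux host you can register them with the tonistiigi/binfmt helper image:

```shell
# Register QEMU emulators for foreign architectures with the kernel
docker run --privileged --rm tonistiigi/binfmt --install all

# Confirm the emulated platforms now appear in the builder's list
docker buildx ls
```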
# Dockerfile that works across platforms
FROM --platform=$BUILDPLATFORM golang:1.22-alpine AS builder
ARG TARGETOS TARGETARCH
WORKDIR /app
COPY . .
# CGO_ENABLED=0 avoids needing a C cross-toolchain and yields a static binary
RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /app/server .
FROM alpine:3.19
COPY --from=builder /app/server /usr/local/bin/server
CMD ["server"]
The $BUILDPLATFORM, $TARGETOS, and $TARGETARCH variables are injected by BuildKit automatically.
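After pushing a multi-platform build, you can confirm the registry actually holds a manifest list with one entry per architecture (using the example tag from earlier):

```shell
# Inspect the manifest list in the registry; expect one manifest
# per platform, e.g. linux/amd64 and linux/arm64
docker buildx imagetools inspect myregistry.com/myapp:latest
```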
Remote Builders
BuildKit can offload builds to a remote machine, useful when your CI runner is underpowered or when you need native ARM builds.
# Create a remote builder via SSH
docker buildx create --name remote-builder \
  --driver docker-container \
  --platform linux/amd64 \
  ssh://user@build-server.example.com
# Create a multi-node builder (x86 + ARM native)
docker buildx create --name hybrid \
  --driver docker-container \
  --platform linux/amd64 \
  ssh://user@amd64-server.example.com
docker buildx create --name hybrid --append \
  --driver docker-container \
  --platform linux/arm64 \
  ssh://user@arm64-server.example.com
docker buildx use hybrid
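With the hybrid builder selected, a multi-platform build dispatches each platform to its matching native node, avoiding QEMU emulation entirely. A sketch reusing the servers defined above:

```shell
# Each platform builds on the native node of the "hybrid" builder,
# then both results are pushed together as one manifest list
docker buildx build \
  --builder hybrid \
  --platform linux/amd64,linux/arm64 \
  --tag myregistry.com/myapp:latest \
  --push .
```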
BuildKit vs Legacy Builder
| Feature | Legacy Builder | BuildKit |
|---|---|---|
| Stage execution | Sequential | Parallel (dependency graph) |
| Cache | Layer-based only | Layer + cache mounts + external cache |
| Secrets | Not supported (use ARG, which leaks) | --mount=type=secret (never in layers) |
| SSH forwarding | Not supported | --mount=type=ssh |
| Multi-platform | Not supported | docker buildx build --platform |
| Output formats | Image only | Image, tar, OCI, local directory |
| Progress output | Step-by-step text | Parallel progress with timing |
| Garbage collection | Manual | Automatic build cache GC |
| Build context | Sent entirely upfront | Lazy — only files actually used are sent |
The lazy context transfer is an underrated feature. If your build context is 2 GB but the Dockerfile only COPYs 50 MB of files, the legacy builder sends all 2 GB while BuildKit only transfers the 50 MB actually needed.
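Lazy transfer only helps when the Dockerfile is selective: a COPY . . still pulls in everything the context contains, so a .dockerignore remains worth keeping. An illustrative example (entries are placeholders for whatever your project produces):

```
# .dockerignore: keep heavyweight, non-build files out of the context
.git
node_modules
dist
*.log
.env
```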
Dockerfile Syntax Directive
BuildKit supports a # syntax directive on the first line of your Dockerfile to pin the Dockerfile frontend (parser) version. This lets you use newer Dockerfile features without upgrading Docker itself.
# syntax=docker/dockerfile:1
# This tells BuildKit to use the latest 1.x Dockerfile frontend
# You get access to all the latest features: cache mounts, secrets, etc.
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
RUN npm run build
# You can even pin an exact release or use third-party frontends
# (the directive must still be the first line of its Dockerfile):
# syntax=docker/dockerfile:1.7-labs        (experimental "labs" features)
# syntax=tonistiigi/dockerfile:runmount    (community extensions)
Wrapping Up
BuildKit is not an optional optimization — it is a fundamentally better build engine. Parallel stage execution makes multi-stage builds dramatically faster. Cache mounts eliminate redundant package downloads. Secret mounts solve the long-standing problem of build-time credentials leaking into image layers. And buildx gives you multi-platform builds that would have required dedicated hardware and complex CI pipelines just a few years ago. If you are still seeing Step 1/8 : in your build output, you are leaving performance and security on the table.
In the next post, we will cover Rootless Docker — how to run the Docker daemon and containers without root privileges, eliminating an entire class of container escape vulnerabilities.
