
Artifact Management — JFrog, Nexus, and Container Registries

· 7 min read
Goel Academy
DevOps & Cloud Learning Hub

Your CI pipeline builds a Docker image. Where does it go? Your Java app produces a JAR file. Where is it stored? Your Terraform module is versioned and shared across 12 teams. Where do they find it? If your answer to any of these is "somewhere on the build server" or "I just rebuild it," you have an artifact management problem. And it will bite you the first time you need to roll back a production deployment at 2 AM and cannot find the last known-good binary.

What Are Build Artifacts?

A build artifact is any file produced by your build process that is needed for deployment, testing, or distribution. Artifacts are the output of your CI pipeline — the things that actually run in production.

| Artifact Type | Format | Example |
|---|---|---|
| Container Images | Docker/OCI | myapp:v1.2.3 |
| Java | JAR, WAR, EAR | api-service-1.2.3.jar |
| JavaScript/Node | npm package, tarball | @company/ui-lib-1.0.0.tgz |
| Python | Wheel, sdist | my_library-1.0.0-py3-none-any.whl |
| Go | Binary | myapp-linux-amd64 |
| Infrastructure | Helm chart, Terraform module | ingress-chart-0.3.1.tgz |
| Generic | ZIP, tarball, installer | release-v2.0.0.zip |

Why Artifact Management Matters

Without proper artifact management:

  • You cannot reproduce deployments. "Works on my machine" becomes "worked on that one build server that got rebuilt."
  • Rollbacks require re-building from source, which takes time you do not have during an outage.
  • You have no audit trail of what was deployed, when, and by whom.
  • Teams download dependencies from the public internet during builds, introducing supply chain risk and flaky builds.

With proper artifact management:

  • Every build produces an immutable, versioned artifact.
  • Deployments pull a known binary — no rebuilds, no surprises.
  • Rollbacks are instant — just redeploy the previous version.
  • Dependencies are proxied and cached locally, so builds are faster and more reliable.
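The "known binary, no surprises" guarantee is usually backed by checksums: publish a digest next to every artifact and verify it at deploy time. A minimal sketch, using a stand-in file in place of a real build output:

```shell
# Create a stand-in artifact (in a real pipeline this comes from the build step)
printf 'example artifact contents\n' > release-v1.0.0.zip

# At publish time, record the SHA-256 digest alongside the artifact
sha256sum release-v1.0.0.zip > release-v1.0.0.zip.sha256

# At deploy time, verify the downloaded file against the recorded digest;
# sha256sum -c exits non-zero on any mismatch, which should fail the deploy
sha256sum -c release-v1.0.0.zip.sha256
```

Most repository managers compute and expose these digests for you; the point is that the deploy step checks them instead of trusting whatever it downloaded.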

JFrog Artifactory

JFrog Artifactory is the Swiss Army knife of artifact repositories. It supports every package format and acts as a universal repository manager.

# Install Artifactory OSS with Docker
docker run -d --name artifactory \
  -p 8082:8082 -p 8081:8081 \
  -v artifactory_data:/var/opt/jfrog/artifactory \
  releases-docker.jfrog.io/jfrog/artifactory-oss:latest

# Access UI at http://localhost:8082
# Default credentials: admin / password

# Configure Docker registry in Artifactory
# Create a local Docker repository named "docker-local"
# Then configure Docker to use it:
docker login localhost:8082
docker tag myapp:latest localhost:8082/docker-local/myapp:v1.0.0
docker push localhost:8082/docker-local/myapp:v1.0.0

# Configure npm to use Artifactory as a registry
npm config set registry http://localhost:8082/artifactory/api/npm/npm-remote/

# Upload a generic artifact via REST API
curl -u admin:password \
  -T ./build/release-v1.0.0.zip \
  "http://localhost:8082/artifactory/generic-local/releases/v1.0.0/release.zip"

Key Artifactory features:

  • Virtual repositories: Aggregate multiple local and remote repos behind one URL
  • Remote repositories: Proxy and cache external registries (Docker Hub, npm, PyPI)
  • Build integration: Attach build info (Git commit, CI job, dependencies) to artifacts
  • Replication: Replicate artifacts across data centers for geo-distributed teams

Nexus Repository

Sonatype Nexus Repository is one of the most widely used artifact repositories. The OSS version is free and covers most common use cases.

# Run Nexus OSS with Docker
docker run -d --name nexus \
  -p 8081:8081 \
  -v nexus_data:/nexus-data \
  sonatype/nexus3:latest

# Access UI at http://localhost:8081
# Get initial admin password:
docker exec nexus cat /nexus-data/admin.password

# Configure Maven to use Nexus (settings.xml)
cat > ~/.m2/settings.xml << 'SETTINGS'
<settings>
  <mirrors>
    <mirror>
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://localhost:8081/repository/maven-public/</url>
    </mirror>
  </mirrors>
  <servers>
    <server>
      <id>nexus</id>
      <username>admin</username>
      <password>your-password</password>
    </server>
  </servers>
</settings>
SETTINGS

# Deploy a JAR to Nexus
mvn deploy -DaltDeploymentRepository=nexus::default::http://localhost:8081/repository/maven-releases/

Container Registries Compared

For Docker and OCI images specifically, you have many options:

| Registry | Type | Free Tier | Private Repos | Vulnerability Scanning | Best For |
|---|---|---|---|---|---|
| Docker Hub | SaaS | 1 private repo | Paid plans | Paid | Open source, public images |
| GitHub (GHCR) | SaaS | Unlimited private | Yes | Yes (Dependabot) | GitHub-native workflows |
| AWS ECR | Cloud | 500 MB/month | Yes | Yes (built-in) | AWS workloads |
| Azure ACR | Cloud | Basic tier | Yes | Yes (Defender) | Azure workloads |
| Google GAR | Cloud | 500 MB/month | Yes | Yes (built-in) | GCP workloads |
| Harbor | Self-hosted | Free (OSS) | Yes | Yes (Trivy) | Air-gapped, on-prem, compliance |

# AWS ECR — Create repo and push image
aws ecr create-repository --repository-name myapp --region us-east-1

# Login to ECR
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com

# Tag and push
docker tag myapp:latest 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:v1.0.0
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:v1.0.0

# GitHub Container Registry (GHCR)
echo $GITHUB_TOKEN | docker login ghcr.io -u USERNAME --password-stdin
docker tag myapp:latest ghcr.io/myorg/myapp:v1.0.0
docker push ghcr.io/myorg/myapp:v1.0.0

Artifact Repository Comparison

| Feature | JFrog Artifactory | Nexus OSS | Nexus Pro | Harbor |
|---|---|---|---|---|
| Price | Free (OSS) / $$$ (Pro) | Free | $$$ | Free |
| Docker | Yes | Yes | Yes | Yes |
| npm | Yes | Yes | Yes | No |
| Maven/Gradle | Yes | Yes | Yes | No |
| PyPI | Yes | Yes | Yes | No |
| Helm | Yes | Yes | Yes | Yes |
| Replication | Pro only | No (Pro only) | Yes | Yes |
| Vulnerability Scan | Xray (paid) | No | Lifecycle | Trivy (built-in) |
| RBAC | Yes | Basic | Yes | Yes |
| REST API | Excellent | Good | Good | Good |
| Build Info | Yes | No | Limited | No |

Versioning Strategies

How you version your artifacts matters more than you think:

# Semantic Versioning (SemVer) — for libraries and APIs
# MAJOR.MINOR.PATCH
# 1.0.0 → Initial release
# 1.1.0 → New feature (backwards compatible)
# 1.1.1 → Bug fix
# 2.0.0 → Breaking change

# Git SHA tagging — for application deployments
# Ties the artifact directly to a commit
docker tag myapp:latest myregistry/myapp:a1b2c3d
docker tag myapp:latest myregistry/myapp:main-a1b2c3d-1234 # branch-sha-buildnum

# Calendar versioning (CalVer) — for projects with time-based releases
# YYYY.MM.DD or YYYY.MM.PATCH
# 2025.06.07
# 2025.06.1

# NEVER use "latest" in production
# "latest" is mutable — it changes every time you push
# You cannot roll back to "latest" because it is a moving target
docker tag myapp:latest myregistry/myapp:v1.2.3 # DO THIS
docker tag myapp:latest myregistry/myapp:latest # ONLY for development

Cleanup and Retention Policies

Artifacts accumulate fast. Without cleanup policies, your storage costs will grow without bound.

# Nexus — Configure cleanup policy via REST API
curl -u admin:password -X POST \
  'http://localhost:8081/service/rest/v1/lifecycle/cleanup' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "delete-old-snapshots",
    "format": "docker",
    "mode": "delete",
    "criteria": {
      "lastDownloaded": 30,
      "lastBlobUpdated": 60
    }
  }'

# AWS ECR — Lifecycle policy (delete untagged images older than 7 days)
aws ecr put-lifecycle-policy \
  --repository-name myapp \
  --lifecycle-policy-text '{
    "rules": [
      {
        "rulePriority": 1,
        "description": "Delete untagged images older than 7 days",
        "selection": {
          "tagStatus": "untagged",
          "countType": "sinceImagePushed",
          "countUnit": "days",
          "countNumber": 7
        },
        "action": { "type": "expire" }
      },
      {
        "rulePriority": 2,
        "description": "Keep only last 10 tagged images",
        "selection": {
          "tagStatus": "tagged",
          "tagPrefixList": ["v"],
          "countType": "imageCountMoreThan",
          "countNumber": 10
        },
        "action": { "type": "expire" }
      }
    ]
  }'
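The intent of rule 2 — keep only the newest N — can be sketched in plain shell. This is a toy simulation, not a registry API call; it assumes a list of tags sorted oldest-first, and everything before the last `keep` entries is what the policy would expire:

```shell
# Hypothetical retention sketch: print the tags a keep-last-3 policy would delete
tags="v1.0.0 v1.1.0 v1.2.0 v1.3.0 v2.0.0"   # oldest first
keep=3

total=$(printf '%s\n' $tags | wc -l)
delete=$((total - keep))
if [ "$delete" -gt 0 ]; then
  printf '%s\n' $tags | head -n "$delete"   # the oldest two tags get expired
fi
```

Registries apply the same arithmetic server-side; the value of writing it out is seeing that "keep last N" is a deletion rule, so anything you might roll back to must stay inside the window.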

CI/CD Integration

Your CI pipeline should build once, push the artifact, and then every subsequent stage pulls that exact artifact:

# .github/workflows/build-and-push.yml
name: Build and Push

on:
  push:
    branches: [main]
    tags: ["v*"]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - uses: actions/checkout@v4

      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,prefix=
            type=ref,event=branch
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

Security Scanning

Never deploy an artifact without scanning it first:

# Scan with Trivy (open source, works with any registry)
trivy image myregistry/myapp:v1.0.0

# Scan and fail CI if critical vulnerabilities found
trivy image --exit-code 1 --severity CRITICAL myregistry/myapp:v1.0.0

# Scan a filesystem (for npm/pip/go vulnerabilities)
trivy fs --scanners vuln,secret,misconfig .   # older Trivy releases used --security-checks

# Grype (by Anchore) — alternative scanner
grype myregistry/myapp:v1.0.0

Your pipeline should: build the artifact, scan it, push it to the registry (only if scan passes), and then deploy the exact same artifact to each environment. Build once, deploy everywhere.
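The "push only if the scan passes" gate is just exit-code plumbing. A sketch with the scanner and registry push stubbed out — a real pipeline would call `trivy image --exit-code 1 ...` and `docker push` where `scan` and `push` appear here:

```shell
# Stubbed gate: scan() stands in for the real scanner, push() for docker push.
# SCAN_RESULT simulates the scanner's exit code (0 = clean, non-zero = findings).
scan() { return "${SCAN_RESULT:-0}"; }
push() { echo "pushed $1"; }

IMAGE=myregistry/myapp:v1.0.0
if scan "$IMAGE"; then
  push "$IMAGE"
else
  echo "scan failed, refusing to push $IMAGE" >&2
  exit 1
fi
```

Because the push lives inside the `if`, a vulnerable image can never reach the registry, which in turn means nothing downstream can deploy it.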


You have your binaries stored, versioned, and scanned. But how do you configure the servers that run them? In the next post, we will compare the big four configuration management tools — Ansible, Puppet, Chef, and SaltStack — and figure out which one fits your team.