Helm Charts — The Package Manager for Kubernetes
You just deployed an NGINX ingress controller with one Helm command and it provisioned a Deployment, a Service, a ConfigMap, an IngressClass, RBAC roles, and a ServiceAccount — all wired together correctly. Imagine writing those 400+ lines of YAML by hand. That is the problem Helm solves.
Why Helm Exists
A typical Kubernetes application requires multiple manifests — Deployment, Service, ConfigMap, Secret, Ingress, HPA, PDB, ServiceAccount, and RBAC rules. Managing these raw YAML files across dev, staging, and production environments becomes painful fast. You end up with copy-pasted manifests, environment-specific diffs scattered everywhere, and no clean way to roll back.
Helm packages all those manifests into a single unit called a chart, adds templating so you can customize values per environment, and tracks releases so you can upgrade, roll back, and audit changes.
Core Concepts
| Concept | Description |
|---|---|
| Chart | A package of pre-configured Kubernetes resources (like an apt/yum package) |
| Release | A running instance of a chart (you can install the same chart multiple times) |
| Repository | A registry where charts are stored and shared |
| Values | Configuration that customizes a chart for your specific needs |
| Template | YAML manifests with Go template syntax that get rendered using values |
Installing Helm
# macOS
brew install helm
# Linux (script)
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# Verify
helm version
# version.BuildInfo{Version:"v3.14.x", ...}
Working with Repositories
# Add popular repositories
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add jetstack https://charts.jetstack.io
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
# Update repo index (like apt update)
helm repo update
# Search for charts
helm search repo nginx
helm search repo postgresql --versions # Show all available versions
helm search hub grafana # Search Artifact Hub
# List added repos
helm repo list
Installing, Upgrading, and Rolling Back
# Install a chart (creates a release)
helm install my-nginx bitnami/nginx --namespace web --create-namespace
# Install with custom values
helm install my-app bitnami/nginx \
  --namespace production \
  --set replicaCount=3 \
  --set service.type=ClusterIP \
  --set resources.requests.memory=128Mi
# Install with a values file (preferred for complex configs)
helm install my-app bitnami/nginx \
  --namespace production \
  -f production-values.yaml
# Upgrade a release
helm upgrade my-app bitnami/nginx \
  --namespace production \
  -f production-values.yaml \
  --set image.tag=1.26.0
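For reference, a production-values.yaml matching the --set flags shown above could look like this. The file itself is hypothetical; the key names follow the bitnami/nginx chart's values schema.

```yaml
# production-values.yaml — illustrative; keys follow the
# bitnami/nginx chart's values schema
replicaCount: 3
service:
  type: ClusterIP
resources:
  requests:
    memory: 128Mi
```

A values file like this lives in version control, which makes the configuration reviewable and diffable — something a pile of --set flags in a shell history is not.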
# View release history
helm history my-app -n production
# Rollback to a previous revision
helm rollback my-app 2 -n production
# Uninstall a release
helm uninstall my-app -n production
# List all releases
helm list -A
Chart Structure
Every Helm chart follows a standard directory layout:
my-chart/
  Chart.yaml            # Chart metadata (name, version, description)
  values.yaml           # Default configuration values
  charts/               # Dependency charts
  templates/            # Kubernetes manifest templates
    deployment.yaml
    service.yaml
    ingress.yaml
    configmap.yaml
    hpa.yaml
    serviceaccount.yaml
    _helpers.tpl        # Reusable template snippets
    NOTES.txt           # Post-install instructions shown to the user
  .helmignore           # Files to exclude from packaging
# Chart.yaml
apiVersion: v2
name: my-web-app
description: A Helm chart for deploying a web application
type: application
version: 1.2.0        # Chart version
appVersion: "3.5.1"   # Application version
maintainers:
  - name: Vivek Goel
    email: contact@goelacademy.com
dependencies:
  - name: postgresql
    version: "12.x.x"
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled
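The NOTES.txt file listed in the layout above is rendered with the same template engine and printed after every install or upgrade. A minimal sketch — the label selector assumes the standard labels that helm create generates:

```
Thank you for installing {{ .Chart.Name }}.

Your release is named {{ .Release.Name }}. To check the rollout:

  kubectl get pods -n {{ .Release.Namespace }} \
    -l app.kubernetes.io/instance={{ .Release.Name }}
```

Keeping a copy-pasteable "what to do next" command here saves every user of your chart a trip to the README.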
Go Template Syntax
Helm uses Go templates to inject values into your YAML manifests:
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-web-app.fullname" . }}
  labels:
    {{- include "my-web-app.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "my-web-app.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "my-web-app.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          ports:
            - containerPort: {{ .Values.containerPort }}
          {{- if .Values.resources }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- end }}
          {{- if .Values.env }}
          env:
            {{- range $key, $value := .Values.env }}
            - name: {{ $key }}
              value: {{ $value | quote }}
            {{- end }}
          {{- end }}
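The include calls in this template refer to named templates defined in _helpers.tpl. A sketch of what those helpers typically look like, modeled on what helm create generates (the nameOverride value is a common convention, assumed here):

```
{{/* templates/_helpers.tpl */}}
{{- define "my-web-app.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{- define "my-web-app.fullname" -}}
{{- printf "%s-%s" .Release.Name (include "my-web-app.name" .) | trunc 63 | trimSuffix "-" }}
{{- end }}

{{- define "my-web-app.selectorLabels" -}}
app.kubernetes.io/name: {{ include "my-web-app.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{- define "my-web-app.labels" -}}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
{{ include "my-web-app.selectorLabels" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
```

Defining names and labels once and including them everywhere keeps the selector labels on the Deployment and Service in sync — a mismatch there is a classic "my Service has no endpoints" bug.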
# values.yaml (defaults)
replicaCount: 1
image:
  repository: my-registry/my-web-app
  tag: ""
  pullPolicy: IfNotPresent
containerPort: 8080
service:
  type: ClusterIP
  port: 80
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
env:
  LOG_LEVEL: "info"
  NODE_ENV: "production"
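With these defaults, the env section of the deployment template renders to the following (Go templates iterate map keys in sorted order, and the quote function wraps each value in double quotes):

```yaml
env:
  - name: LOG_LEVEL
    value: "info"
  - name: NODE_ENV
    value: "production"
```

You can confirm this yourself with helm template, which is also the fastest way to debug indentation mistakes in nindent pipelines.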
Creating Your Own Chart
# Scaffold a new chart
helm create my-web-app
# This generates the full directory structure with sensible defaults
# Edit templates/ and values.yaml to match your application
# Lint your chart (catch errors before deploying)
helm lint my-web-app/
# Render templates locally without installing (see what YAML gets generated)
helm template my-release my-web-app/ -f staging-values.yaml
# Dry-run against the cluster (server-side validation)
helm install my-release my-web-app/ --dry-run --debug
# Package your chart for distribution
helm package my-web-app/
# Creates my-web-app-1.2.0.tgz
Multi-Environment Values Files
Use separate values files for each environment instead of maintaining separate manifests:
# values-dev.yaml
replicaCount: 1
image:
  tag: "latest"
resources:
  requests:
    cpu: 100m
    memory: 128Mi
env:
  LOG_LEVEL: "debug"
  DATABASE_HOST: "postgres-dev.default.svc"
# values-prod.yaml
replicaCount: 5
image:
  tag: "3.5.1"
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: "1Gi"
env:
  LOG_LEVEL: "warn"
  DATABASE_HOST: "postgres-prod.production.svc"
# Deploy to different environments
helm upgrade --install my-app ./my-web-app -f values-dev.yaml -n dev
helm upgrade --install my-app ./my-web-app -f values-prod.yaml -n production
Helm Dependencies
# After adding dependencies in Chart.yaml, pull them
helm dependency update my-web-app/
helm dependency list my-web-app/
# Dependencies are downloaded into charts/ directory
ls my-web-app/charts/
# postgresql-12.5.6.tgz
Override dependency values by nesting under the dependency name:
# values.yaml
postgresql:
  enabled: true
  auth:
    postgresPassword: "supersecret"
    database: "myapp"
  primary:
    persistence:
      size: 10Gi
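Bitnami-family charts also honor a shared global: block, visible to both the parent chart and all subcharts — handy for pointing every image at one registry. The key names below follow Bitnami conventions; check your dependency's documented values before relying on them:

```yaml
# values.yaml — global values shared with subcharts (Bitnami convention)
global:
  imageRegistry: my-registry.example.com
  storageClass: fast-ssd
```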
Popular Helm Charts Worth Knowing
# Ingress controller
helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace
# TLS certificate automation
helm install cert-manager jetstack/cert-manager -n cert-manager --create-namespace --set crds.enabled=true
# Monitoring stack (Prometheus + Grafana)
helm install monitoring prometheus-community/kube-prometheus-stack -n monitoring --create-namespace
# Check what a chart will deploy before installing
helm show values bitnami/nginx | head -50
helm show readme bitnami/nginx
Helm vs Kustomize
Both tools solve the multi-environment problem, but differently:
| Feature | Helm | Kustomize |
|---|---|---|
| Approach | Templating (Go templates) | Patching (overlay merges) |
| Learning curve | Steeper (template syntax) | Gentler (plain YAML overlays) |
| Packaging | Charts (.tgz) with versioning | Directories (no packaging) |
| Release management | Built-in (install, upgrade, rollback) | None (you use kubectl apply) |
| Dependencies | Built-in (charts/) | None |
| Community ecosystem | Huge (Artifact Hub, thousands of charts) | Smaller |
| Built into kubectl | No (separate binary) | Yes (kubectl apply -k) |
| Best for | Third-party apps, complex apps with many knobs | Simple overlays, GitOps workflows |
The short answer: use Helm for installing third-party software (databases, monitoring, ingress controllers) and Kustomize for your own application manifests where overlays are enough. Many teams use both.
Next, we will tackle RBAC — who can do what in your cluster, and how to stop giving everyone cluster-admin.
