Kubernetes Dev & Ops in Practice 5 — Helm in Practice
Helm Chart structure design, environment separation with values files, safe deployment and rollback strategies. A practical guide to managing Helm systematically in production.
TestForge Team
Why You Need Helm
Managing Kubernetes manifests with plain `kubectl apply -f` quickly hits a wall:
- Image tags, replicas, and domains differ across dev/staging/prod
- Settings are repeated across many files
- Hard to track which version is deployed in which cluster
- Rolling back to a previous state is unclear
Helm solves this with a Chart (templates) + Values (per-environment config) + Release (deployment history) model.
1. Chart Structure
```
my-app/
├── Chart.yaml
├── values.yaml            # Defaults
├── values-dev.yaml
├── values-staging.yaml
├── values-prod.yaml
└── templates/
    ├── _helpers.tpl
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    ├── configmap.yaml
    ├── hpa.yaml
    └── NOTES.txt
```
Chart.yaml
```yaml
apiVersion: v2
name: my-app
description: My Application Helm Chart
type: application
version: 0.3.1        # Chart version: bump on any chart change
appVersion: "1.2.3"   # Application version: the image you ship
dependencies:
  - name: redis
    version: "19.x.x"
    repository: https://charts.bitnami.com/bitnami
    condition: redis.enabled
```
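Declared dependencies are not vendored automatically; they are fetched into the chart's `charts/` directory before install or packaging. A typical step, assuming the chart lives in `./my-app`:

```shell
# Download the redis chart declared in Chart.yaml into my-app/charts/
helm dependency update ./my-app

# Show which dependency versions were resolved
helm dependency list ./my-app
```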
2. values.yaml — Default Value Design
```yaml
replicaCount: 1

image:
  repository: my-registry/my-app
  tag: ""                  # Empty tag falls back to Chart.yaml appVersion
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80
  targetPort: 8080

ingress:
  enabled: false
  className: nginx
  host: ""
  tls: false

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

env: {}          # Key-value map rendered into container env vars

redis:
  enabled: false # Toggles the Redis dependency declared in Chart.yaml

nodeSelector: {}
tolerations: []
affinity: {}
```
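Each environment file then overrides only what differs from the defaults. A hypothetical `values-dev.yaml` (the host and env values are illustrative) might be as small as:

```yaml
# values-dev.yaml -- hypothetical dev overrides;
# everything not listed here inherits from values.yaml
replicaCount: 1
ingress:
  enabled: true
  host: dev.example.com
env:
  LOG_LEVEL: debug
```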
values-prod.yaml — Production Overrides
```yaml
replicaCount: 3

image:
  pullPolicy: Always

ingress:
  enabled: true
  host: api.example.com
  tls: true

resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: "2"
    memory: 2Gi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 20

redis:
  enabled: true
  auth:
    enabled: true
```
3. Template Authoring
_helpers.tpl — Common Name Functions
```yaml
{{- define "my-app.name" -}}
{{- .Chart.Name | trunc 63 | trimSuffix "-" }}
{{- end }}

{{- define "my-app.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" }}
{{- end }}

{{- define "my-app.labels" -}}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/name: {{ include "my-app.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
```

Note the `replace "+" "_"` in the chart label: SemVer build metadata can contain `+`, which is not a valid label character.
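For a release installed as `my-release`, the `my-app.labels` helper would render roughly to the following (illustrative output, assuming the Chart.yaml shown earlier):

```yaml
helm.sh/chart: my-app-0.3.1
app.kubernetes.io/name: my-app
app.kubernetes.io/instance: my-release
app.kubernetes.io/version: "1.2.3"
app.kubernetes.io/managed-by: Helm
```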
deployment.yaml Template
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-app.fullname" . }}
  labels:
    {{- include "my-app.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "my-app.name" . }}
  template:
    metadata:
      labels:
        {{- include "my-app.labels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.service.targetPort }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- if .Values.env }}
          env:
            {{- range $key, $value := .Values.env }}
            - name: {{ $key }}
              value: {{ $value | quote }}
            {{- end }}
          {{- end }}
```
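Given `env: { LOG_LEVEL: debug }` in a values file, the `range` loop above renders the container env as (illustrative):

```yaml
env:
  - name: LOG_LEVEL
    value: "debug"
```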
4. Deployment Commands
Install
```bash
# Basic install
helm install my-release ./my-app

# With environment-specific values (later -f files override earlier ones)
helm install my-release ./my-app \
  -f values.yaml \
  -f values-prod.yaml \
  --namespace production \
  --create-namespace

# Inject the image tag dynamically (CI/CD)
helm install my-release ./my-app \
  -f values-prod.yaml \
  --set image.tag=1.2.3 \
  --namespace production
```
Upgrade
```bash
# Upgrade, or install if the release doesn't exist yet.
# --atomic rolls the release back automatically if the upgrade fails.
# --wait blocks until all Pods are Ready (implied by --atomic, shown for clarity).
helm upgrade --install my-release ./my-app \
  -f values-prod.yaml \
  --set image.tag=$NEW_TAG \
  --namespace production \
  --atomic \
  --timeout 5m \
  --wait
```
Release History and Rollback
```bash
helm history my-release -n production
```

```
REVISION  STATUS      CHART         APP VERSION  DESCRIPTION
1         superseded  my-app-0.3.0  1.2.1        Initial install
2         superseded  my-app-0.3.1  1.2.2        Upgrade complete
3         deployed    my-app-0.3.1  1.2.3        Upgrade complete
```

```bash
# Roll back to a specific revision
helm rollback my-release 2 -n production

# Roll back one step (to the previous revision)
helm rollback my-release -n production
```
5. Managing Secrets Safely
Putting passwords directly in Helm values is dangerous: values files typically end up in Git in cleartext. Use one of these approaches instead.
Option 1: Reference External Secrets
```yaml
# values.yaml: store only the Secret's name, never its contents
database:
  secretName: db-credentials
```

```yaml
# deployment.yaml template: resolve the value at runtime via secretKeyRef
env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: {{ .Values.database.secretName }}
        key: url
```
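The referenced Secret itself is created out of band (by an operator, External Secrets, or manually); a minimal manual example, with a placeholder connection URL:

```shell
# Hypothetical: create the Secret that values.yaml references by name
kubectl create secret generic db-credentials \
  --from-literal=url='postgresql://user:pass@db:5432/app' \
  -n production
```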
Option 2: helm-secrets Plugin (SOPS)
```bash
helm plugin install https://github.com/jkroepke/helm-secrets

# secrets.prod.yaml is SOPS-encrypted; the plugin decrypts it on the fly
helm secrets upgrade my-release ./my-app \
  -f values-prod.yaml \
  -f secrets.prod.yaml
```
6. Chart Validation
```bash
# Render templates locally without deploying
helm template my-release ./my-app -f values-prod.yaml

# Lint the chart against the prod values
helm lint ./my-app -f values-prod.yaml

# Dry run with server-side validation (--dry-run=server requires Helm 3.13+;
# plain --dry-run only validates client-side)
helm upgrade --install my-release ./my-app \
  -f values-prod.yaml \
  --dry-run=server \
  --namespace production
```
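These checks are cheap enough to gate every merge. A hypothetical CI step (GitHub Actions syntax; the job layout is illustrative):

```yaml
# Hypothetical CI job step
- name: Validate Helm chart
  run: |
    helm lint ./my-app -f values-prod.yaml
    helm template my-release ./my-app -f values-prod.yaml > /dev/null
```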
7. Repository Management
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm search repo bitnami/redis --versions | head -10

# OCI-based registry (the modern approach)
helm push my-app-0.3.1.tgz oci://registry.example.com/charts
helm install my-release oci://registry.example.com/charts/my-app --version 0.3.1
```
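Pushing to an OCI registry assumes the chart has been packaged and you are logged in; a sketch of the full publish flow (the registry URL is illustrative):

```shell
helm registry login registry.example.com
helm package ./my-app     # produces my-app-0.3.1.tgz from Chart.yaml's version
helm push my-app-0.3.1.tgz oci://registry.example.com/charts
```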
Summary
| Stage | Command | Purpose |
|---|---|---|
| Validate | `helm lint` / `helm template` | Check syntax and rendering before deploying |
| Deploy | `helm upgrade --install --atomic` | Deploy with automatic rollback on failure |
| History | `helm history` | Inspect deployment state per revision |
| Rollback | `helm rollback` | Restore a specific revision |
Next: Multi-Environment Deployment Strategy — separating configuration with Kustomize overlays and managing the entire cluster as GitOps with ArgoCD App of Apps.