Building a Multi-Namespace Helm Chart with Environment Overlays
Structure a Helm chart that deploys cleanly to dev, staging, and production with different values per environment — without duplicating templates or maintaining separate charts per namespace.
Before you begin
- Helm 3 installed
- kubectl configured with cluster access
- Basic Helm knowledge (install, upgrade, template)
The typical Helm anti-pattern: one chart for dev, a fork for staging, another fork for prod. They diverge over time. A fix applied to one isn't applied to the others. Three months later, nobody's sure which is canonical.
The right approach: one chart, multiple values files, deployed to separate namespaces. This tutorial builds that structure from scratch.
The Target Structure
charts/api-server/
├── Chart.yaml
├── values.yaml              # Defaults (safe for dev)
├── values-staging.yaml      # Staging overrides
├── values-production.yaml   # Production overrides
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    ├── configmap.yaml
    ├── hpa.yaml
    └── _helpers.tpl
Deploy commands:
# Dev
helm upgrade --install api-server ./charts/api-server \
  -n dev --create-namespace

# Staging
helm upgrade --install api-server ./charts/api-server \
  -n staging --create-namespace \
  -f charts/api-server/values-staging.yaml

# Production
helm upgrade --install api-server ./charts/api-server \
  -n production --create-namespace \
  -f charts/api-server/values-production.yaml
Step 1: Scaffold the Chart
mkdir -p charts/api-server/templates
Step 2: Chart.yaml
cat > charts/api-server/Chart.yaml <<EOF
apiVersion: v2
name: api-server
description: API server — deployed to dev, staging, and production
type: application
version: 0.1.0
appVersion: "1.0.0"
EOF
Step 3: Default values.yaml
These are the dev defaults — permissive, low-resource, single replica:
cat > charts/api-server/values.yaml <<EOF
replicaCount: 1

image:
  repository: myregistry/api-server
  tag: "latest"
  pullPolicy: Always

service:
  type: ClusterIP
  port: 80
  targetPort: 8080

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70

env:
  LOG_LEVEL: "debug"
  DATABASE_URL: "postgres://dev-db:5432/appdb"

probes:
  readiness:
    path: /healthz
    initialDelaySeconds: 5
  liveness:
    path: /healthz
    initialDelaySeconds: 15

ingress:
  enabled: false
EOF
Step 4: Staging Override Values
cat > charts/api-server/values-staging.yaml <<EOF
replicaCount: 2

image:
  tag: "staging"
  pullPolicy: IfNotPresent

resources:
  requests:
    cpu: 200m
    memory: 256Mi
  limits:
    cpu: 1000m
    memory: 512Mi

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 8

env:
  LOG_LEVEL: "info"
  DATABASE_URL: "postgres://staging-db:5432/appdb"

ingress:
  enabled: true
  host: api.staging.example.com
EOF
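Notice that values-staging.yaml only lists the keys that differ from the defaults. Helm deep-merges each override file onto values.yaml: nested maps merge recursively, key by key, while scalars and lists are replaced wholesale. A rough Python sketch of that merge behavior (illustrative only, not Helm's actual implementation):

```python
def merge_values(base: dict, override: dict) -> dict:
    """Recursively merge override onto base, Helm-style:
    nested maps merge key by key; scalars and lists replace."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge_values(result[key], value)
        else:
            result[key] = value
    return result

defaults = {
    "image": {"repository": "myregistry/api-server", "tag": "latest",
              "pullPolicy": "Always"},
    "env": {"LOG_LEVEL": "debug"},
}
staging = {
    "image": {"tag": "staging", "pullPolicy": "IfNotPresent"},
    "env": {"LOG_LEVEL": "info"},
}

merged = merge_values(defaults, staging)
# image.repository survives from defaults; tag and pullPolicy come from the overlay
```

This is why the overlays stay small: they carry only the delta, and the base chart remains the single source of truth.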
Step 5: Production Override Values
cat > charts/api-server/values-production.yaml <<EOF
replicaCount: 3

image:
  tag: "v1.2.3" # Always pin in production
  pullPolicy: IfNotPresent

resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: 2000m
    memory: 1Gi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 20

env:
  LOG_LEVEL: "warn"
  DATABASE_URL: "postgres://prod-db:5432/appdb"

probes:
  readiness:
    initialDelaySeconds: 10
  liveness:
    initialDelaySeconds: 30

ingress:
  enabled: true
  host: api.example.com
EOF
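Because nested maps merge key by key, the production overlay can override probe delays without restating the probe paths. The effective production values come out roughly as (abridged sketch):

```yaml
image:
  repository: myregistry/api-server   # inherited from values.yaml
  tag: "v1.2.3"                       # overridden
  pullPolicy: IfNotPresent            # overridden
probes:
  readiness:
    path: /healthz                    # inherited
    initialDelaySeconds: 10           # overridden
  liveness:
    path: /healthz                    # inherited
    initialDelaySeconds: 30           # overridden
```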
Step 6: Templates
_helpers.tpl
cat > charts/api-server/templates/_helpers.tpl <<'EOF'
{{/*
Expand the name of the chart.
*/}}
{{- define "api-server.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
*/}}
{{- define "api-server.fullname" -}}
{{- if contains .Chart.Name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name (include "api-server.name" .) | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "api-server.labels" -}}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/name: {{ include "api-server.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
EOF
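The trunc 63 | trimSuffix "-" pipeline exists because Kubernetes object names must fit in a 63-character DNS label, and truncating at an arbitrary position can leave a dangling hyphen. A rough Python equivalent of that pipeline (illustrative only; requires Python 3.9+ for removesuffix):

```python
def fullname(release_name: str, chart_name: str) -> str:
    """Mirror of: printf "%s-%s" ... | trunc 63 | trimSuffix "-" """
    name = f"{release_name}-{chart_name}"[:63]  # trunc 63
    return name.removesuffix("-")               # trimSuffix "-"

print(fullname("api-server", "api-server"))  # api-server-api-server
```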
deployment.yaml
cat > charts/api-server/templates/deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "api-server.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "api-server.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "api-server.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        {{- include "api-server.labels" . | nindent 8 }}
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: api
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.service.targetPort }}
          env:
            {{- range $key, $val := .Values.env }}
            - name: {{ $key }}
              value: {{ $val | quote }}
            {{- end }}
          readinessProbe:
            httpGet:
              path: {{ .Values.probes.readiness.path }}
              port: {{ .Values.service.targetPort }}
            initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}
            periodSeconds: 5
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: {{ .Values.probes.liveness.path }}
              port: {{ .Values.service.targetPort }}
            initialDelaySeconds: {{ .Values.probes.liveness.initialDelaySeconds }}
            periodSeconds: 10
            failureThreshold: 3
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          lifecycle:
            preStop:
              exec:
                command: ["sleep", "15"]
EOF
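With the dev defaults, the env range renders to something like the fragment below. Go templates iterate maps in sorted key order, so the output is deterministic across renders:

```yaml
env:
  - name: DATABASE_URL
    value: "postgres://dev-db:5432/appdb"
  - name: LOG_LEVEL
    value: "debug"
```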
hpa.yaml
cat > charts/api-server/templates/hpa.yaml <<'EOF'
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "api-server.fullname" . }}
  namespace: {{ .Release.Namespace }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "api-server.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
EOF
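The target tree also lists service.yaml and configmap.yaml, which this walkthrough doesn't build out. A minimal Service consistent with the values keys and selector labels used above might look like the sketch below (configmap.yaml is analogous if you prefer mounting env as a ConfigMap instead of inlining it):

```shell
mkdir -p charts/api-server/templates
cat > charts/api-server/templates/service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: {{ include "api-server.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "api-server.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.targetPort }}
      protocol: TCP
  selector:
    app.kubernetes.io/name: {{ include "api-server.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
EOF
```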
Step 7: Validate Before Deploying
# Render templates without deploying
helm template api-server ./charts/api-server \
  -n dev \
  | kubectl apply --dry-run=client -f -

# Render the staging overlay
helm template api-server ./charts/api-server \
  -n staging \
  -f charts/api-server/values-staging.yaml \
  | kubectl apply --dry-run=client -f -

# Lint
helm lint ./charts/api-server
helm lint ./charts/api-server -f charts/api-server/values-production.yaml
Step 8: Deploy to All Environments
# Dev
helm upgrade --install api-server ./charts/api-server \
  --namespace dev --create-namespace \
  --atomic --timeout 3m

# Staging
helm upgrade --install api-server ./charts/api-server \
  --namespace staging --create-namespace \
  --atomic --timeout 3m \
  -f charts/api-server/values-staging.yaml

# Production (with explicit version)
helm upgrade --install api-server ./charts/api-server \
  --namespace production --create-namespace \
  --atomic --timeout 5m \
  -f charts/api-server/values-production.yaml \
  --set image.tag=v1.2.3
--atomic rolls back automatically if the deployment fails. --timeout sets a deadline for the rollout to complete.
Verify
# Check releases across namespaces
helm list -A
# Compare actual values per environment
helm get values api-server -n dev
helm get values api-server -n production
# Check running image tags (select by label so this works
# regardless of what the fullname helper resolves to)
kubectl get deployment -n production \
  -l app.kubernetes.io/instance=api-server \
  -o jsonpath='{.items[0].spec.template.spec.containers[0].image}'
The Key Rules
Pin image tags in production — latest in production is how you get silent breaking changes.
Keep defaults safe for dev — the base values.yaml should be the least dangerous configuration. Production adds constraints, not removes them.
Never commit secrets to values files — use --set at deploy time, external-secrets-operator, or Vault. Values files go in git; secrets don't.
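One common pattern (a sketch only, not wired into the chart above; the Secret name is hypothetical) is to reference a Secret created outside Helm, so the values files never carry credentials:

```yaml
# In the container spec — DATABASE_PASSWORD comes from a Secret
# created out of band (kubectl, external-secrets-operator, Vault)
env:
  - name: DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: api-server-secrets
        key: database-password
```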
We built Podscape to simplify Kubernetes workflows like this — logs, events, and cluster state in one interface, without switching tools.
Struggling with this in production?
We help teams fix these exact issues. Our engineers have deployed these patterns across production environments at scale.