Enforcing Policy with Kyverno: Validate, Mutate, Generate
Kyverno lets you write Kubernetes-native admission policies without learning Rego. This tutorial covers the three policy types — validate (block bad resources), mutate (auto-fix them), and generate (create dependent resources) — plus the audit-to-enforce migration path that won't break your cluster.
Before you begin
- Kubernetes cluster
- kubectl and Helm installed
- Basic understanding of Kubernetes admission webhooks (helpful but not required)
Admission controllers are the last line of defence before a resource is written to etcd. They sit in the API server request pipeline and can either block a request outright or mutate it before it lands. Most teams know they should be using them; fewer actually do, because the traditional path — writing OPA/Rego — has a steep learning curve and a debugging experience that feels like shouting into a void.
Kyverno takes a different approach: cluster policy expressed as Kubernetes resources. No Rego, no external tooling, no separate policy language to learn. Just YAML you can version-control, test locally with the Kyverno CLI, and apply the same way you apply anything else. It ships a validating and mutating webhook, handles policy reports natively, and covers three distinct use cases: validate (block bad resources), mutate (auto-correct them), and generate (create dependent resources automatically).
This tutorial walks through all three. By the end you'll have a realistic policy suite — resource limit enforcement, automatic label injection, auto-generated NetworkPolicies — plus the escape hatch mechanism (PolicyException) and a local testing workflow you can slot into CI.
What You'll Build
- A validate ClusterPolicy that blocks pods without CPU/memory limits
- A mutate ClusterPolicy that injects a standard label on every Deployment
- A generate ClusterPolicy that auto-creates a default-deny NetworkPolicy when a namespace is labelled team: engineering
- A PolicyException to exempt a specific workload from a policy
- A kyverno test setup to validate policies locally before they ever touch a cluster
Step 1: Install Kyverno
Kyverno ships as a Helm chart. Version 3.x split the monolith into four controllers — admission, background, cleanup, and reports — each independently scalable. For a tutorial environment a single-replica setup is fine; production should run at least 3 replicas of the admission controller.
```bash
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update

helm install kyverno kyverno/kyverno \
  --namespace kyverno \
  --create-namespace \
  --version 3.1.0
```
Wait for all pods to reach Running:
```bash
kubectl get pods -n kyverno

NAME                                             READY   STATUS    RESTARTS   AGE
kyverno-admission-controller-7d9b8f6c4-x2krp     1/1     Running   0          60s
kyverno-background-controller-5c8d9b7f4-vqt8n    1/1     Running   0          60s
kyverno-cleanup-controller-6b9c4d8f5-mn3kp       1/1     Running   0          60s
kyverno-reports-controller-4f7b5c9d6-jw9lx       1/1     Running   0          60s
```
Four controllers, four pods. If any are in CrashLoopBackOff, check webhook registration — Kyverno registers ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects at startup, and occasionally cert rotation takes a moment.
Step 2: Validate — Require Resource Limits
The most common baseline policy in any production cluster: every container must declare CPU and memory limits. Without limits, a single misbehaving container can starve everything else on the node.
ClusterPolicy vs Policy: ClusterPolicy is cluster-scoped and applies across all namespaces (with optional exclusions). Policy is namespace-scoped — a Policy created in the kyverno namespace only governs resources in that namespace. Use ClusterPolicy for baseline security requirements that should apply everywhere.
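For contrast, here is a sketch of what a namespaced variant of the limits guardrail would look like (this is illustrative, not part of the tutorial's policy suite) — as a Policy in a hypothetical team-a namespace, it would only govern Pods created there:

```yaml
# Illustrative namespaced Policy: applies only to Pods in team-a.
apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: require-limits-team-a
  namespace: team-a
spec:
  validationFailureAction: Audit
  rules:
    - name: check-container-limits
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory limits are required in team-a."
        pattern:
          spec:
            containers:
              - name: "*"
                resources:
                  limits:
                    memory: "?*"
                    cpu: "?*"
```

The rule body is identical to the cluster-scoped version below; only the kind and the namespace scoping change.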
validationFailureAction: This is the single most important field. Audit logs violations to policy reports but does not block the request. Enforce blocks at admission and returns an error to the client. I start every new policy in Audit mode and run it for at least a week before switching to Enforce. Skipping this step is how you brick a cluster on a Friday afternoon.
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
spec:
  validationFailureAction: Audit
  background: true
  rules:
    - name: check-container-limits
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory limits are required on all containers."
        pattern:
          spec:
            containers:
              - name: "*"
                resources:
                  limits:
                    memory: "?*"
                    cpu: "?*"
```
Apply it:
```bash
kubectl apply -f require-resource-limits.yaml
```
Deploy a pod without limits to see the audit trail:
```bash
kubectl run no-limits --image=nginx -n default
```
The pod creates — we're in Audit mode — but the violation lands in the policy report:
```bash
kubectl get policyreport -n default -o yaml | grep -A 5 "require-resource-limits"
```
```yaml
- message: 'CPU and memory limits are required on all containers.'
  policy: require-resource-limits
  result: fail
  rule: check-container-limits
  source: kyverno
```
Good. After you've reviewed what Audit catches across your cluster, flip to Enforce:
```bash
kubectl patch clusterpolicy require-resource-limits \
  --type=merge \
  -p '{"spec":{"validationFailureAction":"Enforce"}}'
```
Now try to create the same non-compliant pod:
```bash
kubectl run no-limits-2 --image=nginx -n default

Error from server: admission webhook "validate.kyverno.svc-fail" denied the request:
resource Pod/default/no-limits-2 was blocked due to the following policies

require-resource-limits:
  check-container-limits: CPU and memory limits are required on all containers.
```
That's the admission webhook doing its job. The resource never reaches etcd.
Step 3: Mutate — Inject a Standard Label
Mutate policies run before validate policies in the Kyverno pipeline. This ordering matters: you can use a mutate policy to auto-correct a resource, then have a validate policy confirm the corrected state. The client sees a single admission response; the pipeline is invisible to them.
A common use case is enforcing a standard label taxonomy. Rather than blocking deployments that lack the label and making developers fix it manually, you can just inject it automatically:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-managed-by-label
spec:
  rules:
    - name: add-managed-by-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              +(app.kubernetes.io/managed-by): "platform-team"
```
The +(key) syntax is Kyverno-specific and means "add this key if it doesn't exist, leave it alone if it does." Without the +() wrapper, a strategic merge patch overwrites whatever value the user set. That's usually not what you want for labels — if a developer explicitly set app.kubernetes.io/managed-by: my-team, clobbering it will cause confusion. Use +(key) for additive-only mutations.
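To make the difference concrete, here is a side-by-side sketch of the two patch styles (fragments only, not complete policies):

```yaml
# Plain key: OVERWRITES any value the user already set on the label.
mutate:
  patchStrategicMerge:
    metadata:
      labels:
        app.kubernetes.io/managed-by: "platform-team"

# Add-if-absent anchor: sets the label only when it is missing,
# and leaves any user-supplied value untouched.
mutate:
  patchStrategicMerge:
    metadata:
      labels:
        +(app.kubernetes.io/managed-by): "platform-team"
```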
Apply and test:
```bash
kubectl apply -f inject-managed-by-label.yaml
kubectl create deployment test-deploy --image=nginx
kubectl get deployment test-deploy -o jsonpath='{.metadata.labels}'
```
```json
{"app":"nginx","app.kubernetes.io/managed-by":"platform-team"}
```
The label is there even though the kubectl create deployment command never set it. The mutation happened transparently at admission time.
Step 4: Generate — Auto-Create NetworkPolicy
Generate policies create new resources in response to events on other resources. They're the Kyverno primitive that teams discover last but end up relying on heavily for platform automation.
The use case here: when a namespace with the label team: engineering is created, automatically provision a default-deny NetworkPolicy. This ensures every engineering namespace starts with a zero-trust network posture — traffic must be explicitly allowed, not implicitly permitted.
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: default-deny-network-policy
spec:
  rules:
    - name: generate-default-deny
      match:
        any:
          - resources:
              kinds:
                - Namespace
              selector:
                matchLabels:
                  team: engineering
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny-all
        namespace: "{{request.object.metadata.name}}"
        synchronize: true
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
              - Egress
```
synchronize: true is the critical field. With it enabled, Kyverno acts as a reconciler for the generated resource: if someone deletes the default-deny-all NetworkPolicy, Kyverno recreates it. If someone edits it, Kyverno reverts it to the policy-defined state. The generated resource is owned by the policy. I'll cover when you want false in the Common Mistakes section below.
Create a namespace with the trigger label:
```bash
kubectl create namespace eng-team-1
kubectl label namespace eng-team-1 team=engineering
```
Check for the generated NetworkPolicy:
```bash
kubectl get networkpolicy -n eng-team-1

NAME               POD-SELECTOR   AGE
default-deny-all   <none>         3s
```
Kyverno's background controller picked up the label event and generated the resource. For namespaces created with the label already set, the admission-time trigger fires immediately. For namespaces labelled after creation, the background controller reconciles within its sync interval (default: 1 hour; configurable via --backgroundScanInterval).
Step 5: PolicyException — Exempt Specific Workloads
Real clusters have legitimate exceptions. A node-exporter DaemonSet may need to skip resource limit requirements because its actual consumption varies by node. A privileged init container may need to skip security context rules. Refusing to acknowledge this reality leads to engineers working around your policies instead of with them.
PolicyException is the sanctioned escape hatch. It's a namespaced resource, which means you can control who can create exceptions by restricting RBAC on the PolicyException resource type. The exception lives in the same namespace as the exempted workload.
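Since exceptions are ordinary Kubernetes resources, standard RBAC is how you lock them down. A sketch of a ClusterRole you might bind only to the platform team (the role name is illustrative):

```yaml
# Illustrative RBAC: only subjects bound to this role can manage
# PolicyException resources; everyone else gets read-only or nothing.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: policy-exception-editor
rules:
  - apiGroups: ["kyverno.io"]
    resources: ["policyexceptions"]
    verbs: ["create", "update", "delete"]
```

Bind it narrowly (for example, to a platform-team group) so that requesting an exception becomes a deliberate, reviewable act rather than something any developer can self-serve.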
```yaml
apiVersion: kyverno.io/v2beta1
kind: PolicyException
metadata:
  name: allow-no-limits-monitoring
  namespace: monitoring
spec:
  exceptions:
    - policyName: require-resource-limits
      ruleNames:
        - check-container-limits
  match:
    any:
      - resources:
          kinds:
            - Pod
          namespaces:
            - monitoring
          names:
            - "node-exporter-*"
```
This exempts only pods in the monitoring namespace whose names match node-exporter-* from the check-container-limits rule. The scope is deliberate and narrow — the exception doesn't apply to any other pods in monitoring, only to node-exporter pods. Keep exceptions as specific as possible. Broad exceptions defeat the purpose of the policy.
PolicyException support must be enabled via Helm values (it's on by default in 3.x, but worth verifying):
```bash
kubectl get configmap kyverno -n kyverno -o yaml | grep -i exception
```
If you see enablePolicyException: "true", you're good. If not, update via Helm:
```bash
helm upgrade kyverno kyverno/kyverno \
  --namespace kyverno \
  --set config.enablePolicyException=true
```
Step 6: Test Policies with kyverno test
Applying untested policies to a cluster — even in Audit mode — is avoidable risk. The kyverno test command runs policies against static resource manifests locally, with no cluster required. It belongs in your CI pipeline alongside your Helm chart linting and kubeval checks.
Install the Kyverno CLI:
```bash
brew install kyverno
```
Set up a test directory:
```
kyverno-tests/
├── policies/
│   └── require-resource-limits.yaml
├── resources/
│   ├── pod-with-limits.yaml
│   └── pod-without-limits.yaml
└── kyverno-test.yaml
```
pod-with-limits.yaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-limits
  namespace: default
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        limits:
          cpu: "500m"
          memory: "128Mi"
        requests:
          cpu: "100m"
          memory: "64Mi"
```
pod-without-limits.yaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-without-limits
  namespace: default
spec:
  containers:
    - name: app
      image: nginx:1.25
```
kyverno-test.yaml:
```yaml
name: require-resource-limits-test
policies:
  - policies/require-resource-limits.yaml
resources:
  - resources/pod-with-limits.yaml
  - resources/pod-without-limits.yaml
results:
  - policy: require-resource-limits
    rule: check-container-limits
    resource: pod-with-limits
    result: pass
  - policy: require-resource-limits
    rule: check-container-limits
    resource: pod-without-limits
    result: fail
```
Run the tests:
```bash
kyverno test kyverno-tests/

Executing require-resource-limits-test...
applying 1 policy to 2 resources...

policy require-resource-limits -> resource Pod/default/pod-with-limits: Pass
policy require-resource-limits -> resource Pod/default/pod-without-limits: Fail

Test Summary: 2 tests passed and 0 tests failed
```
The test framework evaluates your declared results against the actual policy evaluation outcomes and fails if they diverge. This catches both policy regressions (a change that breaks a previously-passing case) and expectation drift (the policy isn't catching what you think it's catching).
Add kyverno test kyverno-tests/ to your CI pipeline. If the tests pass, the policies are safe to promote.
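A minimal CI wiring might look like this GitHub Actions job (a sketch; the job name and action version are illustrative):

```yaml
# Illustrative GitHub Actions workflow: fail the build when policy tests fail.
name: policy-tests
on: [pull_request]
jobs:
  kyverno-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Kyverno CLI
        uses: kyverno/action-install-cli@v0.2.0
      - name: Run policy tests
        run: kyverno test kyverno-tests/
```

Because kyverno test needs no cluster, this job runs on a plain runner with no kubeconfig or credentials, which keeps the feedback loop fast.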
Verification
After applying all policies, confirm they're registered and check for any violations:
```bash
# List all ClusterPolicies and their current mode
kubectl get cpol

# Check policy reports per namespace
kubectl get policyreport -A

# Check cluster-wide policy reports
kubectl get clusterpolicyreport

# See specific failures in Audit mode
kubectl get policyreport -n default -o yaml | grep -B 2 "result: fail"
```
The policyreport and clusterpolicyreport resources are Kyverno's built-in observability layer. They record policy evaluation results for existing resources (governed by the background controller) and admission-time results. You can feed them into a monitoring stack — Grafana dashboards that track violation counts over time give you a useful signal for how well your policies are being adopted.
Common Mistakes
Starting with Enforce. The most common way teams damage their clusters with Kyverno. Always start with Audit. Run it for a week. Review the policy reports. Understand what would have been blocked, and whether any of those blocks would have been wrong. Then flip to Enforce.
ClusterPolicy vs Policy confusion. A Policy only applies to resources in its own namespace. This trips up people who create a Policy in the kyverno namespace expecting it to catch pods in default. If your intent is cluster-wide enforcement, use ClusterPolicy.
Forgetting background: true. Without it, Kyverno only evaluates the policy at admission time — new resources are checked, but existing resources that violate the policy are invisible to policy reports. Set background: true on every validate policy so you have a complete picture of your compliance posture, not just the state of resources created after the policy was applied.
Using a plain key instead of +(key) in mutate patches. A plain key in a patchStrategicMerge overwrites whatever value exists. If a developer set app.kubernetes.io/managed-by: my-team and your mutate policy uses a plain key, you'll silently overwrite their value with yours. Use +(key) unless overwriting is your explicit intent.
synchronize: true on generate policies when teams need to customize the generated resource. If you generate a default-deny NetworkPolicy with synchronize: true and a team then needs to add an Ingress rule to allow traffic from their ingress controller, Kyverno will revert their change. In that case, use synchronize: false — the generated resource becomes a starting point that teams own. The tradeoff is that they can also delete it. Pick based on whether the generated resource is a hard requirement or a scaffold.
Cleanup
```bash
kubectl delete clusterpolicy require-resource-limits inject-managed-by-label default-deny-network-policy
kubectl delete policyexception allow-no-limits-monitoring -n monitoring
kubectl delete namespace eng-team-1
kubectl delete deployment test-deploy
kubectl delete pod no-limits -n default
```
If you installed Kyverno only for this tutorial:
```bash
helm uninstall kyverno -n kyverno
kubectl delete namespace kyverno
```
What's Next
The three policy types covered here — validate, mutate, generate — handle the majority of platform policy use cases. Where to go from here:
- Preconditions on rules let you apply logic conditionally within a single policy, rather than maintaining separate policies for similar-but-not-identical cases.
- JMESPath expressions in Kyverno rules give you access to the full request context — you can compare fields against each other, do arithmetic on resource quantities, and reference external data via API calls.
- Kyverno Chainsaw is the integration testing framework for Kyverno policies, extending kyverno test for more complex multi-resource scenarios.
- Policy Reporter is a Kyverno ecosystem project that exposes policy report data via a UI and Prometheus metrics — worth deploying if you want dashboards over your compliance posture.
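The first two items often appear together: a precondition is evaluated with JMESPath against the admission request context and gates whether a rule runs at all. A sketch of a rule fragment (the field values are illustrative):

```yaml
# Illustrative rule fragment: run this rule only on CREATE operations,
# and only for objects requesting more than 2 replicas.
preconditions:
  all:
    - key: "{{ request.operation }}"
      operator: Equals
      value: CREATE
    - key: "{{ request.object.spec.replicas }}"
      operator: GreaterThan
      value: 2
```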
The Kyverno policy library at https://kyverno.io/policies/ has hundreds of community-contributed policies covering Pod Security Standards, supply chain security (image signing verification with Cosign), and common baseline requirements. Most of them are production-ready starting points you can adapt rather than writing from scratch.
Official References
- Kyverno Documentation — Official docs covering all policy types, JMESPath, preconditions, and the CLI
- Kyverno Policy Library — Community-contributed production-ready policies for Pod Security, supply chain, and baseline requirements
- Kyverno CLI — Reference for kyverno test, kyverno apply, and local policy validation
- Writing Policies — verifyImages — Kyverno's image verification rules, used in combination with Cosign
- PolicyException — Official docs on the PolicyException resource and how to scope exemptions