Platform Engineering
13 min read · May 1, 2026

Kubernetes Admission Webhooks: OPA Gatekeeper and Kyverno

Admission webhooks intercept every API request before it's persisted to etcd — they're how you enforce policy across an entire cluster without modifying application manifests. Two dominant policy engines: OPA Gatekeeper (Rego policies compiled into ConstraintTemplates) and Kyverno (Kubernetes-native YAML policies). This covers the architecture, when to use each, and the production patterns that prevent policy from becoming an operational burden.

Coding Protocols Team

Every Kubernetes object passes through the admission control chain before it's written to etcd. Admission webhooks sit in that chain and can validate (reject or allow) or mutate (modify before storing) any resource. OPA Gatekeeper and Kyverno are the two dominant policy engines built on this mechanism — both enforce policy across the cluster without touching application code.

The choice isn't always obvious. Gatekeeper uses Rego — a purpose-built policy language that's expressive but has a learning curve. Kyverno uses Kubernetes-native YAML policies — easier to write but harder to express complex logic in. Most platform teams end up choosing based on what their engineers can realistically maintain.


How Admission Webhooks Work

kubectl apply → API Server → Authentication → Authorization (RBAC)
    → Mutating Admission Webhooks  ← modify the object
    → Schema Validation
    → Validating Admission Webhooks ← approve or reject
    → Persist to etcd

Two types:

  • Validating webhooks: Inspect the request, return allow or deny. Run after mutating webhooks so they see the final object.
  • Mutating webhooks: Modify the request (inject sidecars, set defaults, add labels). Run before validating webhooks.

Both are registered via ValidatingWebhookConfiguration or MutatingWebhookConfiguration objects that tell the API server where to send requests (a Service in-cluster or an external HTTPS endpoint).
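A registration object looks roughly like this. This is a minimal sketch — the names, namespace, and path are placeholders, and in practice Gatekeeper and Kyverno create and manage these configurations for you:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy-webhook         # hypothetical name
webhooks:
  - name: validate.example.com         # hypothetical webhook name (must be a DNS name)
    clientConfig:
      service:
        name: policy-webhook           # in-cluster Service that receives AdmissionReview requests
        namespace: policy-system
        path: /validate
      # caBundle: <base64 CA cert>     # required so the API server trusts the endpoint
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail                # block matching requests if the webhook is unreachable
```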


OPA Gatekeeper

Gatekeeper runs an admission webhook that evaluates Rego policies. The workflow:

  1. ConstraintTemplate — defines the Rego policy logic and declares the CRD schema for constraints
  2. Constraint — instantiates the template with specific parameters (which namespaces, which exceptions)
  3. Audit controller — a separate deployment that periodically re-evaluates existing resources (not just new ones)

Installation

bash
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm repo update

helm install gatekeeper gatekeeper/gatekeeper \
  --namespace gatekeeper-system \
  --create-namespace \
  --version 3.17.1 \
  --set auditInterval=60 \
  --set constraintViolationsLimit=100

ConstraintTemplate: Required Labels

yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string

  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("Missing required labels: %v", [missing])
        }

Constraint: Apply the Template

yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-team-label
spec:
  enforcementAction: deny     # deny | warn | dryrun
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
    excludedNamespaces:
      - kube-system
      - kube-public
      - gatekeeper-system
  parameters:
    labels:
      - "team"
      - "env"

With enforcementAction: warn, violations are recorded but not blocked — useful for rolling out new policies without breaking existing clusters.

ConstraintTemplate: Container Image Registry

yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sallowedrepos
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedRepos
      validation:
        openAPIV3Schema:
          type: object
          properties:
            repos:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sallowedrepos

        # The constraint below matches workload controllers (Deployment, etc.),
        # so containers live under spec.template.spec, not spec.
        violation[{"msg": msg}] {
          container := input.review.object.spec.template.spec.containers[_]
          not starts_with_allowed_repo(container.image)
          msg := sprintf("Container image %q is from an untrusted registry", [container.image])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.template.spec.initContainers[_]
          not starts_with_allowed_repo(container.image)
          msg := sprintf("Init container image %q is from an untrusted registry", [container.image])
        }

        starts_with_allowed_repo(image) {
          repo := input.parameters.repos[_]
          startswith(image, repo)
        }
yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: allowed-repos
spec:
  enforcementAction: deny
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment", "StatefulSet", "DaemonSet"]
    excludedNamespaces: ["kube-system"]
  parameters:
    repos:
      - "123456789.dkr.ecr.us-east-1.amazonaws.com/"
      - "ghcr.io/my-org/"

Checking Violations

bash
# List all constraint violations
kubectl get constraints -o json | \
  jq '.items[] | {name: .metadata.name, violations: .status.totalViolations}'

# Detailed violations for a specific constraint
kubectl describe k8srequiredlabels require-team-label

# All audit results
kubectl get k8srequiredlabels -o jsonpath='{.items[*].status.violations}'

Kyverno

Kyverno policies are Kubernetes YAML — no separate policy language. Each policy can validate, mutate, generate, or clone resources. A single ClusterPolicy can contain multiple rules.

Installation

bash
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update

helm install kyverno kyverno/kyverno \
  --namespace kyverno \
  --create-namespace \
  --version 3.2.6 \
  --set admissionController.replicaCount=3 \
  --set backgroundController.replicaCount=2

Validation Policy: Required Labels

yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Enforce    # Enforce | Audit
  background: true    # Audit existing resources (not just new requests)
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds: [Namespace]
      exclude:
        any:
          - resources:
              kinds: [Namespace]
              namespaces: ["kube-system", "kube-public", "kube-node-lease", "kyverno"]
      validate:
        message: "Namespace must have 'team' and 'env' labels."
        pattern:
          metadata:
            labels:
              team: "?*"
              env: "?*"

Mutation Policy: Inject Default Labels

yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-labels
spec:
  rules:
    - name: add-app-version-label
      match:
        any:
          - resources:
              kinds: [Deployment]
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              +(app.kubernetes.io/managed-by): "helm"    # + prefix: only add if absent

The + prefix on a key means "add this if it doesn't already exist" — it won't overwrite an existing value.
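For contrast, here is a sketch of the same mutate fragment without the anchor — dropping the + changes the semantics from "default" to "override":

```yaml
# Without the +() anchor, the label is set unconditionally,
# clobbering whatever value the manifest already carried.
mutate:
  patchStrategicMerge:
    metadata:
      labels:
        app.kubernetes.io/managed-by: "helm"
```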

Generate Policy: Default NetworkPolicy for New Namespaces

yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-default-networkpolicy
spec:
  rules:
    - name: generate-default-deny
      match:
        any:
          - resources:
              kinds: [Namespace]
              selector:
                matchLabels:
                  network-policy: "managed"
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny
        namespace: "{{request.object.metadata.name}}"
        synchronize: true    # Keep the generated resource in sync with this policy
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
              - Egress

synchronize: true means if someone deletes the generated NetworkPolicy, Kyverno recreates it. It also means changing the policy updates existing generated resources.

Policy Exception

For cases where a specific workload needs an exception:

yaml
apiVersion: kyverno.io/v2beta1
kind: PolicyException
metadata:
  name: monitoring-image-exception
  namespace: monitoring
spec:
  exceptions:
    - policyName: allowed-repos
      ruleNames:
        - check-image-registry
  match:
    any:
      - resources:
          kinds: [Pod]
          namespaces: [monitoring]
          names: ["prometheus-*", "grafana-*"]

PolicyExceptions are disabled by default — the Kyverno admission controller must be started with the --enablePolicyException flag (note the singular form in Kyverno 3.x) before these objects take effect.
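With Helm-based installs, this is typically wired through chart values rather than editing container args directly. A sketch, assuming the features.policyExceptions values exposed by the Kyverno 3.x chart (verify the key names against your chart version):

```yaml
# values.yaml fragment for the Kyverno 3.x Helm chart
features:
  policyExceptions:
    enabled: true
    namespace: kyverno   # restrict which namespace PolicyException objects are honored from
```

Restricting the namespace matters: without it, any team that can create PolicyException objects can exempt their own workloads from policy.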


Gatekeeper vs Kyverno: When to Choose

  • Policy language: Gatekeeper uses Rego (purpose-built, expressive); Kyverno uses YAML patterns (familiar, limited)
  • Learning curve: high for Gatekeeper; low for Kyverno
  • Complex logic: strong in Gatekeeper (set operations, data import); limited in Kyverno (JMESPath for some queries)
  • Mutation: Gatekeeper via AssignMetadata/Assign CRDs; native and powerful in Kyverno
  • Generation: not supported by Gatekeeper; native in Kyverno (generate rules)
  • Audit mode: built-in for both (Gatekeeper's audit controller; Kyverno's background: true)
  • Policy exceptions: manual constraint exclusions in Gatekeeper; a dedicated PolicyException CRD in Kyverno
  • CNCF status: Gatekeeper is Graduated (via OPA); Kyverno is Incubating

Choose Gatekeeper when: Your policies need complex logic — cross-object queries (checking if a referenced ServiceAccount exists), multi-field conditions, or policies that require OPA's data import for external context. Rego scales to complexity that YAML patterns can't express.

Choose Kyverno when: You want policies that platform engineers who know Kubernetes can write and maintain without learning a new language. Kyverno's mutation and generation capabilities are significantly more powerful than Gatekeeper's.

Many teams run both: Gatekeeper for complex security policies from OPA's policy library, Kyverno for day-to-day operational policies like label injection and NetworkPolicy generation.


Native Validation: ValidatingAdmissionPolicy (CEL)

ValidatingAdmissionPolicy (VAP) is a native Kubernetes alternative for simple validation rules. It uses the Common Expression Language (CEL) to evaluate rules directly in the API server, eliminating the network hop for straightforward checks:

yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "require-team-label"
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups:   ["apps"]
        apiVersions: ["v1"]
        operations:  ["CREATE", "UPDATE"]
        resources:   ["deployments"]
  validations:
    - expression: "has(object.metadata.labels) && 'team' in object.metadata.labels"
      message: "Deployments must have a 'team' label."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: "require-team-label-binding"
spec:
  policyName: "require-team-label"
  validationActions: [Deny]
  matchResources:
    namespaceSelector:
      matchExpressions:
        - key: environment
          operator: In
          values: [production]

VAP is a good fit for simple, stateless validation (label presence, image registry allow-lists). Webhooks remain the standard for mutation, resource generation, cross-resource checks, and complex policy logic — Gatekeeper and Kyverno continue to be the production choice for those use cases.


Preventing Policy from Blocking the Cluster

A misconfigured admission webhook can take down the entire cluster — if the webhook service is unavailable and failurePolicy: Fail, all API requests fail.

Safe Rollout Pattern

yaml
# Phase 1: Audit only (don't block anything)
# Gatekeeper
spec:
  enforcementAction: warn    # Records violations, doesn't block

# Kyverno
spec:
  validationFailureAction: Audit    # Records violations, doesn't block
bash
# Phase 2: After a week of audit, check what would break
kubectl get constraints -o json | jq '.items[].status.totalViolations'
kubectl get policyreport -A    # Kyverno: namespace-scoped policy reports

# Phase 3: Switch to enforcement for new namespaces, leave audit on old ones
# Phase 4: After teams fix violations, switch all to enforcement

Webhook Failure Policy

yaml
# In ValidatingWebhookConfiguration (managed by Gatekeeper/Kyverno)
failurePolicy: Ignore    # Allow requests if webhook is unavailable
# vs
failurePolicy: Fail      # Block all requests if webhook is unavailable (safer but risky)

Don't assume a default — check what your installation actually runs (kubectl get validatingwebhookconfigurations -o jsonpath='{.items[*].webhooks[*].failurePolicy}'). With Fail, a crashing webhook blocks every matching API request until it recovers; with Ignore, policy goes silently unenforced for the duration of the outage. Kyverno additionally exposes failurePolicy per policy via spec.failurePolicy. For production, run multiple webhook replicas behind PodDisruptionBudgets so availability isn't what decides the tradeoff.
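A PodDisruptionBudget keeps at least one webhook replica running through node drains and rolling upgrades. A sketch — the name and label selector here are placeholders and must match the labels your chart actually applies to the admission controller pods:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: kyverno-admission-pdb        # hypothetical name
  namespace: kyverno
spec:
  minAvailable: 1                    # never voluntarily evict the last webhook replica
  selector:
    matchLabels:
      app.kubernetes.io/component: admission-controller   # placeholder; check your chart's pod labels
```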


Frequently Asked Questions

Does admission webhook enforcement apply retroactively?

No — admission webhooks only intercept new requests. Existing resources are unaffected. Both Gatekeeper (whose audit controller re-evaluates the cluster every auditInterval seconds) and Kyverno (via background reconciliation) periodically check existing resources and report violations, but they don't modify or delete existing non-compliant resources.

Can I write policies that reference other Kubernetes objects?

Yes, but differently for each tool. Gatekeeper supports data.inventory to query objects already synced to OPA's cache (via the Config CRD that specifies which resources to sync). Kyverno supports context lookups: context[].apiCall to query the Kubernetes API for related objects at admission time.
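Gatekeeper only caches what you tell it to. A Config resource (there is exactly one, named config, in gatekeeper-system) declares which kinds get replicated into data.inventory for Rego policies to query:

```yaml
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config                 # must be named "config"
  namespace: gatekeeper-system
spec:
  sync:
    syncOnly:
      - group: ""              # core API group
        version: v1
        kind: ServiceAccount   # makes ServiceAccounts queryable via data.inventory
      - group: ""
        version: v1
        kind: Namespace
```

Sync only what your policies actually need — every synced kind increases OPA's memory footprint.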

What happens if my policy breaks cluster add-ons?

Both tools support namespace exclusions. By default, kube-system should be excluded from most policies — cluster add-ons (CoreDNS, kube-proxy, CNI plugins) deploy there and typically don't follow application label conventions. Always test policies in dryrun/Audit mode in a non-production cluster before enforcing in production.
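Beyond per-policy exclusions, both engines can exclude namespaces at the webhook level via a namespaceSelector, so requests from those namespaces never reach the policy engine at all — a sketch of the relevant fragment:

```yaml
# Fragment of a ValidatingWebhookConfiguration (both engines expose
# equivalent settings through their Helm charts).
namespaceSelector:
  matchExpressions:
    - key: kubernetes.io/metadata.name   # label set automatically on every namespace (K8s 1.22+)
      operator: NotIn
      values: ["kube-system", "kube-public"]
```

Webhook-level exclusion is the stronger guarantee: even if the policy engine crashes, exempted system namespaces are never blocked.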


For NetworkPolicy generation as a concrete Kyverno generate use case, see Kubernetes Network Policies: Zero-Trust Networking Between Pods. For resource limits enforcement as a Kyverno validation use case, see Kubernetes Resource Management: Quotas, LimitRanges, and QoS Classes.

Deploying policy-as-code across a multi-tenant cluster with dozens of teams? Talk to us at Coding Protocols — we help platform teams design admission control policies that enforce standards without creating toil or breaking developer workflows.

Related Topics

Kubernetes
OPA
Gatekeeper
Kyverno
Admission Webhooks
Policy
Security
Platform Engineering
