Kubernetes Pod Security Admission: The PodSecurityPolicy Replacement Guide
PodSecurityPolicy was removed in Kubernetes 1.25. If you're still running it via a webhook shim, or if you're building a new cluster and wondering what replaced it, this is the guide. PSA is simpler, more opinionated, and easier to enforce correctly.

PodSecurityPolicy (PSP) was deprecated in Kubernetes 1.21 and removed in 1.25. If your cluster is running 1.25 or later and you relied on PSP, you've either migrated already or you're using a webhook shim to keep it working — a situation with a limited shelf life.
Pod Security Admission (PSA) is the built-in replacement. It's simpler than PSP, built directly into the API server (no webhook required), and deliberately more opinionated. It's also narrower in scope — some things PSP could do, PSA cannot. This post covers what PSA gives you, where it falls short, and how to migrate cleanly.
What PSA Is and Isn't
PSA enforces one of three pre-defined security profiles at the namespace level. You label a namespace with the profile you want, and the API server enforces it on every pod admission in that namespace.
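Because enforcement is just a namespace label, it can also be managed declaratively alongside the namespace itself. A minimal sketch (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments   # illustrative name
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: latest
```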
What PSA enforces: constraints on the pod spec — host namespaces, privilege escalation, capabilities, volume types, seccomp profiles, AppArmor.
What PSA does not do:
- It cannot create custom policies (no Rego, no YAML policy definitions)
- It cannot enforce policies at the individual pod level (only namespace-level)
- It cannot mutate pods (only validate)
- It cannot enforce image registry restrictions
- It cannot enforce resource limit requirements
For use cases PSA doesn't cover, you still need Kyverno or OPA/Gatekeeper. PSA handles the baseline pod security floor; policy engines handle everything else.
The Three Security Levels
PSA defines exactly three profiles. No custom profiles, no intermediate options.
Privileged
No restrictions. Equivalent to having no PSP. Use for system-level namespaces (kube-system, CNI namespaces, monitoring agent namespaces) where DaemonSets need host access.
Baseline
Prevents the most obviously dangerous configurations while remaining compatible with most legitimate container workloads. Blocks:
- `privileged: true` containers
- Host namespaces (`hostNetwork`, `hostPID`, `hostIPC`)
- `hostPath` volumes
- Dangerous capabilities (`NET_ADMIN`, `SYS_ADMIN`, `SYS_PTRACE`, and others)
Note: baseline does forbid hostPath volumes, but it does not require runAsNonRoot or allowPrivilegeEscalation: false — those are restricted requirements only. A common misconception is that baseline is a full hardening profile; it is a floor against the most dangerous pod configurations, not a substitute for restricted or a policy engine (Kyverno, OPA) where you need tighter guarantees.
Most application workloads run fine under baseline. If a workload breaks under baseline, it's usually because it requests privileged mode, uses host namespaces or hostPath volumes, or asks for dangerous capabilities — running as root, by itself, is still permitted.
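As a concrete illustration, a pod spec like the following (names are illustrative) would be rejected at admission in a baseline-enforcing namespace, on two independent grounds:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-debugger        # illustrative name
spec:
  hostNetwork: true          # host namespace — blocked by baseline
  containers:
    - name: shell
      image: busybox
      securityContext:
        capabilities:
          add: ["SYS_ADMIN"] # dangerous capability — blocked by baseline
```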
Restricted
The strictest profile. Enforces security best practices:
- All `baseline` restrictions, plus:
- Requires `runAsNonRoot: true`
- Requires `allowPrivilegeEscalation: false`
- Requires `seccompProfile` to be set (`RuntimeDefault` or `Localhost`)
- Limits volume types to a safe subset (`configMap`, `csi`, `downwardAPI`, `emptyDir`, `ephemeral`, `projected`, `secret`, `persistentVolumeClaim`) — `hostPath` and most other volume types are blocked
- Requires dropping `ALL` capabilities, with only `NET_BIND_SERVICE` as an allowable add-back
Running under restricted means the pod cannot run as root, cannot escalate privileges, and has a minimal Linux capability set. Most well-written application containers work under restricted; the common breakages are containers that run as root by convention rather than necessity.
Enforcement Modes
Each profile can be applied in three modes independently:
enforce — Pod is rejected if it violates the profile. This is the mode with teeth.
warn — Pod is admitted, but a warning is returned to the client. Visible in kubectl apply output. Good for migration — you see violations without breaking anything.
audit — Pod is admitted, and a violation is recorded in the audit log. No client-visible warning.
You can combine modes:
```yaml
labels:
  pod-security.kubernetes.io/enforce: baseline
  pod-security.kubernetes.io/warn: restricted
  pod-security.kubernetes.io/audit: restricted
```

This enforces baseline (pods violating baseline are rejected), warns on restricted violations (so you can see what would break if you tightened the policy), and audits restricted violations for log analysis.
Applying PSA via Namespace Labels
PSA is configured entirely through namespace labels. No admission webhook, no CRD, no controller to install — it's baked into the API server.
```shell
# Apply baseline enforcement to a namespace
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/enforce-version=latest

# Apply restricted with warn mode for migration
kubectl label namespace staging \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted
```

The `enforce-version` label pins the profile version. Use `latest` to always track the current Kubernetes version's definition, or pin to a specific version (`v1.29`) for stability across upgrades.
Cluster-Wide Defaults via AdmissionConfiguration
To set a default profile for all namespaces (avoiding the need to label every namespace), configure the API server's AdmissionConfiguration:
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1
      kind: PodSecurityConfiguration
      defaults:
        enforce: "baseline"
        enforce-version: "latest"
        audit: "restricted"
        audit-version: "latest"
        warn: "restricted"
        warn-version: "latest"
      exemptions:
        usernames: []
        runtimeClasses: []
        namespaces:
          - kube-system
          - kube-public
          - kube-node-lease
          - monitoring   # Prometheus node-exporter needs host access
          - logging      # Fluentd DaemonSet needs hostPath volumes
```

With this configuration, every namespace gets baseline enforcement by default, and kube-system plus infrastructure namespaces are exempted. New namespaces inherit the default without any labelling required.
This is the recommended production setup — safe default, explicit exemptions, no silent open namespaces.
Migrating from PodSecurityPolicy
Step 1: Map PSPs to PSA Profiles
Audit your existing PSPs and determine which PSA profile each maps to:
```shell
kubectl get psp -o yaml
```

For each PSP, answer:

- Does it allow `privileged: true`? → `privileged` namespace
- Does it restrict host namespaces, `hostPath` volumes, and dangerous capabilities? → `baseline`
- Does it also require non-root, dropped capabilities, and seccomp? → `restricted`
Step 2: Enable PSA in Warn Mode First
Before removing PSP, add PSA labels in warn mode to all namespaces:
```shell
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl label namespace "$ns" \
    pod-security.kubernetes.io/warn=restricted \
    pod-security.kubernetes.io/audit=restricted \
    --overwrite 2>/dev/null
done
```

Now run your workloads normally for a few days. Any pod that would violate restricted produces a warning in `kubectl apply` output and an audit log entry. Collect these violations — they're your migration backlog.
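In audit mode, the PSA controller records each violation in the API server audit log under the `pod-security.kubernetes.io/audit-violations` annotation. Here is a minimal sketch of mining those entries with `jq`; the log path and exact event shape are assumptions (a sample line is synthesized so the filter is demonstrable), so adjust to your audit policy:

```shell
# Sketch: extract PSA audit-mode violations from a JSON-lines audit log.
# A sample event is synthesized here; point jq at your real audit log instead.
cat > /tmp/audit-sample.log <<'EOF'
{"objectRef":{"namespace":"staging","name":"legacy-app"},"annotations":{"pod-security.kubernetes.io/audit-violations":"would violate PodSecurity \"restricted:latest\": runAsNonRoot != true"}}
{"objectRef":{"namespace":"staging","name":"clean-app"},"annotations":{}}
EOF

# Keep only events the PSA controller annotated; print namespace, pod, reason.
jq -c 'select(.annotations["pod-security.kubernetes.io/audit-violations"] != null)
       | {ns: .objectRef.namespace, pod: .objectRef.name,
          why: .annotations["pod-security.kubernetes.io/audit-violations"]}' \
   /tmp/audit-sample.log
```

Aggregating the `ns` and `why` fields over a few days gives you a ranked list of namespaces to fix first.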
Step 3: Fix Violations
Common violations and their fixes:
Running as root:
```yaml
# Before (violates restricted)
containers:
  - name: app
    image: myapp:latest

# After
containers:
  - name: app
    image: myapp:latest
    securityContext:
      runAsNonRoot: true
      runAsUser: 1000
```

Missing seccompProfile:

```yaml
securityContext:
  seccompProfile:
    type: RuntimeDefault
```

Capabilities not dropped:

```yaml
securityContext:
  capabilities:
    drop: ["ALL"]
```

allowPrivilegeEscalation not set to false:

```yaml
securityContext:
  allowPrivilegeEscalation: false
```

Many violations are fixable by adding securityContext fields to pod specs without changing the container image at all. For containers that genuinely need root (legacy applications), baseline is still far better than no policy.
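Putting the fixes together, a pod spec that passes the restricted profile looks roughly like this (image and names are illustrative; pod-level fields apply to all containers unless overridden):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app                  # illustrative name
spec:
  securityContext:           # pod-level defaults
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: myapp:1.4.2     # illustrative pinned tag
      securityContext:       # container-level requirements
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
```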
Step 4: Apply PSA Labels per Namespace
Once violations are resolved, apply appropriate enforcement labels:
```shell
# Application namespace — restricted profile
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest \
  --overwrite

# Infrastructure namespace — baseline (DaemonSets need some access)
kubectl label namespace monitoring \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/enforce-version=latest \
  --overwrite

# System namespace — privileged (CNI, etc.)
kubectl label namespace kube-system \
  pod-security.kubernetes.io/enforce=privileged \
  --overwrite
```

Step 5: Remove PSP
Once PSA is enforcing the desired profiles and all workloads are running correctly:
```shell
# Disable PSP admission plugin (if self-managed cluster):
# remove PodSecurityPolicy from --enable-admission-plugins in kube-apiserver.

# Delete PSP objects
kubectl delete psp --all
```

On managed clusters (EKS pre-1.25, GKE), PSP removal is handled by the control plane upgrade path.
What PSA Doesn't Cover: The Kyverno Layer
PSA enforces pod security constraints. Everything else requires a policy engine.
Common requirements PSA cannot handle:
| Requirement | Tool |
|---|---|
| Require resource limits on all containers | Kyverno / OPA |
| Restrict images to approved registries | Kyverno / OPA |
| Require specific labels on all workloads | Kyverno / OPA |
| Block `latest` image tag | Kyverno / OPA |
| Require read-only root filesystem | Kyverno (PSA has no such check) |
| Enforce naming conventions | Kyverno / OPA |
| Mutate pods to add sidecars or annotations | Kyverno (PSA cannot mutate) |
A complete security posture uses PSA for the pod security floor and Kyverno for everything else. PSA handles "is this pod spec dangerously configured?" and Kyverno handles "does this pod meet our platform standards?"
Example Kyverno policy that PSA can't express:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-readonly-rootfs
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-readonly-rootfs
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Root filesystem must be read-only."
        pattern:
          spec:
            containers:
              - securityContext:
                  readOnlyRootFilesystem: true
```

Dry-Run Testing
Before applying a PSA label to a live namespace, dry-run it to see what would be rejected:
```shell
kubectl label --dry-run=server --overwrite namespace production \
  pod-security.kubernetes.io/enforce=restricted
```

This returns a warning listing every currently-running workload in the namespace that violates the profile, without applying the label or disrupting anything.
Frequently Asked Questions
Can I have different PSA levels for different pods in the same namespace?
No. PSA is namespace-scoped — all pods in a namespace are subject to the same profile. If you need different security levels for different workloads, put them in separate namespaces. This is often a good reason to revisit namespace structure.
What happened to PSP webhooks like OPA-PSP or kube-psp-advisor?
kube-psp-advisor is a migration tool that generates PSPs from observed pod specs — useful for the PSP→PSA migration to understand what your workloads need. After migration, it's not needed. OPA-PSP plugins are obsolete with PSA available natively. Migrate to PSA + Kyverno for the full policy surface.
Does PSA work with Helm deployments?
Yes — Helm creates pods through the Kubernetes API, and PSA enforces on every pod admission regardless of how the pod was created. If a Helm chart creates pods that violate the namespace's PSA profile, the Helm install fails with a clear error. Use --dry-run on Helm installs to catch violations before applying.
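One way to surface violations before installing is to render the chart locally and let the API server evaluate the manifests without creating anything. A sketch (release and chart names are illustrative; `kubectl apply --dry-run=server` requires a reachable cluster):

```shell
# Render the chart to plain manifests, then server-side dry-run them so the
# PodSecurity admission controller checks them without persisting anything.
helm template my-release ./my-chart | kubectl apply --dry-run=server -f -
```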
Are there PSA exemptions for specific service accounts?
The cluster-wide AdmissionConfiguration supports usernames exemptions (by the username that creates the pod) and runtimeClasses exemptions. You cannot exempt specific service accounts at the namespace level via labels — namespace exemptions are all-or-nothing. For fine-grained per-workload exemptions, use Kyverno with exclude rules.
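Since service accounts authenticate as `system:serviceaccount:<namespace>:<name>`, a username exemption can effectively exempt a specific service account cluster-wide. A sketch of the relevant fragment (the username shown is illustrative):

```yaml
apiVersion: pod-security.admission.config.k8s.io/v1
kind: PodSecurityConfiguration
defaults:
  enforce: "baseline"
  enforce-version: "latest"
exemptions:
  # Pod creations by this user bypass PSA checks entirely; use sparingly.
  usernames:
    - system:serviceaccount:ci:deployer   # illustrative service-account username
  runtimeClasses: []
  namespaces: []
```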
What's the recommended profile for kube-system?
privileged. The kube-system namespace runs CNI plugins, DNS, metrics-server, kube-proxy, and various controllers that legitimately need host access, host network, and elevated capabilities. Applying baseline or restricted to kube-system on an existing cluster will break it immediately.
For the broader admission control context, see RBAC vs ABAC in Kubernetes. For policy enforcement beyond what PSA covers, see Supply Chain Security Tools for Kubernetes (Kyverno section). For the broader security hardening checklist that puts PSA in context with RBAC and NetworkPolicy, see Kubernetes Security Hardening Guide. For multi-tenant clusters where PSA is applied per-namespace with different profiles per team, see Kubernetes Multi-Tenancy Patterns.
Migrating from PodSecurityPolicy on a live cluster? Talk to us at Coding Protocols — we've run this migration on clusters with hundreds of workloads and can help you do it without downtime.


