Pod Security Admission: Replace PodSecurityPolicy the Right Way
PodSecurityPolicy was removed in Kubernetes 1.25. Its replacement, Pod Security Admission, is built into the API server and requires no CRDs. This tutorial walks you through labelling namespaces with the right security profile and fixing everything that breaks.
Before you begin
- A running Kubernetes cluster (1.25+)
- kubectl configured
PodSecurityPolicy (PSP) was removed in Kubernetes 1.25. It was powerful but widely misunderstood — misconfigured PSPs silently allowed what they should have blocked, or blocked what they should have allowed. The admission plugin required a matching ClusterRole and binding just to function, and if you got any part of that wrong, pods would fail to schedule with cryptic errors that pointed nowhere near the actual problem.
Pod Security Admission (PSA) replaces it with a simpler, opinionated model built directly into the API server. No CRDs, no webhooks, no external dependencies. You label a namespace, and the API server enforces the policy at admission time. The tradeoff is less flexibility — you can't write arbitrary rules — but for 90% of clusters that's exactly the right tradeoff. If you need more granularity, reach for OPA Gatekeeper or Kyverno. For most workloads, PSA is sufficient and operationally much simpler.
What You'll Build
By the end of this tutorial you'll have:
- A production namespace with the Restricted profile in enforce mode — non-compliant pods are rejected at the API server; they never get scheduled
- A staging namespace with the Baseline profile in warn mode — deploys succeed, but the API response includes warnings your CI pipeline can parse
- A compliant pod spec, built step by step, that passes Restricted, with an explanation of every required field
The Three Profiles
PSA ships with three built-in profiles. They're cumulative — each profile enforces everything the previous one does, plus more.
Privileged — no restrictions at all. This is the default for unlabelled namespaces. System components in kube-system run as root and require host access, so avoid labelling that namespace with anything more restrictive.
Baseline — prevents known privilege escalation vectors. Blocks host networking, host PID/IPC namespaces, privileged mode, and dangerous capabilities. Most existing workloads that aren't doing anything sketchy will pass Baseline without modification.
Restricted — follows the current Kubernetes pod hardening best practices. Requires non-root user, drops all Linux capabilities, mandates a seccomp profile, and blocks volume types that can expose host data. This is where you want production workloads to land.
Each profile is applied per mode using namespace labels:
These are the six label keys PSA recognises — you mix and match modes and profiles per namespace:
```text
pod-security.kubernetes.io/enforce: restricted
pod-security.kubernetes.io/enforce-version: latest
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: latest
pod-security.kubernetes.io/audit: restricted
pod-security.kubernetes.io/audit-version: latest
```
The three modes behave differently:
- enforce — the API server rejects the pod creation request entirely. The pod never exists. This is the only mode that actually stops bad pods.
- warn — the API server accepts the request but includes a Warning: header in the response. kubectl prints these to stderr. Your CI tooling can fail on warnings if you configure it to.
- audit — the API server accepts the request and adds an annotation to the audit log entry. Requires audit logging to be configured; useful for detecting violations without affecting deploys.
You can mix and match. A common migration pattern: set warn and audit to restricted, leave enforce at baseline. This lets you see what would break before you commit to blocking it.
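That migration pattern can also be expressed declaratively. Here's a sketch of a Namespace manifest using the labels above — the namespace name my-app is a placeholder; apply it with kubectl apply -f:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app   # hypothetical namespace name
  labels:
    # Block only Baseline violations for now
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: latest
    # Surface what Restricted would reject, without blocking anything
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: latest
```

Keeping the labels in a manifest under version control means the policy posture of each namespace is reviewable in a pull request rather than living only in cluster state.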
Step 1: Set Up the Staging Namespace with Warn Mode
Create the namespace and label it with Baseline in warn mode. We're not blocking anything yet, just surfacing issues.
```bash
kubectl create namespace staging
```
```bash
kubectl label namespace staging \
  pod-security.kubernetes.io/warn=baseline \
  pod-security.kubernetes.io/warn-version=latest
```
Now deploy a pod that violates Baseline — for example, one requesting host networking:
```bash
kubectl apply -n staging -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: host-net-test
spec:
  hostNetwork: true
  containers:
  - name: nginx
    image: nginx:1.25
EOF
```
The pod is created, but kubectl prints a warning before the success confirmation:
```text
Warning: would violate PodSecurity "baseline:latest": host namespaces (hostNetwork=true)
pod/host-net-test created
```
This is the value of warn mode during migration. Your staging deploy pipeline keeps working, but you see exactly which fields need fixing before you turn on enforcement. Clean up:
```bash
kubectl delete pod host-net-test -n staging
```
Step 2: Set Up the Production Namespace with Restricted Enforcement
Create the production namespace and apply Restricted enforcement directly:
```bash
kubectl create namespace production
```
```bash
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/audit-version=latest \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/warn-version=latest
```
I'm setting all three modes here. enforce blocks non-compliant pods. warn surfaces issues in the kubectl output so developers see what's wrong without having to go look at events. audit logs violations for your security team.
Now try to deploy a plain nginx pod with no security context:
```bash
kubectl apply -n production -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-plain
spec:
  containers:
  - name: nginx
    image: nginx:1.25
EOF
```
The API server rejects it immediately:
```text
Error from server (Forbidden): error when creating "STDIN": pods "nginx-plain" is forbidden:
violates PodSecurity "restricted:latest":
allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false),
unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]),
runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true),
seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
```
The error message is verbose but specific. Every violation is named. This is a significant improvement over PSP, which would often give you a single "forbidden" error that told you nothing about which policy field was the problem.
Step 3: Fix the Pod Spec
Here's the broken spec that got rejected above:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-plain
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    # no securityContext at all
```
Here's the compliant version:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-secure
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: nginx
    image: nginxinc/nginx-unprivileged:stable
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
```
Note: I switched to nginxinc/nginx-unprivileged because the official nginx image runs as root by default. More on that in the Common Mistakes section.
Here's why each field is required for Restricted:
securityContext.runAsNonRoot: true — The API server checks whether the container's effective UID is 0. If it is, the pod is rejected. This field alone isn't enough; if the image's USER instruction specifies root (or is absent), you also need runAsUser.
securityContext.runAsUser: 1000 — Sets the UID explicitly. Combined with runAsNonRoot, this gives you two layers: the pod spec declares the intent, and the non-zero UID makes it concrete. Use any non-zero UID your application supports.
securityContext.seccompProfile.type: RuntimeDefault — Enables the container runtime's default seccomp profile, which filters the container's syscalls down to a reasonable allowlist. The Restricted standard has required an explicit seccomp profile since policy version v1.19, and it's the most commonly missed field when migrating from PSP, because PSP had no equivalent mandatory field. Set it at the pod level (spec.securityContext.seccompProfile) to apply it to all containers as a default, or set it per-container to override the pod default — Restricted accepts either placement.
securityContext.allowPrivilegeEscalation: false — Prevents the container process from gaining more privileges than its parent by setting the no_new_privs flag, so setuid binaries and file capabilities can no longer grant additional privileges. This is a container-level field; it must be set on each container in the pod, not at the pod level.
securityContext.capabilities.drop: ["ALL"] — Drops every Linux capability from the container. Kubernetes assigns a default set of capabilities to every container (things like NET_BIND_SERVICE, CHOWN, SETUID). Restricted requires you to drop all of them. If your application genuinely needs a specific capability, add it to capabilities.add — but justify it, because each one is a security surface.
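For the add-back case, the Pod Security Standards permit exactly one capability under Restricted: NET_BIND_SERVICE (useful for binding ports below 1024 as non-root). A sketch of what that container-level securityContext would look like — the container name and image here are illustrative:

```yaml
containers:
- name: web
  image: nginxinc/nginx-unprivileged:stable
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop:
      - ALL                # Restricted requires dropping everything first
      add:
      - NET_BIND_SERVICE   # the only capability Restricted allows adding back
```

Any other entry under capabilities.add will cause the pod to be rejected by the Restricted profile.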
Step 4: Deploy the Compliant Pod
Apply the fixed spec:
```bash
kubectl apply -n production -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-secure
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: nginx
    image: nginxinc/nginx-unprivileged:stable
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
EOF
```
Expected output:
```text
pod/nginx-secure created
```
No warnings, no rejections. Confirm it's running:
```bash
kubectl get pod nginx-secure -n production
```

```text
NAME           READY   STATUS    RESTARTS   AGE
nginx-secure   1/1     Running   0          15s
```
Verification
PSA rejects bare pod creates synchronously — the API server returns a 403 Forbidden immediately and no Event is written. Events with reason=FailedCreate only appear when a Deployment or ReplicaSet controller tries to create a pod and is blocked; for direct kubectl apply of a Pod resource, the rejection shows up as an error in your terminal, not in the event stream.
To verify PSA is enforcing, re-run the non-compliant pod:
```bash
# Verify PSA is enforcing — try the non-compliant pod again
# (assumes you saved the broken spec from Step 3 as nginx-plain.yaml)
kubectl apply -f nginx-plain.yaml -n production
# Expected: Error from server (Forbidden): ... violates PodSecurity "restricted:latest"

# Verify the compliant pod is running
kubectl get pod nginx-secure -n production
```
Verify the labels are applied correctly:
```bash
kubectl get namespace production --show-labels
# NAME         STATUS   AGE   LABELS
# production   Active   10m   pod-security.kubernetes.io/audit=restricted,...
```
Check what a service account can create in the namespace:
```bash
kubectl auth can-i create pods \
  --as=system:serviceaccount:production:default \
  -n production
# yes
```
The service account can still attempt to create pods — PSA doesn't restrict RBAC. It rejects pods at admission time regardless of who created them. RBAC and PSA are complementary controls.
Common Mistakes
1. Setting runAsNonRoot without runAsUser
runAsNonRoot: true tells Kubernetes to check the effective UID at container start. But if the image has no USER instruction and you don't set runAsUser, the container defaults to root (UID 0), and the pod fails to start with "container has runAsNonRoot and image will run as root". Set both fields: runAsNonRoot declares the policy intent; runAsUser is the mechanism.
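A minimal illustration of the failure mode and its fix, assuming an image with no USER instruction — these are securityContext fragments, not complete pod specs:

```yaml
# Broken: the image's effective UID is 0, so the container fails at start
# with CreateContainerConfigError:
# "container has runAsNonRoot and image will run as root"
securityContext:
  runAsNonRoot: true

# Fixed: pin a non-zero UID alongside the intent
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
```

The same applies at the container level; whichever scope you use, the two fields belong together.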
2. Jumping straight to enforce in production
Don't. Label existing namespaces with warn and audit first. Leave them for at least a week and check your audit logs. In a real cluster you'll have DaemonSets, monitoring agents, and ingress controllers that run in application namespaces and need privileged access. Finding this out after you've turned on enforcement means an outage. Finding it out in warn mode means a backlog item.
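Before flipping enforce on for real, you can ask the API server what would break without changing anything. kubectl's server-side dry run evaluates the label against every existing pod in the namespace and returns the violations as warnings — this technique comes from the official PSP migration guide (the commands below require a live cluster):

```shell
# Preview what enforce=restricted would reject in staging, without applying it
kubectl label --dry-run=server --overwrite namespace staging \
  pod-security.kubernetes.io/enforce=restricted

# Or sweep every namespace in the cluster at once
kubectl label --dry-run=server --overwrite namespace --all \
  pod-security.kubernetes.io/enforce=restricted
```

Each warning names the namespace, the workload, and the specific field that would violate the profile, which makes a good input for the migration backlog.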
3. Forgetting seccompProfile
This is the most commonly missed Restricted field, especially if you're migrating PSP policies by hand. PSP had no mandatory seccomp field — you could add annotations for it, but nothing forced you to. Restricted in PSA requires it. Your pod will pass every other check and then fail on this one. Check for it first.
4. Applying labels to kube-system
Don't touch kube-system. System components — kube-proxy, CoreDNS, CSI drivers, the CNI plugin — run with elevated privileges because they need host network access, host PID, and capabilities like NET_ADMIN. Some managed Kubernetes providers (EKS, GKE, AKS) pre-configure exemptions for system namespaces, but this is not guaranteed. On a self-managed cluster, an enforce: restricted label on kube-system is applied like any other namespace label — and it will break system components.
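On self-managed clusters, the safer mechanism is the PodSecurity admission plugin's own exemption list, configured on the API server rather than via labels. A sketch of the admission configuration file, passed to kube-apiserver with --admission-control-config-file (the default profile shown here is an example choice, not a requirement):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    defaults:
      enforce: baseline        # cluster-wide default for unlabelled namespaces
      enforce-version: latest
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces:
      - kube-system            # never enforce against system components
```

Exempted namespaces skip PSA entirely, so system pods keep working no matter what labels land on them.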
5. Using enforce-version: latest in production
latest means "whatever the current Kubernetes version considers Restricted." When you upgrade Kubernetes, the Restricted profile can gain new requirements. If you're on latest, a cluster upgrade can silently change what's allowed and start rejecting pods that previously passed. In production, pin to a specific version — v1.30, v1.31 — and update it deliberately after reviewing the changelog. Use latest in development where surprises are acceptable.
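A pinned production namespace might look like this — v1.31 is an example version; pin to whatever your cluster currently runs:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: "v1.31"   # pinned; bump deliberately after upgrades
```

With the version pinned, a cluster upgrade changes nothing until you bump the label yourself, ideally in the same change that addresses whatever the new profile version requires.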
Cleanup
```bash
kubectl delete namespace production staging
```
What's Next
If you need more expressive policy — for example, allowing exceptions per workload rather than per namespace — look at Kyverno, which lets you write validate, mutate, and generate policies as Kubernetes resources.
Official References
- Pod Security Admission — Kubernetes docs covering all modes, profiles, and label syntax
- Pod Security Standards — Full specification of Privileged, Baseline, and Restricted profiles with every field listed
- PodSecurityPolicy Deprecation: Past, Present, and Future — The official explanation of why PSP was removed and the migration path
- Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller — Step-by-step migration guide
We built Podscape to simplify Kubernetes workflows like this — logs, events, and cluster state in one interface, without switching tools.
Struggling with this in production?
We help teams fix these exact issues. Our engineers have deployed these patterns across production environments at scale.