Setting Up Kubernetes RBAC from Scratch
A step-by-step guide to configuring Role-Based Access Control in Kubernetes. You'll create users, define roles with least-privilege permissions, bind them, and verify access — all with real kubectl commands.
Before you begin
- kubectl installed and configured
- Access to a running Kubernetes cluster
- Basic familiarity with Kubernetes namespaces and pods
Kubernetes RBAC is one of those things that's easy to ignore — until you're running in production and realise that half your engineers have cluster-admin because that was the path of least resistance.
This tutorial walks you through setting up RBAC properly: creating users and service accounts, writing roles with least-privilege permissions, binding them, and verifying the access is exactly what you intended.
By the end you'll have a repeatable pattern you can apply to every namespace in your cluster.
What You'll Build
A three-tier access model for a production namespace:
- Reader — can view pods, services, and deployments. Cannot modify anything.
- Developer — can view everything + exec into pods + manage ConfigMaps and Secrets.
- CI Bot — a ServiceAccount that can update deployments (for rolling releases) but nothing else.
Step 1: Create the Namespace
Start with a clean namespace to test in:
kubectl create namespace production
Verify it exists:
kubectl get namespace production
# NAME STATUS AGE
# production Active 5s
Step 2: Create the Reader Role
A Role grants permissions within a single namespace. This one allows viewing pods, services, and deployments but nothing else:
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: reader
namespace: production
rules:
- apiGroups: [""]
resources: ["pods", "services", "endpoints", "configmaps"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources: ["deployments", "replicasets"]
verbs: ["get", "list", "watch"]
EOF
A few things worth noting:
apiGroups: [""]means the core API group — pods, services, configmaps all live here.apiGroups: ["apps"]covers deployments and replicasets, which live in the apps API group.- Verbs
get,list,watchare read-only. Addingcreate,update,delete, orpatchwould grant write access.
Verify the role was created:
kubectl get role reader -n production -o yaml
Step 3: Create the Developer Role
The developer role extends the reader role with exec access and the ability to manage ConfigMaps and Secrets:
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: developer
namespace: production
rules:
- apiGroups: [""]
resources: ["pods", "services", "endpoints", "configmaps", "secrets"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
resources: ["deployments", "replicasets", "statefulsets"]
verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: [""]
resources: ["pods/exec", "pods/log", "pods/portforward"]
verbs: ["create", "get"]
EOF
The pods/exec sub-resource is what controls kubectl exec. Without it, even if a user can get pods, they can't exec into them. The pods/log sub-resource similarly controls kubectl logs.
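Once the bindings from Step 6 are in place, the difference is easy to confirm with impersonation. A sketch, assuming alice holds only the reader role:

```shell
# The reader role grants "get" on pods...
kubectl auth can-i get pods --as alice -n production          # yes
# ...but it has no rule for the pods/exec sub-resource
kubectl auth can-i create pods/exec --as alice -n production  # no
```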
Step 4: Create a ServiceAccount for the CI Bot
ServiceAccounts are for machines, not humans. Create one for your CI pipeline:
kubectl create serviceaccount ci-bot -n production
Verify:
kubectl get serviceaccount ci-bot -n production
# NAME SECRETS AGE
# ci-bot 0 3s
Now create its role — it only needs to update deployments:
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: ci-deployer
namespace: production
rules:
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
EOF
This lets the CI bot update a deployment image tag but nothing else. It can't create new deployments, touch secrets, or exec into pods.
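In practice, the rolling-release step in a pipeline is a patch like the following. The deployment and image names here are placeholders, not part of the tutorial's setup:

```shell
# Allowed by ci-deployer: patching the pod template image on an existing deployment
kubectl set image deployment/my-app app=registry.example.com/my-app:v1.2.3 -n production

# Denied by ci-deployer: no "delete" verb, and no rule for secrets at all
kubectl delete deployment my-app -n production   # Forbidden
kubectl get secrets -n production                # Forbidden
```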
Step 5: Create User Identities
Kubernetes doesn't manage users directly — it delegates to your identity provider. For this tutorial, we'll create client certificates, which work with any cluster.
Create a private key and certificate signing request for a user named alice:
# Generate private key
openssl genrsa -out alice.key 2048
# Generate CSR
openssl req -new -key alice.key -out alice.csr -subj "/CN=alice/O=developers"
The /CN=alice becomes the username in Kubernetes. The /O=developers sets the group — you can use groups in RoleBindings to manage multiple users at once.
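Because alice's certificate carries /O=developers, you can also bind a role to the whole group instead of naming users one by one — a sketch (not applied in this tutorial):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-developer
  namespace: production
subjects:
- kind: Group
  name: developers        # matches the certificate's O= field
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
```

Any future certificate issued with /O=developers picks up these permissions automatically.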
Submit the CSR to Kubernetes for signing:
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: alice
spec:
request: $(cat alice.csr | base64 | tr -d '\n')
signerName: kubernetes.io/kube-apiserver-client
expirationSeconds: 86400
usages:
- client auth
EOF
Approve it:
kubectl certificate approve alice
Extract the signed certificate:
kubectl get csr alice -o jsonpath='{.status.certificate}' | base64 -d > alice.crt
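It's worth confirming the subject before handing the certificate over, since Kubernetes derives the username and group from it (exact field formatting varies by OpenSSL version):

```shell
# Print the subject of the signed certificate; expect CN=alice and O=developers
openssl x509 -in alice.crt -noout -subject
```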
Step 6: Bind Roles to Users and ServiceAccounts
A RoleBinding connects a Role to a user, group, or ServiceAccount within a namespace.
Bind the reader role to alice:
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: alice-reader
namespace: production
subjects:
- kind: User
name: alice
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: reader
apiGroup: rbac.authorization.k8s.io
EOF
Bind the ci-deployer role to the ci-bot ServiceAccount:
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ci-bot-deployer
namespace: production
subjects:
- kind: ServiceAccount
name: ci-bot
namespace: production
roleRef:
kind: Role
name: ci-deployer
apiGroup: rbac.authorization.k8s.io
EOF
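A quick way to confirm both bindings landed where you expect:

```shell
# List bindings in the namespace, then inspect each binding's subjects and roleRef
kubectl get rolebindings -n production
kubectl describe rolebinding alice-reader -n production
kubectl describe rolebinding ci-bot-deployer -n production
```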
Step 7: Configure kubectl for Alice
Add alice's credentials to your kubeconfig:
kubectl config set-credentials alice \
--client-certificate=alice.crt \
--client-key=alice.key
kubectl config set-context alice-production \
--cluster=$(kubectl config current-context | cut -d@ -f2) \
--user=alice \
--namespace=production
The --cluster substitution assumes your current context is named user@cluster (as kubeadm-generated kubeconfigs are). If yours isn't, pass the cluster name directly — kubectl config get-clusters lists the options.
Switch to alice's context:
kubectl config use-context alice-production
Step 8: Verify Access
Test what alice can do — and what she can't:
# Should succeed — alice has read access to pods
kubectl get pods -n production
# Should succeed
kubectl get deployments -n production
# Should fail — alice only has read access, not exec
kubectl exec -it some-pod -n production -- /bin/sh
# Error from server (Forbidden): pods "some-pod" is forbidden:
# User "alice" cannot create resource "pods/exec" in API group ""
# Should fail — alice has no access outside production
kubectl get pods -n default
# Error from server (Forbidden)
Switch back to your admin context:
kubectl config use-context <your-admin-context>
Use kubectl auth can-i to check permissions without switching contexts — faster for bulk verification:
# Check what alice can do
kubectl auth can-i get pods --as alice -n production # yes
kubectl auth can-i delete pods --as alice -n production # no
kubectl auth can-i create deployments --as alice -n production # no
# Check the ci-bot ServiceAccount
kubectl auth can-i update deployments \
--as system:serviceaccount:production:ci-bot \
-n production # yes
kubectl auth can-i delete secrets \
--as system:serviceaccount:production:ci-bot \
-n production # no
The --as flag impersonates any user or service account without needing their credentials. It's the fastest way to audit RBAC in bulk.
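A small loop makes the bulk audit explicit — a sketch that checks several verbs for one subject in a single pass:

```shell
# Check a handful of verbs on pods for alice; can-i prints "yes" or "no" per verb
for verb in get list watch create update delete; do
  printf '%-7s pods: ' "$verb"
  kubectl auth can-i "$verb" pods --as alice -n production
done
```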
Step 9: Get the CI Bot Token
To use the ServiceAccount from your CI pipeline, you need a token. Pods running as the ServiceAccount get a short-lived projected token auto-mounted, but a pipeline running outside the cluster needs one issued explicitly — here a long-lived one:
kubectl create token ci-bot -n production --duration=8760h
This outputs a JWT. Store it in your CI system's secret manager (GitHub Actions secrets, GitLab CI variables, etc.) and use it with:
kubectl config set-credentials ci-bot \
--token=<token-from-above>
For production, prefer short-lived tokens via the TokenRequest API or use Workload Identity (AWS IRSA, GCP Workload Identity Federation) instead of static tokens.
Common Mistakes to Avoid
Granting cluster-admin "just to get it working" — this bypasses RBAC entirely and grants unrestricted access to the entire cluster. There's almost no legitimate reason for a non-admin human or service account to have this role in production.
Using ClusterRole when you need Role — a ClusterRoleBinding that binds a ClusterRole grants permissions across all namespaces, not just the target namespace. Use a RoleBinding (even for a ClusterRole) to scope it to one namespace.
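Concretely, even the built-in view ClusterRole can be scoped to a single namespace with a plain RoleBinding — the roleRef points at a ClusterRole, but the grant lives only in production (the user name here is hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-production-only
  namespace: production
subjects:
- kind: User
  name: bob               # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole       # reuses a cluster-wide role definition...
  name: view              # ...scoped to this one namespace by the RoleBinding
  apiGroup: rbac.authorization.k8s.io
```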
Wildcards in rules — resources: ["*"] and verbs: ["*"] are rarely appropriate. Define only what the subject actually needs.
Not auditing regularly — roles accumulate. Run this periodically to list all RoleBindings and their subjects:
kubectl get rolebindings -A -o wide
kubectl get clusterrolebindings -o wide
Cleanup
kubectl delete namespace production
kubectl delete csr alice
kubectl config delete-context alice-production
kubectl config delete-user alice
rm alice.key alice.crt alice.csr
What's Next
- Set up an OPA Gatekeeper policy that prevents any RoleBinding from granting cluster-admin
- Integrate with your identity provider (OIDC) so users authenticate with SSO instead of client certificates
- Use kube-rbac-proxy to add RBAC enforcement to custom metrics endpoints
- Automate RBAC auditing with rbac-lookup or kubectl-who-can
We built Podscape to simplify Kubernetes workflows like this — logs, events, and cluster state in one interface, without switching tools.
Struggling with this in production?
We help teams fix these exact issues. Our engineers have deployed these patterns across production environments at scale.