Encrypting Kubernetes Secrets at Rest
Kubernetes stores Secrets as base64 in etcd by default — not encrypted. This tutorial shows you how to enable encryption at rest using EncryptionConfiguration, verify it actually works by reading from etcd directly, and rotate the key without downtime.
Before you begin
- A self-managed Kubernetes cluster with API server access (kubeadm)
- kubectl configured
- etcdctl installed
Kubernetes Secrets are not secret by default. The spec says so plainly: values are base64-encoded, which is reversible by anyone with read access to etcd. An attacker who gets a snapshot of your etcd data — from a misconfigured backup bucket, a compromised node, or a volume snapshot — can decode every password, token, and certificate in your cluster in seconds.
The fix is encryption at rest via EncryptionConfiguration: a configuration file, passed to the API server via a startup flag, that encrypts every Secret with AES-GCM or AES-CBC before it is written to etcd. The decryption key never enters etcd. This tutorial walks through enabling it, verifying it actually works by reading raw etcd data, and rotating keys without losing access to existing Secrets.
Everything below assumes a kubeadm-provisioned cluster where you have SSH access to the control plane node.
What You'll Build
- An EncryptionConfiguration manifest with an AES-CBC provider (compatible with all Kubernetes versions; AES-GCM is preferred on 1.28+) and an identity fallback
- API server static pod configured to load that manifest
- Proof that new Secrets are encrypted (read directly from etcd)
- A working key rotation procedure that re-encrypts all existing Secrets
Step 1: Prove Secrets Are Currently Unencrypted
Create a test Secret with a recognizable value:
```bash
kubectl create secret generic my-test-secret \
  --from-literal=password=hunter2 \
  -n default
```
Now read it directly from etcd. The etcdctl command needs the etcd TLS certificates, which kubeadm places under /etc/kubernetes/pki/etcd/:
```bash
ETCDCTL_API=3 etcdctl get \
  /registry/secrets/default/my-test-secret \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --print-value-only
```
The output is binary protobuf — pipe it through strings to see the readable parts:
```bash
ETCDCTL_API=3 etcdctl get \
  /registry/secrets/default/my-test-secret \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --print-value-only | strings
```
Example output (abbreviated, binary noise stripped):
```
k8s
v1Secret
my-test-secret
default
...
hunter2
...
```
The raw bytes are not encrypted — they're just protobuf-encoded. hunter2 is visible in plain text buried in the binary output. That's what we're fixing.
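The point bears repeating with a one-liner: base64 is an encoding, not encryption, and reversing it requires no key at all.

```shell
# base64 "protection" is reversible by anyone -- no key involved
encoded=$(printf 'hunter2' | base64)
echo "stored:  $encoded"
echo "decoded: $(printf '%s' "$encoded" | base64 -d)"
```

Anyone who can read the bytes can run `base64 -d`; that is the entire threat model this tutorial addresses.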
Step 2: Generate an Encryption Key
Both the aescbc and aesgcm providers take a base64-encoded AES key; a 32-byte key gives AES-256. Generate one from /dev/urandom:
```bash
head -c 32 /dev/urandom | base64
```
Example output:
```
7G3mHkLpNqRsVwXyZaBcDeFgHiJkLmNoPqRsTuVwXyA=
```
Copy this value. Do not lose it. There is no recovery path if you lose the key and have encrypted Secrets. Store it in your HSM, AWS KMS, or HashiCorp Vault before doing anything else. Treat it like a root CA private key.
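Before you paste the key into the config, it is worth a quick sanity check that it decodes to exactly 32 bytes (a truncated copy-paste here is painful to debug later):

```shell
# Generate a key and confirm the base64 string decodes to exactly 32 bytes
key=$(head -c 32 /dev/urandom | base64)
len=$(printf '%s' "$key" | base64 -d | wc -c | tr -d ' ')
echo "decoded key length: ${len} bytes"   # must be 32
```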
Step 3: Write the EncryptionConfiguration Manifest
On the control plane node, create the config directory:
```bash
mkdir -p /etc/kubernetes/encryption
```
Write the manifest. Replace <your-base64-key> with the key you generated above:
```bash
cat > /etc/kubernetes/encryption/config.yaml << 'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <your-base64-key>
      - identity: {}
EOF
```
Two things to understand about this config:
Provider order matters. The first provider is used for all new writes. Subsequent providers are used only for decryption (the API server tries each in order until one succeeds). With aescbc first, all new Secrets are encrypted. With identity last, the API server can still read Secrets that were written before this config was applied.
Do not remove identity: {} yet. If you remove it before re-encrypting existing Secrets, the API server will fail to decrypt them and they become inaccessible. You'll see unable to decrypt errors when any controller or workload tries to read a Secret.
On Kubernetes 1.28+, prefer aesgcm over aescbc: AES-GCM is an authenticated encryption scheme (it detects ciphertext tampering), whereas AES-CBC is not. The configuration syntax is identical; just swap the provider name.
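For illustration, the aesgcm variant of the providers block would look like this (only the provider name changes; the keys list is the same):

```yaml
providers:
  - aesgcm:
      keys:
        - name: key1
          secret: <your-base64-key>
  - identity: {}
```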
Lock down the config file. The encryption key is now on disk, so restrict access:
```bash
chmod 600 /etc/kubernetes/encryption/config.yaml
chown root:root /etc/kubernetes/encryption/config.yaml
```
Step 4: Configure the API Server
The API server runs as a static pod managed by the kubelet. Its manifest lives at /etc/kubernetes/manifests/kube-apiserver.yaml. Edit it:
```bash
vim /etc/kubernetes/manifests/kube-apiserver.yaml
```
Make three changes:
1. Add the flag to the command args:
```yaml
spec:
  containers:
    - command:
        - kube-apiserver
        - --encryption-provider-config=/etc/kubernetes/encryption/config.yaml
        # ... existing flags ...
```
2. Add a volumeMount inside the container spec:
```yaml
volumeMounts:
  - mountPath: /etc/kubernetes/encryption
    name: encryption-config
    readOnly: true
  # ... existing mounts ...
```
3. Add the corresponding volume in the pod spec:
```yaml
volumes:
  - hostPath:
      path: /etc/kubernetes/encryption
      type: DirectoryOrCreate
    name: encryption-config
  # ... existing volumes ...
```
Save the file. The kubelet watches /etc/kubernetes/manifests/ and will detect the change immediately, killing and restarting the API server pod. This causes a brief control plane interruption — typically 20–40 seconds. Plan for it.
Watch for the API server to come back:
```bash
kubectl get pods -n kube-system | grep apiserver
```
```
kube-apiserver-controlplane   1/1   Running   1   45s
```
Once it's Running, the new configuration is active. The restart count incrementing is expected — that's the kubelet cycling the pod after the manifest change.
Step 5: Verify Encryption Is Active
Create a new Secret after the API server has restarted:
```bash
kubectl create secret generic my-new-secret \
  --from-literal=password=supersecret \
  -n default
```
Read it directly from etcd:
```bash
ETCDCTL_API=3 etcdctl get \
  /registry/secrets/default/my-new-secret \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --print-value-only | head -c 30
```
Expected output:
```
k8s:enc:aescbc:v1:key1:
```
The value starts with k8s:enc:aescbc:v1:key1: followed by opaque ciphertext. Compare this to the output in Step 1. supersecret is nowhere in the etcd value. Encryption is working.
If you're using aesgcm, the prefix will be k8s:enc:aesgcm:v1:key1:.
Step 6: Re-encrypt All Existing Secrets
New Secrets are encrypted. Old Secrets — including my-test-secret from Step 1 — are still stored in plaintext. You need to force a re-write of every existing Secret through the API server so they get encrypted on the way in.
The standard approach is to read all Secrets and replace them in place:
```bash
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
```
This pipes every Secret through kubectl replace, which sends a PUT to the API server. The API server decrypts (using identity for old plaintext Secrets) and re-encrypts (using aescbc for all writes) before writing back to etcd.
On a large cluster with thousands of Secrets, this can take a few minutes and generates significant API server load. Run it during a maintenance window or throttle with xargs -P if needed.
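As a sketch of the throttling idea, here is the xargs -P fan-out pattern with echo standing in for the per-namespace kubectl replace call (the namespace names are placeholders):

```shell
# Process at most 2 namespaces concurrently; 'echo' stands in for
# 'kubectl get secrets -n <ns> -o json | kubectl replace -f -'
printf '%s\n' default kube-system team-a team-b |
  xargs -P 2 -I{} sh -c 'echo "re-encrypting namespace {}"'
```

Splitting the work per namespace bounds the size of each API request and lets -P cap the concurrent load on the API server.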
After it completes, verify that the old Secret is now encrypted:
```bash
ETCDCTL_API=3 etcdctl get \
  /registry/secrets/default/my-test-secret \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --print-value-only | head -c 30
```
```
k8s:enc:aescbc:v1:key1:
```
Now my-test-secret is also encrypted. At this point you can optionally remove identity: {} from the providers list — but leave it if you're not certain all Secrets have been re-encrypted. It doesn't hurt anything to keep it; it just means a plaintext Secret written somehow by a misconfigured API server would still be readable.
Step 7: Rotate the Encryption Key
Key rotation is necessary when a key is compromised, when your compliance policy mandates rotation, or when a team member with key access leaves. The procedure is designed to be zero-downtime but does require two API server restarts.
Step 7a: Add the new key at the top of the providers list.
Generate a new key:
```bash
head -c 32 /dev/urandom | base64
```
Update /etc/kubernetes/encryption/config.yaml with both keys. The new key goes first — it will be used for all new writes. The old key stays second — it allows decryption of Secrets already encrypted with it:
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key2
              secret: <your-new-base64-key>
            - name: key1
              secret: <your-old-base64-key>
      - identity: {}
```
We keep identity: {} here temporarily so the API server can still decrypt any Secret that was missed during the Step 6 re-encryption. Once you confirm all Secrets have the k8s:enc: prefix, remove it.
Step 7b: Restart the API server.
Changing the encryption config file by itself restarts nothing: the kubelet watches only the static pod manifest directory, and the API server reads the config once at startup rather than hot-reloading it (unless automatic reload is enabled, available on newer releases). Force the kubelet to recreate the API server pod:
```bash
# Move the manifest out of the watched directory and back to force the
# kubelet to recreate the API server pod (a plain 'touch' is often not
# enough, since the kubelet keys on file content rather than mtime)
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sleep 5
mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
```
Wait for it to come back:
```bash
kubectl get pods -n kube-system -w | grep apiserver
```
Step 7c: Re-encrypt all Secrets with the new key.
```bash
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
```
All Secrets are now encrypted with key2. Secrets previously encrypted with key1 are decrypted (using key1 from the providers list) and re-encrypted with key2 (the first key).
Step 7d: Remove the old key and restart again.
Edit /etc/kubernetes/encryption/config.yaml and remove key1:
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key2
              secret: <your-new-base64-key>
      - identity: {}
```
Restart the API server again:
```bash
# Move the manifest out and back to trigger another API server restart
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sleep 5
mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
```
key1 is now gone from the cluster. Secrets encrypted with it have all been re-encrypted with key2. If you had a backup of etcd taken before the re-encryption step, those Secrets are still encrypted with key1 — which is now gone — and cannot be decrypted. That's exactly what you want for a compromised key scenario.
Verification
Quick sanity check — confirm new Secrets have the encrypted prefix and no Secrets are stored in plaintext:
```bash
# Confirm a specific Secret is encrypted
ETCDCTL_API=3 etcdctl get \
  /registry/secrets/default/my-new-secret \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --print-value-only | head -c 20
```
```
k8s:enc:aescbc:v1:key
```
```bash
# Scan all Secrets for any that are NOT encrypted
ETCDCTL_API=3 etcdctl get /registry/secrets/ \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --prefix --keys-only | while read -r key; do
    [ -n "$key" ] || continue  # --keys-only emits a blank line after each key
    val=$(ETCDCTL_API=3 etcdctl get "$key" \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      --print-value-only 2>/dev/null)
    echo "$val" | grep -q "^k8s:enc:" || echo "UNENCRYPTED: $key"
done
```
If the script prints nothing, every Secret in the cluster is encrypted. If it prints any UNENCRYPTED: lines, run the re-encryption command from Step 6 again.
Common Mistakes
1. Skipping the re-encryption step. This is the most common mistake. After enabling EncryptionConfiguration, only new Secrets are encrypted. Existing ones stay in plaintext until you explicitly re-write them. If you audit etcd and wonder why old Secrets are unencrypted, this is why.
2. Removing identity: {} before re-encrypting. If you remove the identity provider while any Secret in etcd is still in plaintext, the API server will fail to decrypt it. Every controller that reads that Secret will get an error. Pods using it as an environment variable or volume will fail to start. Add identity back immediately if this happens, re-encrypt everything, then remove it again.
3. Losing the encryption key. There is no recovery. The API server cannot decrypt Secrets without the key. You cannot "reset" the key — the ciphertext in etcd is permanently unreadable. Store the key in AWS KMS, GCP Cloud KMS, Azure Key Vault, or HashiCorp Vault. Never put it only on the control plane node disk.
4. Encrypting only Secrets. The EncryptionConfiguration spec supports multiple resource types. If you store sensitive data in ConfigMaps, encrypt those too. The resources list in the manifest accepts any API resource — configmaps, events, and even custom resources.
5. Not accounting for the API server restart. Editing the static pod manifest restarts the API server within seconds of saving the file. On a single-control-plane cluster this causes a full control plane interruption. On a multi-control-plane cluster, roll the change one node at a time to maintain quorum. Either way, notify your team before making this change in production.
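The multi-resource configuration mentioned in mistake 4 might look like this (a hypothetical snippet; the provider block is the same one used throughout this tutorial):

```yaml
resources:
  - resources:
      - secrets
      - configmaps
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <your-base64-key>
      - identity: {}
```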
Cleanup
Remove the test Secrets created during this tutorial:
```bash
kubectl delete secret my-test-secret my-new-secret -n default
```
The EncryptionConfiguration and API server flags remain in place — that's your permanent encryption setup.
Official References
- Encrypting Secret Data at Rest — Official Kubernetes guide covering EncryptionConfiguration, providers, and key rotation
- Encryption at Rest Configuration — API reference for the EncryptionConfiguration resource and all supported providers
- Good Practices for Kubernetes Secrets — Kubernetes docs on secret hygiene, rotation, and access control
- etcd Security — etcd docs on securing etcd at rest and in transit
We built Podscape to simplify Kubernetes workflows like this — logs, events, and cluster state in one interface, without switching tools.
Struggling with this in production?
We help teams fix these exact issues. Our engineers have deployed these patterns across production environments at scale.