HashiCorp Vault: Kubernetes Auth and Dynamic Secrets
Configure Vault's Kubernetes auth method so pods authenticate using their ServiceAccount token, then generate short-lived database credentials on demand instead of storing static passwords in Kubernetes Secrets.
Before you begin
- A running Kubernetes cluster
- kubectl and Helm installed
- Basic understanding of Vault concepts (policies, roles, paths)
- A PostgreSQL database (or any Vault-supported database)
Static database passwords in Kubernetes Secrets have three problems: they never rotate, they're visible to anyone with RBAC read access to the namespace, and when they leak you have to update every service that uses them.
Vault's Kubernetes auth method and dynamic secrets engine solve all three. A pod authenticates using its ServiceAccount JWT (which Kubernetes already provides), gets a short-lived database password that's only valid for its session, and Vault revokes it automatically when the lease expires.
Architecture
Pod starts
→ Pod has a ServiceAccount JWT token
→ Pod calls Vault: "here's my JWT, I want the role db-app-role"
→ Vault validates JWT with Kubernetes API
→ Vault checks: does the ServiceAccount match db-app-role's binding?
→ Vault generates a new PostgreSQL user with 1-hour TTL
→ Pod receives username + password
→ 1 hour later, Vault drops the PostgreSQL user automatically
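Under the hood, the login step in this flow is a plain HTTP POST to Vault's Kubernetes auth endpoint. As a sketch (assuming `jq` is installed; the JWT value is a placeholder), the request body the pod sends looks like this:

```shell
# Build the login payload a pod POSTs to $VAULT_ADDR/v1/auth/kubernetes/login.
# The JWT here is a placeholder for the pod's real ServiceAccount token.
JWT="<serviceaccount-jwt>"
BODY=$(jq -n --arg role "db-app-role" --arg jwt "$JWT" \
  '{role: $role, jwt: $jwt}')
echo "$BODY"
```

Vault's response to this call contains a client token scoped to the policies bound to `db-app-role`.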
Step 1: Deploy Vault with Helm
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install vault hashicorp/vault \
--namespace vault \
--create-namespace \
--set "server.ha.enabled=false" \
--set "server.dev.enabled=true" # Dev mode: unsealed, in-memory, root token = "root"
Dev mode is not for production — it resets on restart. For production, use the HA setup with Raft storage.
Wait for Vault to start:
kubectl wait --for=condition=Ready pod/vault-0 -n vault --timeout=60s
Point the Vault CLI at the dev server:
# Port-forward for local CLI access
kubectl port-forward vault-0 8200:8200 -n vault &
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN='root' # Dev mode token
vault status
Step 2: Enable the Kubernetes Auth Method
vault auth enable kubernetes
Configure it to talk to the Kubernetes API:
# Get the Kubernetes API server address from inside the cluster
KUBE_CA=$(kubectl config view --raw --minify --flatten \
-o jsonpath='{.clusters[].cluster.certificate-authority-data}')
KUBE_HOST=$(kubectl config view --raw --minify --flatten \
-o jsonpath='{.clusters[].cluster.server}')
vault write auth/kubernetes/config \
kubernetes_host="$KUBE_HOST" \
kubernetes_ca_cert="$(echo "$KUBE_CA" | base64 -d)"
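It is worth sanity-checking that the decoded value is actually a PEM certificate before writing it into the config. A minimal sketch, using a synthetic base64 value in place of the real `KUBE_CA` from above:

```shell
# Sanity check: the decoded CA bundle should start with a PEM header.
# A synthetic certificate stands in for the real KUBE_CA extracted above.
KUBE_CA=$(printf -- '-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n' \
  | base64 | tr -d '\n')
echo "$KUBE_CA" | base64 -d | head -1
# -----BEGIN CERTIFICATE-----
```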
When running inside the cluster (the Vault pod itself), Vault can discover these automatically:
vault write auth/kubernetes/config \
kubernetes_host="https://kubernetes.default.svc.cluster.local:443"
Step 3: Enable the Database Secrets Engine
vault secrets enable database
Configure a connection to PostgreSQL:
vault write database/config/my-postgres \
plugin_name=postgresql-database-plugin \
allowed_roles="app-role,readonly-role" \
connection_url="postgresql://{{username}}:{{password}}@postgres.production.svc.cluster.local:5432/appdb" \
username="vault_admin" \
password="vault_admin_password"
vault_admin must have CREATEROLE and LOGIN permissions in PostgreSQL:
CREATE USER vault_admin WITH CREATEROLE LOGIN PASSWORD 'vault_admin_password';
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO vault_admin WITH GRANT OPTION;
Create a role that Vault uses to generate credentials:
vault write database/roles/app-role \
db_name=my-postgres \
creation_statements="
CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";
" \
default_ttl="1h" \
max_ttl="24h"
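At issue time, Vault substitutes `{{name}}`, `{{password}}`, and `{{expiration}}` into these statements before running them against PostgreSQL. A rough simulation of that substitution with `sed` (the generated values below are made up for illustration):

```shell
# Simulate how Vault expands the creation_statements template at issue time.
# The username, password, and expiry values are illustrative placeholders.
TEMPLATE='CREATE ROLE "{{name}}" WITH LOGIN PASSWORD '\''{{password}}'\'' VALID UNTIL '\''{{expiration}}'\'';'
echo "$TEMPLATE" \
  | sed 's/{{name}}/v-kubernet-app-role-AbCdEf/' \
  | sed 's/{{password}}/A1b2C3d4/' \
  | sed 's/{{expiration}}/2025-01-01 00:00:00+00/'
```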
Test credential generation manually:
vault read database/creds/app-role
# Key Value
# lease_duration 1h
# username v-root-app-role-AbCdEf123456
# password A1b2C3d4E5f6G7h8I9j0
The generated user exists in PostgreSQL and disappears when the lease expires.
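In scripts you will usually want the JSON form, `vault read -format=json database/creds/app-role`, and extract fields with `jq`. A sketch using a sample response trimmed to the relevant fields:

```shell
# Parse credentials from `vault read -format=json database/creds/app-role`.
# RESPONSE is a trimmed sample of that command's JSON output.
RESPONSE='{"lease_id":"database/creds/app-role/abc123","lease_duration":3600,"data":{"username":"v-root-app-role-AbCdEf123456","password":"A1b2C3d4E5f6G7h8I9j0"}}'
DB_USER=$(echo "$RESPONSE" | jq -r '.data.username')
DB_PASS=$(echo "$RESPONSE" | jq -r '.data.password')
echo "$DB_USER"
# v-root-app-role-AbCdEf123456
```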
Step 4: Create a Vault Policy
The policy defines what a pod can access:
vault policy write my-app - <<EOF
# Read dynamic database credentials
path "database/creds/app-role" {
capabilities = ["read"]
}
# Renew leases
path "sys/leases/renew" {
capabilities = ["update"]
}
# Revoke own leases
path "sys/leases/revoke" {
capabilities = ["update"]
}
EOF
Step 5: Create a Kubernetes Auth Role
This role says: "pods in namespace production with ServiceAccount my-app can use the my-app policy":
vault write auth/kubernetes/role/my-app \
bound_service_account_names=my-app \
bound_service_account_namespaces=production \
policies=my-app \
ttl=1h
Create the ServiceAccount in Kubernetes:
kubectl create serviceaccount my-app -n production
Step 6: Use Vault Agent Sidecar for Secret Injection
The Vault Agent sidecar runs alongside your pod, authenticates to Vault, and writes secrets to a shared volume. Your application reads files instead of calling Vault directly.
Enable the sidecar injector (installed with the Helm chart, but needs the mutating webhook):
helm upgrade vault hashicorp/vault \
--namespace vault \
--set "injector.enabled=true" \
--reuse-values
Annotate your deployment to inject secrets:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "my-app"
        vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/app-role"
        vault.hashicorp.com/agent-inject-template-db-creds: |
          {{- with secret "database/creds/app-role" -}}
          export DB_USERNAME="{{ .Data.username }}"
          export DB_PASSWORD="{{ .Data.password }}"
          {{- end }}
    spec:
      serviceAccountName: my-app
      containers:
        - name: app
          image: my-app:latest
          command: ["/bin/sh", "-c"]
          args:
            - source /vault/secrets/db-creds && exec /app/server
The sidecar writes to /vault/secrets/db-creds. Your container sources it to get DB_USERNAME and DB_PASSWORD as environment variables.
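The sourcing step is plain POSIX shell. A self-contained sketch of what the entrypoint does, using a stand-in file and values in place of the real `/vault/secrets/db-creds`:

```shell
# Simulate the injected file, then source it as the container entrypoint does.
# The /tmp path and credential values are stand-ins for /vault/secrets/db-creds.
cat > /tmp/db-creds <<'EOF'
export DB_USERNAME="v-kubernet-app-role-AbCdEf"
export DB_PASSWORD="A1b2C3d4"
EOF
. /tmp/db-creds
echo "$DB_USERNAME"
# v-kubernet-app-role-AbCdEf
```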
Step 7: Verify Injection
kubectl get pod -n production -l app=my-app
# Check the init container ran successfully
kubectl describe pod <pod-name> -n production | grep -A 10 "vault-agent-init"
# Check the secret file exists
kubectl exec -n production <pod-name> -c app -- cat /vault/secrets/db-creds
# export DB_USERNAME="v-kubernet-app-role-AbCdEf"
# export DB_PASSWORD="A1b2C3d4"
Step 8: Use the Vault SDK Instead of Files (Alternative)
For more control, authenticate directly in your application:
import * as vault from 'node-vault';
import * as fs from 'fs';

async function getDatabaseCredentials() {
  const client = vault.default({ endpoint: process.env.VAULT_ADDR });

  // Read the ServiceAccount JWT from the mounted volume
  const jwt = fs.readFileSync(
    '/var/run/secrets/kubernetes.io/serviceaccount/token',
    'utf8',
  );

  // Authenticate with the Kubernetes auth method
  const auth = await client.kubernetesLogin({
    role: 'my-app',
    jwt,
  });
  client.token = auth.auth.client_token;

  // Get dynamic database credentials
  const creds = await client.read('database/creds/app-role');
  return {
    username: creds.data.username,
    password: creds.data.password,
    leaseId: creds.lease_id,
    leaseDuration: creds.lease_duration,
  };
}
Schedule credential renewal before the TTL expires:
function renewCredentials(client: any, leaseId: string, leaseDuration: number) {
  // Renew at 80% of the TTL, extending the lease by another hour
  setTimeout(async () => {
    await client.write('sys/leases/renew', { lease_id: leaseId, increment: 3600 });
  }, leaseDuration * 0.8 * 1000);
}
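The same pattern works from the CLI with `vault lease renew`. A sketch of the timing arithmetic for a simple sleep loop (the lease id in the comment is a placeholder):

```shell
# Compute the renewal delay: 80% of the lease TTL, in seconds.
LEASE_DURATION=3600
RENEW_AFTER=$(( LEASE_DURATION * 80 / 100 ))
echo "$RENEW_AFTER"
# 2880
# Then, against a live Vault (lease id is a placeholder):
#   sleep "$RENEW_AFTER" && vault lease renew -increment=3600 database/creds/app-role/<lease-id>
```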
Production Considerations
Vault HA with Raft: For production, use the integrated Raft storage (3-node minimum):
helm install vault hashicorp/vault \
--set "server.ha.enabled=true" \
--set "server.ha.raft.enabled=true" \
--set "server.ha.replicas=3"
Auto-unseal: Production Vault requires unsealing after restart. Use AWS KMS, GCP KMS, or Azure Key Vault for auto-unseal so Vault recovers automatically without manual key entry.
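The auto-unseal mechanism is configured with a `seal` stanza in Vault's server config. A sketch of the AWS KMS variant, written to a scratch file; the region and key id are placeholders for your own KMS key:

```shell
# Sketch of an auto-unseal stanza for Vault's server config (AWS KMS shown).
# Region and kms_key_id are placeholders; substitute your own KMS key.
cat > /tmp/vault-seal.hcl <<'EOF'
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "<your-kms-key-id>"
}
EOF
cat /tmp/vault-seal.hcl
```

In the Helm chart this fragment goes into the server's config block alongside the Raft storage settings.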
Audit logging: Enable before going to production:
vault audit enable file file_path=/vault/logs/audit.log
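Audit entries are JSON lines, so they filter well with `jq`. A sketch against a sample entry trimmed to the fields relevant here (the full entries carry much more detail, including the authenticated entity):

```shell
# Filter audit entries by request path.
# ENTRY is a trimmed, illustrative sample of one audit log line.
ENTRY='{"type":"request","request":{"operation":"read","path":"database/creds/app-role"}}'
echo "$ENTRY" \
  | jq -r 'select(.request.path | startswith("database/creds")) | .request.operation'
# read
```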
Least-privilege vault_admin: the Vault database admin user should not be a superuser. Note that CREATEROLE is cluster-wide in PostgreSQL, so scope vault_admin's table and sequence grants to only the schemas your app uses.