Setting Up IAM Roles for Service Accounts (IRSA) on EKS
Give individual Kubernetes pods scoped AWS permissions without relying on the node's IAM role. IRSA uses the EKS OIDC provider to issue short-lived credentials per ServiceAccount: no static keys, no overly permissive nodes.
Before you begin
- An EKS cluster with an OIDC provider enabled
- AWS CLI configured with IAM admin permissions
- kubectl configured for the cluster
- eksctl (optional but makes OIDC setup simpler)
The naive approach to giving pods AWS access: attach IAM policies to the node group IAM role. This gives every pod on every node the same permissions. One compromised pod means access to everything.
IRSA (IAM Roles for Service Accounts) scopes permissions to a specific ServiceAccount in a specific namespace. A pod in production with ServiceAccount s3-writer can write to S3. The pod next to it with ServiceAccount my-api cannot.
How IRSA Works
- EKS has an OIDC provider — a URL that proves "this ServiceAccount token came from this cluster"
- You create an IAM role with a trust policy that says "allow this role to be assumed if the token is from this specific ServiceAccount in this namespace"
- Kubernetes mounts a projected token (not the default SA token) into the pod
- The AWS SDK exchanges that token for temporary credentials automatically
Your application code doesn't change: boto3, the AWS SDK for Go, or any other AWS SDK picks up the credentials from environment variables that the EKS pod identity webhook (a mutating admission webhook) injects at pod creation.
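You can reproduce the exchange the SDK performs from inside any IRSA-enabled pod (one created in Step 5 onward). A sketch, assuming the aws CLI is available in the container; it uses the two env vars the webhook injects:
# Trade the projected token for temporary STS credentials, exactly as the SDK does
aws sts assume-role-with-web-identity \
--role-arn "$AWS_ROLE_ARN" \
--role-session-name manual-irsa-test \
--web-identity-token "file://$AWS_WEB_IDENTITY_TOKEN_FILE" \
--query "Credentials.[AccessKeyId,Expiration]"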
Step 1: Verify the OIDC Provider Exists
# Get cluster OIDC issuer URL
aws eks describe-cluster \
--name my-cluster \
--query "cluster.identity.oidc.issuer" \
--output text
# https://oidc.eks.ap-south-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E
# Check if an IAM OIDC provider exists for this URL
aws iam list-open-id-connect-providers
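The list output is easier to match against your cluster if you filter on the issuer ID. A sketch (adjust the cluster name):
# Extract the issuer ID from the OIDC URL and look for a matching provider ARN
OIDC_ID=$(aws eks describe-cluster --name my-cluster \
--query "cluster.identity.oidc.issuer" --output text | cut -d/ -f5)
aws iam list-open-id-connect-providers \
--query "OpenIDConnectProviderList[?contains(Arn, '${OIDC_ID}')].Arn" \
--output text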
If no provider exists for your cluster's OIDC URL, create it:
eksctl utils associate-iam-oidc-provider \
--cluster my-cluster \
--region ap-south-1 \
--approve
Or manually:
OIDC_URL=$(aws eks describe-cluster \
--name my-cluster \
--query "cluster.identity.oidc.issuer" \
--output text | sed 's|https://||')
# AWS expects the SHA-1 thumbprint of the root CA in the issuer's chain, not the leaf cert
OIDC_HOST=${OIDC_URL%%/*}
THUMBPRINT=$(openssl s_client -connect "${OIDC_HOST}:443" -servername "${OIDC_HOST}" -showcerts </dev/null 2>/dev/null \
| awk '/-BEGIN CERTIFICATE-/{c=""} {c=c $0 ORS} /-END CERTIFICATE-/{last=c} END{printf "%s", last}' \
| openssl x509 -fingerprint -sha1 -noout \
| sed 's/.*=//; s/://g' \
| tr 'A-F' 'a-f')
aws iam create-open-id-connect-provider \
--url "https://${OIDC_URL}" \
--client-id-list sts.amazonaws.com \
--thumbprint-list $THUMBPRINT
Step 2: Create the IAM Policy
Define what the pod is allowed to do:
# Example: write-only access to a specific S3 bucket
cat > s3-writer-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": "arn:aws:s3:::my-app-uploads/*"
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": "arn:aws:s3:::my-app-uploads"
}
]
}
EOF
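Optionally, lint the document before creating the policy. IAM Access Analyzer's validate-policy API flags syntax errors and overly broad grants, and it needs no analyzer resource set up:
aws accessanalyzer validate-policy \
--policy-document file://s3-writer-policy.json \
--policy-type IDENTITY_POLICY
Then create the policy: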
aws iam create-policy \
--policy-name S3WriterPolicy \
--policy-document file://s3-writer-policy.json
Step 3: Create the IAM Role with a Trust Policy
The trust policy says: "allow this role to be assumed by the OIDC token if it's from ServiceAccount s3-writer in namespace production."
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
OIDC_URL=$(aws eks describe-cluster \
--name my-cluster \
--query "cluster.identity.oidc.issuer" \
--output text | sed 's|https://||')
cat > trust-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_URL}"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"${OIDC_URL}:sub": "system:serviceaccount:production:s3-writer",
"${OIDC_URL}:aud": "sts.amazonaws.com"
}
}
}
]
}
EOF
aws iam create-role \
--role-name S3WriterRole \
--assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy \
--role-name S3WriterRole \
--policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/S3WriterPolicy
ROLE_ARN=$(aws iam get-role \
--role-name S3WriterRole \
--query "Role.Arn" --output text)
echo "Role ARN: $ROLE_ARN"
The sub claim format is always system:serviceaccount:<namespace>:<service-account-name>. This is the key trust condition — it scopes the role to exactly one ServiceAccount in one namespace.
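A quick way to confirm the condition landed as intended is to read it back from the role you just created:
# Should print the <oidc-url>:sub and <oidc-url>:aud keys with their expected values
aws iam get-role --role-name S3WriterRole \
--query "Role.AssumeRolePolicyDocument.Statement[0].Condition.StringEquals"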
Step 4: Create the Kubernetes ServiceAccount
Annotate the ServiceAccount with the IAM role ARN:
kubectl create namespace production 2>/dev/null || true
kubectl create serviceaccount s3-writer -n production
kubectl annotate serviceaccount s3-writer \
-n production \
eks.amazonaws.com/role-arn=$ROLE_ARN
Or declaratively:
apiVersion: v1
kind: ServiceAccount
metadata:
name: s3-writer
namespace: production
annotations:
eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/S3WriterRole"
eks.amazonaws.com/token-expiration: "86400" # Token TTL in seconds (default: 86400)
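Either way, confirm the annotation is present before deploying. Dots in the annotation key must be escaped in the jsonpath expression:
kubectl get sa s3-writer -n production \
-o jsonpath="{.metadata.annotations['eks\.amazonaws\.com/role-arn']}"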
Step 5: Use the ServiceAccount in a Pod
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
namespace: production
spec:
template:
spec:
serviceAccountName: s3-writer # This is the key field
containers:
- name: app
image: my-app:latest
env:
- name: AWS_REGION
value: ap-south-1
- name: S3_BUCKET
value: my-app-uploads
When the pod starts, EKS automatically:
- Mounts a projected token at /var/run/secrets/eks.amazonaws.com/serviceaccount/token
- Sets the AWS_WEB_IDENTITY_TOKEN_FILE env var pointing to that token
- Sets the AWS_ROLE_ARN env var to the IAM role from the annotation
The AWS SDK reads these environment variables and handles credential exchange automatically.
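To see the injection for yourself, list the env vars on a running pod (using the deployment from Step 5):
# AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE should appear alongside your own vars
kubectl exec -n production deploy/my-app -- env | grep '^AWS_'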
Step 6: Verify It Works
Deploy a test pod and check its credentials:
# Note: kubectl's --serviceaccount flag was removed in v1.24; use --overrides instead
kubectl run irsa-test \
--image=amazon/aws-cli:latest \
--namespace=production \
--overrides='{"apiVersion": "v1", "spec": {"serviceAccountName": "s3-writer"}}' \
--rm -it \
--restart=Never \
-- sts get-caller-identity
Expected output:
{
"UserId": "AROAEXAMPLE:eks-production-s3-writ-xxxx",
"Account": "123456789012",
"Arn": "arn:aws:sts::123456789012:assumed-role/S3WriterRole/eks-production-s3-writ-xxxx"
}
The Arn confirms the pod assumed S3WriterRole. Test the actual permission:
kubectl run irsa-test \
--image=amazon/aws-cli:latest \
--namespace=production \
--overrides='{"apiVersion": "v1", "spec": {"serviceAccountName": "s3-writer"}}' \
--rm -it \
--restart=Never \
-- s3 cp /etc/hostname s3://my-app-uploads/test.txt
Step 7: Remove Node-Level IAM Permissions
After verifying IRSA works for all your services, remove overly permissive node-level IAM policies. Only these three are required on the node group role:
- AmazonEKSWorkerNodePolicy: worker node lifecycle
- AmazonEKS_CNI_Policy: VPC networking
- AmazonEC2ContainerRegistryReadOnly: pulling images from ECR
Remove anything else (S3, DynamoDB, SQS, etc.) — those belong on per-service IRSA roles.
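For example, to find and detach an extra policy from the node group role (the role and policy names here are hypothetical):
# List what's currently attached to the node group role
aws iam list-attached-role-policies --role-name my-node-group-role
# Detach anything that isn't one of the three required policies
aws iam detach-role-policy \
--role-name my-node-group-role \
--policy-arn arn:aws:iam::123456789012:policy/LegacyS3FullAccess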
Common Patterns
Multiple services, different roles:
# Each service gets its own ServiceAccount + IAM role
kubectl annotate sa my-api -n production eks.amazonaws.com/role-arn=$MY_API_ROLE_ARN
kubectl annotate sa worker -n production eks.amazonaws.com/role-arn=$WORKER_ROLE_ARN
kubectl annotate sa exporter -n production eks.amazonaws.com/role-arn=$EXPORTER_ROLE_ARN
Cross-account access: a cluster in account A can assume a role in account B. Register the cluster's OIDC issuer as an IAM OIDC provider in account B (the role's account), and reference that provider's ARN in the federated principal of the role's trust policy.
Token expiration tuning: the projected token's TTL defaults to 86400 seconds (24 hours). The kubelet refreshes the token before it expires and the SDKs re-read it automatically, so long-running jobs keep working even with a shorter TTL for tighter rotation:
eks.amazonaws.com/token-expiration: "43200" # 12 hours
Debugging
NoCredentialProviders: The pod isn't using the annotated ServiceAccount, or it was created before the annotation existed (the webhook only injects env vars at pod creation, so recreate the pod). Check kubectl get pod my-pod -o yaml | grep serviceAccountName.
AssumeRoleWithWebIdentity: InvalidIdentityToken: The OIDC URL in the trust policy doesn't match the cluster's OIDC issuer. Double-check with aws eks describe-cluster --name my-cluster --query cluster.identity.oidc.issuer.
AccessDenied: The role is assumed correctly but doesn't have permission for the specific action. Verify with aws iam simulate-principal-policy.
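For example, using the role and action from this guide (the account ID is illustrative):
aws iam simulate-principal-policy \
--policy-source-arn "arn:aws:iam::123456789012:role/S3WriterRole" \
--action-names s3:PutObject \
--resource-arns "arn:aws:s3:::my-app-uploads/test.txt" \
--query "EvaluationResults[].{action:EvalActionName,decision:EvalDecision}"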
We built Podscape to simplify Kubernetes workflows like this — logs, events, and cluster state in one interface, without switching tools.
Struggling with this in production?
We help teams fix these exact issues. Our engineers have deployed these patterns across production environments at scale.