AWS · 14 min read · May 2, 2026

AWS Load Balancer Controller: ALB and NLB for EKS

The AWS Load Balancer Controller provisions and manages ALBs and NLBs from Kubernetes Ingress and Service resources. It replaces the in-tree Kubernetes cloud provider integration for load balancers on EKS and gives you full control over ALB configuration — listener rules, target groups, WAF, OIDC authentication, and IngressGroups that share a single ALB across multiple services.

Coding Protocols Team
Platform Engineering

Before the AWS Load Balancer Controller, EKS used the in-tree Kubernetes cloud provider to provision load balancers — which created Classic Load Balancers for Services and provided no support for ALBs. The AWS Load Balancer Controller (formerly ALB Ingress Controller) replaced this with a controller that provisions ALBs (Application Load Balancers) for Kubernetes Ingress resources and NLBs (Network Load Balancers) for Services, with full access to ALB and NLB features through Kubernetes annotations.

The controller is now the standard approach for load balancing on EKS. Understanding its annotation model, IngressGroup feature, and target type options is essential for anyone operating EKS in production.


Installation

IAM Policy and Service Account

The controller needs IAM permissions to create and manage EC2 and ELB resources:

bash
# Download the IAM policy document
curl -o iam-policy.json \
  https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json

# Create the IAM policy
aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam-policy.json

# Create a Pod Identity association (recommended over IRSA for new clusters)
aws eks create-pod-identity-association \
  --cluster-name my-cluster \
  --namespace kube-system \
  --service-account aws-load-balancer-controller \
  --role-arn arn:aws:iam::123456789:role/AWSLoadBalancerControllerRole

Attach the policy to the IAM role and ensure the role's trust policy allows pods.eks.amazonaws.com.
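If the role does not exist yet, a minimal sketch of creating it with the Pod Identity trust policy and attaching the policy above (the role name and account ID mirror the placeholders used earlier):

bash
# Trust policy that lets EKS Pod Identity assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "pods.eks.amazonaws.com" },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }
  ]
}
EOF

aws iam create-role \
  --role-name AWSLoadBalancerControllerRole \
  --assume-role-policy-document file://trust-policy.json

aws iam attach-role-policy \
  --role-name AWSLoadBalancerControllerRole \
  --policy-arn arn:aws:iam::123456789:policy/AWSLoadBalancerControllerIAMPolicy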

Helm Install

bash
helm repo add eks https://aws.github.io/eks-charts
helm repo update

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=true \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=us-east-1 \
  --set vpcId=vpc-xxxxx

Verify the controller is ready:

bash
kubectl get deployment -n kube-system aws-load-balancer-controller
# NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
# aws-load-balancer-controller   2/2     2            2           5m

ALB via Ingress

Create an ALB by creating a Kubernetes Ingress with ingressClassName: alb:

yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: production
  annotations:
    # ALB scheme: internet-facing for public; internal for VPC-only
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Target type: ip routes directly to pod IPs; instance routes via NodePort
    alb.ingress.kubernetes.io/target-type: ip
    # HTTPS listener + HTTP-to-HTTPS redirect
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80},{"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    # ACM certificate ARN
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:123456789:certificate/xxxxx
    # SSL policy
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-2017-01
    # Health check
    alb.ingress.kubernetes.io/healthcheck-path: /healthz
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
    alb.ingress.kubernetes.io/success-codes: "200-299"
spec:
  ingressClassName: alb
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
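
Once the controller reconciles the Ingress, the ALB's DNS name shows up in the ADDRESS column. The output below is illustrative, not literal:

bash
kubectl get ingress my-app -n production
# NAME     CLASS   HOSTS             ADDRESS                                                PORTS   AGE
# my-app   alb     api.example.com   k8s-producti-myapp-xxxx.us-east-1.elb.amazonaws.com    80      2m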

Target Type: ip vs instance

Feature              ip                                 instance
Routing              Direct to pod IPs                  Via NodePort
CNI requirement      VPC CNI (pods need real VPC IPs)   Any CNI
HTTP/2               Supported                          Supported
WebSocket            Supported                          Supported
Latency              Lower (one fewer hop)              Slightly higher
Connection draining  Pod-level                          Node-level

Use ip when on VPC CNI (default on EKS) — it gives lower latency and pod-level health checks, so traffic drains from a pod being terminated rather than the entire node.


IngressGroup: Sharing One ALB

By default, each Ingress creates a separate ALB. Each ALB costs roughly $16/month in hourly charges plus LCU-based usage fees, so a cluster exposing 50 services through separate Ingresses pays for 50 ALBs. IngressGroup consolidates multiple Ingresses onto a single ALB, with each Ingress contributing its own listener rules:

yaml
# Ingress for the API service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  namespace: production
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:...:certificate/xxxxx
    alb.ingress.kubernetes.io/group.name: platform-alb    # IngressGroup name
    alb.ingress.kubernetes.io/group.order: "10"           # Rule priority within the group
spec:
  ingressClassName: alb
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
---
# Ingress for the dashboard — shares the same ALB
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: production
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:...:certificate/xxxxx
    alb.ingress.kubernetes.io/group.name: platform-alb    # Same group → same ALB
    alb.ingress.kubernetes.io/group.order: "20"
spec:
  ingressClassName: alb
  rules:
    - host: dashboard.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dashboard
                port:
                  number: 80

Both Ingresses share one ALB with separate listener rules. The ALB is named after the IngressGroup and persists even if individual Ingresses are deleted.

Multi-namespace IngressGroups: Ingresses across different namespaces can share an ALB by using the same group.name. The controller merges their rules. Ensure you control the group membership — a team adding group.name: platform-alb to their Ingress would join your shared ALB. Use RBAC or Kyverno to restrict which teams can set specific group names.
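
One way to enforce that is a Kyverno ClusterPolicy that rejects the group annotation outside an approved namespace. A rough sketch (policy name, namespace, and group name are illustrative; adapt to your Kyverno version):

yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-ingress-group-name
spec:
  validationFailureAction: Enforce
  rules:
    - name: platform-alb-group-is-reserved
      match:
        any:
          - resources:
              kinds:
                - Ingress
      exclude:
        any:
          - resources:
              namespaces:
                - production        # namespaces allowed to join the group
      validate:
        message: "Only the production namespace may join the platform-alb IngressGroup."
        pattern:
          metadata:
            =(annotations):
              # If the annotation is present, its value must not be platform-alb
              =(alb.ingress.kubernetes.io/group.name): "!platform-alb"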


NLB via Service

Create an NLB by annotating a LoadBalancer Service:

yaml
apiVersion: v1
kind: Service
metadata:
  name: grpc-api
  namespace: production
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    # TLS termination at NLB (pass-through to pods is also possible)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:...:certificate/xxxxx
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # Cross-zone load balancing
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    # Client IP preservation (disabled here to avoid hairpin issues when a client pod is also a registered target)
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=false
spec:
  type: LoadBalancer
  selector:
    app: grpc-api
  ports:
    - name: grpc
      port: 443
      targetPort: 9090
      protocol: TCP

NLBs are preferable to ALBs for:

  • gRPC or HTTP/2 workloads (ALBs support HTTP/2 but NLBs provide more transparent proxying)
  • Very high throughput (NLBs handle millions of requests per second)
  • Non-HTTP protocols (TCP/UDP)
  • Ultra-low latency (NLBs have lower overhead than ALBs)

TargetGroupBinding

TargetGroupBinding connects a Kubernetes Service to an existing ALB/NLB target group. This is useful when the load balancer is created outside of Kubernetes (e.g., Terraform-managed ALB) and you want Kubernetes to manage target group registration:

yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: payments-tgb
  namespace: production
spec:
  serviceRef:
    name: payments-api
    port: 80
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:123456789:targetgroup/payments/xxxxx
  targetType: ip
  networking:
    ingress:
      - from:
          - securityGroup:
              groupID: sg-alb-security-group
        ports:
          - port: 80
            protocol: TCP

The controller registers and deregisters pod IPs with the target group as pods scale up and down. The target group health check is managed by the load balancer, and connection draining (the deregistration delay) gives in-flight requests time to complete before a pod is removed.
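
The deregistration delay is a target group attribute. For controller-managed Ingresses it can be tuned with an annotation, for example (the 30-second value is arbitrary); for an externally created target group, set the attribute wherever that target group is defined (e.g., Terraform):

yaml
annotations:
  # Wait up to 30s for in-flight requests to finish before removing a target
  alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=30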


WAF and Shield Integration

Attach a WAF Web ACL to an ALB:

yaml
annotations:
  alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:us-east-1:123456789:regional/webacl/my-waf/xxxxx
  # Shield Advanced (requires subscription)
  alb.ingress.kubernetes.io/shield-advanced-protection: "true"

WAFv2 rules can block common attack patterns (OWASP Top 10 categories, SQL injection) and apply rate-based limits at the ALB layer, before traffic reaches your pods.


OIDC Authentication via Cognito or IAM Identity Center

ALB supports offloading OIDC authentication to the load balancer layer:

yaml
annotations:
  # Authenticate users before forwarding to the backend
  alb.ingress.kubernetes.io/auth-type: oidc
  alb.ingress.kubernetes.io/auth-idp-oidc: |
    {
      "issuer": "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_xxxxx",
      "authorizationEndpoint": "https://my-domain.auth.us-east-1.amazoncognito.com/oauth2/authorize",
      "tokenEndpoint": "https://my-domain.auth.us-east-1.amazoncognito.com/oauth2/token",
      "userInfoEndpoint": "https://my-domain.auth.us-east-1.amazoncognito.com/oauth2/userInfo",
      "secretName": "oidc-client-secret"
    }
  alb.ingress.kubernetes.io/auth-on-unauthenticated-request: authenticate
  alb.ingress.kubernetes.io/auth-scope: "openid profile email"

The ALB redirects unauthenticated requests to the Cognito login page. After authentication, the ALB forwards requests to the backend with X-Amzn-Oidc-Identity and X-Amzn-Oidc-Accesstoken headers. This pattern works for internal dashboards and developer portals without embedding auth logic in the application.
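The secretName above refers to a Kubernetes Secret in the Ingress's namespace that holds the OIDC client credentials. Roughly, assuming a recent controller version that expects clientID and clientSecret keys (the values below are placeholders):

yaml
apiVersion: v1
kind: Secret
metadata:
  name: oidc-client-secret
  namespace: production            # must match the Ingress namespace
type: Opaque
stringData:
  clientID: my-cognito-app-client-id            # placeholder
  clientSecret: my-cognito-app-client-secret    # placeholder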


Frequently Asked Questions

What's the difference between alb.ingress.kubernetes.io/scheme: internet-facing and internal?

internet-facing creates an ALB with a public DNS name and public IP addresses — accessible from the internet. internal creates an ALB with a private DNS name and VPC-internal IP addresses — only accessible from within the VPC or connected networks (VPN, Direct Connect). Use internal for ALBs that only receive traffic from within your AWS environment (e.g., service-to-service via ALB, admin dashboards accessible only over VPN).
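
Switching is just the annotation value; note that subnet discovery differs, since the controller places internal ALBs in subnets tagged kubernetes.io/role/internal-elb=1 and internet-facing ALBs in subnets tagged kubernetes.io/role/elb=1:

yaml
annotations:
  # VPC-only ALB, placed in subnets tagged kubernetes.io/role/internal-elb=1
  alb.ingress.kubernetes.io/scheme: internal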

How do I handle multiple certificates for different domains on the same ALB?

The controller supports alb.ingress.kubernetes.io/certificate-arn as a comma-separated list:

yaml
alb.ingress.kubernetes.io/certificate-arn: >
  arn:aws:acm:us-east-1:...:certificate/cert1,
  arn:aws:acm:us-east-1:...:certificate/cert2

ALB serves the appropriate certificate based on SNI. For IngressGroups where different Ingresses cover different domains, each Ingress specifies its own certificate-arn and the controller combines them on the shared ALB.

The ALB health check is failing but my pods are healthy — what's wrong?

Common causes:

  1. Wrong health check path: Default health check is / returning 200. Change with alb.ingress.kubernetes.io/healthcheck-path.
  2. Security group blocking health checks: The ALB health check comes from the ALB's security group. Ensure the pod/node security group allows inbound traffic from the ALB security group on the service port.
  3. Target type mismatch: If target-type: ip is set but pods don't have VPC IPs (not using VPC CNI), health checks will fail. Use instance if you're not on VPC CNI.
  4. HTTP vs HTTPS: If the backend expects HTTPS but ALB sends HTTP health checks, use alb.ingress.kubernetes.io/backend-protocol: HTTPS and alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS.
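
To see which cause applies, inspecting the target group directly is often faster than guessing. A sketch with the AWS CLI (the target group ARN is a placeholder):

bash
# List target groups the controller created
aws elbv2 describe-target-groups \
  --query 'TargetGroups[].{Name:TargetGroupName,ARN:TargetGroupArn}'

# Show per-target health state and the failure reason
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789:targetgroup/xxxxx \
  --query 'TargetHealthDescriptions[].{Target:Target.Id,State:TargetHealth.State,Reason:TargetHealth.Reason}'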

Can I use the AWS Load Balancer Controller with Istio?

Yes, but the typical pattern is to put the ALB in front of the Istio ingress gateway (not directly in front of application pods). The ALB terminates external TLS, and Istio handles internal mTLS. Set alb.ingress.kubernetes.io/target-type: ip and point the Ingress backend to the istio-ingressgateway service. Alternatively, use NLB with TLS pass-through so Istio terminates TLS end-to-end.
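
A minimal sketch of the ALB-in-front-of-Istio pattern, assuming a default istio-ingressgateway install in istio-system (names, ports, and the certificate ARN are illustrative):

yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: istio-alb
  namespace: istio-system
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80},{"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:123456789:certificate/xxxxx
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: istio-ingressgateway   # default gateway Service name
                port:
                  number: 80

In this setup the istio-ingressgateway Service is commonly switched from type LoadBalancer to ClusterIP or NodePort so the controller does not provision a second, redundant load balancer for it.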


For EKS networking that the AWS LBC builds on (VPC CNI, IP allocation), see EKS Networking Deep Dive. For TLS certificate automation that generates the ACM or cert-manager certificates referenced in ALB annotations, see cert-manager in Production.

Migrating to the AWS Load Balancer Controller or consolidating dozens of ALBs using IngressGroups? Talk to us at Coding Protocols — we help EKS teams simplify their load balancer architecture and reduce infrastructure costs.

Related Topics

AWS
EKS
Load Balancer
ALB
NLB
Kubernetes
Ingress
Platform Engineering
