Kubernetes
13 min read · May 2, 2026

ingress-nginx in Production: Configuration, TLS, and Rate Limiting

ingress-nginx (the Kubernetes community ingress controller, not NGINX Inc.'s) handles TLS termination, routing, rate limiting, and connection handling for most Kubernetes clusters. Production configuration goes beyond the defaults: connection draining, upstream keepalive, custom error pages, and rate limiting need explicit configuration or the controller degrades under load.

Coding Protocols Team
Platform Engineering

ingress-nginx is the Kubernetes community's nginx-based Ingress controller (not NGINX Inc.'s NGINX Ingress Controller). It's the most widely deployed ingress controller on non-cloud-native clusters. The default Helm install works for development; production requires tuning connection handling, configuring real TLS, enabling rate limiting, and understanding what configuration lives in the ConfigMap versus Ingress annotations.


Installation on EKS

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --version 4.11.2 \
  --values nginx-values.yaml
```
```yaml
# nginx-values.yaml
controller:
  # Run multiple replicas for HA
  replicaCount: 3

  # Use AWS NLB as the backing load balancer on EKS
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "external"
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"    # Enable PROXY protocol
    externalTrafficPolicy: Local    # Preserve client source IP

  # Global NGINX configuration
  config:
    # Connection handling
    use-proxy-protocol: "true"    # Must match NLB annotation above
    keep-alive: "75"              # Client keepalive timeout (seconds)
    keep-alive-requests: "1000"   # Requests per client keepalive connection
    upstream-keepalive-connections: "200"    # Upstream keepalive pool size
    upstream-keepalive-time: "1h"

    # Timeouts
    proxy-connect-timeout: "10"
    proxy-send-timeout: "60"
    proxy-read-timeout: "60"

    # Body size
    proxy-body-size: "10m"    # Max request body size (0 = unlimited)

    # Error handling
    custom-http-errors: "404,500,502,503,504"    # Trigger custom error page for these

    # Security headers (applied globally)
    hide-headers: "X-Powered-By,Server"
    server-tokens: "false"

    # TLS
    ssl-protocols: "TLSv1.2 TLSv1.3"
    ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384"
    ssl-session-cache: "shared:SSL:10m"
    ssl-session-timeout: "10m"

    # Web Application Firewall (WAF) settings (ConfigMap keys, so they live under controller.config)
    enable-modsecurity: "true"
    enable-owasp-modsecurity-crs: "true"
    modsecurity-snippet: |
      SecRuleEngine On
      SecRequestBodyAccess On
      SecAuditLog /dev/stdout
      SecAuditLogParts ABIFJHKZ
      SecAuditLogType Serial

  # Metrics for Prometheus
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true    # Requires Prometheus Operator

  # Graceful shutdown: /wait-shutdown drains connections before nginx exits
  lifecycle:
    preStop:
      exec:
        command: ["/wait-shutdown"]
  terminationGracePeriodSeconds: 300

  # Pod disruption budget for HA during upgrades
  podDisruptionBudget:
    enabled: true
    minAvailable: 1

  # Resource requests/limits
  resources:
    requests:
      cpu: 200m
      memory: 256Mi
    limits:
      cpu: 2000m
      memory: 1Gi

# Custom error page service (top-level key, a sibling of controller in the chart)
defaultBackend:
  enabled: true
  image:
    registry: registry.k8s.io
    image: defaultbackend-amd64
    tag: "1.5"
```

Ingress Resources

Basic HTTPS Ingress

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: payments-api
  namespace: payments
  annotations:
    # Force HTTP -> HTTPS redirect
    nginx.ingress.kubernetes.io/ssl-redirect: "true"

    # Rewrite
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"

    # Upstream connection
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"

    # cert-manager TLS
    cert-manager.io/cluster-issuer: "letsencrypt-prod"

spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.codingprotocols.com
      secretName: api-tls

  rules:
    - host: api.codingprotocols.com
      http:
        paths:
          - path: /payments(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: payments-api
                port:
                  number: 8080
```
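The rewrite-target annotation replaces the matched path with the second capture group, so /payments/charges reaches the backend as /charges. The capture-group behavior can be sanity-checked outside the cluster (a Python sketch of the regex semantics, not the controller itself):

```python
import re

# Same pattern as the Ingress path: /payments(/|$)(.*)
# rewrite-target /$2 replaces the matched path with the second capture group.
pattern = re.compile(r"/payments(/|$)(.*)")

def rewrite(path: str) -> str:
    """Simulate rewrite-target: /$2 for the payments Ingress above."""
    m = pattern.match(path)
    if not m:
        return path  # no match: this Ingress rule doesn't apply
    return "/" + m.group(2)

print(rewrite("/payments/charges/ch_123"))  # -> /charges/ch_123
print(rewrite("/payments"))                 # -> /
```

Note the `(/|$)` alternation: it lets a bare /payments match (rewritten to /) without also matching unrelated paths like /paymentsfoo.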

Rate Limiting

Basic rate limit with connection limit:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"          # 10 requests per second per IP
    nginx.ingress.kubernetes.io/limit-connections: "5"   # 5 concurrent connections per IP
    nginx.ingress.kubernetes.io/limit-req-status-code: "429"
```

Rate limit with burst allowance (short spikes above the steady-state rate):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "50"
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "10"    # burst = 50 * 10 = 500 requests
    nginx.ingress.kubernetes.io/limit-req-status-code: "429"
    # Whitelist specific CIDRs from rate limiting (internal load balancers, health checks)
    nginx.ingress.kubernetes.io/limit-whitelist: "10.0.0.0/8,172.16.0.0/12"
```
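Under the hood, limit-rps is implemented with nginx's limit_req, which uses a leaky-bucket model: traffic above the steady rate is admitted up to the burst size, and anything beyond that is rejected. A simplified simulation of that accounting (a sketch; nginx's real bookkeeping is millisecond-granular and shared across a zone):

```python
class LeakyBucket:
    """Simplified model of nginx limit_req with burst (nodelay behavior).

    Excess above the steady rate is admitted immediately up to `burst`
    pending units; beyond that, requests are rejected (429).
    """
    def __init__(self, rps: float, burst: int):
        self.rps = rps
        self.burst = burst
        self.excess = 0.0   # pending units above the steady rate
        self.last = 0.0     # timestamp of the previous request

    def allow(self, now: float) -> bool:
        # Drain the bucket at `rps` units/second since the last request.
        self.excess = max(0.0, self.excess - (now - self.last) * self.rps)
        self.last = now
        if self.excess + 1 > self.burst:
            return False    # over burst: reject with 429
        self.excess += 1
        return True

# limit-rps: 50, limit-burst-multiplier: 10 -> burst = 500
bucket = LeakyBucket(rps=50, burst=500)
# An instantaneous spike of 600 requests: the first 500 pass, the rest get 429.
results = [bucket.allow(0.0) for _ in range(600)]
print(results.count(True), results.count(False))  # -> 500 100
```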

Canary Deployments

ingress-nginx supports traffic splitting between stable and canary backends via annotations on a second Ingress resource:

```yaml
# Stable Ingress (primary)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: payments-api-stable
  namespace: payments
spec:
  ingressClassName: nginx
  rules:
    - host: api.codingprotocols.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: payments-api-stable
                port:
                  number: 8080
---
# Canary Ingress (receives a percentage of traffic)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: payments-api-canary
  namespace: payments
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"    # 10% of requests to canary
    # Or route by header: nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
spec:
  ingressClassName: nginx
  rules:
    - host: api.codingprotocols.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: payments-api-canary
                port:
                  number: 8080
```
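canary-weight makes an independent weighted decision for each request, so over many requests roughly 10% land on the canary. A seeded simulation illustrates the expected split (illustrative only, not the controller's actual selection code):

```python
import random

def pick_backend(weight: int, rng: random.Random) -> str:
    """Send `weight` percent of requests to the canary, the rest to stable."""
    return "canary" if rng.randrange(100) < weight else "stable"

rng = random.Random(42)  # seeded for reproducibility
picks = [pick_backend(10, rng) for _ in range(10_000)]
print(picks.count("canary") / len(picks))  # close to 0.10
```

Because the choice is per request (not per client), individual users bounce between versions unless you pin them with canary-by-header or canary-by-cookie.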

TLS Passthrough

SSL passthrough is for services that must handle their own TLS, such as mTLS scenarios where the backend needs to see the raw TLS handshake (databases requiring client certificates, Istio east-west traffic). For gRPC, use nginx.ingress.kubernetes.io/backend-protocol: "GRPC" instead; nginx can proxy gRPC while still terminating TLS and providing load balancing:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
```

With SSL passthrough, nginx doesn't terminate TLS — it forwards the raw TLS stream to the backend. The backend must handle TLS. This disables HTTP-level features (no header injection, no body modification).
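Note that passthrough is off by default: the controller binary must be started with the --enable-ssl-passthrough flag, or the annotation is silently ignored. With the Helm chart, that means a values fragment like:

```yaml
controller:
  extraArgs:
    # Rendered as --enable-ssl-passthrough on the controller command line
    enable-ssl-passthrough: "true"
```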


ConfigMap vs Annotations

  • ConfigMap (ingress-nginx-controller): Applies globally to all Ingresses. Set cluster-wide defaults here. (Note: the legacy bare-manifest install used nginx-configuration; the Helm chart creates ingress-nginx-controller.)
  • Annotations: Override ConfigMap settings for a specific Ingress. Prefer annotations for per-service configuration.

Some settings are ConfigMap-only (can't be set per-Ingress):

  • Worker processes and connections
  • Keepalive pool sizes
  • PROXY protocol
  • Custom error page service

Some settings are annotation-only (can't be set globally):

  • Canary routing
  • Auth backends
  • Per-Ingress rewrites
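As a concrete example of the precedence, proxy-body-size exists in both forms; the annotation overrides the ConfigMap default for that one Ingress:

```yaml
# Cluster-wide default (ConfigMap, via Helm values)
controller:
  config:
    proxy-body-size: "10m"
---
# Per-Ingress override: this Ingress alone accepts larger uploads
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
```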

Custom Error Pages

```yaml
# Deploy a custom error page service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-error-pages
  namespace: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: custom-error-pages
  template:
    metadata:
      labels:
        app: custom-error-pages
    spec:
      containers:
        - name: error-pages
          image: tarampampam/error-pages:3.3.0
          ports:
            - containerPort: 8080
          env:
            - name: TEMPLATE_NAME
              value: l7-dark    # Error page template
---
# Service fronting the error page pods
apiVersion: v1
kind: Service
metadata:
  name: custom-error-pages
  namespace: ingress-nginx
spec:
  selector:
    app: custom-error-pages
  ports:
    - port: 80
      targetPort: 8080
```
```yaml
# Reference in nginx controller values
controller:
  extraArgs:
    # Route custom-http-errors responses to the error page Service above
    default-backend-service: "ingress-nginx/custom-error-pages"
  config:
    custom-http-errors: "404,500,502,503,504"

# Disable the chart's built-in default backend in favor of the custom Service
defaultBackend:
  enabled: false
```

The nginx controller proxies any response matching custom-http-errors to the default backend service. The backend receives the original status code in the X-Code header (and the negotiated content type in X-Format) and returns a formatted response.
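A minimal error backend only needs to read X-Code and render a page. A self-contained Python sketch of that contract (the real tarampampam/error-pages image does much more; this handler is illustrative):

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.error import HTTPError
from urllib.request import Request, urlopen

class ErrorPageHandler(BaseHTTPRequestHandler):
    """Tiny custom error backend: ingress-nginx forwards the original
    status in the X-Code header and the desired content type in X-Format."""

    def do_GET(self):
        code = int(self.headers.get("X-Code", "404"))
        body = f"<h1>{code}</h1><p>Something went wrong.</p>".encode()
        self.send_response(code)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep demo output quiet

# Serve on an ephemeral port and replay what the controller would send.
server = ThreadingHTTPServer(("127.0.0.1", 0), ErrorPageHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = Request(f"http://127.0.0.1:{server.server_address[1]}/",
              headers={"X-Code": "503"})
try:
    urlopen(req)
except HTTPError as e:  # urllib raises on non-2xx status codes
    print(e.code)       # -> 503
server.shutdown()
```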


Observability

Key Metrics

```promql
# Request rate by status code
sum(rate(nginx_ingress_controller_requests[5m])) by (status, ingress, namespace)

# Error rate (5xx)
sum(rate(nginx_ingress_controller_requests{status=~"5.."}[5m])) by (ingress, namespace) /
sum(rate(nginx_ingress_controller_requests[5m])) by (ingress, namespace)

# P99 latency by ingress
histogram_quantile(0.99,
  sum(rate(nginx_ingress_controller_request_duration_seconds_bucket[5m]))
  by (le, ingress, namespace)
)

# Active connections per ingress controller pod
nginx_ingress_controller_nginx_process_connections{state="active"}
```
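The 5xx ratio above translates directly into an alert. A sketch assuming the Prometheus Operator's PrometheusRule CRD (the 5% threshold and 10m window are illustrative, not recommendations):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ingress-nginx-alerts
  namespace: ingress-nginx
spec:
  groups:
    - name: ingress-nginx
      rules:
        - alert: IngressHigh5xxRate
          expr: |
            sum(rate(nginx_ingress_controller_requests{status=~"5.."}[5m])) by (ingress, namespace)
              /
            sum(rate(nginx_ingress_controller_requests[5m])) by (ingress, namespace) > 0.05
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Ingress {{ $labels.namespace }}/{{ $labels.ingress }} 5xx ratio above 5% for 10m"
```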

Access Logs Format

```yaml
# In the ConfigMap, customize log format to include upstream info
controller:
  config:
    log-format-upstream: |
      $remote_addr - [$proxy_protocol_addr] - $remote_user [$time_local]
      "$request" $status $body_bytes_sent "$http_referer"
      "$http_user_agent" $request_length $request_time
      [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr
      $upstream_response_length $upstream_response_time $upstream_status
      $req_id
```

Frequently Asked Questions

What's the difference between ingress-nginx and nginx-ingress?

Two different projects: ingress-nginx (kubernetes/ingress-nginx on GitHub) is the Kubernetes community project using NGINX open source. nginx-ingress (from NGINX Inc., now F5) is a commercial controller that supports NGINX Plus and has a different annotation scheme. They're incompatible — annotation names differ, installation differs, and they target different use cases. This post covers ingress-nginx (the community one).

Should I use ingress-nginx or Gateway API?

Gateway API is the strategic direction — it separates infrastructure concerns (which load balancer) from routing concerns (which service). ingress-nginx is battle-tested, widely supported, and simpler for teams that don't need cross-namespace routing or the role separation model. If you're starting new, evaluate Gateway API with Envoy Gateway. If you have existing ingress-nginx configuration, migration is not urgent.

How do I debug a 502 from ingress-nginx?

502 means nginx reached the upstream but got a bad response — or couldn't connect. Check:

  1. kubectl logs -n ingress-nginx deploy/ingress-nginx-controller — nginx error log
  2. kubectl describe ingress <name> — verify backends are populated
  3. kubectl get endpoints <service> — verify pods are Ready and serving traffic
  4. Try accessing the backend service directly (bypass nginx): kubectl port-forward svc/<service> 8080:8080 and curl locally

For Gateway API as the next-generation replacement for Ingress resources, see Kubernetes Gateway API: HTTPRoute, GRPCRoute, and the End of Ingress Annotations. For cert-manager that provisions TLS certificates referenced in Ingress TLS sections, see cert-manager: Automated TLS for Kubernetes. For a detailed comparison of Ingress vs Gateway API including a step-by-step migration guide, see Kubernetes Ingress vs Gateway API: When to Migrate and How.

Running ingress-nginx for a multi-team platform with hundreds of Ingress resources? Talk to us at Coding Protocols — we help platform teams configure ingress controllers for production workloads without annotation sprawl or manual TLS management.

Related Topics

ingress-nginx
Kubernetes
Ingress
TLS
Networking
Platform Engineering
Load Balancing
EKS
