Kubernetes
15 min read · May 6, 2026

Istio Service Mesh: Sidecar Mode in Production

Istio adds a data plane (Envoy proxies) alongside every pod and a control plane (Istiod) that programs them — giving you mutual TLS between all services, fine-grained traffic management (canary, circuit breaker, retries), and distributed tracing without code changes. The operational complexity is real, but so is the value for large service meshes.

Ajeet Yadav
Platform & Cloud Engineer

Every call between Kubernetes services is a plain HTTP or gRPC request with no built-in authentication, no circuit breaking, and no telemetry. Istio traditionally uses a sidecar Envoy proxy per pod (the mode this guide covers). As of Istio 1.24, Istio also offers Ambient mode — a sidecarless architecture using a per-node proxy. See our Istio Ambient Mode guide for a comparison.

In sidecar mode, Istio injects an Envoy proxy into every pod and programs it via the Istiod control plane to handle mutual TLS, retry logic, traffic splitting, and request tracing — transparently to the application.

The result: services communicate over mTLS without any code changes, operations teams get L7 visibility into every cross-service call, and traffic management (canaries, circuit breakers, fault injection for testing) is done via Kubernetes CRDs rather than application logic.


Architecture

Developer/Ops applies VirtualService/DestinationRule CRDs
                ↓
Istiod (control plane) translates CRDs → Envoy xDS config
                ↓
Envoy sidecars (data plane) in each pod receive and apply the config
                ↓
All service-to-service traffic flows through the sidecar pair
(caller's sidecar → network → callee's sidecar → application)

Istiod (the merger of Pilot, Galley, and Citadel since Istio 1.5) is a single control plane component that:

  • Watches Kubernetes services and Istio CRDs
  • Distributes xDS (Envoy's discovery service) config to all sidecars
  • Manages the certificate authority for mTLS (issues SPIFFE/X.509 certificates per workload)

Envoy sidecar is injected automatically into pods in namespaces with the istio-injection: enabled label.
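Injection happens at admission time via a mutating webhook that rewrites the pod spec. A simplified sketch of what gets added (container names are the standard ones; images and the application container are illustrative):

```yaml
# Simplified sketch of an injected pod — not a complete manifest
spec:
  initContainers:
    - name: istio-init          # Sets up iptables rules redirecting pod traffic through the proxy
      image: docker.io/istio/proxyv2:1.24.0
  containers:
    - name: app                 # The original application container, unchanged
      image: registry.example.com/payments-api:v1   # hypothetical
    - name: istio-proxy         # The injected Envoy sidecar
      image: docker.io/istio/proxyv2:1.24.0
```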


Installation

```bash
# Download istioctl
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.24.0 sh -
export PATH="$PWD/istio-1.24.0/bin:$PATH"

# Install with the production profile
istioctl install --set profile=default -y

# Or via Helm for GitOps-managed installs
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update

helm install istio-base istio/base -n istio-system --create-namespace
helm install istiod istio/istiod -n istio-system --values istiod-values.yaml
```
```yaml
# istiod-values.yaml
pilot:
  resources:
    requests:
      cpu: 200m
      memory: 256Mi
    limits:
      cpu: "2"
      memory: 1Gi
  autoscaleEnabled: true
  autoscaleMin: 2
  autoscaleMax: 5

meshConfig:
  accessLogFile: /dev/stdout    # Enable access logging to stdout
  enableTracing: true
  defaultConfig:
    tracing:
      sampling: 1.0    # 1% of requests (scale is 0.0–100.0; 100.0 = everything). Even 1% adds up for high-traffic services — tune to 0.1 or lower in production.
      zipkin:
        address: jaeger-collector.monitoring:9411
```
```bash
# Enable sidecar injection for a namespace
kubectl label namespace production istio-injection=enabled
```

mTLS with PeerAuthentication

By default, Istio operates in permissive mode — sidecars accept both plaintext and mTLS. Enable strict mTLS per namespace (or cluster-wide) to require mTLS for all traffic:

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production     # Apply to the production namespace
  # Use namespace: istio-system to apply mesh-wide
spec:
  mtls:
    mode: STRICT    # Reject any plaintext connections — all services must use mTLS
```

With strict mTLS, the Envoy sidecar automatically presents a SPIFFE X.509 certificate (issued by Istiod's CA) and validates the peer's certificate on every connection. Applications still speak plaintext to their local sidecar — mTLS is transparent to them.

```yaml
# Allow a specific workload to remain in permissive mode (e.g., during migration)
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: payments-legacy-permissive
  namespace: production
spec:
  selector:
    matchLabels:
      app: payments-legacy
  mtls:
    mode: PERMISSIVE    # This workload accepts both mTLS and plaintext
```

Authorization Policy

AuthorizationPolicy controls which services can call which endpoints — L7 access control based on source workload identity (SPIFFE), namespace, and request attributes:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: payments-api-authz
  namespace: production
spec:
  selector:
    matchLabels:
      app: payments-api
  action: ALLOW
  rules:
    - from:
        - source:
            principals:
              - "cluster.local/ns/production/sa/orders-api"    # Only orders-api can call payments-api
              - "cluster.local/ns/production/sa/billing-api"
      to:
        - operation:
            methods: ["POST", "GET"]
            paths: ["/api/v1/payments/*"]
    - from:
        - source:
            namespaces: ["monitoring"]    # Allow monitoring scraping from any monitoring workload
      to:
        - operation:
            ports: ["9090"]    # Metrics port only
```

With no AuthorizationPolicy applied to a workload, all traffic is allowed (even with strict mTLS). Apply a default deny-all and then explicit allow rules:

```yaml
# Default deny-all for the namespace — then add explicit allows per service
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: production
spec: {}   # Empty spec = deny all
```
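AuthorizationPolicy also supports an explicit DENY action, which Istio evaluates before any ALLOW policies — useful for carving hard exceptions out of otherwise-permitted traffic. A sketch (the workload label and path are illustrative):

```yaml
# Block admin paths regardless of any ALLOW rules (DENY is evaluated first)
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: deny-admin-paths
  namespace: production
spec:
  selector:
    matchLabels:
      app: payments-api
  action: DENY
  rules:
    - to:
        - operation:
            paths: ["/admin/*"]
```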

Traffic Management

VirtualService

VirtualService configures how traffic is routed to a service — split, retry, timeout, rewrite, fault injection:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: payments-api
  namespace: production
spec:
  hosts:
    - payments-api    # Kubernetes service name (or FQDN)
  http:
    - match:
        - headers:
            x-canary:
              exact: "true"
      route:
        - destination:
            host: payments-api
            subset: canary    # Route canary header traffic to the canary subset
    - route:
        - destination:
            host: payments-api
            subset: stable
          weight: 90
        - destination:
            host: payments-api
            subset: canary
          weight: 10    # 10% of traffic to canary
      retries:
        attempts: 3
        perTryTimeout: 5s
        retryOn: "5xx,gateway-error,connect-failure"
      timeout: 15s
      fault:
        abort:
          percentage:
            value: 0.1    # Inject 0.1% error rate for chaos testing (disable in production)
          httpStatus: 500
```
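Besides weighted splits, a VirtualService can mirror live traffic to the canary without affecting responses — the mirrored copies are fire-and-forget, so a broken canary never impacts callers. A sketch (subset names assume a matching DestinationRule; mirrorPercentage is optional):

```yaml
http:
  - route:
      - destination:
          host: payments-api
          subset: stable
    mirror:
      host: payments-api
      subset: canary
    mirrorPercentage:
      value: 5.0    # Mirror 5% of requests; mirrored responses are discarded
```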

DestinationRule

DestinationRule defines subsets (for traffic splitting) and connection pool settings (circuit breaking):

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: payments-api
  namespace: production
spec:
  host: payments-api
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100
        http2MaxRequests: 1000
      tcp:
        maxConnections: 100
    outlierDetection:
      consecutive5xxErrors: 5     # Trip circuit after 5 consecutive 5xx errors
      interval: 30s               # Evaluation window
      baseEjectionTime: 30s       # Minimum ejection duration
      maxEjectionPercent: 50      # Never eject more than 50% of endpoints
  subsets:
    - name: stable
      labels:
        version: stable
    - name: canary
      labels:
        version: canary
```

outlierDetection is Istio's circuit breaker — it ejects endpoints that return consecutive 5xx errors from the load balancing pool for a period. This is passive circuit breaking based on observed errors, not a traditional half-open state machine.


Ingress with Istio Gateway

Istio's Gateway replaces a conventional Ingress controller for external traffic entering the mesh:

```yaml
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: production-gateway
  namespace: production
spec:
  selector:
    istio: ingressgateway    # Matches the Istio Ingress Gateway pods
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: api-tls    # Kubernetes Secret with TLS cert/key (from cert-manager)
      hosts:
        - "api.example.com"
    - port:
        number: 80
        name: http
        protocol: HTTP
      tls:
        httpsRedirect: true    # Redirect HTTP → HTTPS
      hosts:
        - "api.example.com"
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: api-ingress
  namespace: production
spec:
  hosts:
    - "api.example.com"
  gateways:
    - production-gateway
  http:
    - route:
        - destination:
            host: payments-api
            port:
              number: 80
```
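The api-tls Secret referenced by credentialName can be issued by cert-manager. A sketch assuming a letsencrypt-prod ClusterIssuer already exists — note the Secret must live in the namespace of the ingress gateway pods (istio-system by default), not the application namespace:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: api-tls
  namespace: istio-system    # Same namespace as the ingress gateway workload
spec:
  secretName: api-tls        # Matches credentialName in the Gateway
  dnsNames:
    - api.example.com
  issuerRef:
    name: letsencrypt-prod   # Assumed ClusterIssuer name
    kind: ClusterIssuer
```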

Observability

Istio's Envoy sidecars emit metrics, logs, and traces for every request — no instrumentation required.

```bash
# Prometheus metrics from Envoy (scraped automatically if you run kube-prometheus-stack
# with a ServiceMonitor/PodMonitor targeting the sidecars)
# Key metrics:
#   istio_requests_total — request count with labels: source, destination, response_code
#   istio_request_duration_milliseconds — latency histogram
#   istio_tcp_connections_opened_total

# Access logs for every request (to stdout → collected by Fluent Bit → Loki)
# Format configured in meshConfig.accessLogEncoding (JSON or TEXT)
```
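These metrics answer the standard SLO questions directly. A couple of illustrative PromQL queries (label values depend on your service names):

```promql
# Success rate for payments-api over 5 minutes
sum(rate(istio_requests_total{destination_service_name="payments-api", response_code!~"5.."}[5m]))
/
sum(rate(istio_requests_total{destination_service_name="payments-api"}[5m]))

# P99 server-side latency in milliseconds
histogram_quantile(0.99,
  sum(rate(istio_request_duration_milliseconds_bucket{destination_service_name="payments-api"}[5m])) by (le))
```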

Kiali

Kiali is the Istio observability UI — service dependency graph, traffic flows, and policy visualization:

```bash
helm repo add kiali https://kiali.org/helm-charts
helm repo update

helm install kiali-server kiali/kiali-server \
  --namespace istio-system \
  --set auth.strategy=anonymous   # or "token" for production access control
```

Distributed Tracing

Istio propagates Zipkin-compatible trace headers (B3 or W3C TraceContext). Applications must propagate the headers downstream (pass x-request-id, x-b3-* or traceparent through to outgoing calls) — Envoy creates the span, but the app is responsible for header propagation:

```go
package main

import "net/http"

// forwardTraceHeaders propagates trace headers from an incoming request
// to an outgoing call so Envoy's spans join into a single trace.
func forwardTraceHeaders(incoming *http.Request, outgoing *http.Request) {
	headers := []string{
		"x-request-id", "x-b3-traceid", "x-b3-spanid",
		"x-b3-parentspanid", "x-b3-sampled", "x-b3-flags",
		"traceparent", "tracestate",
	}
	for _, h := range headers {
		if v := incoming.Header.Get(h); v != "" {
			outgoing.Header.Set(h, v)
		}
	}
}
```

Frequently Asked Questions

What's the performance overhead of the Envoy sidecar?

Envoy adds ~2-5ms of latency per hop for typical workloads (measured P99 latency increase). CPU overhead is approximately 0.5 vCPU per 1,000 RPS per sidecar. Memory overhead is ~50MB per sidecar. For very high-RPS services, use HorizontalPodAutoscaler based on custom metrics rather than CPU to account for sidecar overhead not attributed to the application container.
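Sidecar resources can also be tuned per pod via annotations rather than mesh-wide defaults — useful for the high-RPS services mentioned above (the values here are illustrative, not recommendations):

```yaml
metadata:
  annotations:
    sidecar.istio.io/proxyCPU: "500m"        # Request sized for the expected RPS
    sidecar.istio.io/proxyMemory: "128Mi"
    sidecar.istio.io/proxyCPULimit: "2"
    sidecar.istio.io/proxyMemoryLimit: "1Gi"
```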

How do I exclude a service from the mesh?

```yaml
# Opt out individual pods from sidecar injection
metadata:
  annotations:
    sidecar.istio.io/inject: "false"
```

Or disable injection for an entire namespace by removing the istio-injection: enabled label and restarting pods.

Ambient mesh — what is it?

Istio 1.24+ ships ambient mesh as stable (it was Beta in 1.22): a sidecar-free architecture where networking is handled by a per-node ztunnel (L4) and optional waypoint proxies (L7). This eliminates per-pod sidecar memory overhead while preserving mTLS. For new Istio deployments in 2026, ambient mode is production-ready and the recommended approach for clusters where sidecar overhead is a concern.


Istio Gateway's credentialName field integrates with cert-manager certificates — cert-manager issues and rotates the TLS certificate as a Kubernetes Secret, and the Gateway references it by name. See cert-manager: Automated TLS Certificates for Kubernetes for the full setup. For Argo Rollouts progressive delivery that integrates with Istio's VirtualService for traffic splitting, see Argo Rollouts: Progressive Delivery. For the Prometheus Operator that scrapes Istio metrics from Envoy sidecars, see Prometheus Operator: Production Monitoring.

Evaluating Istio for a microservices platform? Talk to us at Coding Protocols — we help platform teams design service mesh architectures that add security and observability without overwhelming operational complexity.

Related Topics

Istio
Service Mesh
Kubernetes
Envoy
mTLS
Traffic Management
Observability
Platform Engineering
