Kubernetes
15 min read · May 2, 2026

Cilium: Advanced Networking, Security, and Observability on Kubernetes

Cilium replaces kube-proxy with eBPF-native routing, adds identity-based NetworkPolicy with FQDN support, and provides deep network observability through Hubble — all without adding a sidecar per pod. This covers the production configuration: replacing kube-proxy, writing FQDN-based egress policies, and using Hubble for real-time network flow visibility.

Ajeet Yadav
Platform & Cloud Engineer

Standard Kubernetes networking has three separate layers that don't talk to each other: kube-proxy (service routing via iptables), NetworkPolicy (filtered by CNI), and network observability (whatever you bolt on). Cilium replaces all three with a single eBPF data plane: kube-proxy replacement routes service traffic in the kernel, CiliumNetworkPolicy adds FQDN-based egress and identity-aware rules that vanilla NetworkPolicy can't express, and Hubble surfaces the flow data that eBPF already collects.

The result isn't a smaller attack surface for its own sake; it's operational simplicity. One agent on each node handles forwarding, filtering, and flow export.
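Once installed, the consolidation is visible directly: the whole data plane is one DaemonSet (assuming the default kube-system install shown below):

bash
# One cilium agent pod per node; no kube-proxy, no per-pod sidecars
kubectl -n kube-system get daemonset cilium
kubectl -n kube-system get pods -l k8s-app=cilium -o wide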


Installation on EKS

Prerequisites

bash
# EKS requires specific ENI configuration for Cilium
# Disable aws-node (VPC CNI) before installing Cilium's ENI mode
kubectl -n kube-system set image daemonset/aws-node \
  aws-node=public.ecr.aws/amazonlinux/amazonlinux:latest

kubectl -n kube-system set image daemonset/aws-node \
  aws-vpc-cni-init=public.ecr.aws/amazonlinux/amazonlinux:latest

Warning: Replacing the aws-node image with Amazon Linux is an unsupported workaround. For production, prefer kubectl delete daemonset aws-node -n kube-system (after patching the CNI config to hand off IPAM to Cilium), or use Cilium's chaining mode (cni.chainingMode=aws-cni) which leaves aws-node intact and only adds Cilium's policy enforcement layer on top.
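If you take the chaining route, a minimal sketch of the relevant Helm values, based on Cilium's AWS VPC CNI chaining guide (verify option names against the docs for your Cilium version):

yaml
# cilium-chaining-values.yaml: Cilium on top of aws-node for policy + observability
cni:
  chainingMode: aws-cni
  exclusive: false            # Leave the VPC CNI's config in place
enableIPv4Masquerade: false   # The VPC CNI already handles masquerading
routingMode: native
endpointRoutes:
  enabled: true               # Per-endpoint routes, required for chaining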

Helm Install

bash
helm repo add cilium https://helm.cilium.io/
helm repo update

helm install cilium cilium/cilium \
  --version 1.17.3 \
  --namespace kube-system \
  --values cilium-values.yaml
yaml
# cilium-values.yaml
# Use Cilium's ENI mode on EKS (assigns secondary IPs to pods directly)
eni:
  enabled: true
ipam:
  mode: eni

# Replace kube-proxy with eBPF (disable kube-proxy daemonset separately)
kubeProxyReplacement: "true"
# k8sServiceHost: auto-detected on EKS from instance metadata; omit this field
k8sServicePort: "443"

# WireGuard transparent encryption (encrypt pod-to-pod traffic)
encryption:
  enabled: true
  type: wireguard

# Hubble observability
hubble:
  enabled: true
  relay:
    enabled: true
  ui:
    enabled: true
  metrics:
    enabled:
      - dns
      - drop
      - tcp
      - flow
      - port-distribution
      - httpV2:exemplars=true;labelsContext=source_ip,source_namespace,source_workload,destination_ip,destination_namespace,destination_workload,traffic_direction

# Enable Prometheus metrics for Cilium agent
prometheus:
  enabled: true
  serviceMonitor:
    enabled: true    # Requires Prometheus Operator

Disable kube-proxy

bash
# On EKS, kube-proxy runs as a managed addon
# Disable it after Cilium is running
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name kube-proxy \
  --configuration-values '{"nodeSelector":{"no-nodes":"available"}}'
# Alternative: `aws eks delete-addon --cluster-name my-cluster --addon-name kube-proxy`
# Note: with kubeProxyReplacement: true, Cilium handles all service routing via
# eBPF, but kube-proxy's stale iptables rules remain until it is actually
# removed, so still disable or delete the addon.

# Or delete the kube-proxy DaemonSet if self-managed
kubectl -n kube-system delete daemonset kube-proxy
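After kube-proxy is gone, confirm Cilium has taken over service handling (run from any Cilium agent pod; exact status wording varies by version):

bash
# Should show kube-proxy replacement enabled
kubectl -n kube-system exec ds/cilium -- cilium status | grep -i kubeproxyreplacement

# The eBPF service map that replaces kube-proxy's iptables chains
kubectl -n kube-system exec ds/cilium -- cilium service list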

Verify Installation

bash
# Check Cilium agent status on all nodes
cilium status --wait

# Connectivity test (deploys test pods and validates end-to-end)
cilium connectivity test

CiliumNetworkPolicy: Beyond Standard NetworkPolicy

Cilium extends standard NetworkPolicy with CiliumNetworkPolicy — adding FQDN-based rules, L7 (HTTP) filtering, and entity-based selectors.

Default Deny

yaml
# Deny all ingress and egress for the payments namespace
# Note: an empty rule object {} means "allow all" in CiliumNetworkPolicy.
# To deny all, provide an empty array [] or omit the key entirely.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: default-deny
  namespace: payments
spec:
  endpointSelector: {}    # Matches all pods in namespace
  ingress: []             # Empty array = deny all ingress (no rules = no allowed sources)
  egress: []              # Empty array = deny all egress

FQDN-Based Egress

Standard NetworkPolicy can only filter by IP/CIDR. FQDN rules let you write policies against hostnames that Cilium resolves dynamically:

yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: payments-egress
  namespace: payments
spec:
  endpointSelector:
    matchLabels:
      app: payments-api

  egress:
    # Allow DNS (required for FQDN resolution)
    - toEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"    # Allow all DNS queries

    # Allow access to AWS Secrets Manager (FQDN)
    - toFQDNs:
        - matchName: "secretsmanager.us-east-1.amazonaws.com"
        - matchPattern: "*.s3.us-east-1.amazonaws.com"    # Wildcard pattern
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP

    # Allow access to internal services
    - toEndpoints:
        - matchLabels:
            app: postgres
            io.kubernetes.pod.namespace: payments
      toPorts:
        - ports:
            - port: "5432"
              protocol: TCP

    # Allow access to Kubernetes API (for workloads that need it)
    - toEntities:
        - kube-apiserver
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
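
To see what the FQDN rules resolve to at runtime, inspect the agent's FQDN cache (the pod name is a placeholder), or watch DNS flows through Hubble:

bash
# IPs Cilium has learned for each allowed DNS name
kubectl -n kube-system exec cilium-xxxxx -- cilium fqdn cache list

# DNS queries and responses as the policy sees them
hubble observe --namespace payments --protocol dns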

Entities are Cilium-specific: world (external traffic), cluster (all cluster endpoints), kube-apiserver, host, remote-node. These avoid needing to know IP addresses for well-known targets.
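For example, an illustrative egress rule (workload labels assumed) that lets the same payments-api reach anything outside the cluster over HTTPS:

yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-egress-world-https
  namespace: payments
spec:
  endpointSelector:
    matchLabels:
      app: payments-api
  egress:
    - toEntities:
        - world            # Any destination outside the cluster
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP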

L7 HTTP Policy

Cilium can filter HTTP requests at layer 7 — block specific methods or paths:

yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: admin-api-access
  namespace: payments
spec:
  endpointSelector:
    matchLabels:
      app: payments-admin-api

  ingress:
    # Only allow GET from monitoring namespace (no POST/DELETE)
    - fromEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: monitoring
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: /metrics

    # Allow full access from internal ops tools
    - fromEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: ops
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
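
Cilium enforces L7 rules through its per-node Envoy proxy, which answers filtered requests with HTTP 403 rather than dropping packets. A quick check, assuming a curl-capable pod in the monitoring namespace and a Service named payments-admin-api (both placeholders):

bash
# GET /metrics is allowed (expect 200)
kubectl -n monitoring exec deploy/metrics-scraper -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://payments-admin-api.payments:8080/metrics

# POST matches no rule (expect 403 from the proxy)
kubectl -n monitoring exec deploy/metrics-scraper -- \
  curl -s -o /dev/null -w '%{http_code}\n' -X POST http://payments-admin-api.payments:8080/metrics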

Hubble: Network Flow Observability

Hubble is Cilium's observability layer — it exposes the eBPF flow data that Cilium already collects. No sidecars needed.

Hubble CLI

bash
# Install hubble CLI
HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L --fail --remote-name-all \
  "https://github.com/cilium/hubble/releases/download/${HUBBLE_VERSION}/hubble-linux-amd64.tar.gz"
tar xzvf hubble-linux-amd64.tar.gz
sudo mv hubble /usr/local/bin/

# Port-forward Hubble relay
cilium hubble port-forward &

# Observe all flows in the payments namespace
hubble observe --namespace payments --follow

# Show only dropped flows (policy denials)
hubble observe --namespace payments --verdict DROPPED

# Show flows to/from a specific pod
hubble observe --pod payments/payments-api-7d8f9b-xyz

# Observe with JSON for analysis
hubble observe --namespace payments --output json | jq '.flow | {src: .source, dst: .destination, verdict: .verdict}'

Hubble UI

bash
# Port-forward Hubble UI
kubectl port-forward -n kube-system svc/hubble-ui 12000:80

# Open http://localhost:12000
# Select namespace from dropdown — see real-time flow graph

The Hubble UI shows a service dependency graph in real time: which services talk to which, which connections are being dropped by policy, and traffic volumes between namespaces.

Hubble Metrics in Prometheus

The Cilium values above enable Hubble metrics, which Prometheus scrapes. Key series include:

# Total HTTP requests by namespace, workload, and response code
hubble_http_requests_total{source_namespace, source_workload, destination_namespace, destination_workload, method, protocol, reporter, status_code}

# DNS query counts by type and response code
hubble_dns_queries_total{source_namespace, source_workload, qtypes, rcode}

# Drop reasons (policy deny, forwarding error, etc.)
hubble_drop_total{source_namespace, destination_namespace, reason, direction}

Example Grafana query for policy drop rate:

promql
# Drops per minute by source namespace
sum by (source_namespace) (rate(hubble_drop_total[5m])) * 60
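
As a sketch, a Prometheus alerting rule on sustained policy drops might look like this (threshold, rule name, and the reason label value are illustrative; check what your hubble_drop_total series actually carries):

yaml
groups:
  - name: hubble-network
    rules:
      - alert: HubblePolicyDropSpike
        # Fires when a namespace sustains >1 denied flow/sec for 10 minutes
        expr: sum by (source_namespace) (rate(hubble_drop_total{reason="POLICY_DENIED"}[5m])) > 1
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Sustained policy drops from {{ $labels.source_namespace }}"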

Cilium Transparent Encryption (WireGuard) and Identity Policy

Note: WireGuard provides node-to-node transparent encryption — it encrypts pod traffic in transit between nodes using keys managed at the kernel level. This is NOT per-service mutual TLS with X.509 certificates. For per-service mTLS with cryptographic service identity, use Cilium Mutual Auth (SPIFFE/SPIRE integration, available in Cilium 1.14+), which issues SPIFFE SVIDs to pods and validates them on every connection.

Cilium provides transparent node-to-node encryption using WireGuard, enabled cluster-wide in the Helm values above. For per-service mTLS with X.509 identity (Cilium Mutual Auth), see the note. With encryption handled at the node level, policy only needs to pin down which identities may connect, and no Envoy sidecar per pod is required:

yaml
# Pod-to-pod encryption is already on cluster-wide (encryption.type: wireguard).
# This policy layers identity enforcement on top: only endpoints with a
# cluster-internal Cilium identity may connect to pods in payments.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: require-cluster-identity
  namespace: payments
spec:
  endpointSelector: {}
  ingress:
    # Only accept traffic from pods with a Cilium identity (i.e., cluster pods)
    - fromEntities:
        - cluster

For full L7 traffic management (retries, circuit breaking, header manipulation), Cilium integrates with Envoy as a shared per-node proxy (not per-pod), configured via CiliumEnvoyConfig CRD.


Monitoring Cilium Agent Health

bash
# Check per-node Cilium status
kubectl get ciliumnode -o wide

# Check endpoint (pod) policy enforcement
kubectl get ciliumendpoints -A

# Detailed status on specific node
kubectl -n kube-system exec -it cilium-xxxxx -- cilium status --verbose

# Policy verification — what policies apply to a pod
kubectl -n kube-system exec -it cilium-xxxxx -- \
  cilium endpoint list
kubectl -n kube-system exec -it cilium-xxxxx -- \
  cilium policy get

Cilium ServiceMonitor

yaml
# The Helm install creates this automatically when prometheus.serviceMonitor.enabled=true
# Key metrics to alert on:
# cilium_forward_count_total — forwarded packets (should be increasing)
# cilium_drop_count_total — dropped packets (policy denials)
# cilium_endpoint_state — endpoint health states
# cilium_controllers_failing — number of failing controllers (should be 0)
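
A matching alert sketch for agent health (rule names and thresholds are illustrative):

yaml
groups:
  - name: cilium-agent
    rules:
      - alert: CiliumControllersFailing
        # Any controller failing for 5 minutes means reconciliation is stuck
        expr: cilium_controllers_failing > 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Cilium controller(s) failing on {{ $labels.pod }}"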

Frequently Asked Questions

Does Cilium replace NetworkPolicy?

Cilium is a drop-in superset. Standard NetworkPolicy resources work unchanged — Cilium enforces them. CiliumNetworkPolicy extends them with additional capabilities (FQDN, L7, entities). You can run both in the same cluster and the same namespace.

What's the performance difference versus kube-proxy + iptables?

eBPF service routing bypasses iptables entirely. For clusters with hundreds of services, iptables has O(n) lookup time — adding a service means rewriting the full chain. eBPF uses kernel-native hash tables: O(1) regardless of service count. The practical difference becomes significant above ~1,000 services or on nodes processing >50k connections/second.

Is WireGuard encryption compatible with EKS?

Yes. WireGuard is available in the Linux 5.6+ kernel, which all EKS-optimized AMIs (AL2, AL2023) include. The overhead is typically <5% CPU and <3% latency at normal workloads. It encrypts node-to-node traffic: pod-to-pod across nodes is encrypted, while same-node pod-to-pod traffic never leaves the host and is not encrypted (nor does it need to be).
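To confirm encryption is active, query the agent directly (run from any Cilium agent pod; output wording varies by Cilium version):

bash
# Should report WireGuard as the encryption mode, with peer/key info
kubectl -n kube-system exec ds/cilium -- cilium encrypt status
kubectl -n kube-system exec ds/cilium -- cilium status | grep -i encryption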


For network policies that enforce egress controls across all workloads from the start, see Kubernetes Network Policies: Zero-Trust Networking. For Tetragon (eBPF-based runtime security also from the Cilium project), see eBPF for Platform Engineering: Cilium and Tetragon. For Cilium fundamentals, installation, and kube-proxy replacement on EKS, see Cilium and eBPF: High-Performance Kubernetes Networking.

Migrating from kube-proxy to Cilium on production EKS clusters? Talk to us at Coding Protocols — we help platform teams adopt Cilium incrementally, starting with observability and extending to kube-proxy replacement and policy enforcement over time.

Related Topics

Cilium
eBPF
Networking
Kubernetes
NetworkPolicy
Hubble
EKS
Platform Engineering
Security
