
Install Cilium as Your Kubernetes CNI with Hubble Observability

Advanced · 30 min to complete · 16 min read

Install Cilium as your cluster's CNI plugin, replace kube-proxy with eBPF, enable Hubble for per-flow network observability, run the connectivity test suite, and enforce a NetworkPolicy — all with working commands.

Before you begin

  • A running Kubernetes cluster (or a new cluster with no CNI installed)
  • kubectl configured with cluster-admin access
  • Helm 3 installed
  • For fresh clusters: the API server address and port (Step 3 shows how to find them)

Tags: Cilium · eBPF · Kubernetes · Networking · Hubble · Network Policy · CNI

Most Kubernetes clusters run with whatever CNI came with the managed service — usually something that handles basic pod networking but gives you no visibility into what's actually talking to what. Cilium is different: it uses eBPF to implement networking directly in the Linux kernel, replacing kube-proxy with a faster and more observable alternative, and gives you per-flow network visibility through Hubble — every TCP connection, DNS query, and HTTP request, visible in real time.

This tutorial installs Cilium, enables Hubble, validates the installation, and enforces a NetworkPolicy.

Step 1: Install the Cilium CLI

The Cilium CLI provides cilium status, cilium connectivity test, and cilium hubble ui — the three commands you'll use most:

bash
# macOS
brew install cilium-cli

# Linux
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
curl -L --fail --remote-name-all \
  "https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz" \
  "https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz.sha256sum"
sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz cilium-linux-amd64.tar.gz.sha256sum
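
The Linux commands above assume an amd64 host. On arm64 the release asset name differs; here is a sketch of the variant, with CLI_ARCH as the only change:

bash
# Assumption: an arm64 Linux host; the release assets follow the same
# naming pattern with the architecture swapped in.
CLI_ARCH=arm64
curl -L --fail --remote-name-all \
  "https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz" \
  "https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz.sha256sum"
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz cilium-linux-${CLI_ARCH}.tar.gz.sha256sum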

Verify:

bash
cilium version --client

Step 2: Add the Cilium Helm repository

bash
helm repo add cilium https://helm.cilium.io/
helm repo update
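
If you want to pin a specific chart version rather than take the latest (a reasonable habit for production clusters), list what the repository offers first:

bash
# List available chart versions; pass --version <x.y.z> to helm install to pin one
helm search repo cilium/cilium --versions | head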

Step 3: Install Cilium

Get your API server address first. For self-managed clusters (kubeadm), read it from the kubeconfig — this is more reliable than querying the kubernetes Endpoints object, which can return internal addresses on managed clusters like EKS:

bash
API_SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}' | sed 's|https://||' | cut -d: -f1)
API_PORT=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}' | sed 's|.*:||')
echo "API server: ${API_SERVER}:${API_PORT}"

Install Cilium with kube-proxy replacement:

bash
helm install cilium cilium/cilium \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=${API_SERVER} \
  --set k8sServicePort=${API_PORT}

kubeProxyReplacement=true requires Cilium 1.15 or later. On Cilium 1.14 and earlier, use --set kubeProxyReplacement=strict instead.
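
To confirm the replacement is actually active, you can query the agent's own status (on 1.15+ the in-pod debug binary is named cilium-dbg):

bash
kubectl -n kube-system exec ds/cilium -- cilium-dbg status | grep -i kubeproxyreplacement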

For EKS specifically, use the cluster endpoint (and, per the Cilium EKS installation guide, remove or patch the aws-node DaemonSet so the AWS VPC CNI doesn't conflict with Cilium):

bash
API_SERVER=$(aws eks describe-cluster \
  --name my-cluster \
  --query "cluster.endpoint" \
  --output text | sed 's|https://||; s|/$||')

helm install cilium cilium/cilium \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=${API_SERVER} \
  --set k8sServicePort=443

Step 4: Wait for Cilium to be ready

bash
cilium status --wait --wait-duration 10m

The default timeout is 5 minutes. --wait-duration 10m gives more time for images to pull on the first install.

Expected output:

    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble Relay:   disabled
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

DaemonSet              cilium             Desired: 3, Ready: 3/3, Available: 3/3
Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1

All desired pods must be Ready before proceeding.
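
If the status sits below the desired count, watching the DaemonSet rollout usually shows which node is stuck. On a cluster that previously ran another CNI, pods created before the install also keep their old networking until restarted (skip this on a fresh cluster):

bash
# Watch the agent roll out node by node
kubectl -n kube-system rollout status ds/cilium

# Restart pre-existing workloads so Cilium takes over their networking
# (my-app is a hypothetical name; repeat per workload as needed)
kubectl -n default rollout restart deployment my-app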

Step 5: Enable Hubble

Hubble is Cilium's observability layer. It records every network flow at the eBPF level — lower overhead than iptables logging, and richer detail than application logs alone:

bash
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true

Verify Hubble is running:

bash
cilium status

Hubble Relay should now show OK.
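
You can also check the two Deployments the chart added directly:

bash
kubectl -n kube-system get deploy hubble-relay hubble-ui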

Step 6: Run the connectivity test suite

The connectivity test deploys test pods and validates all critical network paths — pod-to-pod, pod-to-service, external access, DNS, and more:

bash
cilium connectivity test

This takes 5–10 minutes. All tests must pass:

✅ All N tests (M actions) successful, 0 tests skipped, 0 scenarios skipped.

The exact test count varies by cilium-cli version and cluster topology.

If any tests fail, check cilium status and the cilium pod logs before proceeding to production use.
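
Two starting points for debugging: the agent logs, and re-running a single test by name instead of the full suite (the --test flag accepts a test name such as pod-to-pod):

bash
# Recent agent logs (kubectl picks an arbitrary pod from the DaemonSet)
kubectl -n kube-system logs ds/cilium --tail=100

# Re-run just one test by name
cilium connectivity test --test pod-to-pod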

Step 7: Explore the Hubble UI

The Hubble UI shows a live service map with all network flows in your cluster:

bash
cilium hubble ui

On macOS this opens a browser tab automatically. On Linux, open http://localhost:12000 manually. Click on any pod to see its inbound and outbound flows, with HTTP status codes, DNS resolutions, and dropped connections highlighted.
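
If you prefer plain kubectl over the CLI wrapper, a port-forward to the hubble-ui Service works too (the chart exposes it on port 80 by default):

bash
kubectl -n kube-system port-forward svc/hubble-ui 12000:80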

Step 8: Observe flows with the Hubble CLI

The Hubble CLI gives you a tcpdump-style view of network flows. First, install it if you don't already have it:

bash
# macOS
brew install hubble

# Linux — download from the Hubble releases page
HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L --fail --remote-name-all \
  "https://github.com/cilium/hubble/releases/download/${HUBBLE_VERSION}/hubble-linux-amd64.tar.gz"
sudo tar xzvfC hubble-linux-amd64.tar.gz /usr/local/bin
rm hubble-linux-amd64.tar.gz

Then start the relay port-forward and observe flows:

bash
# Start Hubble relay port-forward (runs in background)
cilium hubble port-forward &

# Observe all flows in the default namespace
hubble observe --namespace default --follow

# Filter to Layer 7 (HTTP, gRPC, Kafka) traffic
hubble observe --namespace default -t l7 --follow

# Show only dropped flows (policy violations)
hubble observe --namespace default --verdict DROPPED --follow
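
Two more filters worth knowing, shown as a sketch (frontend is the pod deployed in Step 10):

bash
# All flows touching one specific pod
hubble observe --pod default/frontend --follow

# DNS traffic only, useful for spotting failing lookups
hubble observe --namespace default --protocol dns --follow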

Step 9: Enforce a NetworkPolicy

Without a NetworkPolicy, every pod can reach every other pod. This policy restricts ingress to pods labelled app=frontend so they accept traffic only from pods labelled app=api, on TCP port 80:

bash
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api
    ports:
    - protocol: TCP
      port: 80
EOF
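
Confirm the policy was accepted and see how Kubernetes summarizes it:

bash
kubectl describe networkpolicy frontend-ingress -n default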

Step 10: Verify policy enforcement via Hubble

Deploy a frontend pod that matches the policy, expose it as a Service, then test from an allowed client and a blocked one:

bash
# Deploy a frontend pod matching the policy's podSelector, and expose it as a Service
kubectl run frontend --image=nginx --labels=app=frontend --port=80
kubectl expose pod frontend --port=80

# Deploy an api pod (should be allowed)
kubectl run api --image=busybox --labels=app=api -- sleep 3600

# Deploy an attacker pod (should be blocked)
kubectl run attacker --image=busybox -- sleep 3600

# Wait for all three pods to be ready
kubectl wait --for=condition=Ready pod/frontend pod/api pod/attacker

# Connect from the api pod (should succeed; busybox wget uses -T for timeout)
kubectl exec api -- wget -qO- -T 3 http://frontend.default.svc.cluster.local

# Connect from the attacker pod (should time out, dropped by the policy)
kubectl exec attacker -- wget -qO- -T 3 http://frontend.default.svc.cluster.local

Watch Hubble to see the DROPPED flows as the attacker's connection is denied:

bash
hubble observe --namespace default --verdict DROPPED --follow

Dropped flows appear within milliseconds of the policy violation — you can use this to debug NetworkPolicy issues in production without any application logging changes.
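
When you're done experimenting, clean up the test resources created in Steps 9 and 10:

bash
kubectl delete pod api attacker frontend
kubectl delete service frontend
kubectl delete networkpolicy frontend-ingress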

What you built

Cilium is your cluster's CNI with kube-proxy replaced by eBPF. Every network connection goes through the eBPF datapath — faster forwarding, lower CPU overhead, and full observability via Hubble. NetworkPolicies are enforced at the kernel level: packets are dropped before they reach the application, with negligible overhead on allowed flows. The Hubble UI gives you a live service map that updates automatically as new pods are deployed — no manual diagram maintenance required.

We built Podscape to simplify Kubernetes workflows like this — logs, events, and cluster state in one interface, without switching tools.

Struggling with this in production?

We help teams fix these exact issues. Our engineers have deployed these patterns across production environments at scale.