Security
13 min read · May 2, 2026

Kubernetes NetworkPolicy: Zero-Trust Networking for Multi-Team Clusters

By default, every pod in a Kubernetes cluster can talk to every other pod — no firewall, no segmentation. NetworkPolicy lets you define exactly which pods can send traffic to which other pods, implementing zero-trust networking at the Kubernetes layer. Here are the patterns that work in production, the AND/OR semantics that trip everyone up, and what to do when your CNI doesn't enforce NetworkPolicy.

Ajeet Yadav
Platform & Cloud Engineer

Kubernetes NetworkPolicy is the standard API for pod-level firewall rules — but it has several non-obvious behaviours that make it easy to write policies that look correct and silently don't work the way you think. The AND/OR semantics of from and to selectors trip up experienced engineers. The default-allow posture means a namespace without any NetworkPolicy is wide open. And NetworkPolicy only works if your CNI enforces it — several common CNIs don't.

Getting NetworkPolicy right is foundational to multi-tenant cluster security. This post covers the patterns that work, the gotchas to avoid, and the Cilium extensions that handle what standard NetworkPolicy can't.


How NetworkPolicy Works

NetworkPolicy resources are namespace-scoped and select pods using podSelector. Policies are additive — if two policies match the same pod, the union of their rules applies. There is no "deny" rule; absence of a matching policy means allow-all.

Critically: NetworkPolicy only applies to pods explicitly selected by at least one policy. A namespace with no NetworkPolicy objects has no restrictions on any pod. A namespace with one NetworkPolicy that selects only the payments-api pod leaves all other pods in that namespace unrestricted.

The practical implication: you need a default-deny policy in every namespace, then explicit allow policies for each service. Without default-deny, any NetworkPolicy you write is additive to an implicit allow-all.
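A quick way to audit this posture, as a sketch in bash (assumes kubectl access to list policies cluster-wide):

bash
# Flag namespaces that contain no NetworkPolicy objects at all (wide open)
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  count=$(kubectl get networkpolicy -n "$ns" --no-headers 2>/dev/null | wc -l)
  [ "$count" -eq 0 ] && echo "$ns: no NetworkPolicy (allow-all)"
done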


CNI Enforcement: Who Actually Enforces These Policies?

NetworkPolicy is a Kubernetes API, but enforcement is the CNI's responsibility. Not all CNIs enforce it:

CNI                                   NetworkPolicy Support
Calico                                Full
Cilium                                Full + extended (CiliumNetworkPolicy, FQDN, L7)
Weave Net                             Full
Flannel                               None — NetworkPolicy objects are accepted by the API but silently ignored
Amazon VPC CNI (default EKS)          None by default — requires Calico or the EKS Network Policy feature
EKS Network Policy (aws-node eBPF)    Full Kubernetes NetworkPolicy (no extended policies)

EKS Network Policy (available since 2023, based on eBPF in the VPC CNI) enforces standard Kubernetes NetworkPolicy without requiring a separate CNI. Install as an EKS add-on or enable ENABLE_NETWORK_POLICY=true on the aws-node DaemonSet:

bash
kubectl set env daemonset aws-node -n kube-system ENABLE_NETWORK_POLICY=true

Or enable via the amazon-vpc-cni managed add-on configuration with enableNetworkPolicy: true.
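If you manage the add-on through the AWS CLI, the equivalent looks something like this (my-cluster is a placeholder name):

bash
# Enable network policy enforcement on the managed vpc-cni add-on
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --configuration-values '{"enableNetworkPolicy": "true"}'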

For extended policy capabilities (FQDN-based egress, L7 HTTP rules, deny policies), use Cilium. See Cilium eBPF Kubernetes Networking for Cilium as a full CNI replacement.


Default Deny: The Foundation

Start every namespace with a default deny policy, then open specific paths:

yaml
# Default deny ALL ingress traffic in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}       # {} selects ALL pods in the namespace
  policyTypes:
    - Ingress
yaml
# Default deny ALL ingress AND egress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

Default deny all (ingress + egress) is the stricter starting point. Note: egress default-deny blocks DNS resolution — you must add a DNS egress exception immediately (see Egress Patterns below), or pods won't be able to resolve service names.
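You can see the breakage first-hand; a sketch, assuming a netshoot test pod in the namespace:

bash
# With default-deny-all applied and no DNS exception, resolution times out
kubectl run dns-test --image=nicolaka/netshoot -n production --restart=Never -- sleep 3600
kubectl exec -n production dns-test -- nslookup kubernetes.default
# Expected: connection timed out, until the allow-dns-egress policy (below) is applied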


The AND/OR Selector Trap

The most common NetworkPolicy mistake. In an ingress.from (or egress.to) list:

  • Multiple items in the array are evaluated as OR (any one matching allows traffic)
  • Multiple conditions in the same array item are evaluated as AND (all must match)
yaml
# THIS IS OR — allows traffic from the monitoring namespace
# OR from any pod with label app=prometheus (in any namespace)
ingress:
  - from:
      - namespaceSelector:         # ← Item 1: allow from monitoring namespace
          matchLabels:
            kubernetes.io/metadata.name: monitoring
      - podSelector:               # ← Item 2: allow from any pod with app=prometheus
          matchLabels:
            app: prometheus
yaml
# THIS IS AND — allows traffic ONLY from pods with app=prometheus
# that are also in the monitoring namespace
ingress:
  - from:
      - namespaceSelector:         # ← Single item with two conditions (AND)
          matchLabels:
            kubernetes.io/metadata.name: monitoring
        podSelector:               # ← Same item, indented at same level as namespaceSelector
          matchLabels:
            app: prometheus

The difference is a single level of YAML indentation. Both look similar; one is dramatically more permissive.

Always prefer the AND form (combined selector in one array item) unless you genuinely intend the OR semantics. When debugging, kubectl describe networkpolicy doesn't format the YAML in a way that makes this obvious — test with actual traffic.
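Fetching the raw object back from the API preserves the list structure and makes the two forms easy to tell apart:

bash
# One item under 'from' containing both selectors = AND; two items = OR
kubectl get networkpolicy <policy-name> -n production -o yaml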


Common Ingress Patterns

Allow from a Specific Namespace

yaml
# Allow ingress to the payments-api from the api-gateway namespace only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-api-gateway
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: payments-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: api-gateway
          podSelector:           # AND: must be in api-gateway namespace AND have app=gateway label
            matchLabels:
              app: gateway
      ports:
        - protocol: TCP
          port: 8080

Allow from Prometheus for Metrics Scraping

yaml
# Platform-level: allow Prometheus to scrape metrics from all pods in production
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scraping
  namespace: production
spec:
  podSelector: {}       # Applies to all pods — they all expose metrics
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
          podSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus
      ports:
        - protocol: TCP
          port: 9090   # Or whatever port your app exposes metrics on

Allow Ingress from Ingress Controller

yaml
# Allow inbound traffic from the nginx ingress controller
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
          podSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
      ports:
        - protocol: TCP
          port: 80
        - protocol: TCP
          port: 443
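The namespace and pod labels above match a stock ingress-nginx deployment; Helm values or cloud distributions may label things differently, so verify what your controller actually carries:

bash
# Confirm the labels on the ingress controller namespace and pods
kubectl get namespace ingress-nginx --show-labels
kubectl get pods -n ingress-nginx --show-labels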

Egress Patterns

DNS Exception (Required with Egress Deny)

Without this rule, pods under a default-deny egress policy can't resolve DNS and nothing works:

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}     # Apply to all pods in namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns     # CoreDNS pod label
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53

If you run NodeLocal DNSCache, DNS traffic goes to 169.254.20.10, a link-local address on the node itself, which NetworkPolicy doesn't govern (it applies only to pod-to-pod and pod-to-service traffic). The rule above covers the standard kube-dns path; with NodeLocal DNSCache deployed, DNS queries bypass NetworkPolicy entirely.

Allow Egress to a Database

yaml
# Allow the payments-api to reach the Postgres StatefulSet on port 5432
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-api-to-postgres
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: payments-api
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
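Egress and ingress are enforced independently: under a namespace-wide default deny, the same traffic also needs an ingress allow on the Postgres side. The matching policy, as a sketch:

yaml
# Allow ingress to Postgres from the payments-api only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgres-from-payments-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: payments-api
      ports:
        - protocol: TCP
          port: 5432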

Allow Egress to External CIDR

yaml
# Allow HTTPS egress to the internet (but not to internal RFC-1918 ranges)
egress:
  - to:
      - ipBlock:
          cidr: 0.0.0.0/0
          except:
            - 10.0.0.0/8
            - 172.16.0.0/12
            - 192.168.0.0/16
    ports:
      - protocol: TCP
        port: 443
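That fragment needs a full resource around it before it can be applied; a complete sketch, assuming it targets pods carrying a hypothetical app: ci-runner label:

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-https-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: ci-runner      # Placeholder workload label for illustration
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
      ports:
        - protocol: TCP
          port: 443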

Multi-Namespace Patterns

Namespace Labels for Policy Targeting

NetworkPolicy uses namespaceSelector with label matching. By default, namespaces only have the kubernetes.io/metadata.name label (added automatically since Kubernetes 1.21). For custom segmentation, add labels to namespaces and use them in policies:

bash
# Label namespaces by environment
kubectl label namespace production environment=production
kubectl label namespace staging environment=staging
yaml
# Policy: staging cannot send traffic to production (enforce environment isolation)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-staging-to-production
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchExpressions:
              - key: environment
                operator: NotIn
                values: [staging]   # Block staging; allow everything else (combine with default-deny)

Platform-Level Policy via Kyverno

For policies that need to apply across all namespaces (e.g., always allow monitoring namespace to scrape, always deny cross-environment traffic), use Kyverno's ClusterPolicy to auto-generate NetworkPolicy objects in each namespace:

yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-deny
spec:
  rules:
    - name: generate-default-deny
      match:
        any:
          - resources:
              kinds: [Namespace]
              selector:
                matchLabels:
                  network-policy: managed
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny-all
        namespace: "{{request.object.metadata.name}}"
        synchronize: true
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
              - Egress

Apply the network-policy: managed label to new namespaces and Kyverno automatically generates the default-deny policy.
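In practice, opting a namespace in and confirming the generated object looks like this (team-a is a placeholder):

bash
# Opt the namespace into managed default-deny, then verify Kyverno generated it
kubectl label namespace team-a network-policy=managed
kubectl get networkpolicy default-deny-all -n team-a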


Cilium Extended Policies: FQDN Egress

Standard NetworkPolicy can't express "allow egress to api.github.com" — only IP CIDRs or pod selectors. Cilium's CiliumNetworkPolicy extends this with FQDN-based egress:

yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-external-apis
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: ci-runner
  egress:
    # Allow DNS egress (required for FQDN resolution)
    - toEndpoints:
        - matchLabels:
            k8s-app: kube-dns
            io.kubernetes.pod.namespace: kube-system   # Cilium's namespace label for endpoints
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"     # Allow all DNS queries; Cilium intercepts them to track IPs
    # Allow HTTPS to specific FQDNs
    - toFQDNs:
        - matchName: api.github.com
        - matchName: registry-1.docker.io
        - matchPattern: "*.amazonaws.com"   # Wildcard for all AWS services
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP

Cilium intercepts DNS responses to learn which IP addresses map to each FQDN, then enforces the IP-level policy in real time. This is the standard solution for "allow egress to a specific external service" — standard Kubernetes NetworkPolicy can't do this.
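A quick behavioural check, assuming the workload runs as a Deployment named ci-runner whose image ships curl:

bash
# Allowed FQDN: should return HTTP headers
kubectl exec -n production deploy/ci-runner -- curl -sI --connect-timeout 5 https://api.github.com
# Unlisted FQDN: should time out (dropped by the policy)
kubectl exec -n production deploy/ci-runner -- curl -sI --connect-timeout 5 https://example.com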


Testing NetworkPolicy

Don't assume policies work because they're applied. Test them:

bash
# Deploy two test pods in different namespaces
kubectl run test-source --image=nicolaka/netshoot -n monitoring -- sleep infinity
kubectl run test-target --image=nginx -n production

# Test that monitoring → production port 80 is ALLOWED
kubectl exec -n monitoring test-source -- curl -s --connect-timeout 5 \
  http://$(kubectl get pod test-target -n production -o jsonpath='{.status.podIP}')
# Expected: returns nginx response

# Test that default namespace → production is BLOCKED
kubectl run test-blocked --image=nicolaka/netshoot -n default -- sleep infinity
kubectl exec -n default test-blocked -- curl -s --connect-timeout 5 \
  http://$(kubectl get pod test-target -n production -o jsonpath='{.status.podIP}')
# Expected: connection timed out (blocked by NetworkPolicy)
Cilium provides a connectivity test suite:

bash
# Run Cilium's built-in connectivity tests (validates NetworkPolicy enforcement)
cilium connectivity test --test networkpolicy

Frequently Asked Questions

Does NetworkPolicy apply to traffic within a pod (localhost)?

No. NetworkPolicy only controls pod-to-pod and pod-to-external network traffic. Processes within the same pod communicating via localhost are not affected by NetworkPolicy — they share the same network namespace.

Can I use NetworkPolicy to block traffic from a specific IP address?

Yes, using ipBlock with a /32 CIDR:

yaml
ingress:
  - from:
      - ipBlock:
          cidr: 0.0.0.0/0
          except:
            - 1.2.3.4/32   # Block this specific IP

This allows all traffic except from the specified IP. However, IP-based blocking in Kubernetes is fragile — pod IPs change on restart, and external IPs change on instance replacement. For blocking known bad actors at the IP level, consider doing this at the load balancer or WAF level rather than in Kubernetes NetworkPolicy.

Why is my NetworkPolicy not working after I applied it?

In order of likelihood (a combined triage sketch follows the list):

  1. CNI doesn't enforce NetworkPolicy — check which CNI DaemonSets are running in kube-system (kubectl get daemonsets -n kube-system). If using Flannel or the VPC CNI without the network policy agent, policies are silently no-ops.
  2. Wrong namespace — NetworkPolicy is namespace-scoped. Applied to the wrong namespace, it selects no pods.
  3. Selector mismatch — the podSelector doesn't match the pod's actual labels. Use kubectl get pod <name> -o jsonpath='{.metadata.labels}' to check.
  4. AND vs OR confusion — see the AND/OR Selector Trap section above.
  5. No default-deny — if there's no default-deny policy, all traffic is allowed regardless of what your allow policies say (they're additive to allow-all, not restrictive).
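A combined triage pass over these checks, sketched with placeholder namespace, pod, and policy names:

bash
NS=production POD=payments-api-abc123 POLICY=allow-from-api-gateway

# 1. Which CNI is installed? Inspect the DaemonSets in kube-system
kubectl get daemonsets -n kube-system

# 2 & 3. Does the policy exist in the right namespace, and do the labels match?
kubectl get networkpolicy "$POLICY" -n "$NS" -o yaml
kubectl get pod "$POD" -n "$NS" -o jsonpath='{.metadata.labels}'

# 5. Is there a default-deny policy in the namespace at all?
kubectl get networkpolicy -n "$NS"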

For FQDN-based policies and L7 HTTP enforcement that extend standard NetworkPolicy, see Cilium eBPF Kubernetes Networking. For how NetworkPolicy integrates with the broader security hardening posture, see Kubernetes Security Hardening: A Production Checklist. For Kyverno-based policy generation across namespaces, see Kubernetes Admission Webhooks: Validating and Mutating Workloads. For the foundational NetworkPolicy patterns (default-deny, ingress/egress basics, namespace isolation), see Kubernetes Network Policies: A Practical Guide. For a complete zero-trust network architecture, see Kubernetes Network Policies: Zero-Trust Networking.

Implementing network segmentation across a multi-team Kubernetes platform? Talk to us at Coding Protocols — we help platform teams design NetworkPolicy architectures that enforce zero-trust without breaking developer productivity.

Related Topics

NetworkPolicy
Kubernetes
Security
Zero Trust
Cilium
Networking
Platform Engineering
