Kubernetes Network Policies: Zero-Trust Networking
Kubernetes NetworkPolicy restricts which pods can communicate with each other and with external endpoints. Without NetworkPolicy, every pod in the cluster can reach every other pod — a security posture that fails any zero-trust audit. This guide covers the NetworkPolicy API: default deny-all patterns, ingress and egress rules, namespace isolation via namespaceSelector, and the operational traps (DNS must be explicitly allowed, NetworkPolicy has no enforcement without a CNI plugin, multiple policies combine additively).

By default, Kubernetes has no network isolation between pods. A pod in the payments namespace can reach a pod in the database namespace on any port. A compromised frontend pod can make direct database calls. This is the network security posture every Kubernetes cluster starts with.
NetworkPolicy resources restrict network traffic at the pod level. They are implemented by the CNI plugin — Calico, Cilium, Weave Net, or Amazon VPC CNI with the Network Policy Controller add-on. Flannel does not enforce NetworkPolicy. A cluster without a NetworkPolicy-capable CNI ignores NetworkPolicy resources silently: the policies are stored in etcd but have no effect.
How NetworkPolicy Works
NetworkPolicy resources are additive and namespace-scoped. Two key rules govern how they combine:
- No policy = allow all: If a pod has no NetworkPolicy selecting it, all ingress and egress traffic is allowed.
- Multiple policies = union of allows: If two NetworkPolicies select the same pod, a connection is allowed if either policy permits it. There is no deny rule in a NetworkPolicy — only allow rules.
This means the only way to deny traffic is to create a policy that selects a pod (activating the deny-by-default for that pod) and only include allow rules for the traffic you want.
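As a sketch of the union rule, suppose both of the hypothetical policies below select pods labeled app: api. The pod then accepts port 8080 traffic from pods labeled role: frontend and from pods labeled role: batch, because the allow sets of all matching policies are merged:

```yaml
# Policy 1: allow frontend -> api on port 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 8080
---
# Policy 2: allow batch -> api; combined with policy 1, BOTH sources are allowed
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-batch
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: batch
      ports:
        - protocol: TCP
          port: 8080
```

Neither policy can subtract from the other; removing an allow requires editing or deleting the policy that grants it.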
Default Deny-All
The first NetworkPolicy to apply in any namespace is a default deny-all:
```yaml
# Deny all ingress to pods in the payments namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments
spec:
  podSelector: {}  # Empty selector = select all pods in namespace
  policyTypes:
    - Ingress
  # No ingress rules = deny all ingress

---
# Deny all egress from pods in the payments namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: payments
spec:
  podSelector: {}
  policyTypes:
    - Egress
  # No egress rules = deny all egress
```

After applying these, nothing can reach pods in payments and pods in payments can't reach anything. Now add specific allow rules.
Allowing Specific Traffic
Allow Ingress from Another Namespace
```yaml
# Allow the orders service to call payments-api on port 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-orders-to-payments-api
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments-api  # Target: payments-api pods

  policyTypes:
    - Ingress

  ingress:
    - from:
        # Source: pods with app=orders-api in the orders namespace
        - podSelector:
            matchLabels:
              app: orders-api
          namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: orders  # Specific source namespace
      ports:
        - protocol: TCP
          port: 8080
```

The podSelector and namespaceSelector within the same from list item are ANDed: the source must be in the orders namespace AND have the app: orders-api label. If you put them in separate list items, they're ORed:
```yaml
# This is WRONG if you mean "pods labeled orders-api IN the orders namespace"
# It actually means: any pod labeled orders-api OR any pod in the orders namespace
ingress:
  - from:
      - podSelector:
          matchLabels:
            app: orders-api
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: orders
```

This is one of the most common NetworkPolicy mistakes. Use a single from item with both selectors to get AND semantics.
DNS Egress: The Critical Allowlist
When you apply a default-deny-egress policy, pods can no longer resolve DNS: the queries are silently dropped, so lookups hang until the resolver times out and every service call fails slowly with a name-resolution error. Always allow DNS before denying everything else:
```yaml
# Allow DNS queries to CoreDNS
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: payments
spec:
  podSelector: {}  # All pods in namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns  # Standard label used by kubeadm, EKS, and GKE
              # Note: some distributions use k8s-app: coredns instead
              # Verify with: kubectl get pods -n kube-system --show-labels | grep dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53  # DNS over TCP for responses truncated at 512 bytes (DNSSEC, large record sets)
```

Apply this before the default-deny-egress policy. If you forget it, pods silently fail to resolve names and every health check starts failing.
Namespace Isolation: Namespace Label Requirements
namespaceSelector matches namespaces by their labels. The kubernetes.io/metadata.name label is automatically set to the namespace name on all namespaces (since Kubernetes 1.21). Before 1.21, you had to add labels manually:
```shell
# Add a label to allow traffic selection from the monitoring namespace
kubectl label namespace monitoring team=platform-eng
```

Then reference it in namespaceSelector:

```yaml
namespaceSelector:
  matchLabels:
    team: platform-eng  # Any namespace labeled team=platform-eng
```

For per-namespace isolation (only allow traffic within the same namespace), use a namespaceSelector that matches the pod's own namespace:
```yaml
# Allow all ingress from pods in the same namespace
ingress:
  - from:
      - podSelector: {}  # Any pod
        namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: payments  # Only from the payments namespace
```

External Traffic: ipBlock
To allow egress to external IP ranges (on-premises databases, external APIs with static IPs):
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-payments-gateway-egress
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments-api
  policyTypes:
    - Egress
  egress:
    # Allow HTTPS to payment gateway IP range
    - to:
        - ipBlock:
            cidr: 192.168.1.0/24
            except:
              - 192.168.1.100/32  # Exclude specific IP within the range
      ports:
        - protocol: TCP
          port: 443
```

For cloud APIs (S3, Secrets Manager, etc.) that don't have stable IP ranges, use Cilium's CiliumNetworkPolicy with toFQDNs instead — standard NetworkPolicy can't express DNS-based egress rules. See Cilium: eBPF-Powered Networking and Security for Kubernetes.
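For illustration, a CiliumNetworkPolicy using toFQDNs might look like the sketch below. The FQDN and selector labels are assumptions, and Cilium must also be allowed to proxy DNS (the dns rule) so it can learn which IPs belong to the name:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-s3-egress
  namespace: payments
spec:
  endpointSelector:
    matchLabels:
      app: payments-api
  egress:
    # Let Cilium's DNS proxy observe lookups so it can resolve the FQDN rule
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
    # Allow HTTPS only to IPs that resolved from this FQDN
    - toFQDNs:
        - matchName: "s3.us-east-1.amazonaws.com"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
```

Without the dns rule, the toFQDNs rule has no resolved IPs to match and the egress is effectively denied.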
Monitoring and Ingress Traffic
When pods need to receive traffic from a monitoring namespace (Prometheus scraping metrics):
```yaml
# Allow Prometheus to scrape metrics from payments pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape
  namespace: payments
spec:
  podSelector:
    matchLabels:
      monitoring: "true"  # Label your pods to opt into monitoring scraping
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
          podSelector:
            matchLabels:
              app: prometheus
      ports:
        - protocol: TCP
          port: 9090  # Metrics port (varies by app)
```

Complete Zero-Trust Pattern for a Namespace
Putting it together — apply these policies to a namespace for complete zero-trust:
```shell
# 1. Default deny all ingress and egress
kubectl apply -f default-deny-ingress.yaml
kubectl apply -f default-deny-egress.yaml

# 2. Allow DNS (must come before deny-egress is effective)
kubectl apply -f allow-dns-egress.yaml

# 3. Allow specific service-to-service calls
kubectl apply -f allow-orders-to-payments-api.yaml

# 4. Allow Prometheus scraping
kubectl apply -f allow-prometheus-scrape.yaml

# 5. Allow egress to Kubernetes API (if pods use the k8s client)
# kubectl apply -f allow-k8s-api-egress.yaml
```

Denying Cross-Namespace Traffic
For multi-tenant clusters, the default-deny pattern should extend beyond a single namespace to block all cross-namespace communication. This pattern is typically applied to every tenant namespace by the platform team — often automated via a Kyverno generate rule:
```yaml
# Applied to every namespace by the platform team
# Blocks all ingress and egress traffic from/to other namespaces
# while allowing communication within the same namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-namespace
  namespace: NAMESPACE_NAME  # Applied per namespace; substitute the actual namespace name
spec:
  podSelector: {}  # Selects all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}  # Allow ingress from pods in the SAME namespace only
          # No namespaceSelector = matches pods in the same namespace as the policy
  egress:
    - to:
        - podSelector: {}  # Allow egress to pods in the SAME namespace only
```

This creates complete namespace isolation: pods can communicate freely within their namespace, but cross-namespace traffic is blocked unless an explicit allow policy is added. Add it alongside the DNS egress rule (DNS traffic to kube-system must still be permitted with a separate policy):
```shell
# Apply to a namespace (substitute actual namespace)
kubectl apply -f deny-cross-namespace.yaml -n payments

# Verify the policy is active
kubectl get networkpolicy deny-cross-namespace -n payments

# Test: this should fail (cross-namespace blocked)
kubectl exec -n orders test-pod -- curl http://payments-api.payments.svc.cluster.local:8080
```

The deny-cross-namespace policy combines ingress and egress in one object — any service that needs to communicate across namespaces (e.g., Prometheus scraping, ingress controllers) requires an explicit allow policy added on top.
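The Kyverno automation mentioned earlier could be sketched as a generate rule that stamps the policy into every new namespace. The policy and rule names here are assumptions, and a real rule would typically also exclude system namespaces:

```yaml
# Kyverno ClusterPolicy: generate deny-cross-namespace in each new namespace
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-deny-cross-namespace
spec:
  rules:
    - name: generate-deny-cross-namespace
      match:
        any:
          - resources:
              kinds:
                - Namespace
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: deny-cross-namespace
        namespace: "{{request.object.metadata.name}}"
        synchronize: true  # Re-create the policy if a tenant deletes it
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
              - Egress
            ingress:
              - from:
                  - podSelector: {}
            egress:
              - to:
                  - podSelector: {}
```

With synchronize: true, Kyverno also reverts manual edits to the generated policy, which keeps tenants from quietly disabling their own isolation.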
Frequently Asked Questions
Does NetworkPolicy work without a CNI plugin?
No. NetworkPolicy resources are stored in the Kubernetes API server (etcd) but have no effect unless the CNI plugin enforces them. Flannel alone does not enforce NetworkPolicy — but Canal (Flannel for routing + Calico for policy) and k3s's built-in network policy controller do. Calico, Cilium, Weave Net, and the Amazon VPC CNI with the Network Policy Controller add-on all enforce standard NetworkPolicy. On EKS, enable the Network Policy Controller in the add-on configuration or use Cilium.
To verify your CNI enforces NetworkPolicy:
```shell
# Apply a default deny-ingress policy in a test namespace and run a target pod there
kubectl apply -f default-deny-ingress.yaml -n test
kubectl run web --image=nginx -n test
WEB_IP=$(kubectl get pod web -n test -o jsonpath='{.status.podIP}')

# From a pod in a DIFFERENT namespace, try to reach it (ingress to test is denied)
kubectl run test-pod --image=busybox -n default -- sleep 3600
kubectl exec test-pod -n default -- wget -qO- -T 5 "http://$WEB_IP"  # Should time out
```

If the wget succeeds, NetworkPolicy isn't being enforced. (Note the test must come from outside the policy's reach: an ingress-deny policy on the test namespace does nothing to outbound calls made by pods inside it.)
Why is my pod getting more traffic than the policy allows?
Check for these common causes:
- The podSelector isn't matching the target pod — verify labels: `kubectl get pod -n <ns> --show-labels`
- Another NetworkPolicy is allowing the traffic — NetworkPolicies are additive; check with `kubectl get networkpolicy -n <ns>`
- The CNI isn't enforcing policies — see above
- Traffic comes via a Service and the source IP is the node IP (NodePort) or cluster IP (ClusterIP) — podSelector doesn't match these; use ipBlock with the node CIDR instead
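For the NodePort case in the last bullet, an ingress rule might use ipBlock with the node subnet; the 10.0.0.0/16 CIDR below is a placeholder for your cluster's actual node network:

```yaml
# Allow ingress that arrives with a node source IP (e.g. NodePort traffic
# after SNAT); 10.0.0.0/16 is a placeholder for the real node CIDR
ingress:
  - from:
      - ipBlock:
          cidr: 10.0.0.0/16
    ports:
      - protocol: TCP
        port: 8080
```

This is coarser than a podSelector (it trusts anything with a node-network source IP), so prefer setting externalTrafficPolicy or using direct pod-to-pod traffic where the original source IP is preserved.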
How does NetworkPolicy interact with Istio mTLS?
They operate at different layers. NetworkPolicy is enforced at the kernel/IP layer — it filters packets before they reach the application. Istio AuthorizationPolicy enforces at the HTTP/mTLS identity layer, after the connection is established. Both should be used together for defense in depth: NetworkPolicy blocks unexpected IP-level connections; Istio AuthorizationPolicy enforces identity-based access on allowed connections. See Istio Service Mesh on Kubernetes: mTLS, Traffic Management, and Observability for AuthorizationPolicy configuration.
For Cilium's extended CiliumNetworkPolicy, which adds DNS-based egress and HTTP path-level rules beyond what standard NetworkPolicy supports, see Cilium: eBPF-Powered Networking and Security for Kubernetes. For Istio mTLS, which complements NetworkPolicy with identity-based access control at the application layer, see Istio Service Mesh on Kubernetes: mTLS, Traffic Management, and Observability. For foundational NetworkPolicy patterns (default-deny, ingress/egress basics), see Kubernetes Network Policies: A Practical Guide.
Designing a zero-trust NetworkPolicy model for a multi-team EKS cluster? Talk to us at Coding Protocols — we help platform teams implement namespace isolation that blocks lateral movement without breaking the service mesh, monitoring, and DNS dependencies that existing workloads rely on.


