Kubernetes Network Policies: A Practical Guide to Pod-Level Traffic Control
By default, every pod in a Kubernetes cluster can talk to every other pod. Network policies are how you fix that. Here's a practical guide to writing, testing, and maintaining network policies without accidentally taking down your services.

Kubernetes ships with a flat network model: every pod can reach every other pod by default. For a single-team cluster running trusted workloads, this is fine. For a multi-tenant cluster, a regulated environment, or anything where a compromised pod should not be able to freely communicate with every other service in the cluster — it's a serious exposure.
Network policies are Kubernetes' native mechanism for pod-level traffic isolation. They're declarative, namespace-scoped, and enforced by your CNI plugin. They're also the feature most teams know they should implement and haven't, because the first time you apply a restrictive policy and break something in production is a memorable experience.
This guide covers the mechanics, the patterns that work, and the mistakes that don't.
Prerequisites: Network Policy Requires a Compatible CNI
NetworkPolicy objects are Kubernetes API resources, but enforcement is the CNI plugin's responsibility. If your CNI doesn't support network policy, the objects are accepted by the API server and silently ignored.
CNIs that enforce NetworkPolicy:
- Calico — most widely deployed, full NetworkPolicy support plus Calico-specific GlobalNetworkPolicy
- Cilium — eBPF-based, NetworkPolicy plus extended Cilium policies (FQDN-based, L7)
- Weave Net — supports NetworkPolicy
- Azure CNI with Azure Network Policy or Calico
- AWS VPC CNI with the AWS Network Policy Controller (added 2023) or Calico
CNIs that do NOT enforce NetworkPolicy:
- Flannel — no network policy support
- AWS VPC CNI alone (without the Network Policy Controller add-on)
- kubenet (AKS simple networking mode)
Verify enforcement is active:
# Check which CNI is running
kubectl get pods -n kube-system | grep -E "calico|cilium|weave|aws-node"
# Inspect a policy's selectors and rules (describe confirms the object exists; it does not prove enforcement)
kubectl describe networkpolicy <policy-name> -n <namespace>
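Describing a policy only proves the object exists. To prove your CNI enforces it, run a quick smoke test in a throwaway namespace; the netpol-smoke namespace and the nginx/busybox images below are arbitrary choices:
# Create a scratch namespace with a pod and service to probe
kubectl create namespace netpol-smoke
kubectl run web --image=nginx --port=80 -n netpol-smoke
kubectl expose pod web --port=80 -n netpol-smoke

# Apply a deny-all-ingress policy
kubectl apply -n netpol-smoke -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF

# This request should time out; if it succeeds, your CNI is not enforcing NetworkPolicy
kubectl run probe --image=busybox -n netpol-smoke --rm -it --restart=Never -- \
  wget -qO- -T 5 http://web

# Clean up
kubectl delete namespace netpol-smoke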
How NetworkPolicy Works
A NetworkPolicy selects a set of pods (via podSelector) and defines allowed ingress and egress traffic. For pods selected by at least one policy, traffic not explicitly allowed is denied in the direction(s) that policy covers.
The critical behaviour: a pod with no policies applied to it has open access in both directions. Network policies are additive — if pod A has two policies, the allowed traffic is the union of both policies. You cannot create a policy that denies specific traffic; you can only allow traffic.
Default-deny is implemented by selecting all pods with an empty podSelector and specifying no ingress/egress rules — which allows nothing.
The Default-Deny Foundation
Start with default-deny in every namespace, then add allow rules for what you need. Applying default-deny to a namespace that already has running workloads without allow rules will immediately break those workloads — do this in staging first.
# Deny all ingress to pods in this namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}   # Selects all pods in the namespace
  policyTypes:
    - Ingress
  # No ingress rules = deny all ingress

# Deny all egress from pods in this namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  # No egress rules = deny all egress

Apply both to enforce bidirectional isolation. Then add specific allow rules.
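If you prefer a single object, the same default-deny can cover both directions at once; this is the standard combined form:
# Deny all ingress and egress in one policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress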
Common Patterns
Allow Ingress from a Specific Application
The canonical case: allow the frontend to reach the API, but nothing else can reach the API directly.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-from-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080

This selects pods labelled app: api and allows ingress from pods labelled app: frontend on port 8080 only. All other ingress to app: api pods is denied (assuming default-deny-ingress is applied).
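Selectors match pod labels, not Deployment or Service names, so before applying a policy like this it's worth confirming both ends actually carry the labels it references:
# Both commands should return the pods you expect the policy to cover
kubectl get pods -n production -l app=api
kubectl get pods -n production -l app=frontend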
Allow Ingress from a Specific Namespace
For a monitoring namespace where Prometheus scrapes targets across namespaces:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape
  namespace: production
spec:
  podSelector: {}   # All pods in production namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
      ports:
        - protocol: TCP
          port: 9090
        - protocol: TCP
          port: 8080

This pattern needs a label on the source namespace to match on. kubernetes.io/metadata.name is automatically set by Kubernetes on all namespaces since 1.21, so you don't need to set it manually. Verify it:
# Check the namespace's labels
kubectl get namespace monitoring --show-labels
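If you'd rather select namespaces by a team or environment label than by name, you can add your own label and reference it in the namespaceSelector; the team: observability label here is just an illustration:
# Add a custom label to the namespace
kubectl label namespace monitoring team=observability
# ...then match on it in the policy:
#   namespaceSelector:
#     matchLabels:
#       team: observability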
Allow Egress to a Database
Pods that need to reach PostgreSQL running in a separate namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-egress-to-postgres
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: databases
          podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432

Note that the namespaceSelector and podSelector are in the same to entry — this means "pods matching both selectors" (AND logic). If they were separate list items, it would be OR logic (pods in the databases namespace, OR pods labelled app: postgres in any namespace).
# AND: pods labelled app: postgres IN the databases namespace
to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: databases
    podSelector:
      matchLabels:
        app: postgres

# OR: any pod in the databases namespace, OR any pod labelled app: postgres
to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: databases
  - podSelector:
      matchLabels:
        app: postgres

This AND vs OR distinction is the most common source of "my network policy isn't doing what I thought" bugs.
Allow DNS Resolution
After applying default-deny-egress, pods immediately lose DNS resolution — they can't resolve service names or external hostnames. DNS (UDP/TCP port 53 to kube-dns) must be explicitly allowed:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53

This is easily forgotten. Symptoms of missing DNS egress: pods fail with dial tcp: lookup <hostname>: no such host — which looks like an application misconfiguration but is actually a network policy gap.
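To tell a DNS policy gap apart from an application bug, test resolution directly from an affected pod (or from a netshoot debug pod carrying the same labels, since policies select by label); this assumes the image ships nslookup:
# Cluster-internal name: failure here with default-deny-egress in place points at a missing DNS allow rule
kubectl exec -it <pod-name> -n production -- nslookup kubernetes.default.svc.cluster.local
# External name: failure here can also mean the egress rule for port 53 is missing
kubectl exec -it <pod-name> -n production -- nslookup api.stripe.com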
Allow Egress to External IPs (CIDR-Based)
For pods that need to reach external services (a third-party API, an on-premises system):
egress:
  - to:
      - ipBlock:
          cidr: 203.0.113.0/24        # External API IP range
          except:
            - 203.0.113.100/32        # Exclude a specific IP within the range
    ports:
      - protocol: TCP
        port: 443

Standard Kubernetes NetworkPolicy has no support for FQDN-based egress rules (allowing api.stripe.com rather than an IP range); for that you need Cilium's CiliumNetworkPolicy with toFQDNs, or a service mesh with L7 egress filtering.
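As a rough sketch of the Cilium route (field names follow Cilium's CiliumNetworkPolicy CRD; check the docs for your Cilium version before relying on it), FQDN rules need a DNS-visibility rule alongside the toFQDNs entry:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-allow-egress-stripe
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api
  egress:
    # Allow DNS to kube-dns so Cilium can observe lookups and resolve the FQDN rule
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
    # Allow HTTPS egress to the named host only
    - toFQDNs:
        - matchName: "api.stripe.com"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP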
Testing Network Policies
Never apply network policies to a production namespace without testing in a staging environment first. For testing in place, use a temporary debug pod:
# Create a debug pod in the namespace
kubectl run netpol-test \
  --image=nicolaka/netshoot \
  --restart=Never \
  -n production \
  -- sleep 3600

# Test connectivity to the API service
kubectl exec -it netpol-test -n production -- \
  curl -v http://api-service:8080/health

# Test that database access is denied from a non-API pod
kubectl exec -it netpol-test -n production -- \
  nc -zv postgres-service.databases.svc.cluster.local 5432

# Clean up
kubectl delete pod netpol-test -n production

For Cilium clusters, cilium connectivity test provides structured network policy testing:
cilium connectivity test --test network-policies

For a more systematic approach, netpol (a network policy validator) takes a network policy spec and a source/destination pod description and tells you whether traffic would be allowed:
# Would traffic from app:frontend to app:api on port 8080 be allowed?
kubectl neat get networkpolicies -n production -o yaml | \
  netpol --src-pod app=frontend --dst-pod app=api --port 8080 --namespace production

Common Mistakes
Forgetting That Policies Are Additive
If two policies apply to the same pod and one allows traffic from namespace A and the other allows traffic from namespace B, both are allowed. There is no "this policy overrides that one" — the allowed set is always the union.
This means you cannot create a policy that restricts access that another policy already grants. If a legacy policy grants broad access, you must modify or remove it rather than adding a more restrictive one.
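When access is broader than expected, audit every policy in the namespace and what each selects; the union of their rules is what's actually enforced:
# Show each policy's pod selector and policy types at a glance
kubectl get networkpolicy -n production \
  -o custom-columns='NAME:.metadata.name,SELECTOR:.spec.podSelector.matchLabels,TYPES:.spec.policyTypes'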
Selecting Pods Without Labels
A policy with podSelector: {} selects all pods in the namespace. A policy with podSelector: {matchLabels: {app: api}} selects only pods with that label. If your pod doesn't have the label, the policy doesn't apply to it — its traffic is governed by whatever other policies select it (or none, meaning open access).
Verify your pod labels before writing the policy:
kubectl get pods -n production --show-labels

IP-Based Rules in Dynamic Environments
ipBlock rules specify static IP ranges. In a dynamic cluster where pod IPs change on restart, ipBlock is almost never the right approach for pod-to-pod communication. Use podSelector and namespaceSelector instead — they track pods by label, not IP.
ipBlock is appropriate for external traffic (load balancers, on-premises systems, specific external APIs with stable IP ranges).
Missing Port Specification
A policy without a ports section allows traffic on all ports for the matched sources:
ingress:
  - from:
      - podSelector:
          matchLabels:
            app: frontend
    # No ports section: allows ALL ports from frontend to this pod

This is often more permissive than intended. Always specify ports explicitly unless you genuinely want to allow all ports from a source.
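The tightened version is the same rule with an explicit ports list; port 8080 here stands in for whatever the workload actually serves:
ingress:
  - from:
      - podSelector:
          matchLabels:
            app: frontend
    ports:
      - protocol: TCP
        port: 8080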
Visualising Policies
Network policies are notoriously hard to reason about across a full namespace. A few tools help:
np-viewer generates a visual graph of network policies, showing which pods can communicate with which. Available as a kubectl plugin.
Cilium Hubble (on Cilium clusters) shows real-time network flow data — which connections are being allowed and denied, with policy name attribution. This makes debugging much faster: instead of guessing which policy is blocking traffic, Hubble shows the specific policy name.
# Real-time network flow observation with Cilium
hubble observe --namespace production --follow

kube-network-policy-visualizer renders a D3.js graph from your cluster's network policies. Useful for auditing a namespace's policy landscape.
Migrating a Live Namespace to Default-Deny
The safest migration path:
1. Audit existing traffic. Enable network policy logging (Calico's GlobalNetworkPolicy audit mode, or Cilium Hubble) and observe all inter-pod communication for 24–48 hours. Build a list of every source/destination/port combination.
2. Write allow policies for observed traffic. Convert the traffic list into NetworkPolicy objects. Don't add anything that wasn't observed.
3. Apply in audit mode (Calico) or dry-run. Some CNIs support a "log but don't enforce" mode for testing. Use it (see the sketch after this list).
4. Apply to staging. Verify that all expected traffic still flows. Fix any gaps.
5. Apply default-deny to production during a low-traffic window. Monitor for errors in the first hour. Have a rollback ready: kubectl delete networkpolicy default-deny-ingress default-deny-egress -n production.
6. Add remaining allow rules. Iterate based on any connectivity issues caught post-deployment.
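For steps 1 and 3, one way to approximate an audit mode with open-source Calico is a low-priority GlobalNetworkPolicy whose rules log traffic and then still allow it. Treat this as a sketch and verify the fields against your Calico version:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: audit-production-traffic
spec:
  order: 2000                 # high order number, so your real allow policies are evaluated first
  namespaceSelector: kubernetes.io/metadata.name == "production"
  types:
    - Ingress
    - Egress
  ingress:
    - action: Log             # log matching connections on the node...
    - action: Allow           # ...then keep allowing them while you audit
  egress:
    - action: Log
    - action: Allow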
Frequently Asked Questions
Do network policies affect traffic within a pod?
No. Network policies control traffic between pods. Processes within the same pod communicating over localhost are unaffected.
Do network policies apply to host-networked pods?
Pods with hostNetwork: true use the node's network namespace and are not subject to pod-level network policies. They bypass CNI entirely. This is another reason to restrict hostNetwork: true via Pod Security Admission — it's a policy bypass.
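If you're using Pod Security Admission, the baseline profile already rejects hostNetwork: true pods; enforcing it on a namespace is a single label:
# Baseline (and restricted) PSA profiles disallow host namespaces, including hostNetwork
kubectl label namespace production pod-security.kubernetes.io/enforce=baseline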
Can network policies block traffic from kube-apiserver?
The kube-apiserver communicates with pods (for exec, port-forward, and webhooks) from specific IP ranges, not from pods. To block this traffic, you'd need ipBlock rules — which is almost never the right approach. Leave API server communication unblocked.
What happens when I delete a NetworkPolicy?
The traffic it was permitting continues to flow only if another policy permits it. If the deleted policy was the only one allowing certain traffic and default-deny is in place, that traffic is immediately blocked. Always check dependencies before deleting policies.
How do I handle Ingress controller traffic?
Your ingress controller (Nginx, Traefik, etc.) runs as pods in its own namespace. To allow it to reach application pods, apply a policy that allows ingress from the ingress controller namespace:
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: ingress-nginx

For the security context around network policies, see RBAC vs ABAC in Kubernetes and Supply Chain Security Tools for Kubernetes. For eBPF-based network policy with Cilium, see eBPF for Platform Engineering: Cilium and Tetragon in Production. For advanced NetworkPolicy patterns using Cilium's extended policy model (FQDN-based egress, HTTP path rules), see Kubernetes NetworkPolicy: Zero-Trust Networking for Multi-Team Clusters. For a full zero-trust network architecture with default-deny and namespace isolation, see Kubernetes Network Policies: Zero-Trust Networking.
Setting up network segmentation for a multi-team Kubernetes platform? Talk to us at Coding Protocols — we help platform teams implement zero-trust networking that doesn't break services.


