Configuring NetworkPolicies to Isolate Namespaces
By default, every pod in a Kubernetes cluster can talk to every other pod. NetworkPolicies let you enforce zero-trust networking — allowing only the traffic you explicitly permit. This tutorial shows you how.
Before you begin
- kubectl configured with cluster access
- A CNI that supports NetworkPolicy (Calico, Cilium, or Weave; not Flannel by default)
- Basic understanding of Kubernetes namespaces and pods
Kubernetes networking is flat by default. Every pod can reach every other pod on any port, regardless of namespace. In a multi-tenant cluster, that means a compromised pod in the dev namespace can reach your production database.
NetworkPolicies fix this. They're declarative firewall rules at the pod level, enforced by your CNI plugin.
Verify Your CNI Supports NetworkPolicy
NetworkPolicies require a CNI that enforces them. Check which CNI you're running:
kubectl get pods -n kube-system | grep -E "calico|cilium|weave|flannel"
Flannel does not enforce NetworkPolicies. Calico, Cilium, and Weave do. If you're on a managed cluster (EKS, GKE, AKS), NetworkPolicy support is available but may need enabling.
The Default: No Policies = Allow All
Without any NetworkPolicy, all pods can communicate freely:
# Create two test namespaces
kubectl create namespace frontend
kubectl create namespace backend
# Deploy pods in each
kubectl run web --image=nginx -n frontend
kubectl run db --image=nginx -n backend
# Verify the db pod IP
DB_IP=$(kubectl get pod db -n backend -o jsonpath='{.status.podIP}')
# web can reach db — this is what we'll prevent
kubectl exec -n frontend web -- curl -s --max-time 2 http://$DB_IP
# Returns HTML — unrestricted access
Step 1: Default Deny All Ingress
Start by denying all ingress to the backend namespace. Any pod that doesn't match a subsequent allow rule gets dropped:
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: backend
spec:
  podSelector: {} # Applies to all pods in namespace
  policyTypes:
  - Ingress
EOF
Test that the frontend can no longer reach backend:
kubectl exec -n frontend web -- curl -sS --max-time 2 http://$DB_IP
# curl: (28) Connection timed out
Step 2: Allow Specific Ingress from Frontend
Now allow only the frontend namespace to access the backend database on port 5432. (Our test db pod is actually nginx listening on port 80, so the 5432 rule is illustrative of a real Postgres setup; HTTP stays blocked, as Step 6 verifies.)
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: db # Only applies to pods labelled app=db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: frontend
      podSelector:
        matchLabels:
          app: web # Only from pods labelled app=web
    ports:
    - protocol: TCP
      port: 5432
EOF
Label the pods so the selectors match:
kubectl label pod web -n frontend app=web
kubectl label pod db -n backend app=db
The namespaceSelector and podSelector within the same from list item are ANDed: traffic must come from a pod that is both labelled app=web AND in the frontend namespace. If they were separate list items, they'd be ORed.
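For contrast, here is a sketch of the ORed form (not meant to be applied in this tutorial). With namespaceSelector and podSelector as two separate items under from, traffic is allowed from any pod in the frontend namespace, or from app=web pods in the policy's own namespace (backend, since a bare podSelector never crosses namespaces). That is a much broader rule:

```yaml
# Hypothetical ORed variant: two separate "from" items.
ingress:
- from:
  - namespaceSelector:   # item 1: ANY pod in the frontend namespace
      matchLabels:
        kubernetes.io/metadata.name: frontend
  - podSelector:         # item 2: app=web pods in the policy's OWN
      matchLabels:       # namespace (backend), not all namespaces
        app: web
  ports:
  - protocol: TCP
    port: 5432
```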
Step 3: Default Deny All Egress
Deny all outbound traffic from the backend namespace, then explicitly allow what's needed:
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Egress
EOF
This blocks everything outbound — including DNS. Your pods can't resolve hostnames now. Always allow DNS when denying egress:
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
EOF
Step 4: Allow Egress to a Specific External Service
If your backend needs to reach an external database or API:
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-db-egress
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.1.50/32 # RDS endpoint IP
    ports:
    - protocol: TCP
      port: 5432
EOF
For managed cloud databases, resolve the endpoint with nslookup or check your cloud console, then use ipBlock. Be aware that managed endpoints can change IPs over time, so prefer a stable CIDR where your provider offers one, or a CNI-specific extension such as Cilium's DNS-based (FQDN) policies.
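ipBlock also supports an except list for carving holes out of an allowed range, which helps when you want to permit a whole VPC while excluding a sensitive subnet. A sketch with illustrative CIDRs:

```yaml
egress:
- to:
  - ipBlock:
      cidr: 10.0.0.0/16    # illustrative: the whole VPC range
      except:
      - 10.0.99.0/24       # illustrative: exclude a management subnet
  ports:
  - protocol: TCP
    port: 5432
```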
Step 5: Complete Namespace Isolation Pattern
This is the pattern I apply to every production namespace:
# 1. Deny all ingress and egress
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
EOF
# 2. Allow DNS
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
EOF
# 3. Allow intra-namespace communication
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector: {}
  egress:
  - to:
    - podSelector: {}
EOF
After these three, pods within production can talk to each other, but nothing from other namespaces can get in, and pods can't reach outside the namespace (except DNS).
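In practice one more policy is usually needed on top of this pattern: external traffic still has to reach your web pods through the ingress controller, which runs in another namespace. A sketch assuming ingress-nginx in a namespace named ingress-nginx and web pods labelled app=web serving on port 80 (all three are assumptions; adjust to your setup):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web          # assumed label on the pods behind your Ingress
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx   # assumed controller namespace
    ports:
    - protocol: TCP
      port: 80          # assumed container port
```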
Step 6: Verify Your Policies
# List all policies in a namespace
kubectl get networkpolicy -n backend
# Describe a policy
kubectl describe networkpolicy allow-frontend-to-db -n backend
# Test connectivity — should be blocked
kubectl exec -n frontend web -- curl -sS --max-time 2 http://$DB_IP:80
# curl: (28) Connection timed out
# Test DNS still works (if you applied allow-dns)
kubectl exec -n backend db -- nslookup kubernetes.default.svc.cluster.local
With Cilium, you can get a network policy verdict in real time:
cilium monitor --type drop
Common Mistakes
Forgetting DNS egress — the most common mistake when adding a default-deny-egress policy. Your pods immediately stop resolving hostnames. Always add the DNS egress policy in the same apply.
OR vs AND in from selectors — multiple items in the from list are ORed. Items within a single from entry (namespaceSelector + podSelector together) are ANDed.
Policies don't apply to host-networked pods — pods with hostNetwork: true bypass NetworkPolicies. kube-proxy and CNI pods typically use host networking.
Missing policyTypes — if you omit policyTypes, Kubernetes infers it: Ingress is always included, and Egress only when the policy contains egress rules. A would-be deny-all-egress policy with no rules therefore becomes a deny-all-ingress policy unless you list Egress explicitly.
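The failure mode looks like this: with policyTypes omitted and no egress rules present, the inferred policyTypes is [Ingress], so this attempted default-deny-egress denies ingress instead:

```yaml
# BROKEN: no policyTypes and no egress rules, so Kubernetes infers
# policyTypes: [Ingress]. All ingress to these pods is denied;
# egress is untouched. To deny egress, add explicitly:
#   policyTypes:
#   - Egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress-broken
  namespace: backend
spec:
  podSelector: {}
```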
We built Podscape to simplify Kubernetes workflows like this — logs, events, and cluster state in one interface, without switching tools.
Struggling with this in production?
We help teams fix these exact issues. Our engineers have deployed these patterns across production environments at scale.