Kubernetes Multi-Tenancy: Namespaces, Resource Quotas, and Network Isolation
Running multiple teams on a shared Kubernetes cluster without proper isolation leads to noisy neighbours, quota conflicts, and security boundaries that exist only on paper. Here's how to build multi-tenancy that actually holds under production load.

Kubernetes namespaces are the primary multi-tenancy boundary, but they're not isolation by themselves. A namespace is a name scope — it doesn't prevent a pod in one namespace from consuming all CPU on a shared node, reaching services in other namespaces, or escalating privileges through cluster-level RBAC.
Real multi-tenancy requires layering four controls: resource isolation (quotas), network isolation (policies), access isolation (RBAC), and workload security (Pod Security Admission). This post covers how to implement all four, the design patterns that scale, and the gaps each approach leaves.
The Isolation Stack
For a team running workloads in namespace team-a, these are the controls that define their blast radius:
| Layer | What It Controls | Mechanism |
|---|---|---|
| Resource isolation | CPU/memory consumption | ResourceQuota + LimitRange |
| Network isolation | Pod-to-pod traffic | NetworkPolicy |
| Access isolation | Kubernetes API calls | RBAC (Role + RoleBinding) |
| Workload security | Pod spec constraints | Pod Security Admission |
| Storage isolation | PVC capacity | ResourceQuota (storage) |
Each layer is independent. A namespace with tight RBAC but no ResourceQuota lets a team deploy a workload that consumes all node memory. A namespace with strict NetworkPolicy but no RBAC lets any authenticated user in the cluster read the team's secrets.
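A quick way to catch namespaces where a layer is missing entirely is to script the check. A minimal audit sketch, assuming team namespaces carry a team label (as in the provisioning examples later in this post):

```bash
# List team namespaces that lack a ResourceQuota or any NetworkPolicy.
# Assumes team namespaces are labelled with a "team" key.
for ns in $(kubectl get namespaces -l team -o name | cut -d/ -f2); do
  quotas=$(kubectl -n "$ns" get resourcequota --no-headers 2>/dev/null | wc -l)
  policies=$(kubectl -n "$ns" get networkpolicy --no-headers 2>/dev/null | wc -l)
  [ "$quotas" -eq 0 ]   && echo "$ns: no ResourceQuota"
  [ "$policies" -eq 0 ] && echo "$ns: no NetworkPolicy"
done
```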
Namespace Design Patterns
One Namespace Per Team
The simplest model: each team gets one namespace (team-a, team-b, platform). Teams own their namespace entirely — full control within it, no access outside.
Strengths: simple to reason about, simple RBAC, clear ownership.
Weaknesses: dev and prod workloads share the single namespace, so they share one quota and one set of policies. A bug in dev that causes a resource spike eats into the same quota prod depends on.
One Namespace Per Team Per Environment
Each team gets a namespace per environment: team-a-prod, team-a-staging, team-a-dev.
Strengths: environment isolation within the team, separate quotas per environment, dev can have relaxed PSA while prod is strict.
Weaknesses: namespace count grows quickly (3 teams × 3 environments = 9 namespaces minimum). Quota management becomes per-namespace. For 20 teams, this is 60+ namespaces.
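Provisioning these by hand gets tedious fast. A small sketch of stamping out per-environment namespaces with consistent labels (the team and label names are illustrative):

```bash
# Create one namespace per environment for a team, labelled for later
# selection by quotas, policies, and automation
team=team-a
for env in prod staging dev; do
  kubectl create namespace "${team}-${env}"
  kubectl label namespace "${team}-${env}" team="${team}" env="${env}"
done
```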
Hierarchical Namespaces (HNC)
The Kubernetes Hierarchical Namespace Controller (HNC) allows namespaces to have parent-child relationships. A parent namespace can propagate RBAC, LimitRange, NetworkPolicy, and ResourceQuota to children automatically.
```bash
# Create a parent namespace for team-a, then child namespaces per environment
kubectl create namespace team-a
kubectl hns create team-a-prod -n team-a
kubectl hns create team-a-staging -n team-a
kubectl hns create team-a-dev -n team-a
```
Policies defined in team-a propagate to all child namespaces. Team-level RBAC, default LimitRange, and base NetworkPolicies are set once on the parent. Environment-specific overrides go in the child namespace.
HNC is a kubernetes-sigs project that came out of the Kubernetes Multi-Tenancy Working Group (it is not a CNCF project). It is functional and used in production at some organisations, but adoption is narrower than tools like Kyverno or cert-manager, so evaluate it carefully for your environment before committing. For organisations running more than 10 teams, it can significantly reduce the overhead of managing namespace-level policies at scale.
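By default HNC propagates only RBAC objects (Roles and RoleBindings); other types must be opted in via its cluster-wide HNCConfiguration object. A sketch, assuming HNC v1.x with the hnc.x-k8s.io/v1alpha2 API:

```yaml
# Opt additional resource types into parent-to-child propagation
apiVersion: hnc.x-k8s.io/v1alpha2
kind: HNCConfiguration
metadata:
  name: config          # Singleton: HNC expects exactly this name
spec:
  resources:
    - resource: networkpolicies
      group: networking.k8s.io
      mode: Propagate
    - resource: limitranges   # Core API group, so no group field
      mode: Propagate
```

Note that propagating a plain ResourceQuota to every child multiplies the team's total allowance; HNC ships an optional HierarchicalResourceQuota type if you need one budget across a whole subtree.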
ResourceQuota: Per-Namespace Compute Budgets
ResourceQuota caps the total resources a namespace can consume. When the quota is reached, new pods are rejected with a clear error.
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-prod-quota
  namespace: team-a-prod
spec:
  hard:
    # Compute
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    # Pods
    pods: "50"
    # Storage
    requests.storage: 500Gi
    persistentvolumeclaims: "20"
    # Object counts
    services: "20"
    services.loadbalancers: "3"
    secrets: "50"
    configmaps: "50"
```
Check current usage against quota:
```bash
kubectl describe resourcequota team-a-prod-quota -n team-a-prod
```
Sizing Quotas
Quota values should reflect actual team needs, not arbitrary limits. The process:
- Measure actual usage per namespace over the last 30 days
- Set quota at P95 usage × 1.5 (50% headroom for spikes)
- Review and adjust quarterly
Starting without measurement leads to quotas that are either too tight (constant quota errors) or too loose (quota exists on paper but doesn't constrain anything).
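For the measurement step, kubectl top gives only a point-in-time snapshot; the 30-day percentiles need your metrics backend (Prometheus or similar). Still, a snapshot sketch is useful for a first pass, assuming metrics-server is installed:

```bash
# Rough point-in-time usage for a namespace; query your metrics backend
# for the 30-day P95 the sizing rule actually needs
kubectl -n team-a-prod top pods --no-headers \
  | awk '{cpu += $2; mem += $3} END {printf "%dm CPU, %dMi memory\n", cpu, mem}'
# Worked example: P95 memory of 10.6Gi over 30 days -> quota 10.6 x 1.5 ≈ 16Gi
```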
ResourceQuota and Namespace Admission
When a ResourceQuota constrains compute resources, every pod in the namespace must explicitly set requests and limits for those resources. Pods that omit them are rejected with:
```
Error from server (Forbidden): pods "my-pod" is forbidden: failed quota: team-a-prod-quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
```
This is a useful forcing function: ResourceQuota mandates that teams set resource requests, which improves scheduler accuracy across the cluster.
Use LimitRange to set defaults so teams don't need to set requests on every pod:
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-defaults
  namespace: team-a-prod
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: "100m"
        memory: "128Mi"
      default:
        cpu: "500m"
        memory: "512Mi"
      max:
        cpu: "4"
        memory: "8Gi"
```
With this LimitRange, containers without explicit resource specs get the defaults automatically, satisfying the ResourceQuota requirement without requiring every developer to set resources on every pod.
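To confirm the defaults are being injected, one quick check is to create a pod with no resource specs and inspect what the API server stored (the pod name and image here are arbitrary):

```bash
# The LimitRange admission plugin fills in requests/limits at creation time
kubectl -n team-a-prod run probe --image=nginx --restart=Never
kubectl -n team-a-prod get pod probe \
  -o jsonpath='{.spec.containers[0].resources}'
# Expected, given the LimitRange above:
# {"limits":{"cpu":"500m","memory":"512Mi"},"requests":{"cpu":"100m","memory":"128Mi"}}
kubectl -n team-a-prod delete pod probe
```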
Network Isolation Per Namespace
By default, namespaces have no network isolation — team-a pods can freely reach team-b pods. Apply default-deny NetworkPolicies to each namespace and add explicit allow rules for cross-namespace traffic that's required.
Default-Deny Template
```yaml
# Apply to every namespace on creation
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-a-prod
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  egress:
    # Always allow DNS
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```
This denies all ingress and egress except DNS. Teams add allow rules for their specific inter-service communication.
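It's worth smoke-testing the policy rather than trusting it on paper. A sketch using a throwaway busybox pod (the team-b service name is hypothetical):

```bash
# DNS should resolve (explicitly allowed), but the cross-namespace request
# should time out (caught by default-deny)
kubectl -n team-a-prod run netcheck --rm -it --restart=Never --image=busybox -- \
  wget -qO- -T 3 http://web.team-b-prod.svc.cluster.local || echo "egress blocked"
```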
Allow Intra-Namespace Traffic
Most teams need pods within their namespace to communicate freely:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-intra-namespace
  namespace: team-a-prod
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}   # Any pod in this namespace
  egress:
    - to:
        - podSelector: {}   # Any pod in this namespace
```
Allow Access from Shared Platform Services
Monitoring (Prometheus), logging (Fluentd), and ingress controllers need access to all namespaces:
```yaml
# Applied to all team namespaces — allows scraping from monitoring namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-platform-ingress
  namespace: team-a-prod
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              platform.codingprotocols.com/role: monitoring
      ports:
        - port: 9090
        - port: 8080
    - from:
        - namespaceSelector:
            matchLabels:
              platform.codingprotocols.com/role: ingress
```
Label platform namespaces with role labels and use namespaceSelector in team namespace policies. This avoids hardcoding namespace names in policies — when you add a new monitoring tool, you label its namespace and existing policies include it automatically.
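The label side of this contract is a one-time step per platform namespace (the namespace names here are illustrative):

```bash
# Label platform namespaces once; every team policy selecting on the
# role label picks them up automatically
kubectl label namespace monitoring platform.codingprotocols.com/role=monitoring
kubectl label namespace ingress-nginx platform.codingprotocols.com/role=ingress
```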
RBAC Per Team
Each team gets a RoleBinding in their namespace connecting their group to an appropriate ClusterRole:
```yaml
# Team members get the built-in edit ClusterRole in their namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit
  namespace: team-a-prod
subjects:
  - kind: Group
    name: team-a            # Maps to SSO group via OIDC or aws-auth
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                # Built-in: create/update/delete workloads, no RBAC changes
  apiGroup: rbac.authorization.k8s.io
```
The built-in edit ClusterRole grants full workload management (Deployments, Services, ConfigMaps, Secrets) but not RBAC changes (no creating Roles or Bindings). Platform engineers hold RBAC management — teams cannot grant themselves or others additional permissions.
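You can verify the boundary with kubectl auth can-i, impersonating the group (the user name is a placeholder; --as is required alongside --as-group):

```bash
# Workload management inside the namespace: allowed
kubectl auth can-i create deployments -n team-a-prod \
  --as=dev1 --as-group=team-a    # yes
# Granting permissions: denied (edit excludes RBAC writes)
kubectl auth can-i create rolebindings -n team-a-prod \
  --as=dev1 --as-group=team-a    # no
# Other teams' namespaces: denied (the RoleBinding is namespace-scoped)
kubectl auth can-i get pods -n team-b-prod \
  --as=dev1 --as-group=team-a    # no
```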
For read-only access (on-call rotation, junior engineers):
```yaml
roleRef:
  kind: ClusterRole
  name: view   # Built-in: read everything, write nothing
```
Preventing Cross-Namespace API Access
Team RBAC should be namespace-scoped via RoleBinding, not cluster-scoped via ClusterRoleBinding. A ClusterRoleBinding to edit gives the team edit access to every namespace. A RoleBinding to edit in team-a-prod gives them edit access only there.
Audit for accidental ClusterRoleBindings:
```bash
kubectl get clusterrolebindings -o json | \
  jq '.items[] | select(.roleRef.name == "edit" or .roleRef.name == "admin") |
      {name: .metadata.name, subjects: .subjects}'
```
Any edit or admin ClusterRoleBinding to a non-platform group is a misconfiguration — it grants cluster-wide write access.
Pod Security Admission Per Namespace
Set PSA profiles per namespace based on what the team's workloads require:
```bash
# Production namespaces — restricted
kubectl label namespace team-a-prod \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest

# Dev namespaces — baseline (developers run debug containers)
kubectl label namespace team-a-dev \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/warn=restricted

# Platform infrastructure namespaces — privileged (DaemonSets, CNI)
kubectl label namespace monitoring \
  pod-security.kubernetes.io/enforce=privileged
```
When using HNC, note that namespace labels like these propagate from parent to child only if they're configured as managed labels on the HNC controller; otherwise, set PSA labels on each namespace as part of provisioning, and relax them per environment where policies differ.
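Before enforcing restricted on an existing namespace, preview what would break. A server-side dry run of the label change reports violating pods without rejecting anything:

```bash
# Evaluate existing workloads against the restricted profile without
# actually changing the namespace
kubectl label --dry-run=server --overwrite namespace team-a-prod \
  pod-security.kubernetes.io/enforce=restricted
# Warning: existing pods in namespace "team-a-prod" violate the new
# PodSecurity enforce level "restricted:latest" (violating pods are listed)
```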
Automating Namespace Provisioning
Manually applying ResourceQuota, LimitRange, NetworkPolicy, RBAC, and PSA labels to every new namespace is error-prone and doesn't scale. Automate it.
Option 1: Kyverno Generate Rules
Kyverno's generate rules create resources in new namespaces automatically:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-namespace-defaults
spec:
  rules:
    - name: generate-default-deny-network-policy
      match:
        any:
          - resources:
              kinds: ["Namespace"]
              selector:
                matchLabels:
                  team: "true"   # Only apply to team namespaces
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny-all
        namespace: "{{request.object.metadata.name}}"
        synchronize: true   # Keep in sync — if deleted, recreate
        data:
          spec:
            podSelector: {}
            policyTypes: ["Ingress", "Egress"]
            egress:
              - to:
                  - namespaceSelector:
                      matchLabels:
                        kubernetes.io/metadata.name: kube-system
                    podSelector:
                      matchLabels:
                        k8s-app: kube-dns
                ports:
                  - protocol: UDP
                    port: 53
```
With synchronize: true, if a team deletes the default-deny policy (intentionally or accidentally), Kyverno recreates it. This is enforcement, not just defaults.
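A quick way to confirm the generate rule fires is to create a matching namespace and look for the generated policy (the namespace name is hypothetical):

```bash
# The "team: true" label matches the ClusterPolicy selector above
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: team-c-dev
  labels:
    team: "true"
EOF
kubectl -n team-c-dev get networkpolicy default-deny-all   # generated by Kyverno
```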
Option 2: Namespace Operator / GitOps Template
A GitOps approach: a Namespace object in Git triggers a Helm chart or Kustomize overlay that creates all required resources in the namespace. Teams submit a PR to create a namespace; the PR template includes quota requests, team group names, and environment type.
```yaml
# namespaces/team-a-prod.yaml — checked into GitOps repo
apiVersion: v1
kind: Namespace
metadata:
  name: team-a-prod
  labels:
    team: team-a
    env: prod
    gateway-access: allowed
    platform.codingprotocols.com/role: team
  annotations:
    team-quota-cpu: "8"
    team-quota-memory: "16Gi"
```
A Kustomize generator or Argo CD ApplicationSet reads this and creates the full set of namespace resources from a template.
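As one possible wiring, an Argo CD ApplicationSet with a Git file generator can turn each namespace file into an Application that renders a tenant chart. A sketch with hypothetical repo and chart paths:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: team-namespaces
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/example/platform-gitops   # hypothetical repo
        revision: main
        files:
          - path: "namespaces/*.yaml"   # one Application per namespace file
  template:
    metadata:
      name: "ns-{{metadata.name}}"
    spec:
      project: platform
      source:
        repoURL: https://github.com/example/platform-gitops
        targetRevision: main
        path: charts/tenant-namespace   # hypothetical chart: quota, netpol, RBAC templates
        helm:
          parameters:
            - name: team
              value: "{{metadata.labels.team}}"
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{metadata.name}}"
      syncPolicy:
        automated:
          prune: true   # Removing the file from Git removes the resources
```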
Soft Multi-Tenancy vs Hard Multi-Tenancy
Everything in this post implements soft multi-tenancy: multiple tenants on a shared Kubernetes cluster with namespace-level isolation. A sufficiently privileged or compromised workload can still break out — through a kernel vulnerability, a misconfigured privileged pod, or a zero-day in the container runtime.
Hard multi-tenancy — where tenants are fully isolated even against a malicious co-tenant — requires:
- Separate clusters per tenant (simplest, most expensive)
- Kata Containers or gVisor: sandboxed container runtimes that isolate tenant workloads behind lightweight VMs (Kata) or a user-space kernel that intercepts syscalls (gVisor)
- VMs in Kubernetes via KubeVirt — each tenant workload runs in a VM, not a shared kernel
For most enterprise multi-team use cases, soft multi-tenancy (namespace isolation + RBAC + NetworkPolicy + PSA) is sufficient. For SaaS platforms hosting untrusted tenants, or regulated environments where audit mandates complete tenant isolation, evaluate hard multi-tenancy approaches.
Frequently Asked Questions
How many namespaces is too many?
There is no meaningful hard limit on namespace count; what etcd cares about is total object count. 1,000 namespaces with 50 objects each is 50,000 objects, well within etcd's practical limits. The real constraint is management overhead, not the control plane. HNC helps by propagating policies hierarchically. Beyond 50–100 namespaces, automation is essential.
Can I restrict which container images a namespace can use?
Not with namespaces alone — use Kyverno or OPA. A Kyverno ClusterPolicy with namespace-scoped matching can restrict images to an approved registry for specific namespaces while allowing broader access in others.
Should dev and prod be on the same cluster?
Operationally simpler: yes (one cluster to manage). From a security and blast-radius standpoint: arguable. A mis-deployed dev workload that saturates node CPU can still affect prod pods on the same nodes, because ResourceQuota caps namespace totals but doesn't govern node-level contention. Most medium-sized organisations run separate prod and non-prod clusters, using the same namespace patterns on each.
What's the difference between ResourceQuota and LimitRange?
ResourceQuota is a namespace-level ceiling — total resources the namespace can consume. LimitRange is a per-container policy — defaults and bounds for individual containers. Both are needed: ResourceQuota without LimitRange means teams must set resources on every pod manually (or face admission errors). LimitRange without ResourceQuota sets per-container bounds but doesn't prevent 100 containers from running simultaneously and consuming all node capacity.
For advanced multi-tenancy patterns with Capsule and vcluster — including virtual clusters, Tenant CRDs, and GitHub Actions namespace provisioning workflows — see Kubernetes Multi-Tenancy: Namespace Isolation, Capsule, and vcluster. For the RBAC detail behind this model, see Kubernetes RBAC in Practice. For the network policy mechanics, see Kubernetes Network Policies: A Practical Guide. For Pod Security Admission setup, see Kubernetes Pod Security Admission: The PodSecurityPolicy Replacement Guide.
Designing a multi-team platform on Kubernetes? Talk to us at Coding Protocols — we help platform teams build tenant isolation that holds up operationally, not just architecturally.


