# Kubernetes Namespaces — Multi-Tenancy and Resource Isolation
Your cluster has three teams deploying apps. The frontend team accidentally deletes the backend team's ConfigMap because everything is in the default namespace. The ML team's GPU-hungry job starves everyone else of resources. Sound familiar? Namespaces fix this by giving each team their own isolated sandbox with enforced resource limits.
## Default Namespaces — What Ships Out of the Box
Every Kubernetes cluster starts with four namespaces:
| Namespace | Purpose | Should You Deploy Here? |
|---|---|---|
| default | Where resources go when you do not specify a namespace | No — treat it like your desktop: clean it up |
| kube-system | Kubernetes control plane components (API server, scheduler, CoreDNS) | Never — hands off |
| kube-public | Publicly readable data (cluster info) | Rarely |
| kube-node-lease | Node heartbeat objects for node health detection | Never — managed by Kubernetes |
```bash
# See all namespaces
kubectl get namespaces

# See what's running in kube-system
kubectl get pods -n kube-system

# See resources across ALL namespaces
kubectl get pods -A
```
## Creating and Managing Namespaces
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-backend
  labels:
    team: backend
    environment: production
```
```bash
# Create from YAML
kubectl apply -f namespace.yaml

# Or create imperatively
kubectl create namespace team-frontend
kubectl create namespace staging
kubectl create namespace production

# List all namespaces with labels
kubectl get ns --show-labels

# Delete a namespace (WARNING: deletes ALL resources inside it)
kubectl delete namespace staging
```
When you delete a namespace, Kubernetes deletes everything inside it — pods, services, ConfigMaps, secrets, everything. There is no undo. Use this carefully.
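Before pulling the trigger, it helps to audit what a deletion would destroy. A minimal sketch using the `staging` namespace from above (note that `kubectl get all` is an assumption-laden shortcut: it covers common workload types but not ConfigMaps or Secrets, which must be listed separately):

```bash
# List the workload resources that would be destroyed
# ("get all" covers Pods, Services, Deployments, etc. — not everything)
kubectl get all -n staging

# ConfigMaps and Secrets are not included in "all"; list them explicitly
kubectl get configmaps,secrets -n staging

# Only delete once you have confirmed nothing important lives here
kubectl delete namespace staging
```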
## Resource Quotas — Prevent Resource Hogging
A ResourceQuota sets hard limits on total resource consumption within a namespace. Without quotas, one team can consume the entire cluster.
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: backend-quota
  namespace: team-backend
spec:
  hard:
    # Compute limits
    requests.cpu: "10"        # Total CPU requests across all pods
    requests.memory: "20Gi"
    limits.cpu: "20"
    limits.memory: "40Gi"
    # Object count limits
    pods: "50"
    services: "10"
    configmaps: "20"
    secrets: "20"
    persistentvolumeclaims: "10"
    replicationcontrollers: "5"
    # Storage limits
    requests.storage: "100Gi"
```
```bash
# Apply the quota
kubectl apply -f resource-quota.yaml

# Check quota usage
kubectl get resourcequota -n team-backend

# Detailed view showing used vs hard limits
kubectl describe resourcequota backend-quota -n team-backend
# Name:            backend-quota
# Resource         Used  Hard
# --------         ----  ----
# configmaps       2     20
# limits.cpu       2     20
# limits.memory    4Gi   40Gi
# pods             5     50
# requests.cpu     1     10
# requests.memory  2Gi   20Gi
```
Once a quota covers compute resources (such as `requests.cpu`), every pod in that namespace must declare requests and limits for those resources, either directly or via a LimitRange default. Pods that omit them are rejected at admission.
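A sketch of what that rejection looks like, assuming the `backend-quota` above is in place and no LimitRange supplies defaults (the error text is abbreviated and varies slightly by Kubernetes version):

```bash
# A pod with no resource specs is refused by the quota admission
# controller before it is ever scheduled
kubectl run no-limits --image=nginx -n team-backend
# Error from server (Forbidden): ... failed quota: backend-quota:
#   must specify limits.cpu, limits.memory, requests.cpu, requests.memory

# A pod that declares requests and limits is admitted and counted
# against the quota
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: with-limits
  namespace: team-backend
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
EOF
```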
## Limit Ranges — Default Resource Constraints
LimitRange sets default, minimum, and maximum resource constraints for individual containers. It catches the pods that developers forget to configure.
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: team-backend
spec:
  limits:
  - type: Container
    default:            # Applied if no limits specified
      cpu: "500m"
      memory: "256Mi"
    defaultRequest:     # Applied if no requests specified
      cpu: "100m"
      memory: "128Mi"
    min:                # Minimum allowed
      cpu: "50m"
      memory: "64Mi"
    max:                # Maximum allowed
      cpu: "2"
      memory: "2Gi"
  - type: Pod
    max:
      cpu: "4"
      memory: "4Gi"
  - type: PersistentVolumeClaim
    min:
      storage: "1Gi"
    max:
      storage: "50Gi"
```
```bash
# Apply and verify
kubectl apply -f limit-range.yaml
kubectl describe limitrange default-limits -n team-backend

# Now create a pod WITHOUT resource specs — LimitRange fills them in
kubectl run test --image=nginx -n team-backend
kubectl describe pod test -n team-backend | grep -A 5 "Limits:"
```
## Namespace-Scoped vs Cluster-Scoped Resources
Not everything lives inside a namespace. Some resources are cluster-wide:
| Namespace-Scoped | Cluster-Scoped |
|---|---|
| Pods | Nodes |
| Services | Namespaces |
| Deployments | ClusterRoles |
| ConfigMaps | ClusterRoleBindings |
| Secrets | PersistentVolumes |
| Roles | StorageClasses |
| RoleBindings | IngressClasses |
| PersistentVolumeClaims | CustomResourceDefinitions |
| NetworkPolicies | PriorityClasses |
```bash
# List all namespace-scoped resources
kubectl api-resources --namespaced=true

# List all cluster-scoped resources
kubectl api-resources --namespaced=false
```
## Network Policies — Namespace Isolation
By default, all pods can talk to all other pods across namespaces. Network policies restrict this.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-namespace
  namespace: team-backend
spec:
  podSelector: {}          # Apply to all pods in this namespace
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: backend    # Only allow traffic from same team's namespace
    - podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          team: backend
    - podSelector: {}
  - to:                    # Allow DNS resolution
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```
Network policies require a CNI plugin that supports them (Calico, Cilium, Weave Net). The default kubenet does not enforce them.
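One way to sanity-check the policy is to probe from a throwaway pod on both sides of the boundary. A sketch, assuming a `team-frontend` namespace exists and a Service named `backend-api` is serving on port 8080 in `team-backend` (both names are illustrative, not from the manifests above):

```bash
# From another namespace: the request should hang and time out,
# because ingress from namespaces without team=backend is blocked
kubectl run probe --rm -it --restart=Never --image=busybox -n team-frontend -- \
  wget -qO- -T 3 http://backend-api.team-backend.svc.cluster.local:8080

# The same probe from inside team-backend should succeed, since the
# policy allows traffic between pods in the same namespace
kubectl run probe --rm -it --restart=Never --image=busybox -n team-backend -- \
  wget -qO- -T 3 http://backend-api.team-backend.svc.cluster.local:8080
```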
## RBAC Per Namespace
Grant team members access only to their namespace:
```yaml
# Role — defines what actions are allowed
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: team-backend
rules:
- apiGroups: ["", "apps", "batch"]
  resources: ["pods", "deployments", "services", "configmaps", "jobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods/log", "pods/exec"]
  verbs: ["get", "create"]
---
# RoleBinding — attaches the role to a user/group
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: team-backend
subjects:
- kind: User
  name: alice@company.com
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: backend-devs
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
```
```bash
# Check what a user can do in a namespace
kubectl auth can-i --list --as=alice@company.com -n team-backend

# Test specific permissions
kubectl auth can-i create deployments -n team-backend --as=alice@company.com
# yes
kubectl auth can-i delete namespaces --as=alice@company.com
# no
```
## Context and Namespace Switching
Typing `-n team-backend` on every command gets old fast. Set a default namespace:
```bash
# View current context
kubectl config current-context

# Set default namespace for current context
kubectl config set-context --current --namespace=team-backend

# Now all commands default to team-backend
kubectl get pods   # equivalent to: kubectl get pods -n team-backend

# Switch to a different namespace
kubectl config set-context --current --namespace=team-frontend

# View all contexts
kubectl config get-contexts

# Pro tip: Use kubens for faster switching (install via krew)
# kubens team-backend
# kubens team-frontend
```
## Best Practices for Namespace Organization
**By environment:**

```bash
kubectl create namespace dev
kubectl create namespace staging
kubectl create namespace production
```

**By team:**

```bash
kubectl create namespace team-backend
kubectl create namespace team-frontend
kubectl create namespace team-data
```

**By application (for large orgs):**

```bash
kubectl create namespace payments-prod
kubectl create namespace payments-staging
kubectl create namespace auth-prod
kubectl create namespace auth-staging
```
The right strategy depends on your organization size. For most teams, namespace-per-environment with RBAC per team works well. For larger organizations, namespace-per-team-per-environment provides the most granular control.
Always label your namespaces for easy filtering and policy application:
```bash
kubectl label namespace team-backend team=backend environment=production cost-center=engineering
```
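Those labels then double as selectors, both on the command line and in the `namespaceSelector` of network policies. A couple of illustrative queries (the label keys match the `kubectl label` command above):

```bash
# Find every production namespace
kubectl get namespaces -l environment=production

# Find all namespaces owned by the backend team
kubectl get namespaces -l team=backend

# Label-based listing also feeds cost reporting or policy tooling
kubectl get ns -l cost-center=engineering -o name
```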
Next up: ConfigMaps and Secrets — how to manage application configuration without baking it into your container images.
