Kubernetes Security — Pod Security Standards, Network Policies, and OPA

6 min read
Goel Academy
DevOps & Cloud Learning Hub

Here is a scenario that happens far too often: a developer deploys a container that runs as root, mounts the host filesystem, and has no network restrictions. An attacker exploits a vulnerability in the application, escapes the container, and now has root access to the node — and from there, to the entire cluster. Kubernetes gives you powerful security primitives, but none of them are enabled by default.

The Five Layers of Kubernetes Security

Security in Kubernetes is not a single feature — it is a stack of defenses. If one layer fails, the next one catches the threat.

| Layer | What It Protects | Key Tools |
| --- | --- | --- |
| Cluster | API server, etcd, control plane | RBAC, audit logging, API encryption |
| Node | Host OS, kubelet, container runtime | Node hardening, CIS benchmarks |
| Pod | Container configuration | Pod Security Standards, security contexts |
| Network | Pod-to-pod traffic | Network Policies, service mesh mTLS |
| Supply Chain | Container images | Image policies, vulnerability scanning |

Pod Security Standards (PSS)

Pod Security Standards define three escalating levels of restriction. They replace PodSecurityPolicy (PSP), which was deprecated in Kubernetes 1.21 and removed in 1.25.

| Level | What It Allows | Use Case |
| --- | --- | --- |
| Privileged | Everything, no restrictions | System-level workloads (CNI, monitoring) |
| Baseline | Blocks known privilege escalations | Most workloads, reasonable defaults |
| Restricted | Maximum lockdown | Sensitive workloads, multi-tenant clusters |

Enforcing Pod Security with Admission

Pod Security Admission is built into Kubernetes (no third-party tools needed). You apply it per-namespace using labels:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    # Enforce restricted — reject pods that violate the policy
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Warn on restricted — allow but return a warning to the client
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
    # Audit — record violations in the API server audit log
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: latest
```

Now try deploying a pod that runs as root in this namespace:

```shell
# This will be REJECTED in the production namespace
kubectl run bad-pod --image=nginx --namespace=production \
  --overrides='{
    "spec": {
      "containers": [{
        "name": "nginx",
        "image": "nginx",
        "securityContext": {"runAsUser": 0, "privileged": true}
      }]
    }
  }'
# Error: pods "bad-pod" is forbidden: violates PodSecurity "restricted:latest"
```
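
For contrast, here is a minimal pod that passes the restricted profile. This is a sketch: the pod name is illustrative, and `nginxinc/nginx-unprivileged` is used only as an example of an image built to run as a non-root user.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: good-pod
  namespace: production
spec:
  securityContext:
    runAsNonRoot: true          # required by restricted
    seccompProfile:
      type: RuntimeDefault      # required by restricted
  containers:
  - name: app
    image: nginxinc/nginx-unprivileged:1.25  # image runs as non-root
    securityContext:
      allowPrivilegeEscalation: false        # required by restricted
      capabilities:
        drop: ["ALL"]                        # required by restricted
```

The restricted profile checks exactly these fields: a non-root user, the runtime's default seccomp profile, no privilege escalation, and all capabilities dropped.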

Security Context — Hardening Every Container

A security context defines privilege and access settings for a pod or container. Every production pod should set these:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: secure-app
  template:
    metadata:
      labels:
        app: secure-app
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: app
        image: myregistry.io/app:v2.1.0
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - ALL
        volumeMounts:
        - name: tmp
          mountPath: /tmp
      volumes:
      - name: tmp
        emptyDir: {}
```

The key settings explained:

  • runAsNonRoot: true — Kubernetes rejects the pod if the image runs as UID 0
  • readOnlyRootFilesystem: true — Container cannot write to its filesystem (mount emptyDir for /tmp)
  • capabilities: drop: ALL — Removes all Linux capabilities (NET_BIND_SERVICE, SYS_ADMIN, etc.)
  • allowPrivilegeEscalation: false — Prevents gaining more privileges than the parent process
  • seccompProfile: RuntimeDefault — Applies the container runtime's default syscall filter
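
Dropping ALL capabilities occasionally breaks applications that bind privileged ports. Rather than relaxing the whole policy, add back only what is needed. A sketch of the container-level securityContext for that case:

```yaml
securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop:
    - ALL
    add:
    - NET_BIND_SERVICE  # only if the container must bind ports below 1024
```

Note that the restricted Pod Security Standard permits exactly this one add-back: NET_BIND_SERVICE is the only capability a restricted pod may re-add.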

Network Policies — Microsegmentation

By default, every pod in Kubernetes can talk to every other pod. Network Policies let you define firewall rules at the pod level. The strategy: deny everything, then allow what is needed.

```yaml
# Step 1: Deny all ingress and egress in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: production
spec:
  podSelector: {}  # Apply to ALL pods
  policyTypes:
  - Ingress
  - Egress
---
# Step 2: Allow frontend to talk to backend on port 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
---
# Step 3: Allow backend to reach the database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-db
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: postgres
    ports:
    - protocol: TCP
      port: 5432
  - to:  # Allow DNS resolution
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
```

Always remember to allow DNS (port 53 UDP to kube-dns). Without it, your pods cannot resolve service names and everything breaks silently.
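
Rather than repeating the DNS rule in every egress policy, one common pattern is a single namespace-wide policy that permits DNS for all pods. A sketch, assuming CoreDNS runs in kube-system with the usual `k8s-app: kube-dns` label (label names can differ by distribution):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}  # every pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53  # DNS falls back to TCP for large responses
```

Because Network Policies are additive, this policy combines with deny-all and the per-app rules without conflict.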

OPA Gatekeeper — Policy as Code

Pod Security Standards handle pod-level security, but what about enforcing organizational policies? "All images must come from our private registry." "Every deployment must have resource limits." "No service of type LoadBalancer in dev namespaces."

OPA Gatekeeper lets you define these policies as code using Rego:

```shell
# Install Gatekeeper
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm install gatekeeper gatekeeper/gatekeeper \
  --namespace gatekeeper-system --create-namespace
```

Create a constraint template (the policy logic):

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sallowedregistries
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedRegistries
      validation:
        openAPIV3Schema:
          type: object
          properties:
            registries:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sallowedregistries

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          # Check the image against every allowed prefix; violate if none match
          satisfied := [ok | registry := input.parameters.registries[_]; ok := startswith(container.image, registry)]
          not any(satisfied)
          msg := sprintf("Image '%v' is not from an allowed registry. Allowed: %v", [container.image, input.parameters.registries])
        }
```

Apply the constraint (the policy configuration):

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRegistries
metadata:
  name: only-trusted-registries
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
      - apiGroups: ["apps"]
        kinds: ["Deployment", "StatefulSet", "DaemonSet"]
    namespaces:
      - production
      - staging
  parameters:
    registries:
      - "gcr.io/my-project/"
      - "myregistry.azurecr.io/"
```

Now any pod whose image comes from Docker Hub — nginx:latest, for example — will be rejected in production and staging, because the image does not start with either allowed registry prefix.
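
Before enforcing a new constraint, Gatekeeper lets you roll it out in audit-only mode with `enforcementAction: dryrun`, which records violations without blocking requests. A sketch of the same constraint in dry-run mode:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRegistries
metadata:
  name: only-trusted-registries
spec:
  enforcementAction: dryrun  # record violations, do not reject
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    registries:
      - "gcr.io/my-project/"
```

Gatekeeper's audit controller then lists offending resources under the constraint's `status.violations`, so you can see the blast radius before switching back to enforcement.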

Secrets Encryption at Rest

By default, Kubernetes stores Secrets in etcd as base64-encoded plaintext. Anyone with access to etcd can read every secret. Enable encryption at rest:

```yaml
# /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}  # Fallback for reading existing unencrypted secrets
```
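
The `<base64-encoded-32-byte-key>` placeholder must be exactly 32 random bytes, base64-encoded. One way to generate such a key on a Linux host:

```shell
# Generate a random 32-byte key and base64-encode it for the config file
head -c 32 /dev/urandom | base64 | tr -d '\n'
```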

After applying this configuration to the API server, re-encrypt existing secrets:

```shell
# Re-encrypt all secrets with the new key
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
```

Security Scanning with Trivy Operator

Trivy Operator runs as a Kubernetes operator and continuously scans your running workloads for vulnerabilities:

```shell
helm repo add aqua https://aquasecurity.github.io/helm-charts
helm install trivy-operator aqua/trivy-operator \
  --namespace trivy-system \
  --create-namespace

# Check vulnerability reports
kubectl get vulnerabilityreports -A -o wide
# NAMESPACE    NAME                  CRITICAL   HIGH   MEDIUM   LOW
# production   deploy-payment-api    0          2      5        12
# production   deploy-frontend       0          0      3        8
```

Wrapping Up

Kubernetes security is not a one-time setup — it is a layered, ongoing practice. Start with Pod Security Standards and security contexts to lock down containers, add Network Policies to segment traffic, use OPA Gatekeeper to enforce organizational policies, encrypt secrets at rest, and scan images continuously. No single tool covers everything, but together they form a defense-in-depth strategy that makes your cluster significantly harder to compromise.

Production issues do not always come from attackers, though. In the next post, we will tackle the most common Kubernetes failures — CrashLoopBackOff, ImagePullBackOff, and Pending pods — and build a systematic troubleshooting workflow.