
Kubernetes RBAC — Who Can Do What in Your Cluster

Goel Academy · DevOps & Cloud Learning Hub · 6 min read

Your cluster is running in production. Three teams share it. A junior developer accidentally deletes a Deployment in the production namespace. Sound familiar? This is what happens when everyone has cluster-admin. RBAC exists to make sure every user and every service account has exactly the permissions they need — and nothing more.

Authentication vs Authorization

Before RBAC kicks in, Kubernetes needs to know who is making the request (authentication), then decides what they can do (authorization).

| Layer | Question | Mechanism |
| --- | --- | --- |
| Authentication | Who are you? | Client certificates, OIDC tokens, ServiceAccount tokens, webhook |
| Authorization | Are you allowed to do this? | RBAC, ABAC, Webhook, Node authorizer |
| Admission Control | Should this request be modified or denied? | Validating/Mutating webhooks, PodSecurity admission |

RBAC is the default authorization mode in every modern Kubernetes cluster. You almost certainly have it enabled already.

# Confirm RBAC is enabled
kubectl api-versions | grep rbac
# rbac.authorization.k8s.io/v1

RBAC API Objects

RBAC has exactly four object types. Two define sets of permissions; two bind those permissions to subjects (users, groups, or ServiceAccounts).

| Object | Scope | Purpose |
| --- | --- | --- |
| Role | Namespace | Defines permissions within a single namespace |
| ClusterRole | Cluster-wide | Defines permissions across all namespaces or for cluster-scoped resources |
| RoleBinding | Namespace | Grants a Role or ClusterRole to subjects within a namespace |
| ClusterRoleBinding | Cluster-wide | Grants a ClusterRole to subjects across the entire cluster |

Verbs — What Actions Can Be Performed

Every RBAC rule specifies which verbs (actions) are allowed on which resources:

| Verb | HTTP Method | Description |
| --- | --- | --- |
| get | GET (single) | Read a specific resource |
| list | GET (collection) | List all resources of a type |
| watch | GET (streaming) | Watch for changes in real time |
| create | POST | Create a new resource |
| update | PUT | Replace an entire resource |
| patch | PATCH | Modify specific fields |
| delete | DELETE | Delete a resource |
| deletecollection | DELETE (collection) | Delete multiple resources at once |
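A single rule pairs a set of resources with a set of verbs, so one Role can mix broad and narrow access. As an illustrative sketch (the role name and verb choices are hypothetical, not from the article):

```yaml
# Hypothetical Role: full Job management, but read-only on Pods
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-runner        # hypothetical name
  namespace: production
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get", "list", "watch", "create", "delete", "deletecollection"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # no create/update/patch/delete
```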

Creating Roles for Common Scenarios

Read-Only User (View-Only Access)

A developer who needs to see what is running but should not change anything:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only
  namespace: production
rules:
- apiGroups: [""]
  resources: ["pods", "services", "configmaps", "endpoints", "persistentvolumeclaims"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets", "daemonsets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
  resources: ["jobs", "cronjobs"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-read-only
  namespace: production
subjects:
- kind: User
  name: alice@company.com
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: dev-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: read-only
  apiGroup: rbac.authorization.k8s.io

Namespace Admin

A team lead who can manage everything within their namespace but not touch other namespaces:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-admin
  namespace: team-alpha
subjects:
- kind: User
  name: bob@company.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin # Built-in ClusterRole — full access within a namespace
  apiGroup: rbac.authorization.k8s.io

Notice we are binding a ClusterRole (admin) with a RoleBinding. The subject receives the ClusterRole's permissions, but only within the team-alpha namespace.

CI/CD Deployer

A ServiceAccount for your CI/CD pipeline that can deploy and manage workloads but cannot touch RBAC or Secrets:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-deployer
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: production
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: ci-deployer
  namespace: production
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
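Rules can be narrowed even further with the resourceNames field, which restricts a rule to specific named objects. A sketch (the Deployment names here are hypothetical; note that create and deletecollection cannot be restricted this way, since the object name is not known at authorization time):

```yaml
# Hypothetical tightening: the pipeline may update only two named Deployments
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: named-deployer   # hypothetical name
  namespace: production
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  resourceNames: ["web-frontend", "api-backend"]   # hypothetical Deployment names
  verbs: ["get", "update", "patch"]
```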

ServiceAccounts

Every pod runs with a ServiceAccount. If you do not specify one, it uses the default ServiceAccount in the namespace.

# Create a ServiceAccount
kubectl create serviceaccount app-runner -n production

# List ServiceAccounts
kubectl get sa -n production

# Use a ServiceAccount in a pod
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  namespace: production
spec:
  serviceAccountName: app-runner
  automountServiceAccountToken: false # Disable if the pod does not need API access
  containers:
  - name: app
    image: myapp:1.0

Setting automountServiceAccountToken: false is a security best practice. If a pod does not need to talk to the Kubernetes API, do not give it a token.
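The same field also exists on the ServiceAccount object itself, which makes the opt-out the default for every pod that uses it (a pod can still override it in its own spec):

```yaml
# Disable token automount for all pods using this ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-runner
  namespace: production
automountServiceAccountToken: false
```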

Testing Permissions with kubectl auth can-i

This is your go-to command for debugging RBAC issues:

# Can I create pods?
kubectl auth can-i create pods
# yes

# Can alice create deployments in production?
kubectl auth can-i create deployments --as=alice@company.com -n production
# no

# Can the ci-deployer ServiceAccount update deployments?
kubectl auth can-i update deployments \
  --as=system:serviceaccount:production:ci-deployer \
  -n production
# yes

# List all permissions for a user in a namespace
kubectl auth can-i --list --as=alice@company.com -n production

# Am I cluster-admin?
kubectl auth can-i '*' '*'

Default ClusterRoles

Kubernetes ships with a number of built-in ClusterRoles; these four user-facing roles cover most use cases:

| ClusterRole | Permissions |
| --- | --- |
| cluster-admin | Full access to everything (superuser) |
| admin | Full access within a namespace (cannot modify the namespace itself or ResourceQuotas) |
| view | Read-only access to most resources in a namespace (no Secrets) |
| edit | Read/write most resources in a namespace (no RBAC changes; note that it can read and write Secrets) |

# See what permissions a built-in role grants
kubectl describe clusterrole admin
kubectl describe clusterrole view

# Use built-in roles with RoleBindings for quick setup
kubectl create rolebinding dev-view \
  --clusterrole=view \
  --user=alice@company.com \
  --namespace=staging

Aggregated ClusterRoles

You can extend built-in ClusterRoles by aggregating custom rules into them:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-viewer
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true" # Auto-aggregates into "view"
    rbac.authorization.k8s.io/aggregate-to-edit: "true" # Also into "edit"
rules:
- apiGroups: ["custom.metrics.k8s.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]

Anyone who has the view or edit ClusterRole now automatically gets permissions to read custom metrics. No need to update existing RoleBindings.
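Aggregation works because the base ClusterRole carries an aggregationRule that selects contributing roles by label; the controller manager then fills in its rules automatically. You can build your own aggregated base role the same way (the role name and label below are hypothetical):

```yaml
# Hypothetical aggregated base role: rules are filled in by the controller manager
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring   # hypothetical name
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.example.com/aggregate-to-monitoring: "true"   # hypothetical label
rules: [] # managed automatically; any ClusterRole with the label contributes here
```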

Common RBAC Mistakes

1. Everyone gets cluster-admin. This is the most common mistake. Even in dev clusters, practice least privilege.

2. Using the default ServiceAccount. The default SA often accumulates permissions over time. Create dedicated ServiceAccounts for each workload.

3. Granting Secrets access when it is not needed. The view ClusterRole intentionally excludes reading Secrets, but edit and admin can read them. Prefer view or a custom Role when a user does not need Secrets access.

4. Forgetting namespace scope. A Role and RoleBinding must be in the same namespace. If your role is in default but your binding references production, it will not work.

5. Not testing with kubectl auth can-i. Always verify permissions after creating roles. Do not assume — test.

# Quick RBAC audit — find all ClusterRoleBindings granting cluster-admin
kubectl get clusterrolebindings -o json | \
jq '.items[] | select(.roleRef.name=="cluster-admin") | .metadata.name'

Next, we will dive deep into Kubernetes networking — how pods talk to each other, how DNS works, and how to use NetworkPolicies to lock down communication between services.