Kubernetes Identity and Security Guide: RBAC, Service Accounts, and Pod Identity
Secure Kubernetes workloads with proper RBAC configuration, service account hardening, OIDC integration, pod identity, and secrets management best practices.
Kubernetes has become the de facto container orchestration platform, but its identity model is one of the most misunderstood aspects of cloud-native security. Out of the box, Kubernetes provides a powerful but complex identity framework — Role-Based Access Control for API authorization, service accounts for workload identity, and multiple extension points for integrating with external identity providers.
The problem is that most Kubernetes deployments use the defaults, and the defaults are not secure. Default service accounts have tokens auto-mounted into every pod. RBAC roles are often overly permissive. Secrets are stored as base64-encoded (not encrypted) values. This guide walks you through securing identity in Kubernetes from the ground up.
Prerequisites
- A running Kubernetes cluster (v1.27+) — Managed (EKS, AKS, GKE) or self-managed.
- kubectl configured with cluster-admin access for initial setup.
- Familiarity with Kubernetes concepts — Pods, Deployments, Namespaces, ConfigMaps, Secrets.
- An external identity provider if integrating OIDC (Entra ID, Okta, Dex, Keycloak).
- A secrets management solution (HashiCorp Vault, AWS Secrets Manager, Azure Key Vault) for production workloads.
Architecture: Kubernetes Identity Model
Human Identity vs. Workload Identity
Kubernetes distinguishes between two identity types:
Human users — Developers, operators, and administrators who interact with the Kubernetes API via kubectl or dashboards. Kubernetes does not manage human user accounts internally. Instead, it relies on external identity providers through authentication plugins (OIDC, webhook token authentication, client certificates).
Service accounts — Identities assigned to workloads (pods) running inside the cluster. These are Kubernetes-native objects managed by the API server. Service accounts are used for pod-to-API-server communication and, with workload identity federation, for pod-to-external-service communication.
The Authentication and Authorization Flow
Request → Authentication → Authorization → Admission → API Server
Authentication: Who are you?
- OIDC tokens (recommended for humans)
- Service account tokens (for workloads)
- Client certificates
- Webhook token review
Authorization: What can you do?
- RBAC (primary, recommended)
- ABAC (legacy, not recommended)
- Webhook authorization
- Node authorization
Admission: Should this action be allowed?
- Validating webhooks
- Mutating webhooks
- Pod Security Admission
Step-by-Step Implementation
Step 1: Configure OIDC Authentication for Human Users
Stop using client certificates or static token files for human authentication. OIDC provides proper identity, supports MFA through the IdP, and integrates with your existing identity infrastructure; client certificates, by contrast, cannot be revoked in Kubernetes without rotating the cluster CA.
Configure the API server for OIDC:
For managed clusters, this is configured through the cloud provider's Kubernetes service. For self-managed clusters, add these flags to the API server:
# API server configuration
--oidc-issuer-url=https://login.microsoftonline.com/{tenant-id}/v2.0
--oidc-client-id=your-kubernetes-client-id
--oidc-username-claim=email
--oidc-groups-claim=groups
--oidc-username-prefix=oidc:
--oidc-groups-prefix=oidc:
For Amazon EKS:
aws eks associate-identity-provider-config \
--cluster-name my-cluster \
--oidc \
--identity-provider-config '{
"identityProviderConfigName": "entra-id",
"issuerUrl": "https://login.microsoftonline.com/{tenant-id}/v2.0",
"clientId": "your-client-id",
"usernameClaim": "email",
"groupsClaim": "groups",
"usernamePrefix": "oidc:"
}'
For Azure AKS:
AKS natively integrates with Entra ID:
az aks update \
--resource-group myResourceGroup \
--name myAKSCluster \
--enable-aad \
--aad-admin-group-object-ids "your-admin-group-id"
Configure kubectl for OIDC:
Use a client-go credential (exec) plugin such as kubelogin (also distributed as the kubectl oidc-login plugin):
kubectl config set-credentials oidc-user \
--exec-api-version=client.authentication.k8s.io/v1beta1 \
--exec-command=kubelogin \
--exec-arg=get-token \
--exec-arg=--oidc-issuer-url=https://login.microsoftonline.com/{tenant-id}/v2.0 \
--exec-arg=--oidc-client-id=your-client-id \
--exec-arg=--oidc-client-secret=your-client-secret
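With the oidc: prefixes configured, cluster access is granted by binding roles to the prefixed group names from your IdP. A minimal sketch, assuming an IdP group named cluster-admins:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-cluster-admins
subjects:
- kind: Group
  name: oidc:cluster-admins   # assumed IdP group; note the --oidc-groups-prefix
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```

Reserve a binding like this for a small break-glass admin group; day-to-day access should use the namespaced roles described in Step 2.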
Step 2: Design and Implement RBAC
RBAC in Kubernetes uses four objects: Role, ClusterRole, RoleBinding, and ClusterRoleBinding.
Principle 1: Namespace isolation
Use namespaces to isolate teams and environments. Bind roles at the namespace level, not the cluster level.
apiVersion: v1
kind: Namespace
metadata:
name: team-payments
labels:
team: payments
environment: production
Principle 2: Least privilege roles
Never use the built-in cluster-admin role for regular operations. Create custom roles with minimal permissions:
# Developer role - can manage deployments and view logs
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: team-payments
name: developer
rules:
- apiGroups: ["apps"]
resources: ["deployments", "replicasets"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: [""]
resources: ["pods", "pods/log", "services", "configmaps"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: [] # grants nothing; RBAC is additive-only, so exec is denied simply by never granting it
# SRE role - broader access but still not cluster-admin
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: team-payments
name: sre
rules:
- apiGroups: ["apps"]
resources: ["deployments", "replicasets", "statefulsets", "daemonsets"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["pods", "pods/log", "pods/exec", "services", "configmaps", "secrets"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["networking.k8s.io"]
resources: ["networkpolicies", "ingresses"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
Principle 3: Bind to groups, not users
Always bind roles to groups from your IdP, not individual users. This allows access management through group membership rather than Kubernetes RBAC modifications:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: payments-developers
namespace: team-payments
subjects:
- kind: Group
name: oidc:payments-developers # matches IdP group
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: developer
apiGroup: rbac.authorization.k8s.io
Step 3: Harden Service Accounts
Disable auto-mounting of service account tokens:
By default, Kubernetes mounts a service account token into every pod. Most pods do not need to communicate with the Kubernetes API. Disable auto-mounting at the service account level:
apiVersion: v1
kind: ServiceAccount
metadata:
name: my-app
namespace: team-payments
automountServiceAccountToken: false
Or at the pod level:
apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
automountServiceAccountToken: false
serviceAccountName: my-app
Create dedicated service accounts per workload:
Never use the default service account. Create a purpose-specific service account for each application:
apiVersion: v1
kind: ServiceAccount
metadata:
name: payment-processor
namespace: team-payments
annotations:
description: "Service account for the payment processing service"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: payment-processor-role
namespace: team-payments
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "watch"]
resourceNames: ["payment-config"] # restrict to specific resources
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: payment-processor-binding
namespace: team-payments
subjects:
- kind: ServiceAccount
name: payment-processor
namespace: team-payments
roleRef:
kind: Role
name: payment-processor-role
apiGroup: rbac.authorization.k8s.io
Use bound service account tokens (TokenRequest API):
Instead of the legacy long-lived tokens, use bound tokens that are audience-scoped and time-limited:
apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
serviceAccountName: payment-processor
automountServiceAccountToken: false
containers:
- name: app
volumeMounts:
- name: token
mountPath: /var/run/secrets/tokens
readOnly: true
volumes:
- name: token
projected:
sources:
- serviceAccountToken:
audience: "https://my-api.example.com"
expirationSeconds: 3600
path: token
Step 4: Implement Workload Identity Federation
Workload identity federation allows Kubernetes pods to authenticate to cloud services without storing static credentials. Each cloud provider has its own implementation.
AWS EKS IRSA (IAM Roles for Service Accounts):
The role-arn annotation below is the IRSA pattern; the newer EKS Pod Identity feature instead maps IAM roles to service accounts through pod identity associations managed via the EKS API, with no annotation required.
apiVersion: v1
kind: ServiceAccount
metadata:
name: s3-reader
namespace: team-payments
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/s3-reader-role
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: data-processor
spec:
template:
spec:
serviceAccountName: s3-reader
containers:
- name: app
# AWS SDK automatically uses the IRSA token
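For IRSA to work, the IAM role's trust policy must trust the cluster's OIDC provider and pin the token's sub claim to the exact service account. A sketch, where the account ID, region, and the EXAMPLE provider ID are placeholders for your cluster's values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE:sub": "system:serviceaccount:team-payments:s3-reader"
        }
      }
    }
  ]
}
```

The sub condition is what prevents any other service account in the cluster from assuming this role.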
Azure AKS Workload Identity:
apiVersion: v1
kind: ServiceAccount
metadata:
name: keyvault-reader
namespace: team-payments
annotations:
azure.workload.identity/client-id: "your-managed-identity-client-id"
labels:
azure.workload.identity/use: "true"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: secret-consumer
spec:
template:
metadata:
labels:
azure.workload.identity/use: "true"
spec:
serviceAccountName: keyvault-reader
GCP GKE Workload Identity:
gcloud iam service-accounts add-iam-policy-binding \
gcs-reader@my-project.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:my-project.svc.id.goog[team-payments/gcs-reader]"
apiVersion: v1
kind: ServiceAccount
metadata:
name: gcs-reader
namespace: team-payments
annotations:
iam.gke.io/gcp-service-account: gcs-reader@my-project.iam.gserviceaccount.com
Step 5: Secure Secrets Management
Kubernetes Secrets are base64-encoded, not encrypted at rest by default. For production workloads, use external secrets management.
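To see why base64 offers no protection, note that decoding it requires no key at all (supersecret is a stand-in value):

```shell
# base64 is an encoding, not encryption: it reverses with no key
printf 'supersecret' | base64
# c3VwZXJzZWNyZXQ=
printf 'c3VwZXJzZWNyZXQ=' | base64 -d
# supersecret
```

Anyone with RBAC permission to read a Secret object can recover its plaintext this way, which is why read access to secrets deserves the tightest scoping of all.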
Enable encryption at rest:
Configure the API server to encrypt secrets in etcd:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: <base64-encoded-key>
- identity: {}
For managed clusters, enable the cloud provider's KMS integration (AWS KMS, Azure Key Vault, GCP Cloud KMS).
Use External Secrets Operator:
The External Secrets Operator syncs secrets from external vaults into Kubernetes:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: database-credentials
namespace: team-payments
spec:
refreshInterval: 1h
secretStoreRef:
name: vault-backend
kind: ClusterSecretStore
target:
name: db-creds
data:
- secretKey: username
remoteRef:
key: payments/database
property: username
- secretKey: password
remoteRef:
key: payments/database
property: password
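The vault-backend store referenced above must be defined separately. A sketch of a matching ClusterSecretStore, assuming a Vault server address and a Vault Kubernetes-auth role named external-secrets (both are assumptions to adapt):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "https://vault.example.com"  # assumed Vault address
      path: "secret"                       # KV mount path
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "external-secrets"         # assumed Vault role bound to the operator's SA
```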
Use CSI Secrets Store Driver for direct mount:
apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
containers:
- name: app
volumeMounts:
- name: secrets
mountPath: /mnt/secrets
readOnly: true
volumes:
- name: secrets
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: azure-keyvault
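The azure-keyvault class the pod references is defined as a SecretProviderClass. A sketch for the Azure provider, where the vault name, tenant ID, and object name are assumptions:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-keyvault
  namespace: team-payments
spec:
  provider: azure
  parameters:
    clientID: "your-managed-identity-client-id"  # workload identity client ID
    keyvaultName: "my-keyvault"                  # assumed Key Vault name
    tenantId: "your-tenant-id"
    objects: |
      array:
        - |
          objectName: db-password
          objectType: secret
```

Pairing this with AKS workload identity (Step 4) means the driver fetches secrets without any stored credential.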
Best Practices
Audit RBAC Regularly
Use tools like kubectl-who-can, rakkess, or rbac-lookup to audit who has what access:
# Who can delete pods in the payments namespace?
kubectl-who-can delete pods -n team-payments
# What can the payment-processor service account do?
kubectl auth can-i --list --as=system:serviceaccount:team-payments:payment-processor -n team-payments
Run these audits monthly and feed the results into your identity governance and administration (IGA) program.
Implement Network Policies Alongside Identity
RBAC controls API access, but network policies control pod-to-pod communication. Use both:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: payment-processor-policy
namespace: team-payments
spec:
podSelector:
matchLabels:
app: payment-processor
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: api-gateway
egress:
- to:
- podSelector:
matchLabels:
app: database
Use Pod Security Admission
Enforce security standards at the namespace level:
apiVersion: v1
kind: Namespace
metadata:
name: team-payments
labels:
pod-security.kubernetes.io/enforce: restricted
pod-security.kubernetes.io/audit: restricted
pod-security.kubernetes.io/warn: restricted
The restricted profile requires running as non-root, dropping all capabilities (only NET_BIND_SERVICE may be added back), disallowing privilege escalation, and setting a RuntimeDefault or Localhost seccomp profile.
Testing
- RBAC testing — Create test users/service accounts with each role and verify they can perform exactly the allowed actions and nothing more.
- Workload identity testing — Deploy a test pod with workload identity and verify it can access the target cloud service. Then remove the identity binding and verify access is denied.
- Secrets testing — Verify that secrets are encrypted at rest by examining etcd directly (in test environments). Verify that External Secrets Operator syncs correctly and handles rotation.
- Penetration testing — Use kube-hunter for attack-surface scanning and kube-bench for CIS benchmark checks, which cover RBAC and service account configuration among other controls.
Common Pitfalls
Granting cluster-admin to CI/CD
CI/CD pipelines often receive cluster-admin because "it is easier." This means a compromised pipeline can destroy the entire cluster. Create scoped roles that allow only the specific actions your pipeline needs (deploy to specific namespaces, update specific resources).
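As a sketch, a pipeline that only rolls out new images needs little more than update rights on Deployments in its target namespace (the role and namespace names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deployer
  namespace: team-payments
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "update", "patch"]
```

Note there is no create or delete, no access to secrets, and no cluster-scoped binding: a compromised pipeline token can at worst redeploy one namespace's workloads.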
Not Rotating Service Account Tokens
Legacy (pre-1.24) service account tokens do not expire. If your cluster has workloads using these tokens, migrate to bound tokens with expiration. Audit for long-lived tokens regularly.
Storing Secrets in ConfigMaps
ConfigMaps are not encrypted and are often less strictly controlled than Secrets. Never store sensitive data in ConfigMaps, even temporarily.
Ignoring the Default Namespace
The default namespace often has overly permissive RBAC and should not be used for production workloads. Deploy everything to purpose-specific namespaces.
Conclusion
Kubernetes identity security requires a multi-layered approach: OIDC for human authentication, RBAC for authorization, hardened service accounts for workload identity, workload identity federation for cloud service access, and external secrets management for credential protection. The defaults are not secure enough for production. Invest the time to configure each layer properly, audit regularly, and treat Kubernetes identity as part of your broader IAM program.
Frequently Asked Questions
Q: Should we use RBAC or ABAC in Kubernetes? A: RBAC. ABAC is a legacy authorization mode that requires API server restarts to update policies. RBAC is dynamic, auditable, and the standard for Kubernetes authorization.
Q: How do we manage RBAC at scale across many clusters? A: Use GitOps tools (Flux, ArgoCD) to manage RBAC manifests as code across clusters. For policy enforcement, use tools like OPA Gatekeeper or Kyverno to enforce RBAC standards.
Q: Can pods assume different cloud IAM roles dynamically? A: With workload identity federation, a pod assumes the cloud IAM role mapped to its Kubernetes service account. To change roles, you would need different service accounts or pods. Dynamic role assumption within a single pod is not the standard pattern.
Q: How do we handle third-party operators that need cluster-wide access? A: Review the operator's RBAC requirements carefully. Many operators request more permissions than they need. Create custom ClusterRoles with the minimum required permissions and bind them to the operator's service account. Monitor the operator's API calls to verify it uses only what it needs.
Q: Is there an equivalent to AWS IAM policies for Kubernetes? A: Kubernetes RBAC is the equivalent. For more complex policy needs (ABAC-like conditions, resource-level constraints), use admission controllers like OPA Gatekeeper, which allows Rego-based policies that can evaluate request attributes, resource labels, and annotations.