Introduction
In enterprise Kubernetes environments, security risks often arise from overlapping administrative access. Platform engineers, infrastructure operators and developers may all touch sensitive resources, such as secrets, which creates opportunities for privilege misuse or data exposure. By separating administrative duties using Confidential Containers (CoCo), organizations can reduce the risk of insider threats, simplify compliance, and align with zero-trust principles.
In this blog we will discuss how CoCo augments Kubernetes role-based access control (RBAC) to provide fine-grained control over which sensitive resources infrastructure admins, cluster admins and workload admins can access.
Challenges of Kubernetes role-based access control
Admin access privileges
For ease of understanding, let's consider three admin personas in a typical Kubernetes cluster and their levels of access:
| | Can access | Can't access |
|---|---|---|
| Infrastructure admin | Physical hosts / hypervisors; networks; storage; workload secrets; unencrypted memory of workloads | |
| Cluster admin | Host OS; Kubernetes API and control plane; RBAC roles and namespaces; Kubernetes network policies; scheduling; workload secrets; unencrypted memory of workloads | Physical hosts / hypervisors; networks; storage |
| Workload admin | Specific namespaces, deployments and pods; workloads; environment variables; secrets | Node settings; Kubernetes network policies; cluster-wide configs |
Kubernetes role-based access control (RBAC) enforces access policies on Kubernetes objects. For example, RBAC can limit a workload admin or a developer to deploying applications only within specific namespaces, or give a team view-only access for monitoring tasks.
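For illustration, here is a minimal RBAC sketch (the namespace, role and user names are hypothetical) that limits a workload admin to managing workloads and secrets in a single namespace:

```yaml
# Illustrative only: a namespace-scoped Role and RoleBinding that limit a
# hypothetical workload admin ("jane") to managing workloads in "team-a".
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: workload-admin
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "deployments", "secrets", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: workload-admin-binding
subjects:
- kind: User
  name: jane                      # hypothetical workload admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: workload-admin
  apiGroup: rbac.authorization.k8s.io
```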
While Kubernetes RBAC provides fine-grained access control for cluster resources, it does not natively separate infrastructure or cluster admins from workload security. For example, the cluster admin has complete control over all Kubernetes resources, including all the workload secrets. It's also important to note that Kubernetes RBAC does not apply to the underlying infrastructure. Therefore, an infrastructure admin, who has access to the physical hosts, hypervisors, networks, and storage, can potentially bypass Kubernetes RBAC by accessing the unencrypted memory of workloads or the underlying storage where secrets might reside. This level of control creates a challenge for security-conscious organizations that want to prevent Kubernetes admins or infrastructure admins from having access to all the workload secrets.
Before we look at a potential solution for securing workload secrets from the cluster or infrastructure admins, let's briefly review how Kubernetes provides secrets to workload pods.
How Kubernetes secrets are made available to pods
Kubernetes makes secrets available to pods in two main ways:
- Volumes: Pods can read secrets from volumes mounted as files within the pod.
- Environment variables: Pods can read secrets through environment variables, but this method exposes secrets to any admin who can inspect running processes or environment variables inside a pod.
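As a minimal sketch (the secret, pod and image names are hypothetical), the following pod consumes the same secret both as a mounted file and as an environment variable:

```yaml
# Illustrative only: a pod consuming the secret "db-credentials" both as a
# mounted file and as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    image: registry.example.com/demo-app:latest   # placeholder image
    env:
    - name: DB_PASSWORD               # visible to anyone who can inspect the pod
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-credentials      # also readable from the node's filesystem
```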
A cluster admin with access to the worker node's filesystem can also read the secrets mounted as volumes, or can inspect running processes and their environment variables. Consequently, anyone with admin access to the Kubernetes cluster nodes has access to all the workload secrets.
Splitting administrative responsibilities with CoCo
Leveraging CoCo for RBAC
Confidential Containers (CoCo) enables a new governance model where secrets can be delivered securely to workloads inside a Trusted Execution Environment (TEE), bypassing the Kubernetes cluster (and its admins) entirely. This capability redefines administrative roles and adds another layer on top of Kubernetes RBAC, enabling a clean separation between cluster admins and workload admins.
CoCo can be used to enable a strong governance model that segregates infrastructure, cluster, and workload admins, each with tightly scoped responsibilities and no overlapping privileges.
Admin control with CoCo
By leveraging CoCo you can introduce a three-way split-administration model that segregates infrastructure, cluster, and workload responsibilities. This governance model allows each group to operate independently within tightly scoped boundaries, significantly reducing the risk of privilege escalation and data exposure.
Infrastructure admins:
- Operate and manage the underlying infrastructure (hosts, hypervisors, physical machines, and networks)
- In CoCo setups, infrastructure admins cannot access workload data or secrets
Cluster admins:
- Manage the Kubernetes control plane and cluster-wide resources such as nodes, network policies, RBAC roles, namespaces, and scheduling
- Have complete visibility and control within the cluster, but cannot access CoCo workload data or secrets, nor tamper with CoCo workloads
Workload admins:
- Own and manage specific workloads and the secrets needed to run them
- Use CoCo and the associated Trustee attestation service to securely provision secrets
- Secrets are sealed and can only be unsealed inside the TEE by the workload; they are never exposed in plaintext to the cluster or infrastructure admins (see the sketch after this list)
- Workload admins are mapped to specific Trustee servers, ensuring strict boundaries and access control
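To give a rough idea of what this looks like, the sketch below shows a Kubernetes Secret carrying a CoCo sealed secret. The token value is only a placeholder: real sealed secrets are generated with CoCo tooling and reference a resource held in the workload admin's Trustee instance.

```yaml
# Illustrative only: a Kubernetes Secret whose value is a CoCo sealed secret.
# The cluster only ever stores the "sealed.<...>" token; the plaintext is
# released by Trustee and unsealed inside the TEE after successful attestation.
apiVersion: v1
kind: Secret
metadata:
  name: sealed-db-credentials
  namespace: team-a
type: Opaque
stringData:
  # Placeholder token, not a working sealed secret.
  password: "sealed.fakejwsheader.fakepayload.fakesignature"
```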
The following table shows the evolved admin roles when using CoCo and Trustee attestation for infrastructure, cluster and workload admins (changes from the previous table are marked in bold):
| | Can access | Can't access |
|---|---|---|
| Infrastructure admin | Physical hosts / hypervisors; networks; storage | **Workload secrets; unencrypted memory of workloads** |
| Cluster admin | Host OS; Kubernetes API and control plane; RBAC roles and namespaces; Kubernetes network policies; scheduling | Physical hosts / hypervisors; networks; storage; **workload secrets; unencrypted memory of workloads** |
| Workload admin | Specific namespaces, deployments and pods; workloads; environment variables; secrets | Node settings; Kubernetes network policies; cluster-wide configs |
Business outcomes: Risk reduction and compliance
For decision makers, this model brings tangible benefits:
- Regulatory compliance: Supports compliance with industry regulations (GDPR, PCI-DSS, HIPAA, and so on) by limiting privileged access to sensitive data
- Enhanced security posture: Reduces the attack surface by preventing unauthorized access to application secrets
- Seamless multi-tenancy: Supports environments where multiple teams within an organization share the same Kubernetes cluster, without compromising security
- Zero-trust architecture: Aligns with modern security principles by enforcing strict boundaries between infrastructure and application security
How it works in practice
The following is an example of using CoCo to segregate a cluster admin and two workload admins:

Figure: CoCo for cluster and workload admin segregation
In this example we have a Kubernetes cluster admin and two workload admins (admin A and admin B).
- Workload admins A and B provision secrets for their CoCo workloads through separate Trustee instances (Trustee A and Trustee B)
- Trustee A and Trustee B are hosted outside the Kubernetes cluster and are used for attesting the workloads
- Each workload admin creates a reference to these secrets in the Kubernetes cluster
- CoCo provides sealed secret functionality for this purpose, where the plaintext secrets are available only inside the CoCo pod (a pod-level sketch follows this list)
- This means that only workload admin A can access secret A, only workload admin B can access secret B, and the Kubernetes cluster admin can't access secret A or secret B (despite being the cluster admin)
- Note that the infrastructure admin also has no access to those secrets
- This ensures that an organization can implement a least-privilege access model for secrets management, while keeping the Kubernetes infrastructure and cluster operational
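To make the flow concrete, here is a rough sketch of a CoCo pod owned by workload admin A consuming a sealed secret. The runtime class, image and secret names are placeholders (the CoCo runtime class name varies by platform and TEE); workload admin B would do the same in their own namespace, backed by Trustee B.

```yaml
# Illustrative only: a CoCo pod owned by workload admin A.
apiVersion: v1
kind: Pod
metadata:
  name: trading-app
  namespace: team-a
spec:
  runtimeClassName: kata-cc          # placeholder CoCo runtime class name
  containers:
  - name: app
    image: registry.example.com/trading-app:latest   # placeholder image
    env:
    - name: API_KEY
      valueFrom:
        secretKeyRef:
          name: sealed-api-key       # Kubernetes only sees the sealed token;
          key: api-key               # it is unsealed via Trustee A inside the TEE
```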
Imagine, for example, a financial services team deploying a sensitive trading app. The cluster admin sets up networking and policies but cannot access the workload secrets. Using the sealed secrets feature of CoCo, secrets are provisioned only by the workload admin through the Trustee service. Even if the cluster is compromised, the secrets remain protected inside the CoCo TEE.
For additional details on the CoCo and Trustee solutions, we recommend reading our previous blog, Exploring the OpenShift confidential containers solution.
Summary
Confidential computing and CoCo are more than just a security enhancement; they enable a fundamental shift in Kubernetes administration. By separating infrastructure, cluster and workload administration, you can reduce risk, enforce governance, and build a more secure cloud-native environment for your organization.
For the enterprise, this governance model provides a robust way to embrace Kubernetes while maintaining complete control over sensitive data.
Want to implement this in your environment? Learn how Red Hat and confidentialcontainers.org secure Kubernetes. Request a demo or read our blog series.
About the authors
Pradipta is working in the area of confidential containers to enhance the privacy and security of container workloads running in the public cloud. He is one of the project maintainers of the CNCF confidential containers project.
Jens Freimann is a Software Engineering Manager at Red Hat with a focus on OpenShift sandboxed containers and Confidential Containers. He has been with Red Hat for more than six years, during which he has made contributions to low-level virtualization features in QEMU, KVM and virtio(-net). Freimann is passionate about Confidential Computing and has a keen interest in helping organizations implement the technology. Freimann has over 15 years of experience in the tech industry and has held various technical roles throughout his career.
Holds a Master of Business Administration from Christian-Albrechts University. Started in insurance IT, then joined IBM in Technical Sales and as an IT Architect. Moved to Red Hat 7 years ago into a Chief Architect role, and now works as Chief Architect in the CTO organization, focusing on FSI, regulatory requirements and Confidential Computing.