Confidential computing leverages a trusted execution environment (TEE) to protect memory in use, completing the encryption story for data at rest, in transit, and in use. Confidential Containers (CoCo) combines a TEE with Kubernetes deployments. Deploying a TEE at the pod level allows strong isolation of workloads, not just from other workloads on the cluster, but also from cluster administrators.
The challenge with Confidential Containers is in getting started. Deploying a pod as a confidential container is a single-line change to the pod manifest. However, to get there you first need to successfully deploy a number of components together, ideally across multiple clusters. Red Hat has recently released support for Confidential Containers on Microsoft Azure in Red Hat OpenShift sandboxed containers Operator v1.9 and above, and support for remote attestation with Red Hat build of Trustee.
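To illustrate that single-line change, here is a minimal pod sketch. The image is illustrative, and kata-remote is the runtime class the OpenShift sandboxed containers Operator typically creates for peer pods; your runtime class name may differ:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-coco
spec:
  runtimeClassName: kata-remote    # the single-line change: run this pod in a CVM-backed TEE
  containers:
  - name: hello
    image: quay.io/openshift/origin-hello-openshift   # illustrative image
    ports:
    - containerPort: 8080
```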
In this blog, I describe how validated patterns can be used to achieve three goals:
- Simplify getting started with Confidential Containers (CoCo) using OpenShift Sandboxed Containers Operator and Red Hat build of Trustee on Azure
- Provide a consistent and declarative foundation for CoCo to enable best practice deployments
- Demonstrate how to deploy applications using CoCo
Overview of validated patterns
Validated patterns are living code architectures for different multi-cloud and hybrid cloud use cases. Each pattern is tested and, when mature, added to Red Hat’s continuous integration (CI) system. This ensures we're testing against the latest version of operators, OpenShift releases, and across multiple public cloud environments.
Each Red Hat validated pattern repository shows a business use case in the form of Kubernetes resources (helm charts, kustomize, and primitive objects) that describe a hybrid cloud stack declaratively and comprehensively, from services down to supporting infrastructure. Validated patterns facilitate complex, highly reproducible deployments, and are ideal for operating these deployments at scale using GitOps operational practices.
Why use validated patterns?
Deploying complex business solutions involves multiple steps. Each step, if done haphazardly, could introduce potential errors or inefficiencies. Validated patterns address this by offering a pre-validated, automated deployment process:
- Uses a GitOps model to deliver the use case as code
- Serves as a proof of concept (PoC) that can be modified to fit a particular need and evolved into a real deployment
- It's highly reproducible, so it's great for operating at scale
- Validated patterns are open for collaboration. Anyone can suggest improvements, contribute to them, or use them because all the Git repositories are upstream
- Each validated pattern can be modified to suit your specific needs. If you would like to swap out a component (for example, use Ceph storage instead of S3), it's as easy as commenting out sections in the configuration and including another repo.
- It's tested. Once a use case becomes a validated pattern, it is included in Red Hat's CI and continues to be tested across product versions while the pattern remains active.
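On the point about swapping components, here is a hypothetical sketch of what such a swap can look like in a pattern's values file (the application names and chart paths are illustrative, not taken from a real pattern):

```yaml
clusterGroup:
  applications:
    # s3-storage:                   # swap out a component by commenting it...
    #   name: s3-storage
    #   namespace: storage
    #   path: charts/all/s3-storage
    ceph-storage:                   # ...and including the replacement chart or repo
      name: ceph-storage
      namespace: openshift-storage
      path: charts/all/ceph-storage
```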
Validated patterns are a "batteries included" solution. Wherever you start with the patterns framework, both the core and a configurable set of components are delivered out of the box. For this article, I used a validated pattern to create an easy way to get started with Confidential Containers.
How to deploy a pattern
The validated patterns website has extensive documentation on how to use the validated patterns. Best practice requires:
- A Git repository for the pattern, such as a fork of the pattern. Validated patterns use GitOps, so you must control the repository you're using
- A developer laptop with oc, Git, and Podman installed
- A blank OpenShift cluster where the patterns operator "manages" the cluster
Here's how these requirements interact:

With this set up, the validated pattern "owns" what is on the cluster, leaving you with a single place to start.
Using validated patterns for Confidential Containers
Red Hat OpenShift sandboxed containers is built on Kata Containers, and it provides the additional capability to run Confidential Containers. Confidential containers are containers deployed within an isolated hardware enclave that help protect data and code from privileged users, such as cloud or cluster administrators. The CNCF Confidential Containers project is the foundation for the OpenShift CoCo solution.
Confidential computing helps protect your data while it's in use by leveraging dedicated hardware-based solutions. Using hardware, you can create isolated environments that are owned by you, and help protect against unauthorized access or changes to your workload's data while it’s being executed (data in use).
CoCo enables cloud-native confidential computing using a number of hardware platforms and supporting technologies. CoCo aims to standardize confidential computing at the pod level, and simplify its consumption in Kubernetes environments. By doing so, Kubernetes users can deploy CoCo workloads using familiar workflows and tools without needing a deep understanding of the underlying confidential computing technologies.
For additional information, read Exploring the OpenShift confidential containers solution.
Confidential Containers architecture
The Red Hat confidential containers solution is based on two key operators:
- Red Hat OpenShift confidential containers: A feature added to the Red Hat OpenShift sandbox containers operator responsible for deploying the building blocks for connecting workloads (pods) and confidential virtual machines (CVM) that run inside the TEE provided by hardware
- Remote attestation: Red Hat build of Trustee is responsible for deploying and managing the Key Broker Service (KBS) in a Red Hat OpenShift cluster.
For additional information, read Introducing Confidential Containers trustee: Attestation services solution overview and use cases.
CoCo typically has two environments: a trusted zone and an untrusted zone. Trustee and the sandboxed containers operator are deployed in these zones, respectively:

So what's the challenge? Doing this yourself requires an understanding of the specific details of your cloud or on-premises infrastructure. It’s important to consider several questions, for example: Which region are you in? Which chipset (Intel, AMD, IBM Power, s390) and hypervisor are you targeting?
For additional information on deploying CoCo, see Deployment considerations for Red Hat OpenShift confidential containers solution.
Introducing the confidential container validated pattern
The objective of the confidential container validated pattern is to make it easy to get started, and to understand how to deploy Confidential Containers. It uses the validated pattern architecture to:
- Deploy the necessary operators for running CoCo
- Configure the peripheral supporting components, including certificates (using Let’s Encrypt, if required)
- Abstract cloud-specific details away from the user deploying CoCo onto the cluster, using tools such as Red Hat Advanced Cluster Management
- Deploy a set of sample applications to demonstrate various features of Confidential Containers, including manipulating CoCo
Currently the pattern is deployed on Microsoft Azure to a single cluster, with all components stemming from a single validated pattern (additional deployments will be added in the future).
How does it work?
We leverage the validated patterns operator to deploy Argo CD, and Argo CD deploys the additional required operators.
The problem is that the peer-pods config map, including the init-data and kata-policy, must be configured to point to the Trustee Key Broker Service (KBS). This information is dynamic, and requires the user either to use the Azure CLI or to access the Azure portal. From a security and visibility perspective, init-data and kata-policy are also problematic because they are base64-serialized before being pushed to a config map, which makes it hard for a user to verify the cluster's posture.
These issues are solved by using metadata injected by the validated patterns operator, allowing us to access information about the cluster easily in our application. Red Hat Advanced Cluster Management policies are used to collect information defined by the cloud controller manager, and to inject it into the appropriate config maps and secrets for the sandboxed containers operator.
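For example, because init-data is base64-serialized before landing in the config map, verifying what is actually configured means decoding it by hand. A sketch (the config map and key names are illustrative, not the operator's actual names):

```shell
# Inspect serialized init-data on a cluster (illustrative names):
#   oc get cm peer-pods-cm -n openshift-sandboxed-containers-operator \
#     -o jsonpath='{.data.INITDATA}' | base64 -d
#
# The serialization itself is a plain base64 round trip:
initdata='algorithm = "sha384"'            # sample init-data TOML fragment
encoded=$(printf '%s' "$initdata" | base64)
printf '%s\n' "$encoded" | base64 -d       # recovers the original fragment
```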
HashiCorp Vault is used as a key management service (KMS) within the cluster, together with the validated patterns secrets configuration, allowing users to bootstrap Vault consistently from a developer workstation. We use it to provide secrets for Trustee, which are synchronized using the external secrets operator.
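As a hypothetical illustration of that synchronization (the names, namespaces, and Vault paths below are illustrative, not the pattern's actual manifests):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: kbs-auth-key
  namespace: trustee-operator-system
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend            # SecretStore pointing at the in-cluster Vault
    kind: ClusterSecretStore
  target:
    name: kbs-auth-public-key      # Secret consumed by Trustee's KBS
  data:
  - secretKey: publicKey
    remoteRef:
      key: secret/data/hub/kbs     # Vault path (illustrative)
      property: publicKey
```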
The combination of these capabilities allows installation with a single command and is shown below:

Requirements
Currently, we are limited to Azure as a platform, with the simple pattern topology being a single OpenShift cluster.
Users can bring either an Azure Red Hat OpenShift cluster or a self-managed OpenShift cluster on Azure. The pattern includes documentation on how to use openshift-install to build a cluster. The cluster and Azure account need access to, and availability of, Azure CVMs in the region. Today the pattern assumes DCasv5-class virtual machines for the Confidential Containers; however, this can be customized.
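As a hedged sketch of that customization: the CVM size is typically selected via the peer-pods configuration. The key below follows the peer-pods config map convention, but where you override it depends on your fork of the pattern:

```yaml
# Illustrative fragment of the peer-pods config map data:
AZURE_INSTANCE_SIZE: "Standard_DC4as_v5"   # a DCasv5-family confidential VM size
```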
The only additional configuration needed for Azure is a NAT gateway for the worker node subnet, which is deployed automatically.
For the developer workstation, a POSIX-compliant workstation (macOS or Linux) with oc and podman installed is required.
Step-by-step instructions
There are just three steps.
1. Create a fork
First, create a fork of the validated patterns GitHub repository within your organization. Note that due to the eventual consistency of Argo CD, it’s not safe to directly use the validated patterns repository.
git clone https://github.com/{YOUR-ORG}/coco-pattern.git
2. Generate random keys
Next, generate the baseline secrets. The pattern includes scripts to generate random keys:
sh scripts/gen-secrets.sh
3. Install
Log in to the cluster using oc login and run the pattern install:
./pattern.sh make install
That's it! Wait for the system to come online, and then explore the deployed applications.
Exploring the deployed applications
The pattern deploys an Argo CD instance called Simple ArgoCD, available from the application launcher (9-box) menu in the OpenShift web console. A number of applications are deployed. The two critical applications to consider are hello-openshift and kbs-access.

The hello-openshift application deploys a web application three times:
- As a standard pod
- As a kata pod, where the agent config has been deliberately overridden to allow a user to exec into the pod
- As a "secure" application with CoCo hardening turned on
The kbs-access application is a simple demonstration of retrieving a secret from Trustee using an init container. KBS access allows you to access the secret through its web API, so you can see how changes to the secret propagate through the system. The init container approach is convenient when uplifting the security of existing applications, because you don't need Trustee everywhere you develop code.
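A hedged sketch of the init container approach (the image, names, and resource path are illustrative; the local endpoint follows the CoCo guest components' convention, where the Confidential Data Hub inside the CVM proxies resource requests to Trustee's KBS):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kbs-secret-demo
spec:
  runtimeClassName: kata-remote
  initContainers:
  - name: fetch-secret
    image: registry.access.redhat.com/ubi9/ubi     # any image with curl
    command: ["sh", "-c"]
    # Resource path (repository/type/tag) is illustrative:
    args:
    - curl -s http://127.0.0.1:8006/cdh/resource/default/kbsres1/key1 > /secrets/key1
    volumeMounts:
    - name: secrets
      mountPath: /secrets
  containers:
  - name: app
    image: quay.io/openshift/origin-hello-openshift
    volumeMounts:
    - name: secrets
      mountPath: /secrets
      readOnly: true
  volumes:
  - name: secrets
    emptyDir:
      medium: Memory    # keep the fetched secret off disk
```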
Security considerations when deploying CoCo patterns
Confidential containers are all about security, so it's important to consider the security posture of the confidential containers validated pattern. There are two primary considerations for this pattern:
- The pattern today uses simple reference values. We encourage you to read about the RATS attestation flow and to develop policies that fit your security needs and the risk profile of your system.
- Separate the Trustee deployment. Related to the first point, Trustee’s attestation service is built on the principle that it operates in a different, and trusted, security zone. Ideally, this is a different environment, such as on-premise or another cloud provider.
The diagram below shows the architecture for deploying Trustee in a separated environment using the confidential container validated pattern:

CoCo patterns future work
The CoCo validated pattern is enough for you to get started. Its focus is on making you successful enough on a single cluster and environment so you can start testing. Our immediate focus is to expand this with more practical examples to enable you to continue your journey of using Confidential Containers.
We want to support multi-cluster hub-and-spoke deployments, allowing Trustee to be deployed in the hub with Red Hat Advanced Cluster Management, and the spoke clusters to be where the confidential workloads run.
We also intend to provide practical examples of using Trustee for secret management. The examples so far are simple. Our priorities for future development are managed storage encryption within a TEE, secret initialization for applications that are unaware of Trustee, and VPN configuration.
Also, we want to support other environments for deploying CoCo and Trustee, so that trust can be spread across resources on-premises or on multiple cloud service providers.
Summary
The confidential containers validated pattern provides a simple mechanism to get started with CoCo. It's a great mechanism to start experimentation and fork to deploy your own self-contained applications, in a single repository, leveraging a standardized app-of-apps GitOps approach with Argo CD. As you've seen, getting started can be as simple as a git clone and a make install.
About the author
Dr. Chris Butler is a Chief Architect in the APAC Field CTO Office at Red Hat, the world’s leading provider of open source solutions. Chris, and his peers, engage with clients and partners who are stretching the boundaries of Red Hat's products. Chris is currently focused on the strategy and technology to enable regulated & multi-tenant environments, often for ‘digital sovereignty’. He has been doing this with Governments and Enterprise clients across Asia Pacific.
From a technology perspective Chris is focused on: Compliance as code with OSCAL Compass; Confidential Computing to enforce segregation between tenants and providers; enabling platforms to provide AI accelerators as a service.
Prior to joining Red Hat, Chris worked at AUCloud and IBM Research. At AUCloud, Chris led a team that managed AUCloud’s productization strategy and technical architecture, and he was responsible for the design of AUCloud's IaaS & PaaS platforms across all security classifications.
Chris spent 10 years within IBM in management and technical leadership roles finishing as a Senior Technical Staff Member. Chris is an experienced technical leader, having held positions responsible for: functional strategy within the IBM Research division (Financial Services); developing the IBM Global Technology Outlook; and as development manager of IBM Cloud Services.