
Confidential computing in public clouds: isolation and remote attestation explained


on 31 October 2022

In the first part of this blog series, we discussed the run-time (in)security challenge, which can leave your code and data vulnerable to attacks by both the privileged system software of the public cloud infrastructure and its administrators. We also introduced the concept of trusted execution environments and confidential computing (CC) as a paradigm to address this challenge. CC takes a pragmatic approach: it considers the execution environment bootstrapped by the cloud’s system software to be untrustworthy, and proposes to run your security-sensitive workloads in an isolated trusted execution environment (TEE) instead. The TEE’s security guarantees are rooted in the deep hardware layers of the platform, and its security claims can be remotely verified.

But how does confidential computing work? To reason about TEEs and CC in more detail, we need to understand two main primitives: isolation and remote attestation. That is what this second part of the series explores. Let’s get started!


Isolation

The idea of relying on hardware isolation to create a TEE with stronger security guarantees is not new. Over the years, different ways to realise hardware TEEs have been developed. At a high level, they can be categorised into either physical or logical isolation approaches.


Physical isolation

The code runs within a physically isolated processor, which does not share any context with the untrusted execution environment. Notable examples are co-processors, smart cards, and secure elements. By virtue of their complete isolation, such solutions provide strong protection against side-channel attacks from the host platform. However, they lack direct access to the system’s memory, and their computational resources are very constrained.

Multiplexed logical isolation

The security-sensitive workloads run on the same commodity host processor and share its physical execution context. However, their execution is logically isolated from the untrusted software as follows:

1. Memory isolation through main memory encryption: instead of bringing the workload’s code and data into system memory in cleartext at run-time, many confidential computing-capable CPUs embed a new AES-128 hardware encryption engine within their memory controller, which encrypts and decrypts memory pages upon every memory write and read. As such, a malicious system administrator scraping data from memory, or a vulnerable operating system, can only get access to the encrypted ciphertext. The encryption key itself is protected and managed at the hardware level, and can be accessed neither by the cloud’s privileged system software nor by its administrators.

2. Additional CPU-based hardware access control mechanisms: while encryption protects the confidentiality of the memory pages of confidential workloads, other types of attacks might still be possible. For instance, a malicious host operating system might allocate the same memory page to two different processes. It might also restore earlier encrypted memory values as part of a replay attack, thus breaking the integrity guarantees of your confidential workload. To remedy this, confidential computing-capable CPUs implement new instructions and new data structures that help audit the security-sensitive tasks traditionally carried out by the privileged system software, such as memory management and access to the platform’s devices. For instance, a read of a memory page mapped to a confidential workload must return the value that was most recently written to that page, which mitigates data corruption and replay attacks.
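To make the memory-encryption idea in point 1 concrete, here is a toy Python sketch of an encryption engine sitting in the memory controller between the CPU and DRAM. A keyed SHA-256 stream stands in for the hardware AES-128 engine, and all names are illustrative; this is a conceptual model of the data flow, not real cryptography.

```python
import hashlib

# Toy model of a memory encryption engine in the memory controller.
# Real CPUs use AES-128 in dedicated hardware; a keyed SHA-256 stream
# stands in here purely to illustrate the data flow.
class EncryptedMemory:
    def __init__(self):
        # The key lives "in hardware": generated at boot and never
        # exposed to system software. Illustrative value only.
        self._hw_key = b"boot-time-random-key"
        self._dram = {}   # address -> ciphertext, as stored in DRAM

    def _pad(self, address, length):
        # Derive a per-address keystream from the hidden hardware key.
        stream = hashlib.sha256(
            self._hw_key + address.to_bytes(8, "little")).digest()
        return stream[:length]

    def write(self, address, plaintext: bytes):
        # The engine encrypts transparently on every memory write.
        pad = self._pad(address, len(plaintext))
        self._dram[address] = bytes(p ^ k for p, k in zip(plaintext, pad))

    def read(self, address) -> bytes:
        # ...and decrypts transparently on every memory read.
        ciphertext = self._dram[address]
        pad = self._pad(address, len(ciphertext))
        return bytes(c ^ k for c, k in zip(ciphertext, pad))

    def scrape(self, address) -> bytes:
        # What a malicious administrator dumping DRAM would see.
        return self._dram[address]

mem = EncryptedMemory()
mem.write(0x1000, b"secret data")
assert mem.read(0x1000) == b"secret data"     # the TEE sees plaintext
assert mem.scrape(0x1000) != b"secret data"   # DRAM holds ciphertext
```

The workload itself never calls the engine explicitly: encryption and decryption happen on every memory access, which is why a memory-scraping attacker only ever sees ciphertext.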
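The replay attack described in point 2 can likewise be sketched with a toy version counter: the hardware remembers which version of each confidential page it last wrote, so a stale copy restored by a malicious hypervisor no longer matches. The class and field names below are illustrative, not any real CPU's data structures.

```python
# Toy sketch of replay detection via per-page version counters.
# Real CPUs track this in protected hardware structures; names here
# are illustrative only.
class PageTracker:
    def __init__(self):
        self._versions = {}   # page -> version the hardware last wrote
        self._memory = {}     # page -> (version, data) actually in DRAM

    def secure_write(self, page, data):
        version = self._versions.get(page, 0) + 1
        self._versions[page] = version          # protected in hardware
        self._memory[page] = (version, data)    # visible in DRAM

    def secure_read(self, page):
        version, data = self._memory[page]
        if version != self._versions[page]:
            raise RuntimeError("replay detected: stale page version")
        return data

    def replay_attack(self, page, old_snapshot):
        # A malicious hypervisor restores an earlier copy of the page.
        self._memory[page] = old_snapshot

tracker = PageTracker()
tracker.secure_write(0x2000, b"balance=100")
snapshot = tracker._memory[0x2000]        # attacker records the old state
tracker.secure_write(0x2000, b"balance=0")
tracker.replay_attack(0x2000, snapshot)   # attacker replays it
try:
    tracker.secure_read(0x2000)
    replayed = True
except RuntimeError:
    replayed = False
assert not replayed   # the stale version is caught on the next read
```

Because the version counter lives in hardware state the hypervisor cannot modify, restoring old ciphertext is detected rather than silently accepted.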

Remote attestation

Okay, so now your workload is securely running within its own isolated trusted execution environment. Or is it? How can you verify that your cloud provider has not deployed your workload in the normal, non-confidential way? How can you know that it has indeed provisioned your workload into a genuine hardware TEE? And if so, how can you verify that its system software has loaded your application into the TEE exactly as you intended? Do you just take the cloud provider’s word for it? You don’t have to. Instead, you should leverage the remote attestation capabilities of your hardware TEE before provisioning your secrets into it, and before accepting its results as trustworthy.


At a minimum, remote attestation should provide you with a cryptographic proof that consists of:

  1. A measurement/hash that attests to the integrity of the software loaded into the TEE
  2. A cryptographic signature over the hash, which attests that the cloud’s TEE hardware used is genuine and non-revoked
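The two checks above can be sketched in a few lines of Python. Real TEEs use asymmetric signatures chained to the silicon vendor’s root of trust, with certificate and revocation checks; here an HMAC with a shared key stands in for the hardware signature purely for illustration, and all key material and names are made up.

```python
import hashlib
import hmac

# Illustrative stand-in for the key a vendor fuses into genuine hardware.
HW_ATTESTATION_KEY = b"vendor-provisioned-key"

def make_report(loaded_software: bytes) -> dict:
    # 1) Measurement: a hash of exactly what was loaded into the TEE.
    measurement = hashlib.sha384(loaded_software).hexdigest()
    # 2) Signature over the measurement, produced by the TEE hardware.
    signature = hmac.new(HW_ATTESTATION_KEY, measurement.encode(),
                         hashlib.sha384).hexdigest()
    return {"measurement": measurement, "signature": signature}

def verify_report(report: dict, expected_software: bytes) -> bool:
    # The signature proves genuine hardware produced the report; the
    # measurement proves the intended software was loaded into the TEE.
    expected_measurement = hashlib.sha384(expected_software).hexdigest()
    expected_signature = hmac.new(HW_ATTESTATION_KEY,
                                  report["measurement"].encode(),
                                  hashlib.sha384).hexdigest()
    return (hmac.compare_digest(report["signature"], expected_signature)
            and report["measurement"] == expected_measurement)

app = b"my confidential workload"
report = make_report(app)
assert verify_report(report, app)                  # intended software loaded
assert not verify_report(report, b"tampered app")  # measurement mismatch
```

Only after a report like this verifies should you release secrets to the workload or trust its outputs.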

The remote attestation implementation details depend on both the underlying hardware TEE and the public cloud provider, and are going to be the topic of the next blog in this series.

Confidential computing in the public cloud

Confidential computing is an industry-wide effort that requires the cooperation of several stakeholders. On the hardware side, silicon providers have been investing considerable resources into maturing their TEE offerings. To cite a few, we have Intel SGX, Intel TDX, and AMD SEV on the x86 architecture; TrustZone and the upcoming ARM CCA for the ARM ecosystem; and Keystone for RISC-V architectures.

Public cloud providers (PCPs for short) have been among the main adopters of hardware trusted execution environments. To make running confidential workloads easy for their users, PCPs have been focusing on enabling a “lift and shift” approach, where entire VMs can run unchanged within the TEE.

This means that developers have to neither refactor nor rewrite their confidential applications. It also means that the guest operating system needs to be optimised so that user applications can leverage the platform’s underlying hardware TEE capabilities, and so that the VM is further protected while it’s booting and when it’s at rest.

“Optimised Ubuntu LTS images utilising Google Cloud’s Confidential Computing capabilities to keep data-in-use secure are available on Google Cloud Console,” said Nelly Porter, Group Product Manager at Google Cloud. “Together with Canonical, this makes Ubuntu-based Confidential VM deployments simple and easy-to-use.”

Today, our cloud confidential computing portfolio includes confidential VMs on Google Cloud. This is just the start! 

Canonical is committed to the confidential computing vision, and this only marks the beginning of Ubuntu’s confidential computing capabilities across various public clouds and compute classes. We look forward to sharing more news about our expanding portfolio and learning about the novel ways you are leveraging confidential computing. 

