
Build the foundation for your zero trust strategy with Ubuntu confidential computing

Why do we want to eliminate trust? Isn’t trust a good thing that we should foster and grow? And shouldn’t computing platforms trust their end-users, and vice versa? The short answer is no. And I would argue that the very goal of system security has always been to reduce trust. 

For instance, because you do not trust the network you send your data over, you use TLS to enable end-to-end encryption. Because you also do not trust the cloud provider with your encryption keys, you use a hardware security module to store them instead. And because the cloud providers themselves do not trust your workloads, they run them at a lower privilege level than their system software (e.g. hypervisor and host OS) and in isolation from other VMs. Indeed, behind every security primitive we build is a trust assumption we want to eliminate.

Blurred trust boundaries

But how did things get so complex? During the early days of digitalisation, the assets that an organisation cared about were all within its perimeter, and a lot of effort went into securing this perimeter against outsider threats. As such, trust decisions were easier to make. If a user, a service or a device was within that perimeter, it was considered trusted.

But this is not the world we live in anymore. The modern workforce is mobile and works remotely, and much of an organisation’s data and digital assets now lives in the public cloud. The trust boundaries are thus blurred!

Image by Thong Vo from Unsplash

Never trust, always verify

Because of this new threat model where there is no clear perimeter, a new way of thinking about security is required. Zero trust takes the default position of not trusting any access request to the organisation’s digital assets. Instead, the requesting party should be able to provide a strong, verifiable claim of their security before their request is granted. This can be an end-user having to use a hardware token alongside their password in order to get access to their email account. It can also be a platform having to provide a TPM-backed measurement which reflects the security of its boot process, and thus of the execution environment it bootstraps, before being granted access to the corporate network.

The challenging trust assumption underpinning it all

While such application-level security measures are important, and all organisations should be implementing them, they are not sufficient for implementing a complete, meaningful zero trust strategy. To achieve that, your public cloud workload needs a way to not trust its hosting cloud infrastructure, or at least to reduce that trust as much as possible and bring it down to a root of trust whose security guarantees can be verified. Let us explore why this is important. At a high level, data can be in one of the following states:

  1. In-transit: to send data to the public cloud over insecure networks, customers can use secure protocols such as TLS.
  2. At-rest: to protect data sitting idle in the public cloud’s storage, customers can encrypt it with a key that they generate and manage themselves, and further protect that key using the cloud’s hardware security modules.
  3. Run-time: when it is time to compute over the data, the public cloud provider needs to first decrypt it and then move it in cleartext from the server’s secondary storage into its system memory (DRAM).
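The in-transit protection in step 1 boils down to refusing anything weaker than a modern, authenticated TLS configuration on the client side. A minimal sketch using Python’s standard library (this is an illustration of the principle, not a prescription for any particular cloud SDK):

```python
import ssl

# Build a client-side TLS context with certificate-chain and hostname
# verification enabled (the defaults for server authentication).
ctx = ssl.create_default_context(purpose=ssl.Purpose.SERVER_AUTH)

# Refuse legacy protocol versions explicitly: we do not trust the
# network, so the endpoint must prove itself over TLS 1.2 or later.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# With these settings, a man-in-the-middle on the untrusted network
# cannot impersonate the cloud endpoint without a valid certificate.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```

Any socket wrapped with this context (via `ctx.wrap_socket(...)`) will then fail loudly, rather than silently downgrade, if the peer cannot authenticate itself.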

While we have long had security primitives capable of securing data in transit and at rest, run-time (in)security remained an open problem. And a challenging one at that. Application developers could spend a lot of time tightening up the security of an application through static code analysis, penetration testing, multi-factor authentication, and more, and it still wouldn’t be enough.

Computing over the data is unavoidable. After all, this is why customers use the public cloud: to take advantage of its elasticity and great computational resources. Once in system memory, however, customers’ code and data can be compromised by vulnerable or malicious system-level software (OS, hypervisor, BIOS), or even by malicious cloud operator staff with administrative or physical access to the platform. Indeed, the security of user-level applications depends on the security of their underlying system software. This is because privileged system software gets unrestricted access to all the resources of unprivileged user-level applications, as it controls their execution, their memory, and their access to the underlying hardware. It’s a feature, not a bug!

Enter privacy-enhancing technologies

As we have laid out above, run-time security is a tough problem. You want the cloud to analyse your workloads without learning anything about the contents of your data. And you want the cloud’s privileged system software to manage the lifecycle of your workload, but have no impact on its security guarantees. How can you compute over data without actually looking at its values? And how can you expect a vulnerable hypervisor not to threaten the security of the user-level applications it runs?

This challenge of resolving the tension between confidentiality and utility has been actively researched for many years under the umbrella of privacy-enhancing technologies (PETs). PETs can be defined as the range of technologies that help us resolve the tension between data privacy and utility. They achieve this by allowing us to compute on data and derive value from it, while also preserving its privacy. This is unlike traditional cryptographic primitives, such as AES (the Advanced Encryption Standard), which only allow us to preserve data confidentiality, but make it impossible to perform any type of operation on the encrypted ciphertext. PETs can be realised through cryptographic approaches such as differential privacy, homomorphic encryption, secure multiparty computation and zero-knowledge proofs, as well as system approaches like Trusted Execution Environments (TEEs), otherwise referred to as confidential computing (CC).

TEEs allow you to run your workload within a logically isolated, hardware-rooted execution environment that you can remotely verify. They achieve this by carving out a portion of system memory which is encrypted at run-time by a dedicated AES-128 encryption engine, and by adding new access control checks that mediate access to this memory and prevent all but your workload from reading from or writing to it.

You don’t have to take your cloud provider’s word that your workload is indeed running in this hardware-rooted TEE and not in an emulated one. Instead, you should leverage the remote attestation capabilities of TEEs to verify their security claims before provisioning your secrets into the TEE, and before accepting its results as trustworthy. At a minimum, remote attestation should provide you with a cryptographic proof that consists of:

  1. A measurement/hash that attests to the integrity of the software loaded into the TEE.
  2. A cryptographic signature over the hash, which attests to the fact that the cloud’s TEE hardware is authentic.
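The verifier’s side of this protocol can be sketched as two checks: does the measurement match the software I expected, and was the report signed by genuine hardware? The following illustrative Python sketch uses an HMAC as a stand-in for the vendor’s asymmetric signature (real TEEs sign reports with keys chained to the silicon vendor’s certificate authority, e.g. ECDSA in AMD SEV-SNP); all names and keys here are hypothetical:

```python
import hashlib
import hmac

def measure(image: bytes) -> bytes:
    """Launch measurement: a hash of the code and data loaded into the TEE."""
    return hashlib.sha384(image).digest()

def sign_report(measurement: bytes, vendor_key: bytes) -> dict:
    # Stand-in for the hardware signing the attestation report. Real TEEs
    # use an asymmetric key fused into (or derived inside) the chip; an
    # HMAC keeps this sketch dependency-free.
    sig = hmac.new(vendor_key, measurement, hashlib.sha384).digest()
    return {"measurement": measurement, "signature": sig}

def verify_report(report: dict, expected_measurement: bytes,
                  vendor_key: bytes) -> bool:
    # Check 1 - integrity: the measurement matches the software we
    # expected to be loaded into the TEE.
    if not hmac.compare_digest(report["measurement"], expected_measurement):
        return False
    # Check 2 - authenticity: the signature verifies against the vendor's
    # key, so the report came from genuine hardware, not an emulator.
    expected_sig = hmac.new(vendor_key, report["measurement"],
                            hashlib.sha384).digest()
    return hmac.compare_digest(report["signature"], expected_sig)

# Only release secrets to the workload if both checks pass.
report = sign_report(measure(b"my-workload-image"), b"vendor-root-key")
assert verify_report(report, measure(b"my-workload-image"), b"vendor-root-key")
assert not verify_report(report, measure(b"tampered-image"), b"vendor-root-key")
```

The key design point is that neither check relies on the cloud provider’s system software: the measurement is taken by hardware at launch, and the signature chains back to the silicon vendor.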

The zero in zero trust 

Image by Jeremy Perkins from Unsplash

Zero trust is such a great name to describe a security framework. It is succinct, powerful, firm and quite catchy. But as great as it is, it can also be confusing! After all, if it is zero trust, then why are we still asking you to trust TEEs, which do require you to trust the silicon provider’s hardware implementation and the entire remote attestation protocol? 

In fact, what zero trust means is “zero trust without verification”. This does not mean that you should, or that you will be able to, eliminate today or tomorrow all non-verifiable trust assumptions from your systems. Therefore, what we want to achieve with zero trust is to reduce our trusted attack surface as much as possible, and to anchor its verifiability in a root of trust that is secure and trustworthy. And what better root of trust could you aim for than an immutable piece of hardware, manufactured by the same vendor as your CPU?

This is exactly where confidential computing shines. Instead of trusting the entirety of the potentially malicious system software of your public cloud provider, you can now reduce your trust assumptions to the hardware implementation of the TEE of your choice, and to the remote attestation protocol. When your workload runs within a TEE and its host hypervisor turns malicious, this should have no impact on your code and data, because that trust assumption has been effectively removed.


One of the main drivers for the renewed attention to, and accelerated adoption of, zero trust is US Executive Order 14028, Improving the Nation’s Cybersecurity, issued by President Biden. This order laid out the principles of a zero trust architecture to be adopted by US government agencies. The executive order is itself complemented by the NIST SP 800-207 standard issued by the National Institute of Standards and Technology (NIST).

At Canonical, we believe that confidential computing and privacy-enhancing technologies will be the default way of doing computing in the future. This is why our confidential computing portfolio is free on all public clouds. Ubuntu’s advances in areas like confidential computing make it the clear choice for any organisation dealing with sensitive workloads and working towards adopting zero trust.

This is just the beginning of Canonical Ubuntu’s confidential computing journey. Stay tuned for many more exciting announcements about our expanding portfolio.

Additional resources

Learn more 

If you would like to know more about the Canonical approach to confidential computing, zero trust architecture, and security at large, contact us.
