Logs – good or bad for Confidential Computing?

I wrote a simple workload for testing. It didn’t work.

A few weeks ago, we had a conversation on one of the Enarx calls about logging. We’re at the stage now (excitingly!) where people can write applications and run them using Enarx, in an unprotected Keep, or in an SEV or SGX Keep. This is great, and almost as soon as we got to this stage, I wrote a simple workload to test it all.

It didn’t work.

This is to be expected. First, I’m really not that good a software engineer, but also, software is buggy, and this was our very first release. Everyone expects bugs, and it appeared that I’d found one. My problem was tracing where the issue lay, and whether it was in my code, or the Enarx code. I was able to rule out some possibilities by trying the application in an unprotected (“plain KVM”) Keep, and I also discovered that it ran under SEV, but not SGX. It seemed, then, that the problem might be SGX-specific. But what could I do to look any closer? Well, with very little logging available from within a Keep, there was little I could do.

Which is good. And bad.

It’s good because one of the major points about using Confidential Computing (Enarx is a Confidential Computing framework) is that you don’t want to leak information to untrusted parties. Since logs and error messages can leak lots and lots of information, you want to restrict what’s made available, and to whom. Safe operation dictates that you should make as little information available as you possibly can: preferably none.

It’s bad because there are times when (like me) you need to work out what’s gone wrong, and find out whether it’s in your code or the environment that you’re running your application in.
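To make that concrete, here’s a hedged sketch of the sort of trivial test workload I’m talking about (not my actual code): ordinary Rust, with the kind of progress output on stderr that you’d normally lean on when debugging. Inside a Keep, that output is exactly the sort of thing you can’t assume will ever reach you.

```rust
// A hypothetical test workload of the kind described above (not the actual one).
// The eprintln! calls are the debugging breadcrumbs you'd normally rely on;
// by design, a Keep may never let them reach the untrusted host.
fn main() {
    eprintln!("workload starting");

    let data = vec![3, 1, 4, 1, 5, 9, 2, 6];
    let total: u32 = data.iter().sum();
    eprintln!("summed {} values", data.len());

    println!("total = {}", total);
    eprintln!("workload finished");
}
```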

This is where the conversation about logging came in. We’d started talking about it before this issue came up, but this made me realise how important it was. I started writing a short blog post about it, and then stopped when I realised that there are some really complex issues to consider. That’s why this article doesn’t go into them in depth: you can find a much more detailed discussion over on the Enarx blog. But I’m not going to leave you hanging: below, you’ll find the final paragraph of the Enarx blog article. I hope it piques your interest enough to go and find out more.

In a standard cloud deployment, there is little incentive to consider strong security controls around logging and debugging, simply because the host has access not only to all communications to and from a hosted workload, but also to all the code and data associated with the workload at runtime. For Confidential Computing workloads, the situation is very different, and designers and architects of the TEE infrastructure (e.g. the Enarx projects) and even, to a lesser extent, of potential workloads themselves, need to consider very carefully the impact of the host gaining access to messages associated with the workload and the infrastructure components. It is, realistically, infeasible to restrict all communication to levels appropriate for deployment, so it is recommended that various profiles are created which can be applied to different stages of a deployment, and whose use is carefully monitored, logged (!) and controlled by process.


Header image by philm1310 from Pixabay.

Enarx first release

Write an application, compile it to WebAssembly, and then run it in one of three Keep types.

I was on holiday last week, and I took the opportunity not to write a blog post, but while I was sunning myself[1] at the seaside, the team did a brilliant thing: we have our first release of Enarx, and a new look for the website, to boot.

To see the new website, head over to https://enarx.dev. There, you’ll find newly updated information about the project, details of how to get involved, and – here’s the big news – instructions for how to download and use Enarx. If you’re a keen Rustacean, you can also go straight to crates.io (https://crates.io/crates/enarx) and start off there. Up until now, in order to run Enarx, you’ve had to do quite a lot of low-level work to get things running, run your own GitHub branches, understand how everything fits together and manage your own development environment. This has now all changed.

This first release, version 0.1.1, is codenamed Alamo, and provides an easy way into using Enarx. As always, it’s completely open source: you can look at every single line of our code. It doesn’t provide a full feature set, but what it does do is allow you, for the first time, to write an application, compile it to WebAssembly, and then run it in one of three Keep[2] types:

  1. KVM – this is basically a debugging Keep, in that it doesn’t provide any confidentiality or integrity protection, but it does allow you to get running and to try things even if you don’t have access to specialist hardware. A standard Linux machine should do you fine.
  2. SEV – this is a Keep using AMD’s SEV technology, specifically the newer version, SEV-SNP. This requires access to a machine which supports it[3].
  3. SGX – this is a Keep using Intel’s SGX technology. Again, this requires access to a machine which supports it[3].

The really important point here is that you’re running the same binary on each of these architectures. No recompilation for different architectures: just plain old WebAssembly[4].
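To give a (hedged) idea of what that looks like in practice, here’s a minimal sketch of a workload; the wasm32-wasi target is standard Rust, but do check the Enarx documentation for the exact build and run steps, which I’m not reproducing here.

```rust
// A minimal workload sketch: ordinary Rust, nothing Enarx-specific in it.
// Build it for the WebAssembly System Interface target, for example:
//   rustup target add wasm32-wasi
//   cargo build --release --target wasm32-wasi
// The resulting .wasm binary is the same artefact whether it ends up in a
// KVM, SEV or SGX Keep; see the Enarx docs for the exact invocation.
fn main() {
    println!("Hello from inside a Keep!");
}
```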

Current support

There’s a lot more work to do, but what do we support at the moment?

  • running WebAssembly
  • KVM, SEV and SGX Keeps (see above)
  • stdin and stdout from/to the host – this is temporary, as the host is untrusted in the Enarx model, but until we have networking support (see below), we wanted to provide a simple way to manage input and output from a Keep (there’s a quick sketch of what this looks like after this list).
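As a quick, hedged illustration of that stdin/stdout support, here’s roughly what such a workload might look like; again, it’s plain Rust compiled to WebAssembly, with nothing Enarx-specific in it.

```rust
// Sketch of a workload using the temporary stdin/stdout plumbing described in
// the list above: it reads lines from stdin and echoes them back, upper-cased,
// on stdout. Plain Rust, compiled to WebAssembly as before.
use std::io::{self, BufRead, Write};

fn main() -> io::Result<()> {
    let stdin = io::stdin();
    let mut stdout = io::stdout();

    for line in stdin.lock().lines() {
        let line = line?;
        writeln!(stdout, "{}", line.to_uppercase())?;
    }
    Ok(())
}
```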

There’s lots more to come – networking and attestation are both high on the list – but now anyone can start playing with Enarx. And, we hope, submitting enhancement and feature requests, not to mention filing bugs (we know there will be some!): to do so, hop over to https://github.com/enarx/enarx/issues.

To find out more, please head over to the website – there’s loads to see – or join us on our chat channel over at https://chat.enarx.dev and get involved.


1 – it’s the British seaside, in October, so “sunning” might be a little inaccurate.

2 – a Keep is what we call a TEE instance set up for you to run an application in.

3 – we have SEV and SGX machines available for people who contribute to the project – get in touch!

4 – WebAssembly is actually rather new, but “plain old” sounds better than “vanilla”. Not my favourite ice cream flavour[5].

5 – my favourite basic ice cream flavour is strawberry. Same for milkshakes.

Defining the Edge

How might we differentiate Edge computing from Cloud computing?

This is an edited excerpt from my forthcoming book on Trust in Computing and the Cloud for Wiley.

There’s been a lot of talk about the Edge, and almost as many definitions as there are articles out there. As usual on this blog, my main interest is around trust and security, so this brief look at the Edge concentrates on those aspects, and particularly on how we might differentiate Edge computing from Cloud computing.

The first difference we might identify is that Edge computing addresses use cases where consolidating compute resource in a centralised location (the typical Cloud computing case) is not necessarily appropriate, and pushes some or all of the computing power out to the edges of the network, where it can process data which is generated at the fringes, rather than having to transfer all the data over what may be low-bandwidth networks for processing. There is no generally accepted single industry definition of Edge computing, but examples might include:

  • placing video processing systems in or near a sports stadium for pre-processing to reduce the amount of raw footage that needs to be transmitted to a centralised data centre or studio
  • providing analysis and safety control systems on an ocean-based oil rig to reduce reliance and contention on an unreliable and potentially low-bandwidth network connection
  • creating an Internet of Things (IoT) gateway to process and analyse data from environmental sensor units (IoT devices)
  • mobile edge computing, or multi-access edge computing (both abbreviated to MEC), where telecommunications services such as location and augmented reality (AR) applications are run on cellular base stations, rather than in the telecommunication provider’s centralised network location.

Unlike Cloud computing, where the hosting model is generally that computing resources are consumed by tenants – customers of a public cloud, for instance – in the Edge case, the consumer of computing resources is often the owner of the systems providing them (though this is not always the case). Another difference is the size of the host providing the computing resources, which may range from very large to very small (in the case of an IoT gateway, for example). One important factor about most modern Edge computing environments is that they employ the same virtualisation and orchestration techniques as the cloud, allowing more flexibility in deployment and lifecycle management than bare-metal deployments.

A table comparing the various properties typically associated with Cloud and Edge computing shows us a number of differences.


Property                Public cloud computing   Private cloud computing   Edge computing
Location                Centralised              Centralised               Distributed
Hosting model           Tenants                  Owner                     Owner or tenant(s)
Application type        Generalised              Generalised               May be specialised
Host system size        Large                    Large                     Large to very small
Network bandwidth       High                     High                      Medium to low
Network availability    High                     High                      High to low
Host physical security  High                     High                      Low

Differences between Edge computing and public/private cloud computing

In the table, I’ve described two different types of cloud computing: public and private. The latter is sometimes characterised as on premises or on-prem computing, but the point here is that rather than deploying applications to dedicated hosts, workloads are deployed using the same virtualisation and orchestration techniques employed in the public cloud, the key difference being that the hosts and software are owned and managed by the owner of the applications. Sometimes these services are actually managed by an external party, but in this case there is a close commercial (and concomitant trust) relationship with this managed services provider, and, equally important, single tenancy is assured (assuming that security is maintained), as only applications from the owner of the service are hosted[1]. Many organisations will mix and match workloads across different cloud deployments, employing both public and private clouds (a deployment model known as hybrid cloud) and/or different public clouds (a deployment model known as multi-cloud). All these models – public cloud computing, private cloud computing and Edge computing – share a common approach: in most cases, workloads are not deployed to bare-metal servers, but to virtualisation platforms.

Deployment model differences

What is special about each of the models and their offerings if we are looking at trust and security?

One characteristic that the three approaches share is scale: they all assume multiple workloads per host – though the number of hosts and the actual size of the host systems is likely to be highest in the public cloud case, and lowest in the Edge case. It is this high workload density that makes public cloud computing in particular economically viable, and one of the reasons that it makes sense for organisations to deploy at least some of their workloads to public clouds, as Cloud Service Providers can employ economies of scale which allow them to schedule workloads onto their servers from multiple tenants, balancing load and bringing sets of servers in and out of commission (a computation- and time-costly exercise) infrequently. Owners and operators of private clouds, in contrast, need to ensure that they have sufficient resources available for possible maximum load at all times, and do not have the opportunities to balance loads from other tenants unless they open up their on premises deployment to other organisations, transforming themselves into Cloud Service Providers and putting them into direct competition with existing CSPs.

It is this push for high workload density which is one of the reasons for the need for strong workload-from-workload (type 1) isolation, as in order to be able to maintain high density, cloud owners need to be able to mix workloads from multiple tenants on the same host. Tenants are mutually untrusting; they are in fact likely to be completely unaware of each other, and, if the host is doing its job well, unaware of the presence of other workloads on the same host as them. More important than this property, however, is a strong assurance that their workloads will not be negatively impacted by workloads from other tenants. Although negative impact on computation can occur in other contexts – such as storage contention or network congestion – the focus here is mainly on the isolation that hosts can provide.

The likelihood of malicious workloads increases with the number of tenants, but reduces significantly when the tenant is the same as the host owner – the case for private cloud deployments and some Edge deployments. Thus, the need for host-from-workload (type 2) isolation is higher for the public cloud – though the possibility of poorly written or compromised workloads means that it should not be neglected for the other types of deployment.

One final difference between the models is that for both public and private cloud deployments the physical vulnerability of hosts is generally considered to be low[2], whereas the opportunities for unauthorised physical access to Edge computing hosts are considered to be much higher. You can read a little more about the importance of hardware as part of the Trusted Compute Base in my article Turtles – and chains of trust, and it is a fundamental principle of computer security that if an attacker has physical access to a system, then the system must be considered compromised, as it is, in almost all cases, possible to compromise the confidentiality, integrity and availability of workloads executing on it.

All of the above are good reasons to apply Confidential Computing techniques not only to cloud computing, but to Edge computing as well: that’s a topic for another article.


1 – this is something of a simplification, but is a useful generalisation.

2 – Though this assumes that people with authorised access to physical machines are not malicious, a proposition which cannot be guaranteed, but for which monitoring can at least be put in place.

Arm joins the Confidential Computing party

Arm’s announcement of Realms isn’t just about the Edge

The Confidential Computing Consortium is a Linux Foundation project designed to encourage open source projects around confidential computing. Arm has been part of the consortium for a while – in fact, the company is a Premier Member – but things got interesting on the 30th March, 2021. That’s when Arm announced their latest architecture: Armv9. Armv9 includes a new set of features, called Realms. There’s not a huge amount of information in the announcement about Realms, but Arm is clear that this is their big play into Confidential Computing:

To address the greatest technology challenge today – securing the world’s data – the Armv9 roadmap introduces the Arm Confidential Compute Architecture (CCA).

I happen to live about 30 minutes’ drive from the main Arm campus in Cambridge (UK, of course), and know a number of Arm folks professionally and socially – I think I may even have interviewed for a job with them many moons ago – but I don’t want to write a puff piece about the company or the technology[1]. What I’m interested in, instead, is the impact this announcement is likely to have on the Confidential Computing landscape.

Arm has had an element in their architecture for a while called TrustZone which provides a number of capabilities around security, but TrustZone isn’t a TEE (Trusted Execution Environment) on its own. A TEE is the generally accepted unit of confidential computing – the minimum building block on which you can build. It is arguably possible to construct TEEs using TrustZone, but that’s not what it’s designed for, and Arm’s decision to introduce Realms strongly suggests that they want to address this. This is borne out by the press release.

Why is all this important? I suspect that few of you have laptops or desktops that run on Arm (Raspberry Pi machines apart – see below). Few of the servers in the public cloud run Arm, and Realms are probably not aimed particularly at your mobile phone (for which TrustZone is a better fit). Why, then, is Arm bothering to make a fuss about this and to put such an enormous design effort into this new technology? There are two answers, it seems to me, one of which is probably pretty much a sure thing, and the other of which is more of a competitive gamble.

Answer 1 – the Edge

Despite recent intrusions by both AMD and Intel into the Edge space, the market is dominated by Arm-based[3] devices. And Edge security is huge, partly because we’re just seeing a large increase in the number of Edge devices, and partly because security is really hard at the Edge, where devices are more difficult to defend, both logically (they’re on remote networks, more vulnerable to malicious attack) and physically (many are out of the control of their owners, living on customer premises, up utility poles, on gas pipelines or in sports stadia, just to give a few examples). One of the problems that confidential computing aims to solve is the issue that, traditionally, once an attacker has physical access to a system, it should be considered compromised. TEEs allow some strong mitigations against that problem (at least against most attackers and timeframes), so making it easy to create and use TEEs on the Edge makes a lot of sense. With the addition of Realms to the Armv9 architecture, Arm is signalling its intent to address security on the Edge, and to defend and consolidate its position as leader in the market.

Answer 2 – the Cloud

I mentioned above that few public cloud hosts run Arm – this is true, but it’s likely to change. Arm would certainly like to see it change, and to see its chipsets move into the cloud mainstream. There has been a lot of work to improve support for server-scale Arm within Linux (in fact, open source support for Arm is generally excellent, not least because of the success of Arm-based chips in Raspberry Pi machines). Amazon Web Services (AWS) started offering Arm-based servers to customers as long ago as 2018. This is a market in which Arm would clearly love to be more active and carve out a larger share, and the growing importance of confidential computing in the cloud (both public and private) means that having a strong story in this space is important: Realms are Arm’s answer to this.

What next?

An announcement of an architecture is not the same as availability of hardware or software to run on it. We can expect it to be quite a few months before we see production chips running Armv9, though evaluation hardware should be available to trusted partners well before that, and software emulation for various components of the architecture will probably come even sooner. This means that those interested in working with Realms should be able to get things moving and have something ready pretty much by the time production hardware is available. We’ll need to see how easy they are to use, what performance impact they have, etc., but Arm do have an advantage here: as they are not the first into the confidential computing space, they’ve had the opportunity to watch Intel and AMD and see what has worked, and what hasn’t, both technically and in terms of what the market seems to like. I have high hopes for Arm Realms, and Enarx, the open source confidential computing project with which I’m closely involved, has plans to support them when we can: our architecture was designed with multi-platform support from the beginning.


1 – I should also note that I participated in a panel session on Confidential Computing which was put together by Arm for their “Arm Vision Day”, but I was in no way compensated for this[2].

2 – in fact, the still for the video is such a terrible picture of me that I think maybe I have grounds to sue for it to be taken down.

3 – Arm doesn’t manufacture chips itself: it licenses its designs to other companies, who create, manufacture and ship devices themselves.

A User Advisory Council for the CCC

The CCC is currently working to create a User Advisory Council (UAC)

Disclaimer: the views expressed in this article (and this blog) do not necessarily reflect those of any of the organisations or companies mentioned, including my employer (Red Hat) or the Confidential Computing Consortium.

The Confidential Computing Consortium was officially formed in October 2019, nearly a year and a half ago now. Despite not setting out to be a high membership organisation, nor going out of its way to recruit members, there are, at time of writing, 9 Premier members (of which Red Hat, my employer, is one), 22 General members, and 3 Associate members. You can find a list of each here, and a brief analysis I did of their business interests a few weeks ago in this article: Review of CCC members by business interests.

The CCC has two major committees (beyond the Governing Board):

  • Technical Advisory Council (TAC) – this coordinates all technical areas in which the CCC is involved. It recommends whether software projects should be accepted into the CCC (no hardware projects have been introduced so far, though it’s possible they might be), coordinates activities like special interest groups (we expect one on Attestation to start very soon), encourages work across projects, manages conversations with other technical bodies, and produces material such as the technical white paper listed here.
  • Outreach Committee – when we started the CCC, we decided against going with the title “Marketing Committee”, as we didn’t think it represented the work we hoped this committee would be doing, and this was a good decision. Though there are activities which might fall under this heading, the work of the Outreach Committee is much wider, including analyst and press relations, creation of other materials, community outreach, cross-project discussions, encouraging community discussions, event planning, webinar series and beyond.

These two committees have served the CCC well, but now that it’s fairly well established, and has a fairly broad industry membership of hardware manufacturers, CSPs, service providers and ISVs (see my other article), we decided that there was one set of interested parties who were not well represented, and whom the current organisational structure did not do enough to encourage to get involved: end-users.

It’s all very well the industry doing amazing innovation, coming up with astonishingly well-designed, easy-to-integrate, security-optimised hardware-software systems for confidential computing if nobody wants to use them. Don’t get me wrong: we know from many conversations with organisations across multiple sectors that users absolutely want to be able to make use of TEEs and confidential computing. That is not the same, however, as understanding their use cases in detail and ensuring that we – the members of the CCC, who are focussed mainly on creating services and software – actually provide what users need. These users are across many sectors – finance, government, healthcare, pharmaceutical, Edge, to name but a few – and their use cases and requirements are going to be different.

This is why the CCC is currently working to create a User Advisory Council (UAC). The details are being worked out at the moment, but the idea is that potential and existing users of confidential computing technologies should have a forum in which they can connect with the leaders in the space (which hopefully describes the CCC members), share their use cases, find out more about the projects which are part of the CCC, and even take a close look at those projects most relevant to them and their needs. This sort of engagement isn’t likely, on the whole, to require attendance at lots of meetings, or to have frequent input into the sorts of discussions which the TAC and the Outreach Committee typically consider, and the general feeling is that as we (the CCC) are aiming to serve these users, we shouldn’t be asking them to pay for the privilege (!) of talking to us. The intention, then, is to set a low bar for involvement in the UAC, with no membership fee required. That’s not to stop UAC members from joining the CCC as members if they wish – it would be a great outcome if some felt that they were so keen to become more involved that membership was appropriate – but there should be no expectation of that level of commitment.

I should be clear that the plans for the UAC are not complete yet, and some of the above may change. Nor should you consider this a formal announcement – I’m writing this article because I think it’s interesting, and because I believe that this is a vital next step in how those involved with confidential computing engage with the broader world, not because I represent the CCC in this context. But there’s always a danger that “cool” new technologies develop into something which fits only the fundamentally imaginary needs of technologists (and I’ll put my hand up and say that I’m one of those), rather than the actual needs of businesses and organisations which are struggling to operate around difficult issues in the real world. The User Advisory Council, if it works as we hope, should allow the techies (me, again) to hear from people and organisations about what they want our technologies to do, and to allow the CCC to steer its efforts in these directions.

Review of CCC members by business interests

Reflections on the different types of member in the Confidential Computing Consortium

This is a brief post looking at the Confidential Computing Consortium (the “CCC”), a Linux Foundation project “to accelerate the adoption of Trusted Execution Environment (TEE) technologies and standards.” First, a triple disclaimer: I’m a co-founder of the Enarx project (a member project of the CCC), an employee of Red Hat (which donated Enarx to the CCC and is a member) and an officer (treasurer) and voting member of two parts of the CCC (the Governing Board and Technical Advisory Council), and this article represents my personal views, not (necessarily) the views of any of the august organisations with which I am associated.

The CCC was founded in October 2019, and is made up of three different membership types: Premier, General and Associate members. Premier members have a representative who gets a vote on various committees, and General members are represented by elected representatives on the Governing Board (with a representative elected for every 10 General Members). Premier members pay a higher subscription than General Members. Associate membership is for government entities, academic and nonprofit organisations. All members are welcome to all meetings, with the exception of “closed” meetings (which are few and far between, and are intended to deal with issues such as hiring or disciplinary matters). At the time of writing, there are 9 Premier members, 20 General members and 3 Associate members. There’s work underway to create an “End-User Council” to allow interested organisations to discuss their requirements, use cases, etc. with members and influence the work of the consortium “from the outside” to some degree.

The rules of the consortium allow only one organisation from a “group of related companies” to appoint a representative (where they are Premier), with similar controls for General members. This means, for instance, that although Red Hat and IBM are both active within the Consortium, only one (Red Hat) has a representative on the Governing Board. If Nvidia’s acquisition of Arm goes ahead, the CCC will need to decide how to manage similar issues there.

What I really wanted to do in this article, however, was to reflect on the different types of member, not by membership type, but by their business(es). I think it’s interesting to look at various types of business, and to reflect on why the CCC and confidential computing in general are likely to be of interest to them. You’ll notice a number of companies – most notably Huawei and IBM (who I’ve added in addition to Red Hat, as they represent a wide range of business interests between them) – appearing in several of the categories. Another couple of disclaimers: I may be misrepresenting both the businesses of the companies represented and also their interests! This is particularly likely for some of the smaller start-up members with whom I’m less familiar. These are my thoughts, and I apologise for errors: please feel free to contact me with suggestions for corrections.

Cloud Service Providers (CSPs)

Cloud Service Providers are presented with two great opportunities by confidential computing: the ability to provide their customers with greater isolation from other customers’ workloads, and the chance to offer deployments in which customers no longer need to trust the CSP itself. The first is the easiest to implement, and the one on which the CSPs have so far concentrated, but I hope we’re going to see more of the latter in the future, as regulators (and customers’ CFOs/auditors) realise that deploying to the cloud does not require a complex trust relationship with the operators of the hosts running the workload.

  • Google
  • IBM
  • Microsoft

The most notable missing player in this list is Amazon, whose AWS offering would seem to make them a good fit for the CCC, but who have not joined up to this point.

Silicon vendors

Silicon vendors produce their own chips (or license their designs to other vendors). They are the ones who are providing the hardware technology to allow TEE-based confidential computing. All of the major silicon vendors are represented in the CCC, though not all of them have existing products in the market. It would be great to see more open source hardware (RISC-V is not represented in the CCC) to increase the trust that users can have in confidential computing, but the move to open source hardware has been slow so far.

  • AMD
  • Arm
  • Huawei
  • IBM
  • Intel
  • Nvidia

Hardware manufacturers

Hardware manufacturers are those who will be putting TEE-enabled silicon in their equipment and providing services based on it. It is not surprising that we have no “commodity” hardware manufacturers represented, but interesting that there are a number of companies who create dedicated or specialist hardware.

  • Cisco
  • Google
  • Huawei
  • IBM
  • Nvidia
  • Western Digital
  • Xilinx

Service companies

In this category I have added companies which provide services of various kinds, rather than acting as ISVs or pure CSPs. We can expect a growing number of service companies to realise the potential of confidential computing as a way of differentiating their products and providing services with interesting new trust models for their customers.

  • Accenture
  • Ant Group
  • Bytedance
  • Facebook
  • Google
  • Huawei
  • IBM
  • Microsoft
  • Red Hat
  • Swisscom

ISVs

There are a number of ISVs (Independent Software Vendors) who are members of the CCC, and this heading is in some ways a “catch-all” for members who don’t necessarily fit cleanly under any of the other headings. There is a distinct subset, however, of blockchain-related companies which I’ve separated out below.

What is particularly interesting about the ISVs represented here is that although the CCC is dedicated to providing open source access to TEE-based confidential computing, most of the companies in this category do not provide open source code, or if they do, do so only for a small part of the offering. Membership of the CCC does not in any way require organisations to open source all of their related software, however, so their membership is not problematic, at least from the point of view of the charter. As a dedicated open source fan, however, I’d love to see more commitment to open source from all members.

  • Anjuna
  • Anqlave
  • Bytedance
  • Cosmian
  • Cysec
  • Decentriq
  • Edgeless Systems
  • Fortanix
  • Google
  • Huawei
  • IBM
  • r3
  • Red Hat
  • VMware

Blockchain

As permissioned blockchains gain traction for enterprise use, it is becoming clear that there are some aspects and components of their operation which require strong security and isolation to allow trust to be built into the operating model. Confidential computing provides ways to provide many of the capabilities required in these contexts, which is why it is unsurprising to see so many blockchain-related companies represented in the CCC.

  • Appliedblockchain
  • Google
  • IBM
  • iExec
  • Microsoft
  • Phala network
  • r3

Enarx end-to-end complete!

We now have a fully working end-to-end proof of concept, with no smoke and mirrors.

I’ve written lots about the Enarx project, a completely open source project around deploying workloads to Trusted Execution Environments, and you can find a few of the articles here:

I have some very exciting news to announce.

A team effort

Yesterday was a huge day for the Enarx project, in that we now have a fully working end-to-end proof of concept, with no smoke and mirrors (we don’t believe in those). The engineers on the team have been working really hard on getting all of the low-level pieces in place, with support from other members on CI/CD, infrastructure, documentation, community outreach and beyond. I won’t mention everyone, as I don’t want to miss anyone out, and I also don’t have their permission, but it’s been fantastic working with everyone. We’ve been edging closer and closer to having all the main pieces ready to go, and just before Christmas/New Year we got attested AMD SEV Keeps working, with the ability to access information from that attestation within the Keep. This allowed us to move to the final step, which is creating an end-to-end client-server architecture. It is this that we got running yesterday.

I happened to be the lucky person to be able to complete this part of the puzzle, building on work by the rest of the team. I don’t have the low-level expertise that many of the team have, but my background is in client-server and peer-to-peer distributed systems, and after I started learning Rust around March 2020, I decided to see if I could do something useful for the project code base: this is my contribution to the engineering. To give you an idea of what we’ve implemented, let’s look at a simple architectural diagram of an Enarx deployment.

Simple Enarx architectural diagram

Much of the work that’s been going on has been concentrated in the Enarx runtime component, getting WebAssembly working in SGX and SEV Trusted Execution Environments, working on syscall implementations and attestation. There’s also been quite a lot of work on glue – how we transfer information around the system in a standards-compliant way (we’re using CBOR encoding throughout). The pieces that I’ve been putting together have been the Enarx client agent, the Enarx host agent (or Enarx Keep Manager) and two pieces which aren’t visible in this diagram (but are in the more detailed one below): the Enarx Keep Loader and Enarx Wasm Loader (“App loader” in the detailed view).
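To illustrate the kind of standards-compliant encoding I mean, here’s a minimal, hedged sketch of a CBOR round trip using the serde (with the derive feature) and serde_cbor crates; the message type is invented purely for illustration and is not the actual Enarx wire format, which (as noted further down) still needs to be fully defined.

```rust
// Illustration only: encoding and decoding a made-up message as CBOR using the
// serde_cbor crate. The real Enarx message types are not shown here.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct KeepRequest {
    backend: String, // e.g. "sev", "sgx" or "kvm"
    workload_len: u64,
}

fn main() -> Result<(), serde_cbor::Error> {
    let request = KeepRequest {
        backend: "sev".to_string(),
        workload_len: 4096,
    };

    // Serialise to CBOR bytes (what would travel over the wire)...
    let bytes = serde_cbor::to_vec(&request)?;
    // ...and decode them again on the receiving side.
    let decoded: KeepRequest = serde_cbor::from_slice(&bytes)?;

    assert_eq!(request, decoded);
    println!("round-tripped {} CBOR bytes", bytes.len());
    Ok(())
}
```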

Detailed Enarx architectural diagram

The components

Let’s look at what these components do, and then explain exactly what we’ve achieved. The name in bold refers to the diagram, the name in italics relates to the Rust crate (and, where already merged, the github repository) associated with the component.

  • Enarx Client Agent (client) – responsible for talking to the enarx-keepmgr and requesting a Keep. It checks that the Keep is correctly set up and attested and then sends the workload (a WebAssembly package) to the enarx-wasmldr component, using HTTPS with a one-use certificate derived from the attestation process.
  • Enarx Keep Manager (enarx-keepmgr) – creates enarx-keepldr components at the request of the client, proxying communications to them from the client as required (for certain attestation flows, for instance). It is untrusted by the client.
  • Enarx Keep Loader (enarx-keepldr) – there is an enarx-keepldr per Keep, and it performs the loading of components into the Trusted Execution Environment itself. It sits outside the TEE instance, and is therefore untrusted by the client.
  • Enarx App Loader (enarx-wasmldr) – the enarx-wasmldr component resides within the TEE instance, and therefore has confidentiality and integrity protection from the rest of the host. It receives the WebAssembly (Wasm) workload from the client component and may access secret information provisioned into the Keep during the attestation process.

Here’s the post I made to the Enarx chat #development channel yesterday to announce what we managed to achieve:

  1. client -> keepmgr: “create sev keep”
  2. keepmgr launches sev keep via systemd
  3. client -> keepmgr: “perform attestation, include this private key” (note – private key is encrypted from keepmgr)
  4. keepmgr -> keepldr: “attestation + private key”
  5. keepldr creates keep, passes private key to it
  6. wasmldr creates certificate from private key
  7. wasmldr waits for workload
  8. client sends workload over HTTPS to wasmldr
  9. wasmldr accepts workload over HTTPS
  10. wasmldr executes workload

WE HAVE A FULLY WORKING END-TO-END DEMO! Thank you everyone

What does this mean? Well, everything works! The client requests a Keep backed by an AMD SEV instance, the Keep is created and attested, it listens for an incoming connection over HTTPS, and the client sends the workload, which then executes. The workload was written in Rust and compiled to WebAssembly – it’s a real application, in other words, and not a hand-crafted piece of WebAssembly for the purposes of testing.
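To give a flavour of the client’s final step (step 8 in the list above), here’s a purely illustrative sketch of sending a compiled Wasm workload over HTTPS. The endpoint path, port and use of the reqwest crate (with its blocking feature) are my assumptions for the sake of the example, and the certificate handling is a placeholder where real code would verify the certificate derived from attestation; this is emphatically not the Enarx wire protocol, which (see below) is still being fully defined.

```rust
// Purely illustrative: a client sending a compiled Wasm workload to a listening
// wasmldr over HTTPS. The endpoint path, port and reqwest dependency are
// assumptions for this sketch; they are not the real Enarx wire protocol.
use std::fs;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Read the WebAssembly workload built earlier (path is an example).
    let workload = fs::read("app.wasm")?;

    // In the real flow, the client first checks the Keep's attestation and the
    // one-use certificate derived from it; this placeholder skips that check.
    let client = reqwest::blocking::Client::builder()
        .danger_accept_invalid_certs(true) // placeholder only: real code pins the attested certificate
        .build()?;

    let response = client
        .post("https://keep.example:3030/workload")
        .body(workload)
        .send()?;

    println!("wasmldr responded with status: {}", response.status());
    Ok(())
}
```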

What’s next?

There’s lots left to do, including:

  • merging all of the code into the main repositories (I was working in a separate set to avoid undue impact on other efforts)
  • tidying it to make it more presentable (both what the demo shows and the quality of the code!)
  • adding SGX support – we hope that we’re closing in on this very soon
  • making the various components production-ready (the keepmgr, for instance, doesn’t manage multiple enarx-keepldr components very well yet)
  • defining the wire protocol fully (somewhere other than in my head)
  • documenting everything!

But most of that’s easy: it’s just engineering. 🙂

We’d love you to become involved. If you’re interested, read some of my articles, visit the project home page and repositories, hang out on our chat server or watch some of our videos on YouTube. We really welcome involvement – and not just from engineers, either. Come and have a play!

Vint Cerf’s “game changer”

I’m really proud to be involved with a movement which I believe can change the way we do computing.

Today’s article is a little self-indulgent, but please bear with me, as I’m a little excited. Vint Cerf is one of a small handful of people who have a claim to being called “greats”. He co-developed the TCP/IP protocol with Bob Kahn in 1974, and has been working on technology – much of it pretty cool technology – ever since. I turned 50 recently, and if I’d achieved half of what he had by his 50th birthday, I’d be feeling more accomplished than I do right now! As well as his work in technology, he’s also an advocate for accessibility, which is something that is also dear to my heart.

What does this have to do with Alice, Eve and Bob – a security blog? Well, last week, Dark Reading[1], an influential technology security site, published a commentary piece by Cerf under its “Cloud” heading: Why Confidential Computing is a Game Changer. I could hardly have been more pleased: this is an area which I’m very excited about, and which the Enarx project, of which I’m co-founder, addresses. The Enarx project is part of the Confidential Computing Consortium (mentioned in Cerf’s article), a Linux Foundation project to increase use of confidential computing through open source projects.

So, what is confidential computing? Cerf describes it as “a breakthrough technology that encrypts data in use, while it is being processed”. He goes on to give a good description of the technology, noting that Google (his employer[2]) has recently released a product using confidential computing. Google is actually far from the first cloud service provider to do this, but it’s only fair that Cerf should mention his employer’s services from time to time: I’m going to forgive him, given how enthusiastic he is about the technology more generally. He describes it as a transformational technology which “will and should be a part of every enterprise cloud deployment”.

I agree, and it’s really exciting to see such a luminary embracing the possibilities that confidential computing presents. For those readers who aren’t aware of what it is, confidential computing allows you to keep data and processes secret in the cloud, on private servers, on the Edge, IoT, etc. – even from administrators, hypervisors and the host kernel. It uses TEEs – Trusted Execution Environments – to protect the confidentiality and integrity of the workloads (applications, programs) that you want to run. If you’re not sure you trust your cloud provider, if your regulatory body won’t let you run your applications in certain places, if you want to deploy to machines which are vulnerable to attack – physical or logical – then TEEs and confidential computing can help.

You can find more information in some of my articles:

You can always visit the Confidential Computing Consortium[3] or the Enarx project (links above): all of our code and documentation is open, and we’d love to see you. I’m really proud to be involved with – in fact, deeply embedded in – a movement which I believe can change the way we do computing. And really excited that someone like Vint Cerf agrees.


1 – I have no affiliation with Dark Reading, though I do recommend it to readers of this blog.

2 – neither do I have any affiliation with Google or Alphabet, its parent!

3 – I am, however, a member of both the Governing Board and the Technical Advisory Council of the Confidential Computing Consortium. I’m also the Treasurer.