There really is no excuse for not protecting your (and your customers’!) data in use.
I came across this article recently: 81% of companies had a cloud security incident in the last year. I suspect it's factually incorrect, and that the title should be "81% of companies are aware that they had a cloud security incident last year". Honestly, the real figure could well be much higher than that. When I work on IT security audits, I sometimes see statements like "[Company X] experienced no data or privacy breaches over the past 12 months", and I always send them back, insisting on a change of wording to reflect the fact that all that is known is that the organisation is not aware of any data or privacy breaches over the past 12 months.
The other statistic that really struck me in the article, however, is that the top reported type of incident was "Security incidents during runtime", with 34% of respondents reporting it. That's over a third of respondents!
And near the top of concerns was “Privacy/data access issues, such as those from GDPR”, at 31%.
The problem with both of these types of issue is that there's almost nothing you can do to protect yourself from them in the cloud. Cloud computing (and virtualisation in general) is pretty good at protecting you from other workloads (type 1 isolation) and at protecting the host from your workloads (type 2 isolation), but offers nothing to protect your workload from the host (type 3 isolation). If you're interested in a short introduction to why, please have a look at my article Isolationism – not a 4 letter word (in the cloud).
The good news is that there are solutions out there that do allow you to run sensitive applications (and applications with sensitive data) in the cloud: that’s what Confidential Computing is all about. Confidential Computing protects your data not just at rest (when it’s in storage) and in transit (on the network), but actually at runtime: “data in use”. And it seems that industry is beginning to realise that it’s time to be sitting up and paying attention: the problem is that not enough people know about Confidential Computing.
So – now’s the time to become the expert on Confidential Computing for your organisation, and show your manager, your C-levels and your board how to avoid becoming part of the 81% (or the larger, unknowing percentage). The industry body is the Confidential Computing Consortium, and they have lots of information, but if you want to dive straight in, I encourage you to visit Profian and download one or more of our white papers (there’s one about runtime isolation there, as well). There really is no excuse for not protecting your (and your customers’!) data in use.
Without attestation, you’re not really doing Confidential Computing.
This post – or the title of this post – has been sitting in my "draft" pile for about two years. I don't know how this happened, because I've been writing about Confidential Computing for three years or so by now, and attestation is arguably the most important part of the entire subject.
I know I’ve mentioned attestation in passing multiple times, but this article is devoted entirely to it. If you’re interested in Confidential Computing, then you must be interested in attestation, because, without it, you’re not doing Confidential Computing right. Specifically, without attestation, any assurances you may think you have about Confidential Computing are worthless.
Let’s remind ourselves what Confidential Computing is: it’s the protection of applications and data in use by a hardware-based TEE (Trusted Execution Environment). The key benefit that this brings you is isolation from the host running your workload: you can run applications in the public cloud, on premises or in the Edge, and have cryptographic assurances that no one with access to the host system – hypervisor access, kernel access, admin access, even standard hardware access[1] – can tamper with your application. This, specifically, is Type 3 – workload from host – isolation (see my article Isolationism – not a 4 letter word (in the cloud) for more details), and is provided by TEEs such as AMD’s SEV and Intel’s SGX – though not, crucially, by AWS Nitro, which does not provide Confidential Computing capabilities as defined by the Confidential Computing Consortium.
Without attestation, you’re not really doing Confidential Computing. Let’s consider a scenario where you want to deploy an application using Confidential Computing on a public cloud. You ask your CSP (Cloud Service Provider) to deploy it. The CSP does so. Great – your application is now protected: or is it? Well, you have no way to tell, because your CSP could just have taken your application, deployed it in the normal way, and told you that it had deployed it using a TEE. What you need is to take advantage of a capability that TEE chips provide called an attestation measurement to check that a TEE instance was actually launched and that your application was deployed into it. You (or your application) asks the TEE-enabled chip to perform a cryptographically signed measurement of the TEE set-up (which is basically a set of encrypted memory pages). It does so, and that measurement can then be checked to ensure that it has been correctly set up: there’s a way to judge whether you’re actually doing Confidential Computing.
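To make the measurement-and-check step a little more concrete, here's a toy sketch in Python. Real TEEs (SEV, SGX and friends) sign measurements with asymmetric keys rooted in the silicon vendor; this sketch uses a symmetric HMAC purely for brevity, and every name in it is my invention rather than any real TEE's API:

```python
import hashlib
import hmac

# Hypothetical key standing in for the TEE vendor's signing key.
# Real chips use asymmetric signatures whose root of trust is the vendor.
VENDOR_KEY = b"demo-vendor-key"

def measure(pages: list[bytes]) -> bytes:
    """Hash the initial memory pages of the (toy) TEE instance."""
    h = hashlib.sha256()
    for page in pages:
        h.update(page)
    return h.digest()

def sign_measurement(measurement: bytes) -> bytes:
    """The TEE-enabled chip signs the measurement it took."""
    return hmac.new(VENDOR_KEY, measurement, hashlib.sha256).digest()

def attest(pages: list[bytes], signature: bytes,
           expected_measurement: bytes) -> bool:
    """Check that the signed measurement matches what we deployed."""
    measurement = measure(pages)
    sig_ok = hmac.compare_digest(sign_measurement(measurement), signature)
    return sig_ok and measurement == expected_measurement

pages = [b"workload code", b"workload data"]
expected = measure(pages)          # what we know we asked to be deployed
sig = sign_measurement(expected)   # what the chip reports

assert attest(pages, sig, expected)              # genuine TEE launch
assert not attest([b"tampered"], sig, expected)  # host swapped the workload
```

The point of the toy is the shape of the protocol: the chip measures, the chip signs, and someone who knows what *should* have been deployed checks both the signature and the measurement.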
So, who does that checking? Doing a proper cryptographic check of an attestation measurement – the attestation itself – is surprisingly[2] tricky, and, unless you're an expert in TEEs and Confidential Computing (and one of the points of Confidential Computing is to make it easy for anyone to use these capabilities), you probably don't want to be doing it yourself.
Who can perform the validation? Well, one option might be for the validation to be done on the host machine that’s running the TEE. But wait a moment – that makes no sense! You’re trying to isolate yourself from that machine and anyone who has access to it: that’s the whole point of Confidential Computing. You need a remote attestation service – a service running on a different machine which can be trusted to validate the attestation and either halt execution if it fails, or let you know so that you can halt execution.
So who can run that remote attestation service? The obvious – obvious but very, very wrong – answer is the CSP who's running your workload. Obvious because, well, they presumably run Confidential Computing workloads for lots of people, but wrong because your CSP is part of your threat model. What does this mean? Well, we talked before about "trying to isolate yourself from that machine and anyone who has access to it", and the "anyone who has access to it" is exactly your CSP. If the reason to be using Confidential Computing is to be able to put workloads in the public cloud even when you can't fully trust your CSP (for regulatory reasons, for auditing reasons, or just because you need higher levels of assurance than existing cloud computing can provide), then you can't trust your CSP to provide the remote attestation service. To be entirely clear: if you allow your CSP to do your attestation, you lose the benefits of Confidential Computing.
Attestation – remote attestation – is vital, but if we can’t trust the host or the CSP to do it, what are your options? Well, either you need to do the attestation yourself (which I already noted is surprisingly difficult), or you’re going to need to find a third party to do that. I’ll be discussing the options for this in a future article – keep an eye out.
1 – the TEEs used for Confidential Computing don’t aim to protect against long-term access to the CPU by a skilled malicious actor – but this isn’t a use case that’s relevant to most users.
2 – actually, not that surprising if you’ve done much work with cryptography, cryptographic libraries, system-level programming or interacted with any silicon vendor documentation.
It’s being a big month for Enarx. Last week, I announced that we’d released Enarx 0.3.0 (Chittorgarh Fort), with some big back-end changes, and some new functionality as well. This week, the news is that we’ve hit a couple of milestones around activity and involvement in the project.
1500 commits
The first milestone is 1500 commits to the core project repository. When you use a git-based system, each time you make a change to a file or set of files (including deleting old ones, creating new ones, and editing or removing sections), you create a new commit. Each commit has a rather long identifier, and its position in the project is also recorded, along with the name provided by the committer and any comments. Commit 1500 to the enarx/enarx repository was from Nathaniel McCallum, and entitled feat(wasmldr): add Platform API. He committed it on Saturday, 2022-03-19, and its commit identifier is 8ec77de0104c0f33e7dd735c245f3b4aa91bb4d2.
I should point out that this isn't the 1500th commit to the Enarx project, but the 1500th commit to the enarx/enarx repository on GitHub. This is the core repository for the Enarx project, but there are quite a few others, some of which also have lots of commits. As an example, the enarx/enarx-shim-sgx repository, which provides some SGX-specific capabilities within Enarx, had 968 commits at time of writing.
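Incidentally, those long identifiers aren't arbitrary: they're SHA-1 hashes computed over the object's content (plus a small header). Here's a minimal illustration of git's hashing scheme applied to a blob (file) object, the simplest object type – commits use the same scheme over their metadata and tree:

```python
import hashlib

def git_blob_id(data: bytes) -> str:
    # Git object IDs are SHA-1 over a header plus the content:
    #   "blob <size-in-bytes>\0<content>"
    header = f"blob {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

# The empty blob has a famous, fixed identifier in every git repository:
assert git_blob_id(b"") == "e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"

# And a file containing just "hello\n" always hashes to the same ID too:
assert git_blob_id(b"hello\n") == "ce013625030ba8dba906f756967f9e9ca394464a"
```

You can check these against `git hash-object` on any machine: because the ID is derived purely from the content, every clone of a repository agrees on every object's identifier.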
500 Github stars
The second milestone is 500 GitHub stars. Stars are a measure of how popular a repository or project is, and you can think of them as GitHub's version of a "like" on social media: people who are interested in a project can easily click a button on the repository page to "star" it (and they can "unstar" it, too, if they change their mind). We only tend to count stars on the main enarx/enarx repository, as that's the core one for the Enarx project. The 500th star was given to the project by a GitHub user going by the username shebuel-oss, a self-described "Open Source contributor, Advocate, and Community builder": we're really pleased to have attracted their interest!
There's a handy little website called GitHub Star History which allows you to track the addition (or removal!) of stars to a project, and to compare projects against each other. You can check the status of Enarx whenever you're reading this by following this link, but for the purposes of this article, the important question is: how did we get to 500? Here's a graph:
Enarx GitHub star history to 500 stars
You’ll see a nice steep line towards the end which corresponds to Nick Vidal’s influence as community manager, actively working to encourage more interest and involvement, and contributions to the Enarx project.
Why do these numbers matter?
Objectively, they don't, if I'm honest: we could equally easily have chosen a nice power of two (like 512) for the number of stars, or the year that Michelangelo started work on the statue of David (1501) for the number of commits. Most humans, however, like round decimal numbers, and the fact that we hit 1500 commits and 500 stars within a couple of days of each other provides a nice visual symmetry.
Subjectively, there’s the fact that we get to track the growth in interest (and the acceleration in growth) and contribution via these two measurements and their historical figures. The Enarx project is doing very well by these criteria, and that means that we’re beginning to get more visibility of the project. This is good for the project, it’s good for Profian (the company Nathaniel and I founded last year to take Enarx to market) and I believe that it’s good for Confidential Computing and open source more generally.
But don’t take my word for it: come and find out about the project and get involved.
I’ll probably have a glass or two of something tonight.
It’s official: my book is now published and available in the US! What’s more, my author copies have arrived, so I’ve actually got physical copies that I can hold in my hand.
You can buy the book at Wiley's site here, and pre-order it with Amazon (the US site lists it as "currently unavailable", and the UK site lists it as available from 22nd Feb, 2022, though hopefully it'll be a little earlier than that). Other bookstores are also stocking it.
I’m over the moon: it’s been a long slog, and I’d like to acknowledge not only those I mentioned in last week’s post (Who gets acknowledged?), but everybody else. Particularly, at this point, everyone at Wiley, calling out specifically Jim Minatel, my commissioning editor. I’m currently basking in the glow of something completed before getting back to my actual job, as CEO of Profian. I’ll probably have a glass or two of something tonight. In the meantime, here’s a quote from Bruce Schneier to get you thinking about reading the book.
Trust is a complex and important concept in network security. Bursell neatly unpacks it in this detailed and readable book.
Bruce Schneier, author of Liars and Outliers: Enabling the Trust that Society Needs to Thrive
At least you know what to buy your techy friends for Christmas!
I wrote a simple workload for testing. It didn’t work.
A few weeks ago, we had a conversation on one of the Enarx calls about logging. We’re at the stage now (excitingly!) where people can write applications and run them using Enarx, in an unprotected Keep, or in an SEV or SGX Keep. This is great, and almost as soon as we got to this stage, I wrote a simple workload to test it all.
It didn’t work.
This is to be expected. First, I’m really not that good a software engineer, but also, software is buggy, and this was our very first release. Everyone expects bugs, and it appeared that I’d found one. My problem was tracing where the issue lay, and whether it was in my code, or the Enarx code. I was able to rule out some possibilities by trying the application in an unprotected (“plain KVM”) Keep, and I also discovered that it ran under SEV, but not SGX. It seemed, then, that the problem might be SGX-specific. But what could I do to look any closer? Well, with very little logging available from within a Keep, there was little I could do.
Which is good. And bad.
It’s good because one of the major points about using Confidential Computing (Enarx is a Confidential Computing framework) is that you don’t want to leak information to untrusted parties. Since logs and error messages can leak lots and lots of information, you want to restrict what’s made available, and to whom. Safe operation dictates that you should make as little information available as you possibly can: preferably none.
It’s bad because there are times when (like me) you need to work out what’s gone wrong, and find out whether it’s in your code or the environment that you’re running your application in.
This is where the conversation about logging came in. We’d started talking about it before this issue came up, but this made me realise how important it was. I started writing a short blog post about it, and then stopped when I realised that there are some really complex issues to consider. That’s why this article doesn’t go into them in depth: you can find a much more detailed discussion over on the Enarx blog. But I’m not going to leave you hanging: below, you’ll find the final paragraph of the Enarx blog article. I hope it piques your interest enough to go and find out more.
In a standard cloud deployment, there is little incentive to consider strong security controls around logging and debugging, simply because the host has access not only to all communications to and from a hosted workload, but also to all the code and data associated with the workload at runtime. For Confidential Computing workloads, the situation is very different, and designers and architects of the TEE infrastructure (e.g. the Enarx projects) and even, to a lesser extent, of potential workloads themselves, need to consider very carefully the impact of host gaining access to messages associated with the workload and the infrastructure components. It is, realistically, infeasible to restrict all communication to levels appropriate for deployment, so it is recommended that various profiles are created which can be applied to different stages of a deployment, and whose use is carefully monitored, logged (!) and controlled by process.
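To make that closing recommendation about profiles a little more concrete, here's a toy sketch of what stage-based log-exposure profiles might look like. The stage names and level choices are entirely my invention for illustration, not Enarx's actual design:

```python
import logging

# Hypothetical log-exposure profiles: how much a Keep is allowed to
# emit towards the (untrusted) host at each deployment stage.
PROFILES = {
    "development": logging.DEBUG,     # everything, for local debugging
    "staging":     logging.WARNING,   # problems only
    "production":  logging.CRITICAL,  # effectively nothing leaves the Keep
}

def make_logger(stage: str) -> logging.Logger:
    """Build a logger whose verbosity is fixed by the deployment stage."""
    logger = logging.getLogger(f"keep.{stage}")
    logger.setLevel(PROFILES[stage])
    return logger

dev = make_logger("development")
prod = make_logger("production")

assert dev.isEnabledFor(logging.DEBUG)       # debug detail available locally
assert not prod.isEnabledFor(logging.ERROR)  # error detail suppressed in prod
```

The design point is that the choice of what leaks is made once, per profile, and audited – rather than scattered across every log statement in the workload.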
Write an application, compile it to WebAssembly, and then run it in one of three Keep types.
I was on holiday last week, and I took the opportunity not to write a blog post, but while I was sunning myself[1] at the seaside, the team did a brilliant thing: we have our first release of Enarx, and a new look for the website, to boot.
To see the new website, head over to https://enarx.dev. There, you'll find updated information about the project, details of how to get involved, and – here's the big news – instructions for how to download and use Enarx. If you're a keen Rustacean, you can also go straight to crates.io (https://crates.io/crates/enarx) and start off there. Up until now, in order to run Enarx, you've had to do quite a lot of low-level work to get things running: run your own GitHub branches, understand how everything fits together and manage your own development environment. This has now all changed.
This first release, version 0.1.1, is codenamed Alamo, and provides an easy way in to using Enarx. As always, it’s completely open source: you can look at every single line of our code. It doesn’t provide a full feature set, but what it does do is allow you, for the first time, to write an application, compile it to WebAssembly, and then run it in one of three Keep[2] types:
KVM – this is basically a debugging Keep, in that it doesn’t provide any confidentiality or integrity protection, but it does allow you to get running and to try things even if you don’t have access to specialist hardware. A standard Linux machine should do you fine.
SEV – this is a Keep using AMD’s SEV technology, specifically the newer version, SEV-SNP. This requires access to a machine which supports it[3].
SGX – this is a Keep using Intel’s SGX technology. Again, this requires access to a machine which supports it[3].
The really important point here is that you’re running the same binary on each of these architectures. No recompilation for different architectures: just plain old WebAssembly[4].
Current support
There’s a lot more work to do, but what do we support at the moment?
running WebAssembly
KVM, SEV and SGX Keeps (see above)
stdin and stdout from/to the host – this is temporary, as the host is untrusted in the Enarx model, but until we have networking support (see below), we wanted to provide a simple way to manage input and output from a Keep.
There’s lots more to come – networking and attestation are both high on the list – but now anyone can start playing with Enarx. And, we hope, submitting enhancement and feature requests, not to mention filing bugs (we know there will be some!): to do so, hop over to https://github.com/enarx/enarx/issues.
To find out more, please head over to the website – there's loads to see – or join us on our chat channel at https://chat.enarx.dev and get involved.
1 – it’s the British seaside, in October, so “sunning” might be a little inaccurate.
2 – a Keep is what we call a TEE instance set up for you to run an application in.
3 – we have AMD and SGX machines available for people who contribute to the project – get in touch!
4 – WebAssembly is actually rather new, but “plain old” sounds better than “vanilla”. Not my favourite ice cream flavour[5].
5 – my favourite basic ice cream flavour is strawberry. Same for milkshakes.
How might we differentiate Edge computing from Cloud computing?
This is an edited excerpt from my forthcoming book on Trust in Computing and the Cloud for Wiley.
There’s been a lot of talk about the Edge, and almost as many definitions as there are articles out there. As usual on this blog, my main interest is around trust and security, so this brief look at the Edge concentrates on those aspects, and particularly on how we might differentiate Edge computing from Cloud computing.
The first difference we might identify is that Edge computing addresses use cases where consolidating compute resource in a centralised location (the typical Cloud computing case) is not necessarily appropriate, and pushes some or all of the computing power out to the edges of the network, where it can process data which is generated at the fringes, rather than having to transfer all the data over what may be low-bandwidth networks for processing. There is no generally accepted single industry definition of Edge computing, but examples might include:
placing video processing systems in or near a sports stadium for pre-processing to reduce the amount of raw footage that needs to be transmitted to a centralised data centre or studio
providing analysis and safety control systems on an ocean-based oil rig to reduce reliance and contention on an unreliable and potentially low-bandwidth network connection
creating an Internet of Things (IoT) gateway to process and analyse data from environmental sensor units (IoT devices)
mobile edge computing, or multi-access edge computing (both abbreviated to MEC), where telecommunications services such as location and augmented reality (AR) applications are run on cellular base stations, rather than in the telecommunication provider’s centralised network location.
Unlike Cloud computing, where the hosting model is generally that computing resources are consumed by tenants – customers of a public cloud, for instance – in the Edge case, the consumer of computing resources is often the owner of the systems providing them (though this is not always the case). Another difference is the size of the host providing the computing resources, which may range from very large to very small (in the case of an IoT gateway, for example). One important factor about most modern Edge computing environments is that they employ the same virtualisation and orchestration techniques as the cloud, allowing more flexibility in deployment and lifecycle management than bare-metal deployments.
A table comparing the various properties typically associated with Cloud and Edge computing shows us a number of differences.
|                        | Public cloud computing | Private cloud computing | Edge computing       |
|------------------------|------------------------|-------------------------|----------------------|
| Location               | Centralised            | Centralised             | Distributed          |
| Hosting model          | Tenants                | Owner                   | Owner or tenant(s)   |
| Application type       | Generalised            | Generalised             | May be specialised   |
| Host system size       | Large                  | Large                   | Large to very small  |
| Network bandwidth      | High                   | High                    | Medium to low        |
| Network availability   | High                   | High                    | High to low          |
| Host physical security | High                   | High                    | Low                  |
Differences between Edge computing and public/private cloud computing
In the table, I’ve described two different types of cloud computing: public and private. The latter is sometimes characterised as on premises or on-prem computing, but the point here is that rather than deploying applications to dedicated hosts, workloads are deployed using the same virtualisation and orchestration techniques employed in the public cloud, the key difference being that the hosts and software are owned and managed by the owner of the applications. Sometimes these services are actually managed by an external party, but in this case there is a close commercial (and concomitant trust) relationship to this managed services provider, and, equally important, single tenancy is assured (assuming that security is maintained), as only applications from the owner of the service are hosted[1]. Many organisations will mix and match workloads to different cloud deployments, employing both public and private clouds (a deployment model known as hybrid cloud) and/or different public clouds (a deployment model known as multi-cloud). All these models – public computing, private computing and Edge computing – share an approach in common: in most cases, workloads are not deployed to bare-metal servers, but to virtualisation platforms.
Deployment model differences
What is special about each of the models and their offerings if we are looking at trust and security?
One characteristic that the three approaches share is scale: they all assume multiple workloads per host – though the number of hosts and the actual size of the host systems are likely to be highest in the public cloud case, and lowest in the Edge case. It is this high workload density that makes public cloud computing in particular economically viable, and one of the reasons that it makes sense for organisations to deploy at least some of their workloads to public clouds, as Cloud Service Providers can employ economies of scale which allow them to schedule workloads onto their servers from multiple tenants, balancing load and bringing sets of servers in and out of commission (a computation- and time-costly exercise) infrequently. Owners and operators of private clouds, in contrast, need to ensure that they have sufficient resources available for possible maximum load at all times, and do not have the opportunities to balance loads from other tenants unless they open up their on premises deployment to other organisations, transforming themselves into Cloud Service Providers and putting them into direct competition with existing CSPs.
It is this push for high workload density which is one of the reasons for the need for strong workload-from-workload (type 1) isolation: in order to maintain high density, cloud owners need to be able to mix workloads from multiple tenants on the same host. Tenants are mutually untrusting; they are in fact likely to be completely unaware of each other, and, if the host is doing its job well, unaware of the presence of other workloads on the same host as them. More important than this property, however, is a strong assurance that their workloads will not be negatively impacted by workloads from other tenants. Although negative impact can occur in contexts other than computation – such as storage contention or network congestion – the focus here is mainly on the isolation that hosts can provide.
The likelihood of malicious workloads increases with the number of tenants, but reduces significantly when the tenant is the same as the host owner – the case for private cloud deployments and some Edge deployments. Thus, the need for host-from-workload (type 2) isolation is higher for the public cloud – though the possibility of poorly written or compromised workloads means that it should not be neglected for the other types of deployment.
One final difference between the models is that for both public and private cloud deployments the physical vulnerability of hosts is generally considered to be low[2], whereas the opportunities for unauthorised physical access to Edge computing hosts are considered to be much higher. You can read a little more about the importance of hardware as part of the Trusted Compute Base in my article Turtles – and chains of trust, and it is a fundamental principle of computer security that if an attacker has physical access to a system, then the system must be considered compromised, as it is, in almost all cases, possible to compromise the confidentiality, integrity and availability of workloads executing on it.
All of the above are good reasons to apply Confidential Computing techniques not only to cloud computing, but to Edge computing as well: that’s a topic for another article.
1 – this is something of a simplification, but is a useful generalisation.
2 – Though this assumes that people with authorised access to physical machines are not malicious, a proposition which cannot be guaranteed, but for which monitoring can at least be put in place.
Arm’s announcement of Realms isn’t just about the Edge
The Confidential Computing Consortium is a Linux Foundation project designed to encourage open source projects around confidential computing. Arm has been part of the consortium for a while – in fact, the company is a Premier Member – but things got interesting on the 30th March, 2021. That's when Arm announced their latest architecture: Armv9. Armv9 includes a new set of features, called Realms. There's not a huge amount of information in the announcement about Realms, but Arm is clear that this is their big play into Confidential Computing:
To address the greatest technology challenge today – securing the world’s data – the Armv9 roadmap introduces the Arm Confidential Compute Architecture (CCA).
I happen to live about 30 minutes’ drive from the main Arm campus in Cambridge (UK, of course), and know a number of Arm folks professionally and socially – I think I may even have interviewed for a job with them many moons ago – but I don’t want to write a puff piece about the company or the technology[1]. What I’m interested in, instead, is the impact this announcement is likely to have on the Confidential Computing landscape.
Arm has had an element in their architecture for a while called TrustZone which provides a number of capabilities around security, but TrustZone isn’t a TEE (Trusted Execution Environment) on its own. A TEE is the generally accepted unit of confidential computing – the minimum building block on which you can build. It is arguably possible to construct TEEs using TrustZone, but that’s not what it’s designed for, and Arm’s decision to introduce Realms strongly suggests that they want to address this. This is borne out by the press release.
Why is all this important? I suspect that few of you have laptops or desktops that run on Arm (Raspberry Pi machines apart – see below). Few of the servers in the public cloud run Arm, and Realms are probably not aimed particularly at your mobile phone (for which TrustZone is a better fit). Why, then, is Arm bothering to make a fuss about this and to put such an enormous design effort into this new technology? There are two answers, it seems to me, one of which is probably pretty much a sure thing, and the other of which is more of a competitive gamble.
Answer 1 – the Edge
Despite recent intrusions by both AMD and Intel into the Edge space, the market is dominated by Arm-based[3] devices. And Edge security is huge, partly because we're just seeing a large increase in the number of Edge devices, and partly because security is really hard at the Edge, where devices are more difficult to defend, both logically (they're on remote networks, more vulnerable to malicious attack) and physically (many are out of the control of their owners, living on customer premises, up utility poles, on gas pipelines or in sports stadia, just to give a few examples). One of the problems that confidential computing aims to solve is the issue that, traditionally, once an attacker has physical access to a system, it should be considered compromised. TEEs allow some strong mitigations against that problem (at least against most attackers and timeframes), so making it easy to create and use TEEs on the Edge makes a lot of sense. With the addition of Realms to the Armv9 architecture, Arm is signalling its intent to address security on the Edge, and to defend and consolidate its position as leader in the market.
Answer 2 – the Cloud
I mentioned above that few public cloud hosts run Arm – this is true, but it's likely to change. Arm would certainly like to see it change, and to see its chipsets move into the cloud mainstream. There has been a lot of work to improve support for server-scale Arm within Linux (in fact, open source support for Arm is generally excellent, not least because of the success of Arm-based chips in Raspberry Pi machines). Amazon Web Services (AWS) started offering Arm-based servers to customers as long ago as 2018. This is a market in which Arm would clearly love to be more active and carve out a larger share, and the growing importance of confidential computing in the cloud (both public and private) means that having a strong story in this space was important: Realms are Arm's answer to this.
What next?
An announcement of an architecture is not the same as availability of hardware or software to run on it. We can expect it to be quite a few months before we see production chips running Armv9, though evaluation hardware should be available to trusted partners well before that, and software emulation for various components of the architecture will probably come even sooner. This means that those interested in working with Realms should be able to get things moving and have something ready pretty much by the time of availability of production hardware. We'll need to see how easy they are to use, what performance impact they have, etc., but Arm do have an advantage here: as they are not the first into the confidential computing space, they've had the opportunity to watch Intel and AMD and see what has worked, and what hasn't, both technically and in terms of what the market seems to like. I have high hopes for Arm Realms, and Enarx, the open source confidential computing project with which I'm closely involved, has plans to support them when we can: our architecture was designed with multi-platform support from the beginning.
1 – I should also note that I participated in a panel session on Confidential Computing which was put together by Arm for their “Arm Vision Day”, but I was in no way compensated for this[2].
2 – in fact, the still for the video is such a terrible picture of me that I think maybe I have grounds to sue for it to be taken down.
3 – Arm doesn’t manufacture chips itself: it licenses its designs to other companies, who create, manufacture and ship devices themselves.
The CCC is currently working to create a User Advisory Council (UAC)
Disclaimer: the views expressed in this article (and this blog) do not necessarily reflect those of any of the organisations or companies mentioned, including my employer (Red Hat) or the Confidential Computing Consortium.
The Confidential Computing Consortium was officially formed in October 2019, nearly a year and a half ago now. Despite not setting out to be a high membership organisation, nor going out of its way to recruit members, there are, at time of writing, 9 Premier members (of which Red Hat, my employer, is one), 22 General members, and 3 Associate members. You can find a list of each here, and a brief analysis I did of their business interests a few weeks ago in this article: Review of CCC members by business interests.
The CCC has two major committees (beyond the Governing Board):
Technical Advisory Council (TAC) – this coordinates all technical areas in which the CCC is involved. It recommends whether software projects should be accepted into the CCC (no hardware projects have been introduced so far, though it’s possible they might be), coordinates activities like special interest groups (we expect one on Attestation to start very soon), encourages work across projects, manages conversations with other technical bodies, and produces material such as the technical white paper listed here.
Outreach Committee – when we started the CCC, we decided against the title “Marketing Committee”, as we didn’t think it represented the work we hoped this committee would be doing, and that has proved a good decision. Though there are activities which might fall under that heading, the work of the Outreach Committee is much wider, including analyst and press relations, creation of other materials, community outreach, cross-project discussions, encouraging community discussions, event planning, webinar series and beyond.
These two committees have served the CCC well, but now that it’s fairly well established, and has a fairly broad industry membership of hardware manufacturers, CSPs, service providers and ISVs (see my other article), we decided that there was one set of interested parties who were not well-represented, and whom the current organisational structure did not do enough to encourage to get involved: end-users.
It’s all very well the industry doing amazing innovation, coming up with astonishingly well-designed, easy to integrate, security-optimised hardware-software systems for confidential computing if nobody wants to use them. Don’t get me wrong: we know from many conversations with organisations across multiple sectors that users absolutely want to be able to make use of TEEs and confidential computing. That is not the same, however, as understanding their use cases in detail and ensuring that we – the members of the CCC, who are focussed mainly on creating services and software – actually provide what users need. These users are across many sectors – finance, government, healthcare, pharmaceutical, Edge, to name but a few – and their use cases and requirements are going to be different.
This is why the CCC is currently working to create a User Advisory Council (UAC). The details are being worked out at the moment, but the idea is that potential and existing users of confidential computing technologies should have a forum in which they can connect with the leaders in the space (which hopefully describes the CCC members), share their use cases, find out more about the projects which are part of the CCC, and even take a close look at those projects most relevant to them and their needs. This sort of engagement isn’t likely, on the whole, to require attendance at lots of meetings, or to have frequent input into the sorts of discussions which the TAC and the Outreach Committee typically consider, and the general feeling is that as we (the CCC) are aiming to serve these users, we shouldn’t be asking them to pay for the privilege (!) of talking to us. The intention, then, is to set a low bar for involvement in the UAC, with no membership fee required. That’s not to stop UAC members from joining the CCC as members if they wish – it would be a great outcome if some felt so keen to become more involved that membership was appropriate – but there should be no expectation of that level of commitment.
I should be clear that the plans for the UAC are not complete yet, and some of the above may change. Nor should you consider this a formal announcement – I’m writing this article because I think it’s interesting, and because I believe that this is a vital next step in how those involved with confidential computing engage with the broader world, not because I represent the CCC in this context. But there’s always a danger that “cool” new technologies develop into something which fits only the fundamentally imaginary needs of technologists (and I’ll put my hand up and say that I’m one of those), rather than the actual needs of businesses and organisations which are struggling to operate around difficult issues in the real world. The User Advisory Council, if it works as we hope, should allow the techies (me, again) to hear from people and organisations about what they want our technologies to do, and to allow the CCC to steer its efforts in those directions.
Reflections on the different types of member in the Confidential Computing Consortium
This is a brief post looking at the Confidential Computing Consortium (the “CCC”), a Linux Foundation project “to accelerate the adoption of Trusted Execution Environment (TEE) technologies and standards.” First, a triple disclaimer: I’m a co-founder of the Enarx project (a member project of the CCC), an employee of Red Hat (which donated Enarx to the CCC and is a member) and an officer (treasurer) and voting member of two parts of the CCC (the Governing Board and Technical Advisory Council), and this article represents my personal views, not (necessarily) the views of any of the august organisations with which I am associated.
The CCC was founded in October 2019, and is made up of three different membership types: Premier, General and Associate members. Premier members have a representative who gets a vote on various committees, and General members are represented by elected representatives on the Governing Board (with a representative elected for every 10 General members). Premier members pay a higher subscription than General members. Associate membership is for government entities, academic and nonprofit organisations. All members are welcome to all meetings, with the exception of “closed” meetings (which are few and far between, and are intended to deal with issues such as hiring or disciplinary matters). At the time of writing, there are 9 Premier members, 20 General members and 3 Associate members. There’s work underway to create an “End-User Council” to allow interested organisations to discuss their requirements, use cases, etc. with members and influence the work of the consortium “from the outside” to some degree.
The rules of the consortium allow only one organisation from a “group of related companies” to appoint a representative (where they are Premier), with similar controls for General members. This means, for instance, that although Red Hat and IBM are both active within the Consortium, only one (Red Hat) has a representative on the Governing Board. If Nvidia’s acquisition of Arm goes ahead, the CCC will need to decide how to manage similar issues there.
What I really wanted to do in this article, however, was to reflect on the different types of member, not by membership type, but by their business(es). I think it’s interesting to look at various types of business, and to reflect on why the CCC and confidential computing in general are likely to be of interest to them. You’ll notice a number of companies – most notably Huawei and IBM (whom I’ve included in addition to Red Hat, as they represent a wide range of business interests between them) – appearing in several of the categories. Another couple of disclaimers: I may be misrepresenting both the businesses of the companies represented and also their interests! This is particularly likely for some of the smaller start-up members with whom I’m less familiar. These are my thoughts, and I apologise for errors: please feel free to contact me with suggestions for corrections.
Cloud Service Providers (CSPs)
Cloud Service Providers are presented with two great opportunities by confidential computing: the ability to provide their customers with greater isolation from other customers’ workloads, and the chance for customers to avoid having to trust the CSP itself. The first is the easier to implement, and the one on which the CSPs have so far concentrated, but I hope we’re going to see more of the latter in the future, as regulators (and customers’ CFOs/auditors) realise that deploying to the cloud does not require a complex trust relationship with the operators of the hosts running the workload.
Google
IBM
Microsoft
The most notable missing player in this list is Amazon, whose AWS offering would seem to make them a good fit for the CCC, but who have not joined up to this point.
Silicon vendors
Silicon vendors produce their own chips (or license their designs to other vendors). They are the ones who are providing the hardware technology to allow TEE-based confidential computing. All of the major silicon vendors are represented in the CCC, though not all of them have existing products in the market. It would be great to see more open source hardware (RISC-V is not represented in the CCC) to increase the trust that users can have in confidential computing, but the move to open source hardware has been slow so far.
AMD
Arm
Huawei
IBM
Intel
Nvidia
Hardware manufacturers
Hardware manufacturers are those who will be putting TEE-enabled silicon in their equipment and providing services based on it. It is not surprising that we have no “commodity” hardware manufacturers represented, but interesting that there are a number of companies who create dedicated or specialist hardware.
Cisco
Google
Huawei
IBM
Nvidia
Western Digital
Xilinx
Service companies
In this category I have added companies which provide services of various kinds, rather than acting as ISVs or pure CSPs. We can expect a growing number of service companies to realise the potential of confidential computing as a way of differentiating their products and providing services with interesting new trust models for their customers.
Accenture
Ant Group
Bytedance
Facebook
Google
Huawei
IBM
Microsoft
Red Hat
Swisscom
ISVs
There are a number of ISVs (Independent Software Vendors) who are members of the CCC, and this heading is in some ways a “catch-all” for members who don’t necessarily fit cleanly under any of the other headings. There is a distinct subset, however, of blockchain-related companies which I’ve separated out below.
What is particularly interesting about the ISVs represented here is that although the CCC is dedicated to providing open source access to TEE-based confidential computing, most of the companies in this category do not provide open source code, or if they do, do so only for a small part of the offering. Membership of the CCC does not in any way require organisations to open source all of their related software, however, so their membership is not problematic, at least from the point of view of the charter. As a dedicated open source fan, however, I’d love to see more commitment to open source from all members.
Anjuna
Anqlave
Bytedance
Cosmian
Cysec
Decentriq
Edgeless Systems
Fortanix
Google
Huawei
IBM
r3
Red Hat
VMware
Blockchain
As permissioned blockchains gain traction for enterprise use, it is becoming clear that there are some aspects and components of their operation which require strong security and isolation to allow trust to be built into the operating model. Confidential computing provides ways to provide many of the capabilities required in these contexts, which is why it is unsurprising to see so many blockchain-related companies represented in the CCC.