SF in June: Confidential Computing Summit

A good selection of business-led and technical sessions

It should be around 70°F/21°C in San Francisco on 29th June, which is a pretty good reason to pop over to attend the Confidential Computing Summit, which is happening on that day. One of the signs that a technology is getting some real attention in the industry is when conferences start popping up, and Confidential Computing is now at the stage where it has two: OC3 (mainly virtual, Europe-based) and CCS.

I have to admit to having skin in this game – as Executive Director of the Confidential Computing Consortium, I’ll be presenting a brief keynote – but given the number of excellent speakers who’ll be there, it’s very much worth considering if you have an interest in Confidential Computing (and you should). I’d planned to paste the agenda into this article, but it’s just too large. Here is a list of just some of the sessions and panels, instead.

  • State of the Confidential Computing Market – Raluca Ada Popa, Assoc. Prof CS, UC Berkeley and co-founder Opaque Systems
  • Confidential Computing and Zero Trust – Vikas Bhatia, Head of Product, Microsoft Azure Confidential Computing
  • Overcoming Barriers to Confidential Computing as a Universal Platform – John Manferdelli, Office of the CTO, VMware
  • Confidential Computing as a Cornerstone for Cybersecurity Strategies and Compliance – Xochitl Monteon, Chief Privacy Officer and VP Cybersecurity Risk & Governance, Intel
  • Citadel: Side-Channel-Resistant Enclaves on an Open-Source, Speculative, Out-of-Order Processor – Srini Devadas, Webster Professor of EECS, MIT
  • Collaborative Confidential Computing: FHE vs sMPC vs Confidential Computing. Security Models and Real World Use Cases – Bruno Grieder, CTO & Co-Founder, Cosmian
  • Application of Confidential Computing to Anti-Money Laundering in Canada – Vishal Gossain, Practice Leader, Risk Analytics and Strategy, Ernst and Young

As you can tell, there’s a great selection of business-led and technical sessions, so whether you want to delve into the technology or understand the impact of Confidential Computing on business, please come along: I look forward to seeing you there.

Functional vs non-functional requirements: a dangerous dichotomy?

Non-functional requirements are at least as important as functional requirements.

Imagine you’re thinking about an application or a system: how would you describe it? How would you explain what you want it to do? Most of us, I think, would start with statements like:

  • it should read JPEGs and output SVG images;
  • it should buy the stocks I tell it to when they reach a particular price;
  • it should take a customer’s credit history and decide whether to approve a loan application;
  • it should ensure that the car maintains a specific speed unless the driver presses the brakes or disengages the system;
  • it should level me up when I hit 10,000 gold bars mined;
  • it should take a prompt and output several hundred words about a security topic that sound as if I wrote them;
  • it should strike out any text which would give away its non-human status.

These are all requirements on the system. Specifically, they are functional requirements: they are things that an application or a system should do based on the state of inputs and outputs to which it is exposed.
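Functional requirements of this kind can often be captured directly as automated tests. Here’s a minimal Python sketch of the first requirement above; the `convert_jpeg_to_svg` function is purely hypothetical, standing in for whatever the real system provides:

```python
# Hypothetical sketch: a functional requirement expressed as a unit test.
# convert_jpeg_to_svg() is a stand-in so the sketch is self-contained.

def convert_jpeg_to_svg(jpeg_bytes: bytes) -> str:
    """Stand-in implementation: validate the input looks like a JPEG
    and return a trivial SVG document."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # JPEG SOI marker
        raise ValueError("not a JPEG")
    return "<svg xmlns='http://www.w3.org/2000/svg'></svg>"

def test_reads_jpeg_and_outputs_svg():
    # A JPEG stream starts with the Start-Of-Image marker 0xFFD8.
    fake_jpeg = b"\xff\xd8" + b"\x00" * 16
    svg = convert_jpeg_to_svg(fake_jpeg)
    # The functional requirement: JPEG in, SVG out.
    assert svg.startswith("<svg")

test_reads_jpeg_and_outputs_svg()
```

The point is that a functional requirement maps naturally onto a pass/fail check over inputs and outputs, which is part of why such requirements tend to dominate development effort.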

Now let’s look at another set of requirements: requirements which are important to the correct operation of the system, but which aren’t core to what it does. These are non-functional requirements, in that they don’t describe the functions it performs, but its broader operation. Here are some examples:

  • it should not leak cryptographic keys if someone performs a side-channel attack on it;
  • it should be able to be deployed on premises or in the Cloud;
  • it should be able to manage 30,000 transactions a second;
  • it should not stop a user’s phone from receiving a phone call when it is running;
  • it should not fail catastrophically, but degrade its performance gracefully under high load;
  • it should not be allowed to empty the bank accounts of its human masters;
  • it should recover from unexpected failures, such as its operator switching off the power in a panic on seeing unexpected financial transactions.

You may notice that some of the non-functional requirements are expressed as negatives – “it should not…” – which is fairly common. Functional requirements are sometimes expressed in the negative too, but it is rarer.
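Non-functional requirements can sometimes be checked automatically as well, though they typically need more infrastructure than a unit test. Here’s a hedged Python sketch of checking the throughput requirement from the list above; `process_transaction` is a hypothetical stand-in, and a real check would run against the deployed system under realistic load:

```python
import time

# Hypothetical sketch: measuring against a throughput requirement.
# process_transaction() stands in for the real system under test.

REQUIRED_TPS = 30_000  # the non-functional requirement from the list above

def process_transaction(tx_id: int) -> bool:
    return tx_id >= 0  # trivial stand-in work

def measured_throughput(n: int = 100_000) -> float:
    """Run n transactions and return transactions per second."""
    start = time.perf_counter()
    for i in range(n):
        process_transaction(i)
    elapsed = time.perf_counter() - start
    return n / elapsed

if __name__ == "__main__":
    tps = measured_throughput()
    # Report rather than assert: throughput is machine-dependent.
    print(f"measured: {tps:,.0f} tx/s (required: {REQUIRED_TPS:,})")
```

Note that the result depends on the hardware and environment, which is exactly why non-functional requirements need input from people who understand the deployment context, not just the application logic.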

So now we come to the important question, and the core of this article: which of the above lists is more important? Is it the list with the functional requirements or the non-functional requirements? I think that there’s a fair case to be made for the latter: the non-functional requirements. Even if that’s not always the case, my (far too) many years of requirements gathering (and requirements meeting) lead me to note that while there may be a core set of functional requirements that typically are very important, it’s very easy for a design, architecture or specification to collect more and more functional requirements which pale into insignificance against some of the non-functional requirements that accrue.

But the problem is that non-functional requirements are almost always second-class citizens when compared to functional requirements on an application or system. They are often collected after the functional requirements – if at all – and are often the first to be discarded when things get complicated. They also typically require input from people with skill sets outside the context of the application or system: for instance, it may not be obvious to the designer of a back-end banking application that they need to consider data-in-use protection (such as Confidential Computing) when they are collecting requirements for an application which will initially be run in an internal data centre.

Agile and DevOps methodologies can be relevant in these contexts, as well. On the one hand, involving the people who will be operating an application or system is likely to focus minds on some of the non-functional requirements which might impact them if those requirements are not considered early enough. On the other hand, however, a model of development where the key performance indicator is having something that runs means that the functional requirements are foregrounded (“yes, you can log in – though we’re not actually checking passwords yet…”).

What’s the take-away from this article? It’s to consider non-functional requirements as at least as important as functional requirements. Alongside that, it’s vital to be aware that the people in charge of designing, architecting and specifying an application or system may not be best placed to collect all of the broader requirements that are, in fact, core to its safe and continuing (business critical) operation.

E2E encryption in danger (again) – sign the petition

You should sign the petition on the official UK government site.

The UK Government is at it again: trying to require technical changes to products and protocols which will severely impact (read “destroy”) the security of users. This time, it’s the Online Safety Bill and, like pretty much all similar attempts, it requires the implementation of backdoors to things like messaging services. Lots of people have stood up and made the point that this is counter-productive and dangerous.

This isn’t the first time I’ve written about this (The Backdoor Fallacy: explaining it slowly for governments and Helping our governments – differently, for a start), and I fear that it won’t be the last. The problem is that none of these technical approaches work: none of them can work. Privacy and backdoors (and this is a backdoor, make no mistake about it) are fundamentally incompatible. And everyone with an ounce (or gram, I suppose) of technical expertise agrees: we know (and we can prove) that what’s being suggested won’t and can’t work.

We gain enormous benefits from technology, and with those benefits come risks, and some downsides which malicious actors exploit. The problem is that you can’t have one without the other. If you try to fix (and this approach won’t fix – it might reduce, but not fix) the problem that malicious actors and criminals use online messaging services, you open up a huge number of opportunities for other actors, including malicious governments (now or in the future), to do very, very bad things, whilst significantly reducing the benefits to private individuals, businesses, human rights organisations, charities and the rest. This is not a zero-sum game.

What can you do? You can read up about the problem, you can add your voice to the technical discussions and/or if you’re a British citizen or resident, you should sign the petition on the official UK government site. This needs 10,000 signatures, so please get signing!

Executive Director, Confidential Computing Consortium

I look forward to furthering the aims of the CCC

I’m very pleased to announce that I’ve just started a new role as part-time Executive Director for the Confidential Computing Consortium, which is a project of the Linux Foundation. I have been involved from the very earliest days of the consortium, which was founded in 2019, and I’m delighted to be joining as an officer of the project as we move into the next phase of our growth. I look forward to working with existing and future members and helping to expand industry adoption of Confidential Computing.

For those of you who’ve been following what I’ve been up to over the years, this may not be a huge surprise, at least in terms of my involvement, which started right at the beginning of the CCC. In fact, Enarx, the open source project of which I was co-founder, was the very first project to be accepted into the CCC, and Red Hat, where I was Chief Security Architect (in the Office of the CTO) at the time, was one of the founding members. Since then, I’ve served on the Governing Board (twice: once as Red Hat’s representative as a Premier member, and once as an elected representative of the General members), acted as Treasurer, been Co-chair of the Attestation SIG and been extremely active in the Technical Advisory Council. I was instrumental in initiating the creation of the first analyst report into Confidential Computing and helped in the creation of the two technical and one general white papers published by the CCC. I’ve enjoyed working with the brilliant industry leaders who more than ably lead the CCC, many of whom I now count not only as valued colleagues but also as friends.

The position – Executive Director – however, is news. For a while, the CCC has been looking to extend its activities beyond what the current officers of the consortium can manage, given that they have full-time jobs outside the CCC. The consortium has grown to over 40 members now – 8 Premier, 35 General and 8 Associate – and with that comes both the opportunity to engage in a whole new set of activities and the responsibility to listen to the various voices of the membership and to ensure that the consortium’s activities are aligned with the expectations and ambitions of the members. Beyond that, as Confidential Computing becomes more pervasive, it’s time to ensure that (as far as possible) there’s a consistent, crisp and compelling set of messages going out to potential adopters of the technology, as well as academics and regulators.

I plan to be working on the issues above. I’ve only just started and there’s a lot to be done – and the role is only part-time! – but I look forward to furthering the aims of the CCC:

The Confidential Computing Consortium is a community focused on projects securing data in use and accelerating the adoption of confidential computing through open collaboration.

The core mission of the CCC

Wish me luck, or, even better, get in touch and get involved yourself.

SVB & finance stress: remember the (other) type of security

Now is the time for more vigilance, not less.

This is the week that the start-up world has been reeling after the collapse of Silicon Valley Bank. There have been lots of articles about it and about how the larger ecosystem (lawyers, VCs, other banks and beyond) have rallied to support those affected, written (on the whole, at least!) by people much better qualified than me to do so. But there’s another point that could get lost in the noise, and that’s the opportunity presented to bad actors by all of this.

When humans are tired, stressed, confused or have too many inputs, they (we – I’ve not succumbed to the lure of ChatGPT yet…) are prone to make poor decisions, or to take less time over decisions – even important decisions – than they ought to. Sadly, bad people know this, and that means that they will be going out of their way to exploit us (I’m very aware that I’m as vulnerable to this type of exploitation as anybody else). The problem is that when banks start looking dodgy, or when money is at stake, people need to do risky things. And these are often risky things which involve an awful lot of money, things like:

  • withdrawing large amounts of money
  • moving large amounts of money between accounts
  • opening new accounts
  • changing administrative access permissions and privileges on accounts
  • adding new people as administrators on accounts.

All of the above are actions (or involve actions) which we would normally be very careful about, and take very seriously (though that doesn’t stop us making the occasional mistake). The problem (and the opportunity for bad actors) is that when we’re stressed or in a hurry (as we’re likely to be in the current situation), we may pay less attention to important steps than we might otherwise. We might not enable multi-factor authentication, we might not check website certificates, we might click through on seemingly helpful offers in emails, or we might not check the email addresses to which we’re sending invitations. All of these could lead bad folks to get at our money. They know this, and they’ll be going out of their way to find ways to encourage us to make mistakes, be less careful or hurry our way through vital processes.

My plea, then, is simple: don’t drop your guard because of the stress of the current situation. Now is the time for more vigilance, not less.

What’s your website’s D&D alignment?

This is the impression – often the first impression – that users of the website get of the organisation.

Cookies and Dungeons and Dragons – a hypothesis

Recent privacy legislation has forced organisations to adopt ways of allowing their users to register their cookie preferences – ways which can expose the underlying motivations of the organisation. I have a related theory, and it goes like this: these different registration options allow you to map organisations to one of the 9 Dungeons and Dragons character alignments.

This may seem like a bit of a leap, but stick with me. First, here’s a little bit of background for those readers who’ve never dabbled in (or got addicted to – there’s little room between the two extremes) the world of Dungeons and Dragons (or “D&D”). There are two axes used to describe a character: Lawful-Neutral-Chaotic and Good-Neutral-Evil. Each character has a position across each of these axes, so you could have someone who’s Lawful-Good, one who’s Chaotic-Neutral or one who’s Neutral-Good, for instance (a “Neutral-Neutral” character is described as “True Neutral”). A Lawful character follows the law – or a strong moral code – whereas a Chaotic one just can’t be bothered, and a Neutral character tends to follow the rules when it suits them. The Good-Neutral-Evil axis should be pretty clear.
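For those who like their taxonomies executable, the nine alignments are just the cross-product of the two axes, with one special case. A throwaway Python sketch (the naming is the standard D&D convention, but the function itself is my own illustration):

```python
# Tongue-in-cheek sketch: combining the two D&D axes into the nine alignments.
LAW_AXIS = ["Lawful", "Neutral", "Chaotic"]
GOOD_AXIS = ["Good", "Neutral", "Evil"]

def alignment(law: str, good: str) -> str:
    """Combine a Lawful-Neutral-Chaotic value and a Good-Neutral-Evil
    value into one of the nine alignments."""
    if law not in LAW_AXIS or good not in GOOD_AXIS:
        raise ValueError("unknown axis value")
    if law == "Neutral" and good == "Neutral":
        return "True Neutral"  # the conventional name for Neutral-Neutral
    return f"{law} {good}"

print(alignment("Lawful", "Good"))      # Lawful Good
print(alignment("Neutral", "Neutral"))  # True Neutral
```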

Second bit of background: I never just accept cookies on a website. I always go through the preferences registration options, and almost always remove permissions for all cookie tracking beyond the “minimum required for functionality”. I know I’m in a tiny minority in this, but I like to pretend that I can safeguard at least some of my private data, so I do it anyway (and so should you). I’ve noticed, over the past few months, that there are a variety of ways that cookie choices are presented to you, and I reckon that we can map several of these to these D&D alignments, giving us a chance to get a glimpse into the underlying motivation of the organisation whose website we’re visiting.

I’ve attempted to map the basic approaches I’ve seen into this table.

  • Lawful Good: functional cookies only by default.
  • Neutral Good: no cookies, and a link to a long and confusing explanation about how the organisation doesn’t believe in them.
  • Chaotic Good: no cookies at all, no explanation.
  • Lawful Neutral: functional and tracking cookies by default; it’s clear what the tracking cookies are, and all are easy to turn off.
  • True Neutral: functional and tracking cookies by default; completely unclear what the cookies do.
  • Chaotic Neutral: a random selection of cookies, and it’s unclear what they do, but you can at least turn them off.
  • Lawful Evil: all cookies by default – functional, tracking and “legitimate uses”; easy to remove with “reject all” or “object all”.
  • Neutral Evil: all cookies by default; “legitimate uses” need to be deselected individually.
  • Chaotic Evil: all cookies by default, with 100s listed. You have to deselect them by hand (there’s no “reject all” or “object all”), and there’s a 2 to 5 minute process to complete the registration, which finishes on 100% but never completes.
D&D alignments and website cookie preference approaches

Clearly, this is a tongue-in-cheek post, but there’s an important point here, I think: even if this glimpse isn’t a true representation of the organisation, it’s the impression – often the first impression – that users of the website get. My view of an organisation is formed partly through my interaction with its website, and while design, layout and content are all important, of course, the view that is presented about how much (if at all) the organisation cares about my experience, my data and my privacy should be something that organisations really care about. If they don’t respect me, then why should I respect them?

If I’m trying to attract someone to work for me, partner with me or buy from me, then my marketing department should be aware of the impression that visitors to my website glean from all interactions. At the moment, this seems to be missing, and while it’s not difficult to address, it seems to have escaped the notice of most organisations up to this point.

The 9 development stages for software (and kids)

We can draw parallels in the stages through which software projects and children tend to progress.

This week, one of my kids turned 18, and is therefore an adult – at least in the eyes of the law in the UK, where we live. This is scary. For me, and probably for the rest of the UK.

It also got me thinking about how there are similarities between the development lifecycle for software and kids, and that we can probably draw some parallels in the stages through which they tend to progress. Here are the ones that occurred to me.

1. Creation

Creating a new software project is very easy, and a relatively quick process, though sometimes you have a huge number of false starts and things don’t go as planned. The same, it turns out, applies when creating children. Another similarity is that many software projects are created by people who don’t really know what they’re doing, and shouldn’t be allowed anywhere near the process at all. Equally useless at the process are people who have only a theoretical understanding of how it should work, having studied it at school, college or university, but who feel that they are perfectly qualified.

The people who are best qualified – those who have done it before – are either rather blasé about it and go about creating new ones all over the place, or are so damaged by it all that they swear they’ll never do it again. Beware: there are also numerous incidents of people starting software processes at very young ages, or when they had no intention of doing so.

2. Naming

Naming a software project – or a baby – is an important step, as it’s notoriously difficult to change once you’ve assigned a name. While there’s always a temptation to create a “clever” or “funny” name for your project, or come up with an alternative spelling of a well-known word, your creation will suffer if you allow yourself to be so tempted. Use of non-ASCII characters will either be considered silly, or, for non-Anglophone names, lead to complications when your project (or child) is exposed to other cultures.

3. Ownership

When you create a software project, you need to be careful that your employer (or educational establishment) doesn’t lay claim to it. This is rarely a problem for human progeny (if it is, you really need to check your employment contract), but issues can arise when two contributors are involved and wish to go their separate ways, or if other contributors – e.g. mothers-in-law – feel that their input should have more recognition, and expect more of their commits to be merged into main.

4. Language choice

The choice of language may be constrained by the main contributors’ expertise, but support for multiple languages can be very beneficial to a project or child. Be aware that confusion can occur, particularly at early stages, and it is generally worthwhile trying to avoid contributors attempting to use languages in which they are not fluent.

5. Documentation

While it is always worthwhile being aware of the available documentation, there are many self-proclaimed “experts” out there, and much conflicting advice. Some classic texts, e.g. The C Programming Language by Brian Kernighan and Dennis Ritchie or Baby and Child Care by Dr Benjamin Spock, are generally considered outdated in some circles, while yet others may lead to theological arguments. Some older non-core contributors (see, for example, “mothers-in-law”, above), may have particular attachments to approaches which are not considered “safe” in modern software or child development.

6. Maintenance

While the initial creation step is generally considered the most enjoyable in both software and child development processes, the vast majority of the development lifecycle revolves around maintenance. Keeping your project or child secure, resilient and operational or enabling them to scale outside the confines of the originally expected environment, where they come into contact with other projects, can quickly become a full-time job. Many contributors, at this point, will consider outside help to manage these complexities.

7. Scope creep

Software projects don’t always go in the direction you intend (or would like), discovering a mind of their own as they come into contact with external forces and interacting in contexts which are not considered appropriate by the original creators. Once a project reaches this stage, however, there is little that can be done, and community popularity – considered by most contributors as a positive attribute at earlier stages of the lifecycle – can lead to some unexpected and possibly negative impacts on the focus of the project as competing interests vie to influence the project’s direction. Careful management of resources (see below) is the traditional approach to dealing with this issue, but it can backfire (withdrawal of privileges can have unexpected side effects in both software and human contexts).

8. Resource management

Any software project always expands to consume the available resources. The same goes for children. In both cases, in fact, there will always appear to be insufficient resources to meet the “needs” of the project/child. Be strong. Don’t give in. Consider your other projects and how they could flourish if provided with sufficient resources. Not to mention your relationships with other contributors. And your own health/sanity.

9. Hand-over

At some point, it becomes time to hand over your project. Whether this is to a new lead maintainer (or multiple maintainers – we should be open-minded), or to an academic, government or commercial institution, letting go can be difficult. But you have to let go. Software projects – and children – can rarely grow and fulfil their potential under the control of the initial creators. When you do manage to let go, it can be a liberating experience for you and your creation. Just don’t expect that you’ll be entirely free of them: for some reason, as the initial creator, you may well be expected to arrange continued resources well past the time you were expecting. Be generous, and enjoy the nostalgia, but you’re not in charge, so don’t expect the resources to be applied as you might prefer.


I’m aware that there are times when children – and even software projects – can actually cause pain and hurt, and I don’t want to minimise the negative impact that the inability to have children, their illness, injury or loss can have on individuals and families. Please accept this light-hearted offering in the spirit it is meant, and if you are negatively affected by this article, please consider accessing help and external support.

Closing Profian

In June 2021, a little under two years ago, I left Red Hat and joined Profian as the CEO – Chief Executive Officer. In mid-January 2023, we – the board – decided to close down the company. All 14 members of the company are looking for new jobs.

I’ve not been blogging much recently, and it’s been because I’ve been busy trying to sort out what we do with the company. We looked at many different options around getting more funding or even being acquired by another company, but none came to fruition, so we decided to close down the company as gracefully as we could. It’s not been an easy few weeks (or months, in fact), but I’ve pretty much come to peace with the decision.

I’ll be writing more posts about what happened, how we got there, and the rest, but here’s a quick version of what happened, as I posted in an internal chat room:

While pretty much everybody believes that Confidential Computing is on its way, there’s also general agreement in the market that it’s not ready for major market adoption for 12 or more months. This is partly due to the fact that the tech is still regarded as immature (and prone to vulnerabilities) and also largely because the recessionary pressures on all sectors mean that organisations are protecting their core existing services, rather than betting money on new tech. VCs are into “ARR”: Annual Recurring Revenue. They want to see fast growth, and paid pilots (even with big players) which don’t lead to fast scaling of the business aren’t considered sufficient. The amount of money available wouldn’t have been sufficient to allow us to grow and defend a market share in order to get to the next funding round. We also looked at acquisition, but nobody was ready to bet on new tech to the extent of buying the company: again, because they’re defending their existing services and staff (and, in many cases, laying people off already).

Me, on internal Profian chat room

I’m currently focussing on four things:

  1. helping the extremely talented Profian team find new jobs;
  2. winding the company down;
  3. taking some time to recover from the past few months – emotionally, mentally and physically;
  4. starting to look for a new job for myself.

If you can help with #1 or #4, please get in touch. Otherwise, keep an eye out on this blog, and expect more posts. See you soon.

Confidential Computing – become the expert

There really is no excuse for not protecting your (and your customers’!) data in use.

I came across this article recently: 81% of companies had a cloud security incident in the last year. I think it’s probably factually incorrect, and that the title should be “81% of companies are aware that they had a cloud security incident last year”. Honestly, it could well be much higher than that. When I’ve worked on IT security audits, I’ve sometimes seen statements like “[Company X] experienced no data or privacy breaches over the past 12 months”, and I always send them back, insisting on a change of wording to reflect the fact that all that is known is that the organisation is not aware of any data or privacy breaches over the past 12 months.

The other statistic that really struck me in the article, however, is that the top reported type of incident was “Security incidents during runtime”, reported by 34% of respondents. That’s over a third!

And near the top of concerns was “Privacy/data access issues, such as those from GDPR”, at 31%.

The problem with both of these types of issue is that there’s almost nothing you can do to protect yourself from them in the cloud. Cloud computing (and virtualisation in general) is pretty good at protecting you from other workloads (type 1 isolation) and protecting the host from your workloads (type 2 isolation), but offers nothing to protect your workload from the host (type 3 isolation). If you’re interested in a short introduction to why, please have a look at my article Isolationism – not a 4 letter word (in the cloud).

The good news is that there are solutions out there that do allow you to run sensitive applications (and applications with sensitive data) in the cloud: that’s what Confidential Computing is all about. Confidential Computing protects your data not just at rest (when it’s in storage) and in transit (on the network), but actually at runtime: “data in use”. And it seems that industry is beginning to realise that it’s time to be sitting up and paying attention: the problem is that not enough people know about Confidential Computing.

So – now’s the time to become the expert on Confidential Computing for your organisation, and show your manager, your C-levels and your board how to avoid becoming part of the 81% (or the larger, unknowing percentage). The industry body is the Confidential Computing Consortium, and they have lots of information, but if you want to dive straight in, I encourage you to visit Profian and download one or more of our white papers (there’s one about runtime isolation there, as well). There really is no excuse for not protecting your (and your customers’!) data in use.

Enarx hits 750 stars

Yesterday, Enarx, the open source security project of which I’m co-founder and for which Profian is custodian, gained its 750th GitHub star. This is an outstanding achievement, and I’m very proud of everyone involved. Particular plaudits to Nathaniel McCallum, my co-founder for Enarx and Profian, Nick Vidal, the community manager for Enarx, everyone who’s been involved in committing code, design, tests and documentation for the project, and everyone who manages the running of the project and its infrastructure. We’ve been lucky enough to be joined by a number of stellar interns along the way, who have also contributed enormously to the project.

Enarx has also been supported by a number of organisations and companies, and it’s worth listing as many of them as I can think of:

  • Profian, the current custodian
  • Red Hat, under whose auspices the initial development began
  • the Confidential Computing Consortium, a Linux Foundation Project, which owns the project
  • Equinix, who have donated computing resources
  • PhoenixNAP, who have donated computing resources
  • Rocket.Chat, who have donated chat resources
  • Intel, who have worked with us along the way and donated various resources
  • AMD, who have worked with us along the way and donated various resources
  • Outreachy, with whom we worked to get some of our fine interns

When it all comes down to it, however, it’s the community that makes the project. We strive to create a friendly, open community, and we want more and more people to get involved. To that end, we’ll soon be announcing some new ways to get involved with trying and using Enarx, in association with Profian. Keep an eye out, and keep visiting and giving us stars!