Taking some time

I’m going to practise what I preach, and not write.

I’m going to practise what I preach this week, and not write a full article. I’ve had a stressful and busy few weeks, including needing to spend some extra time with the family (nothing scary or earth-shattering – we just needed some family time), and I think the best thing for me to do today is not spend time writing an article. Let me point you instead at some I’ve written in the past.

On self-care:

On security:

On trust:

Keep safe, and look after yourself, dear reader!

15 steps to prepare for (another) lockdown

What steps can we be taking to prepare for what seems likely now – a new lockdown?

The kids are back in school, there are people in shops and restaurants, and traffic is even beginning to get back to something like normal levels. I’m being deployed as a CFR (community first responder) to more incidents, as the ambulance service gets better at assessing the risks to me and patients. And the colds and sneezes are back.

Of course they are: it’s that time of year. And where are they spreading from? Where do they usually spread from? School pupils. Both of mine have picked up minor cold symptoms, but, luckily, nothing suggesting Covid-19. The school they attend is following government advice by strongly recommending that pupils wear masks in communal areas, encouraging social distancing and providing hand sanitiser outside each classroom, to be used on entry. Great! That should limit Covid-19. And it should… but the sore throats, coughing and sneezing started within days of their return to school. I’m no expert, but it seems likely (and many experts agree) that schools will act as transmission vectors, and that the rates of infection of Covid-19 will start rising again. And yes, the UK already has an R figure well above 1.

Apart from ranting about how this was always likely to happen, and that the relevant authorities should have taken more steps to reduce the impact (both true), what steps can we be taking to prepare for what seems likely now – a new lockdown?

Physical steps

There are a number of things that I’ve done or plan to do to prepare. Some of them aren’t because I necessarily expect a full lockdown, but some because, if I feel ill and unable to leave the house, it’s best to be ready.

  • get provisions – what do we need in for food and drink? We should obviously not go overboard on alcohol, but if you like a glass of wine from time to time, get a few bottles in, maybe a nice one for a special occasion. Get dried food and cooking oil in, and for the rest, stock the freezer. Oh, and chocolate. Always chocolate.
  • household supplies – remember that run on random items at the beginning of the first lockdown? Let’s avoid that this time: get toilet paper, kitchen roll, cleaning materials and tissues (for when we feel really poorly).
  • work supplies – most of us are used to working at home now, but if you’ve got a dodgy monitor, a printer in need of paper, or a webcam that’s on its last legs, now is a good time to sort them out: there’s a good chance that these might become difficult to get hold of (again).
  • fitness preparations – if the gyms close again, what will you do? Even if we’re allowed outside more for exercise this time round, those warm jogging shorts that you wore in the spring and summer are not what you want to be wearing in the sleet and snow, so buy whatever gear you need for indoor or outdoor use now.
  • get a haircut – or get hold of some hairdressing supplies. Many of us discovered that we or our family members had some skills in this department, but better to get a cut in preparation, right?
  • books – yes, there are alternatives to physical books: you can read on your phone or another device these days. But I like a physical book, and I wish I’d stocked up last time. Go to your friendly neighbourhood book store – they need your business right now – and buy a few books.
  • wood – we live in an old house, and have wood-burning stoves to supplement our heating. Get wood in now to avoid getting cold in the winter!
  • pay the bills – you may want or need some extra luxuries later, as the weather sets in and lockdown takes hold. Get the bills paid up front, so there are no nasty surprises and you can budget a few treats for yourself later.

Psychological steps

Just as important as the physical – more, probably – is psychological preparation. That doesn’t mean that the steps above aren’t important: in fact, they’re vital to allow you to have space to consider the psychological preparation, which is difficult if you’re concerned or unsure about your physical safety and environment.

Prioritise – if you can, work out now what you’re going to prioritise, and when. Sometimes work may come first (barring an emergency), sometimes family, sometimes you. Thinking about this now is a good plan, so that you can set some rules for yourself and for those around you.

Prepare your family – this isn’t just about the priorities you’ve already worked on in the previous point, but also more generally. Many of us struggled with lockdown, and although we might think that it’ll be easier second time round, the very fact that it’s happened again is likely to cause us more stress in some ways.

Sleep – sleep now: bank it while you can! Sleep when lockdown happens, too. This was something which was a surprise to me: how tired I got. Not going out is, it turns out, tiring. This is because of stress – which was a clear outcome of the first lockdown – and stress can make you very tired. So sleep when you can, and don’t just try to “power through”.

List what you can control and what you can’t – a classic stressor is feeling overwhelmed with things that we can’t control. And there will definitely be things that we really can’t – how long it takes, which of my friends get sick, issues such as that. But equally, there are things that we can control: when I stop for a cup of tea (or coffee, I suppose), who I call to catch up with on the phone, what I have for supper. In order to reduce stress, list things that you can control, and which you can’t, and try to accept the latter. Doing so won’t remove all stress, but it should help you manage your response to that stress, which can help you reduce it.

Be ready to feel weak – you will feel sad and depressed and ill and fed up from time to time. This is normal, and human, and it does not make you a failure or a bad employee, family member, friend or person. Accept it, and be ready to move on when you can.

Think of others – other people will be struggling, too: your family, friends, colleagues and neighbours. Spare them a thought, and think how you can help, even if it’s just with a quick text, a family videochat or a kind word from time to time. Being nice to people can make you feel good, too – and if you’re lucky, they’ll reciprocate, so everyone wins twice!

Be ready to put yourself first – sometimes, you need to step back and say “enough”. This isn’t always easy, but it’s sometimes necessary. If you begin to realise that things are coming unstuck, and that you’re going to have to disengage, let others around you know if you can. Don’t say “I hope it’s OK if…” or “I was thinking about, would it be OK for me to…”. Instead, let them know your intentions: “I’m going to need 5 minutes to myself”, or “I need to drop from this meeting for a while”. This won’t always be easy, but if you can prepare them, and yourself, for taking a little time, it’s going to be better for everyone in the end: you, because you will recover (if only for a while), and them, because they’ll get a healthier, more efficient and less stressed you.

Measured and trusted boot

What they give you – and don’t.

Sometimes I’m looking around for a subject to write about, and realise that there’s one which I assume I’ve covered, but, on searching, discover that I haven’t. One such is the pair “measured boot” and “trusted boot” – sometimes, misleadingly, referred to as “secure boot”. There are specific procedures which use these terms with capital letters – e.g. Secure Boot – which I’m going to try to avoid discussing in this post. I’m more interested in the generic processes, and a major potential downfall, than in trying to go into the ins and outs of specifics. What follows is a (heavily edited) excerpt from my forthcoming book on Trust in Computing and the Cloud for Wiley.

In order to understand what measured boot and trusted boot aim to achieve, let’s have a look at the Linux virtualisation stack: the components you run if you want to be using virtual machines (VMs) on a Linux machine. This description is arguably over-simplified, but we’re not interested here in the specifics (as I noted above), but in what we’re trying to achieve. We’ll concentrate on the bottom four layers (at a rather simple level of abstraction): CPU/management engine; BIOS/EFI; Firmware; and Hypervisor, but we’ll also consider a layer just above the CPU/management engine, where we interpose a TPM (a Trusted Platform Module) and some instructions for how to perform one of our two processes. Once the system starts to boot, the TPM is triggered, and then starts its work (alternative roots of trust such as HSMs might also be used, but we will use TPMs – the most common choice in this context – as our example).

In both cases, the basic flow starts with the TPM performing a measurement of the BIOS/EFI layer. This measurement involves checking the binary instructions to be carried out by this layer, and then creating a cryptographic hash of the binary image. The hash that’s produced is then stored in one of several “PCR slots” in the TPM. These can be thought of as pieces of memory which can be read later on, either by the TPM for its purposes, or by entities external to the TPM, but which cannot be changed once they have been written. This provides assurances that once a value is written to a PCR by the TPM, it can be considered constant for the lifetime of the system until power-off or reboot.

After measuring the BIOS/EFI layer, the next layer (Firmware) is measured. In this case, the resulting hash is combined with the previous hash (which was stored in the PCR slot) and then itself stored in a PCR slot. The process continues until all of the layers involved in the process have been measured, and the results of the hashes stored. There are (sometimes quite complex) processes to set up the original TPM values (I’ve missed out some of the more low-level steps in the process for simplicity) and to allow (hopefully authorised) changes to the layers for upgrading or security patching, for example. What this process – “measured boot” – allows is for entities to query the TPM after the process has completed, and check whether the values in the PCR slots correspond to the expected values, pre-calculated with “known good” versions of the various layers – that is, pre-checked versions whose provenance and integrity have already been established. Various protocols exist to allow parties external to the system to check the values (e.g. via a network connection) that the TPM attests to being correct: the process of receiving and checking such values from an external system is known as “remote attestation”.
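To make the chaining a little more concrete, here’s a rough sketch in Rust of the measure-and-extend idea. This is purely illustrative – a real TPM does this in hardware, and the hash algorithm, PCR initialisation and layer handling here are all simplified assumptions on my part – and it assumes the sha2 crate for the hashing:

    use sha2::{Digest, Sha256};

    // "Measure" a layer: create a cryptographic hash of its binary image.
    fn measure(layer_image: &[u8]) -> [u8; 32] {
        Sha256::digest(layer_image).into()
    }

    // "Extend" a PCR: the new value is a hash over the old value plus the new
    // measurement, so earlier measurements can never be overwritten, only built upon.
    fn extend(pcr: [u8; 32], measurement: [u8; 32]) -> [u8; 32] {
        let mut hasher = Sha256::new();
        hasher.update(pcr);
        hasher.update(measurement);
        hasher.finalize().into()
    }

    // Measured boot (sketch): measure each layer in turn, extending the PCR as we go.
    // The final value can later be compared with (or attested against) a
    // pre-calculated "known good" value.
    fn measured_boot(layers: &[&[u8]]) -> [u8; 32] {
        let mut pcr = [0u8; 32]; // PCRs start from a well-known initial value (all zeroes here)
        for layer in layers {
            pcr = extend(pcr, measure(layer));
        }
        pcr
    }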

This process – measured boot – allows us to find out whether the underpinnings of our system – the lowest layers – are what we think they are, but what if they’re not? Measured boot (unsurprisingly, given the name) only measures, but doesn’t perform any other actions. The alternative, “trusted boot”, goes a step further. When a trusted boot process is performed, the process not only measures each value, but also performs a check against a known (and expected!) good value at the same time. If the check fails, then the process will halt, and the booting of the system will fail. This may sound like a rather extreme approach to take to a system, but sometimes it is absolutely the right one. Where the system under consideration may have been compromised – which is one likely inference that you can make from the failure of a trusted boot process – then it is better that it not be available at all than to be running based on flawed expectations.
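Continuing the same sketch (and with the same caveats: this is an illustration rather than a real implementation, and the expected values would in practice be provisioned and protected by whoever controls the platform), the extra step that trusted boot adds looks something like this:

    // Trusted boot (sketch, reusing the measure() and extend() helpers above):
    // measure and extend as before, but halt if any intermediate PCR value differs
    // from the expected, pre-calculated one.
    fn trusted_boot(layers: &[&[u8]], expected_pcrs: &[[u8; 32]]) -> Result<(), String> {
        let mut pcr = [0u8; 32];
        for (i, layer) in layers.iter().enumerate() {
            pcr = extend(pcr, measure(layer));
            match expected_pcrs.get(i) {
                Some(expected) if *expected == pcr => {} // this layer checks out: keep going
                // Better not to boot at all than to boot something we suspect is compromised.
                _ => return Err(format!("layer {} failed its measurement check", i)),
            }
        }
        Ok(())
    }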

This is all very well if I’m the owner of the system which is being measured, have checked all of the various components being measured (and the measurements), and so can be happy that what’s being booted is what I want[1]. But what if I’m actually using a system on the cloud, for instance, or any system owned and managed by someone else? In that case, I’m trusting the cloud provider (or owner/manager) with two things:

  1. to do all the measuring correctly, and to report correct results to me;
  2. actually to have built something which I should be trusting in the first place!

This is the problem with the nomenclature “trusted boot”, and, even worse, “secure boot”. Both imply that an absolute, objective property of a system has been established – it is “trusted” or “secure” – when this is clearly not the case. Obviously, it would be unfair to expect the designers of such processes to name them after the failure states – “untrusted boot” or “insecure boot” – but unless I can be very certain that I trust the owner of the system to do step 2 entirely correctly (and in my best interests, as user of the system, rather than theirs, as owner), then we can make no stronger assertions. There is an enormous temptation to take a system which has gone through a trusted boot process and to label it a “trusted system”, where the very best assertion we can make is that the particular layers measured in the measured and/or trusted boot process have been asserted to be those which the process expected to be present. Such a process says nothing at all about the fitness of the layers to provide assurances of behaviour, nor about the correctness (or fitness to provide assurances of behaviour) of any subsequent layers on top of those.

It’s important to note that designers of TPMs are quite clear about what is being asserted, and that assertions about trust should be made carefully and sparingly. Unluckily, however, the complexities of systems, the general low level of understanding of trust, and the complexities of context and transitive trust make it very easy for designers and implementors of systems to do the wrong thing, and to assume that any system which has successfully performed a trusted boot process can be considered “trusted”. It is also extremely important to remember that TPMs, as hardware roots of trust, offer us one of the best mechanisms we have for establishing a chain of trust in systems that we may be designing or implementing, and I plan to write an article about them soon.


1 – although this turns out to be much harder to do than you might expect!

Rust – my top 7 keywords

A few useful keywords from the Rust standard library.

I’ve been using Rust for a few months now, writing rather more of it than I expected – though quite a lot of that has been thrown away as I’ve learnt and improved what I’m writing and taken on some more complex tasks beyond what I’d originally intended. I still love it, and thought that today might be a good day to talk about some of the important keywords that come up again and again in Rust, and provide my personal summary of what they do, why you need to think about how you use them, and anything else that’s useful, particularly for people who are new to Rust, or coming from another language (such as Java – see my previous article on the subject, 5 Rust reflections (from Java)). Without further ado, let’s get going. A good place for further information is always the official Rust documentation – you’ll probably want to start with the std library. There’s also a short sketch after the list below which pulls several of these keywords together.

  1. const – you get to declare constants with “const”, and you should. This isn’t rocket science, but do declare with const, and if you’re going to use constants across different modules, then do the right thing and create a lib.rs file (the Rust default) into which you can put all of these, with a nicely named module. I’ve had clashes of const variable names (and values!) across different files in different modules, simply because I was too lazy to do anything other than cut and paste across files, when I could have saved myself lots of work by simply creating a shared module.
  2. let – you don’t always need to declare a variable with a let statement, but your code will be clearer when you do. What’s more, always add the type if you can. Rust will do its very best to infer what it should be, but may not always be able to do so (in which case the compiler – which you’ll usually be invoking via Cargo – will tell you), or may not necessarily do what you expect. In the latter case, it’s always simpler for the compiler to complain that the function you’re assigning from (for instance) doesn’t match the declaration than for Rust to try to help you do the wrong thing, only for you to have to spend ages debugging elsewhere.
  3. match – match was new to me, and I love it. It’s not dissimilar to “switch” in other languages, but is used extensively in Rust. It makes for legible code, and the compiler will have a good go at warning you if you do something foolish (such as missing out possible cases). My general rule of thumb, where I’m managing different options or doing branching, is to ask whether I can use match. If I can, I will.
  4. mut – when declaring a variable, if it’s going to change after its initialisation, then you need to declare it mutable. A common mistake is to declare something as mutable when it isn’t changed – but the compiler will warn you about that. If you get a warning that a mutable variable isn’t changed when you think it is, then you may wish to check the scope of the variable, and check that you’re using the right version.
  5. return – I actually very rarely use return, which is for returning a value from a function, because it’s usually simpler and clearer to read if you just provide the value (or function providing the return value) at the end of the function, as the last line. Warning: on many occasions you will forget to leave off the semicolon at the end of that line, and the compiler won’t be happy.
  6. unsafe – does what it says on the tin: if you want to do things where Rust can’t guarantee memory safety, then you’re going to need to use this keyword. I have absolutely no intention of declaring any of my Rust code unsafe now or at any point in the future: one of the reasons Rust is so friendly is because it stops this sort of hackery. If you really need to do this, think again, think yet again, and then redesign. Unless you’re a seriously low-level systems programmer, avoid.
  7. use – when you want to use an item – a struct, variable, function, etc. – from another crate, then you need to declare it at the beginning of the block where you’ll be using it. Another common mistake is to do this, but fail to add the crate (preferably with a minimum version number) to the Cargo.toml file.
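Here’s the short sketch I promised, pulling several of these keywords together – the names and values are invented purely for illustration:

    // "use": bring items from other crates or modules into scope
    use std::collections::HashMap;

    // "const": declare constants, ideally in a shared module
    const MAX_RETRIES: u32 = 3;

    // "match": handle each case explicitly; the compiler warns about missed cases.
    // Note the lack of "return" and of a trailing semicolon: the match is the
    // function's value.
    fn describe(attempts: u32) -> &'static str {
        match attempts {
            0 => "not yet tried",
            n if n < MAX_RETRIES => "still trying",
            _ => "given up",
        }
    }

    fn main() {
        // "let" with an explicit type, and "mut" because we change the map afterwards
        let mut counts: HashMap<String, u32> = HashMap::new();
        counts.insert("connection".to_string(), 2);

        for (name, attempts) in &counts {
            println!("{}: {}", name, describe(*attempts));
        }
    }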

This isn’t the most complicated article I’ve ever written, I know, but it’s the sort of article which I would have appreciated finding when I was starting to learn Rust. I plan to create similar articles on key functions and other Rust must-knows: let me know if you have any requests!

Formal verification … or Ken Thompson?

“You can’t trust code that you did not totally create yourself” – Ken Thompson.

This article is an edited excerpt from my forthcoming book on Trust in Computing and the Cloud for Wiley.

How can we be sure that the code we’re running does what we think it does? One of the answers – or partial answers – to that question is “formal verification”. Formal verification is an important field of study, applying mathematics to computing, and it aims to start with proofs – at best, with an equivalent level of assurance to that of formal mathematical proofs – of the correctness of algorithms to be implemented in code, to ensure that they perform the operations expected and set forth in a set of requirements. Though the implementation of code can often fall down in the actual instructions created by a developer or set of developers – the programming – mistakes are equally possible at the level of the design of the code to be implemented in the first place, and so verifying the design must be a minimum step before looking at any actual implementations. What is more, these types of mistakes can be all the harder to spot, as even if the developer has introduced no bugs in the work they have done, the implementation will be flawed by virtue of its being incorrectly defined in the first place. It is with an acknowledgement of this type of error, and an intention of reducing or eliminating it, that formal verification starts, but some areas go much further, with methods to examine concrete implementations and make statements about their correctness with regard to the algorithms which they are implementing.

Where we can make these work, they are extremely valuable, and the sorts of places where they are applied are exactly those we would expect: for systems where security is paramount, and to prove the correctness of cryptographic designs and implementations. Another major focus of formal verification is software for safety systems, where the “correct” operation of the system – by which we mean “as designed and expected” – is vital. Examples might include oil refineries, fire suppression systems, nuclear power station management, aircraft flight systems and electrical grid management – unsurprisingly, given the composition of such systems, formal verification of hardware is also an important field of study. The practical application of formal verification methods to software is, however, more limited than we might like. As Alessandro Abate notes in a paper on formal verification of software:

“Two known shortcomings of standard techniques in formal verification are the limited capability to provide system-level assertions, and the scalability to large, complex models.”

To these shortcomings we can add another, extremely significant one: how sure can you be that what you are running is what you think you are running? Surely knowing what you are running is exactly why we write software, look at the source, and then compile it under our control? That, certainly, is the basic starting point for software that we care about.

The problem is arguably one of layers and dependencies, and was outlined by Ken Thompson, one of the founders of modern computing, in the lecture he gave at his acceptance of the Turing Award in 1983. It is short, stands as one of the establishing artefacts of computing security, and has weathered the test of time: I have no hesitation in recommending that all readers of this blog read it: Reflections on Trusting Trust. In it, he describes how careful placing of malicious code in the C standard compiler could lead to vulnerabilities (his specific example is in account login code) which are not only undetectable by those without access to the source code, but also not removable. The final section of the paper is entitled “Moral”, and Thompson starts with these words:

“The moral is obvious. You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code.”

However, as he goes on to point out, there is nothing special about the compiler:

“I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect.”

It is for the reasons noted by Thompson that open source software – and hardware – is so vital to the field of computer security, and to our task of defining and understanding what “trust” means in the context of computing. Just relying on the “open source-ness” of your code is not enough: there is more work to be done in understanding your stack, the community and your requirements. But without the ability to look at the source code of all the layers of software and hardware on which you are running code, you can have only reduced trust that what you are running is what you think you should be running, whether you have performed formal verification on it or not.


Vint Cerf’s “game changer”

I’m really proud to be involved with a movement which I believe can change the way we do computing.

Today’s article is a little self-indulgent, but please bear with me, as I’m a little excited. Vint Cerf is one of a small handful of people who have a claim to being called “greats”. He co-developed the TCP/IP protocol with Bob Kahn in 1974, and has been working on technology – much of it pretty cool technology – since then. I turned 50 recently, and if I’d achieved half of what he had by his 50th birthday, I’d be feeling more accomplished than I do right now! As well as his work in technology, he’s also an advocate for accessibility, something which is also dear to my heart.

What does this have to do with Alice, Eve and Bob – a security blog? Well, last week, Dark Reading[1], an influential technology security site, published a commentary piece by Cerf under its “Cloud” heading: Why Confidential Computing is a Game Changer. I could hardly have been more pleased: this is an area which I’m very excited about, and which the Enarx project, of which I’m co-founder, addresses. The Enarx project is part of the Confidential Computing Consortium (mentioned in Cerf’s article), a Linux Foundation project to increase use of confidential computing through open source projects.

So, what is confidential computing? Cerf describes it as “a breakthrough technology that encrypts data in use, while it is being processed”. He goes on to give a good description of the technology, noting that Google (his employer[2]) has recently released a product using confidential computing. Google is actually far from the first cloud service provider to do this, but it’s only fair that Cerf should mention his employer’s services from time to time: I’m going to forgive him, given how enthusiastic he is about the technology more generally. He describes it as a transformational technology which “will and should be a part of every enterprise cloud deployment”.

I agree, and it’s really exciting to see such a luminary embracing the possibilities that confidential computing presents. For those readers who aren’t aware of what it is, confidential computing allows you to keep data and processes secret in the cloud, on private servers, on the Edge, IoT, etc. – even from administrators, hypervisors and the host kernel. It uses TEEs – Trusted Execution Environments – to protect the confidentiality and integrity of the workloads (applications, programs) that you want to run. If you’re not sure you trust your cloud provider, if your regulatory body won’t let you run your applications in certain places, or if you want to deploy to machines which are vulnerable to attack – physical or logical – then TEEs and confidential computing can help.

You can find more information in some of my articles:

You can always visit the Confidential Computing Consortium[3] or the Enarx project (links above): all of our code and documentation is open, and we’d love to see you. I’m really proud to be involved with – in fact, deeply embedded in – a movement which I believe can change the way we do computing. And really excited that someone like Vint Cerf agrees.


1 – I have no affiliation with Dark Reading, though I do recommend it to readers of this blog.

2 – neither do I have any affiliation with Google or Alphabet, its parent!

3 – I am, however, a member of both the Governing Board and the Technical Advisory Council of the Confidential Computing Consortium. I’m also the Treasurer.

Bringing your emotions to work

An opportunity to see our colleagues as more “human”.

We’ve all seen the viral videos of respected experts, working from home, who are being interviewed for a news programme, only to be interrupted by a small child who then proceeds to embarrass them, whilst making the rest of us laugh. Since the increase in working from home brought on by Covid-19, it has become quite common to see similar dramas acted out on our own computer screens as colleagues struggle with children – and sometimes adults – turning up unexpectedly in front of the camera. We tend to laugh these occurrences off – quite rightly – and to be aware that they are often much more embarrassing for the affected party than for the rest of the participants. In all of the situations that I have witnessed where this has happened, the other members of the video conference have shown understanding both of the fact that the incident occurred at all, and of the frustration and embarrassment of the affected party.

This is all as it should be, but I think that we have a larger lesson to learn here. The emotions evidenced by this sort of incident are obvious and, what is more, it is usually entirely clear what has caused them: we have, after all, just seen the drama unfold in front of us. What I think I am also seeing, partly due to the broadly shared experiences of lock-down, is a better understanding that there are frustrations and emotions that occur due to events which occur off-camera, and that people need to be given space to manage those as much as any other, more obvious issue. Taking time at the beginning of a call to ask a colleague – or even someone from a different organisation – how things are going, how they’re coping, and what’s on their mind has become much more commonplace than it was when most of us spent most of our time in offices. An acknowledgement of the impact of these trials and tribulations that everybody is facing has become much more acceptable in a work context, because the separation between the work context and the home context has become, for many, so blurred that the two are almost indistinguishable.

What is astonishing about this is that we all know, and have always known, if we are honest with ourselves, that these trials and tribulations have always been there. What we seem to have believed is that because there are two separate spaces for most people who are not remote workers – the work environment and the home environment – then everybody should somehow magically be able to compartmentalise their feelings and emotions into corresponding separate boxes.

This was always a fiction, and, more, a self-evident one, which only ever worked in one direction. All families and partners know that there are occasions when a frustrating day at work will leave someone annoyed and upset on their return home. Equally, we expect to celebrate work successes when we arrive back with our families. But while telling work colleagues about the birth of a niece, or the arrival of a new puppy, has been seen as just about acceptable, “burdening” them with news about a sick child or the impact of a major flood in the bathroom, both of which may be major stressors in our lives, has often been seen as “unprofessional”.

Yesterday, my wife and I had to take our dog for emergency surgery[1]. Not only did this have an impact on my ability to attend a meeting, but I was also aware that my ability to function fully at work was impaired. I’m very fortunate to work at a company (Red Hat) where the culture is strongly supportive in dealing with such emergencies, and so it was: colleagues were ready to go out of their way to help, and this morning, one in particular was very forgiving of a rather confused technical question that I asked yesterday evening. I’m pretty sure that the same would have been the case outside the Covid-19 lockdown, but I was cheered (and helped) by their reactions. My emotions and ability to function in this case were due to an obvious and acute event, rather than a set of less visible or underlying conditions or events. Instances of the latter, however, are no less real, nor any less debilitating than instances of the former, but we are generally expected to hide them, at least in a work context.

My plea – which is not new, and not original – is that as we fashion a “new normal” for our working lives, we create an environment where expressing and being honest about all parts of our lives – home, work and beyond – is welcomed and encouraged. I am not asking that we should expect colleagues to act as unpaid counsellors, or that explosions of anger in meetings should suddenly become acceptable, but, instead, that we get better at not pretending that we are emotionless automata at work, able (and required) to compartmentalise our home lives from our work lives.

There are benefits to such an approach, not the least of which are the positive mental health effects of not “bottling up” our emotions[2]. But an opportunity to see our colleagues as more “human” can lead to better, more honest and empathetic relationships, as well as an increased resilience for businesses and organisations which are able to flex and bend to accommodate tensions and issues in people’s lives as the norm becomes to “chip in” and support colleagues who are struggling, as well as celebrating with them when they are joyful.

There are tensions here, limits of behaviour, and support structures which need to be put in place, but an honest and more rounded person, I believe, is a better and more understanding colleague, and leads to better, more diverse and higher-functioning workplaces.


1 – to fix a slipped disc. Initial signs are that the operation went well.

2 – I want to acknowledge and note that mental health issues are complex and need special management and treatment: something I have neither the expertise nor space to address in this article. I am, however, strongly in favour of more openness and less stigmatising of mental health issues, by which the vast majority of us will be affected – first or second hand – at some point in our lives. I know that I have.

Why Enarx is open

It’s not just our coding that we do in the open.

When Nathaniel McCallum and I embarked on the project which is now called Enarx, we made one decision right at the beginning: the code for Enarx would be open source, a stance fully supported by our employer Red Hat (see standard disclaimer). All of it, and for ever. That’s a decision that we’ve not regretted at any point, and it’s something we stand behind. As soon as we had enough code for a demo, and were ready to show it, we created a repository on GitHub and made it public. There’s one very small exception: some details of upcoming chip features are shared with us under NDA[1], and if we write code for them, publishing that code would be a breach of the NDA. But where this applies (which is rare), we are absolutely clear with the various vendors that we intend to make the code open as soon as possible, and we lobby them to release details as early as they can (which may be earlier than they might prefer), so that more experts can look over both their designs and our code.

Auditability and trust

This brings us to possibly the most important reasons for making Enarx open source: auditability and trust. Enarx is a security-related project, and I believe passionately not only that security should be done in the open, but that if anybody is actually going to trust their sensitive data, algorithms and workloads to a piece of software, then they want to be in a position where as many experts as possible have looked at it, scrutinised it, criticised it and improved it: whether that is the people running the software, their employees, contractors or (even better) the wider security community. The more people who check the code, the happier you should be to trust it. This is important for any piece of security software, but vital for software such as Enarx which is designed to protect your most sensitive workloads.

Bug-catching

There are bugs in Enarx. I know: I’m writing some of the code[2] and I found one yesterday (which I’d put in), just as I was about to give a demo[3]. It is very, very difficult to write perfect code, and we know that if we make our source open, then more people can help us fix issues.

Commonwealth

For Nathaniel and me, open source is an ethical issue, and we make no apologies for that. I think it’s the same for most, if not all, of the team working on Enarx. This includes a number of Red Hat employees (see standard disclaimer), so this shouldn’t come as a surprise, but we have non-Red Hat contributors from a number of backgrounds, and we feel that Enarx should be a Common Good, and contribute to the commonwealth of intellectual property out there.

More brain power

Making something open source doesn’t just make it easier to fix bugs: it can improve the quality of what you produce in general. The more brain power you have to apply to the problem, the better your chances of making something great – assuming that the brain power is applied efficiently (not always an easy task!). We had a design meeting yesterday where one of the participants said towards the end, “I’m sure I could implement some of this, but don’t know a huge amount about this topic, and I’m worried that I’m not contributing to this discussion.” In fact, they had, by asking questions and clarifying some points, and we assured them that we wanted to include experienced, senior developers for their expertise and knowledge, and to pull out assumptions and to validate the design, and not because we expected everybody to be experts in all parts of the project. Having bright people around, involved in design and coding, spreads expertise and knowledge, and helps keep the work from becoming an insulated, isolated “ivory tower” construction, understood by few, and almost impossible to validate.

Not just code

It’s not just our coding that we do in the open. We manage our architecture in the open, our design meetings, our protocol design, our design methodology[4], our documentation, our bug-tracking, our chat, our CI/CD processes: all of it is open. The one exception is our vulnerability management process, which needs to allow for confidential disclosure for a limited time.

We also take diversity seriously, and the project contributors are subject to the Contributor Covenant Code of Conduct.

In short, Enarx is an open project. I’m sure we could do better, and we’ll strive for that, but our underlying principles are that open is good in general, and vital for security. If you agree, please come and visit!


1 – Non-Disclosure Agreement.

2 – to the surprise of many of the team, including myself. At least it’s not in Perl.

3 – I fixed it. Admittedly after the demo.

4 – we’ve just moved to a Sprint pattern – the details of which we designed and agreed in the open.

They won’t get security right

Save users from themselves: make it difficult to do the wrong thing.

I’m currently writing a book on Trust in Computing and the Cloud – I’ve mentioned it before – and I confidently expect to reach 50% of my projected word count today, as I’m on holiday, have more time to write it, and got within about 850 words of the goal yesterday. Little boast aside, one of the topics that I’ve been writing about is the need to consider the contexts in which the systems you design and implement will be used.

When we design systems, there’s a temptation – a laudable one, in many cases – to provide all of the features and functionality that anyone could want, to implement all of the requests from customers, to accept every enhancement request that comes in from the community. Why is this? Well, for a variety of reasons, including:

  • we want our project or product to be useful to as many people as possible;
  • we want our project or product to match the capabilities of another competing one;
  • we want to help other people and be seen as responsive;
  • it’s more interesting implementing new features than marking an existing set complete, and settling down to bug fixing and technical debt management.

These are all good – or at least understandable – reasons, but I want to argue that there are times when you absolutely should not give in to this temptation: that, in fact, on every occasion that you consider adding a new feature or functionality, you should step back and think very hard about whether your product would be better if you rejected it.

Don’t improve your product

This seems, on the face of it, to be insane advice, but bear with me. One of the reasons is that many techies (myself included) are more interested in getting code out of the door than in weighing up alternative implementation options. Another reason is that every opportunity to add a new feature is also an opportunity to deal with technical debt or improve the documentation and architectural information about your project. But the other reasons are to do with security.

Reason 1 – attack surface

Every time that you add a feature, a new function, a parameter on an interface or an option on the command line, you increase the attack surface of your code. Whether by fuzzing, targeted probing or careful analysis of your design, the larger the attack surface of your code, the more opportunities there are for attackers to find vulnerabilities, create exploits and mount attacks on instances of your code. Strange as it may seem, adding options and features for your customers and users can often be doing them a disservice: making them more vulnerable to attacks than they would have been if you had left well enough alone.

If we do not need an all-powerful administrator account after initial installation, then it makes sense to delete it after it has done its job. If logging of all transactions might yield information of use to an attacker, then it should be disabled in production. If older versions of cryptographic functions might lead to protocol attacks, then it is better to compile them out than just to turn them off in a configuration file. All of these lead to reductions in attack surface which ultimately help safeguard users and customers.
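As a small sketch of that last point – compiling functionality out rather than merely configuring it off – Rust’s Cargo feature flags can be used for exactly this. The feature name and functions below are invented for illustration; the point is that with the feature disabled, the legacy code simply isn’t in the binary for anyone to turn back on:

    // In Cargo.toml (illustrative):
    //   [features]
    //   legacy-crypto = []

    // Only compiled in if the build explicitly enables the "legacy-crypto" feature;
    // otherwise this function simply does not exist in the binary.
    #[cfg(feature = "legacy-crypto")]
    pub fn legacy_handshake(data: &[u8]) -> Vec<u8> {
        // Support for an older, weaker protocol would live here.
        data.to_vec()
    }

    // The modern path is always built.
    pub fn handshake(data: &[u8]) -> Vec<u8> {
        data.to_vec()
    }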

Reason 2 – saving users from themselves

The other reason is also about helping your users and customers: saving them from themselves. There is a dictum – somewhat unfair – within computing that “users are stupid”. While this is overstating the case somewhat, it is fairer to note that Murphy’s Law holds in computing as it does everywhere else: “Anything that can go wrong, will go wrong”. Specific to our point, some user somewhere can be counted upon to use the system that you are designing, implementing or operating in ways which are at odds with your intentions. IT security experts, in particular, know that we cannot stop people doing the wrong thing, but where there are opportunities to make it difficult to do the wrong thing, then we should embrace them.

Not adding features, disabling capabilities and restricting how your product is used might seem counter-intuitive, but if it leads to a safer user experience and fewer vulnerabilities associated with your product or project, then in the end, everyone benefits. And you can use the time to go and write some documentation. Or go to the beach. Enjoy!