Security, cost and usability (pick 2)

If we cannot be explicit that there is a trade-off, it’s always security that loses.

Everybody wants security: why wouldn’t you? Let’s role-play: you’re a software engineer on a project to create a security product. There comes a time in the product life-cycle when it’s nearly due, and, as usual, time is tight. So you’re in the regular project meeting and the product manager’s there, so you ask them what they want you to do: should you prioritise security? The product manager is very clear[1]: they will tell you that they want the product as secure as possible – and they’re right, because that’s what customers want. I’ve never spoken to a customer (and I’ve spoken to lots of customers over the years) who said that they’d prefer a product which wasn’t as secure as possible. But there’s a problem, which is that all customers also want their products tomorrow – in fact, most customers want their products today, if not yesterday.

Luckily, products can generally be produced more quickly if more resources are applied (though Frederick Brooks’ The Mythical Man Month tells us that simple application of more engineers is actually likely to have a negative impact), so the requirement for speed of delivery can be translated to cost. There’s another thing that customers want, however, and that is for products to be easy to use: who wants to get a new product and then, when it arrives, for it to take months to integrate or for it to be almost impossible for their employees to run it as they expect?

So, to clarify, customers want a security product to be the following:

  1. secure – security is a strong requirement for many enterprises and organisations[3], and although we shouldn’t ever use the word secure on its own, that’s still what customers want;
  2. cheap – nobody wants to pay more than the minimum they can;
  3. usable – everybody likes simple-to-use, easy-to-integrate applications.

There’s a problem, however, which is that out of the three properties above, you can only choose two for any application or project. You say this to your product manager (who’s always right, remember[1]), and they’ll say: “don’t be ridiculous! I want all three”.

But it just doesn’t work like that: why? Here’s my take on the reasons. Security, simply stated, is designed to stop people doing things. Stated from the point of view of a user, security’s job is to reduce usability. “Doing security” is generally about applying controls to actions in a system – whether by users or non-human entities – and the simplest way to apply it is “blanket security” – defaulting to blocking or denying actions. This is sometimes known as failing safe or failing closed.

Let’s take an example: you have a simple internal network in your office and you wish to implement a firewall between your network and the Internet, to stop malicious actors from probing your internal machines and to stop compromised systems on the internal network from communicating out to the Internet. “Easy,” you think, and set up a DENY ALL rule for connections originating outside the firewall, and a DENY ALL rule for connections originating inside the firewall, with the addition of an ALLOW rule for outgoing port 443 connections to ensure that people can use web browsers to make HTTPS connections (there’s a small sketch of this default-deny approach after the list below). You set up the firewall, and get ready to head home, knowing that your work is done. But then the problems arise:

  • it turns out that some users would like to be able to send email, which requires a different outgoing port number;
  • sending email often goes hand in hand with receiving email, so you need to allow incoming connections to your mail server;
  • one of your printers has been compromised, and is making connections over port 443 to an external botnet;
  • in order to administer the pay system, your accountant – who is not a full-time employee, and works from home – needs to access your network via a VPN, which requires the ability to accept an incoming connection.
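
To make the “fail closed” idea concrete, here’s a minimal sketch – in Rust, purely for illustration, since a real firewall is configured in its own rule language rather than written like this – of the default-deny policy described above: anything not explicitly allowed is denied.

```rust
// Illustrative only: a toy model of a default-deny ("fail closed") policy.
// The single ALLOW rule matches the scenario above: outgoing connections to port 443.

#[derive(PartialEq)]
enum Direction {
    Incoming,
    Outgoing,
}

struct Connection {
    direction: Direction,
    port: u16,
}

fn allowed(conn: &Connection) -> bool {
    // The only ALLOW rule: outgoing HTTPS.
    if conn.direction == Direction::Outgoing && conn.port == 443 {
        return true;
    }
    // Everything else falls through to the implicit DENY ALL.
    false
}

fn main() {
    let web = Connection { direction: Direction::Outgoing, port: 443 };
    let smtp = Connection { direction: Direction::Outgoing, port: 25 };
    let vpn = Connection { direction: Direction::Incoming, port: 1194 };

    for (name, conn) in [("web", &web), ("outgoing email", &smtp), ("accountant's VPN", &vpn)] {
        println!("{}: {}", name, if allowed(conn) { "allowed" } else { "denied" });
    }
}
```

Note how the outgoing email and the accountant’s VPN both hit the default deny: exactly the connections which generate the complaints in the list above.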

Your “easy” just became more difficult – and it’s going to get more difficult still as more users start encountering what they will see as your attempts to make their day-to-day revenue-generating lives more difficult.

This is a very simple scenario, but it’s clear that in order to allow people actually to use a system, you need to spend a lot more time understanding how security will interact with it, and how people’s experience of the measures you put in place will be impacted. Usability and user experience (“UX”) is a complex field on its own, but when you combine it with the extra requirements around security, things become even more tricky.

You need both to manage the requirements of users to whom the security measures should be transparent (“TLS encryption should be on by default”) and those who may need much more control (“developers need to be able to select the TLS cipher suite options when connecting to a vendor’s database”), so you need to understand the different personae[4] you are targeting for your application. You also need to understand the different failure modes, and what the correct behaviour should be: if authentication fails three times in a row, should the medical professional who is trying to get a rush blood test result be locked out of the system, or should the result be provided, and a message sent to an administrator, for example (see the sketch below)? There will be more decisions to make, based on what your application does, the security policies of your customers, their risk profiles, and more. All of these investigations and decisions take time, and time equates to money. What is more, they also require expertise – both in terms of security and usability – and that is in itself expensive.
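
As an illustration of that kind of decision, here’s a small, entirely hypothetical sketch of the three-failed-attempts question. The names and the two policies are mine, purely for illustration: which one is “correct” depends on the customer’s risk profile, not on the code.

```rust
// Hypothetical sketch: what should happen after repeated authentication failures?
// Two of the possible policies: hard lockout, or degrade gracefully (release the
// result but alert an administrator). Neither is being recommended here.

enum FailurePolicy {
    LockOut,
    ReleaseAndAlert,
}

fn handle_failed_attempts(failures: u32, policy: &FailurePolicy) -> &'static str {
    if failures < 3 {
        return "prompt for credentials again";
    }
    match policy {
        FailurePolicy::LockOut => "lock the account; no results released",
        FailurePolicy::ReleaseAndAlert => {
            "release the rush blood test result, notify an administrator"
        }
    }
}

fn main() {
    println!("{}", handle_failed_attempts(3, &FailurePolicy::LockOut));
    println!("{}", handle_failed_attempts(3, &FailurePolicy::ReleaseAndAlert));
}
```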

So, you have three options:

  1. choose usability and cost – you can prioritise usability and low cost, but you won’t be able to apply security as you might like;
  2. choose security and cost – in this case, you can apply more security to the system, but you need to be aware that usability – and therefore your customer’s acceptance of the system – will suffer;
  3. choose usability and security – I wish this was the one that we chose every time: you decide that you’re willing to wait longer or pay more for a more secure product, which people can use.

I’m not going to pretend that these are easy decisions, nor that they are always clear cut. And a product manager’s job is sometimes to make difficult choices – hopefully ones which can be re-balanced in a later release, but difficult choices nevertheless. It’s really important, however, that anyone involved in security – as an engineer, as a UX expert, as a product manager, as a customer – understands the trade-off here. If we cannot be explicit that there is a trade-off, then the trade-off will be made silently, and in my experience, it’s always security that loses.


1 – and right: product managers are always right[2].

2 – I know: I used to be a product manager.

3 – and the main subject of this blog, so it shouldn’t be a surprise that I’m writing about it.

4 – or personas if you really, really must. I got an “A” in Latin O level, and I’m not letting this one go.

Do I trust this package?

The area of software supply chain management is growing in importance.

This isn’t one of those police dramas where a suspect parcel arrives at the precinct and someone realises just in time that it may be a bomb – what we’re talking about here is open source software packages (though the impact on your application may be similar if you’re not sufficiently suspicious). Open source software is everywhere these days – which is great – but how can you be sure that you should trust the software you’ve downloaded to do what you want? The area of software supply chain management – of which this discussion forms a part – is fairly newly visible in the industry, but is growing in importance. We’re going to consider a particular example.

There’s a huge conversation to be had here about what trust means (see my article “What is trust?” as a starting point, and I have a forthcoming book on Trust in Computing and the Cloud for Wiley), but let’s assume that you have a need for a library which provides some cryptographic protocol implementation. What do you need to know, and what are your choices? We’ll assume, for now, that you’ve already made what is almost certainly the right choice, and gone with an open source implementation (see many of my articles on this blog for why open source is just best for security), and that you don’t want to be building everything from source all the time: you need something stable and maintained. What should be your source of a new package?

Option 1 – use a vendor

There are many vendors out there now who provide open source software through a variety of mechanisms – typically subscription. Red Hat, my employer (see standard disclosure), is one of them. In this case, the vendor will typically stand behind the fitness for use of a particular package, provide patches, etc. This is your easiest and best choice in many cases. There may be times, however, when you want to use a package which is not provided by a vendor, or not packaged by your vendor of choice: what do you do then? Equally, what decisions do vendors need to make about how to trust a package?

Option 2 – delve deeper

This is where things get complex. So complex, in fact, that I’m going to be examining them at some length in my book. For the purposes of this article, though, I’ll try to be brief. We’ll start with the assumption that there is a single maintainer of the package, and multiple contributors. The contributors provide code (and tests and documentation, etc.) to the project, and the maintainer provides builds – binaries/libraries – for you to consume, rather than your taking the source code and compiling it yourself (which is actually what a vendor is likely to do, though they still need to consider most of the points below). This is a library to provide cryptographic capabilities, so it’s fairly safe to assume that we care about its security. There are at least five specific areas which we need to consider in detail, all of them relying on the maintainer to a large degree (I’ve used the example of security here, though very similar considerations exist for almost any package): let’s look at the issues.

  1. build – how is the package that you are consuming created? Is the build process performed on a “clean” (that is, non-compromised) machine, with the appropriate compilers and libraries (there’s a turtles problem here!)? If the binary is created with untrusted tools, then how can we trust it at all? What measures does the maintainer take to ensure the “cleanness” of the build environment? It would be great if the build process were documented as a repeatable build, so that those who want to check it can do so.
  2. integrity – this is related to build, in that we want to be sure that the source code inputs to the build process – the code coming, for instance, from a git repository – are what we expect. If, somehow, compromised code is injected into the build process, then we are in a very bad position. We want to know exactly which version of the source code is being used as the basis for the package we are consuming so that we can track features – and bugs. As above, having a repeatable build is a great bonus here (there’s a small integrity-checking sketch after this list).
  3. responsiveness – this is a measure of how responsive – or not – the maintainer is to changes. Generally, we want stable features, tied to known versions, but a quick response to bug and (in particular) security patches. If the maintainer doesn’t accept patches in a timely manner, then we need to worry about the security of our package. We should also be asking questions like, “is there a well-defined security disclosure or vulnerability management process?” (See my article Security disclosure or vulnerability management?), and, if so, “is it followed”?
  4. provenance – all code is not created equal, and one of the things of which a maintainer should be keeping track is the provenance of contributors. If a large amount of code in a part of the package which provides particularly sensitive features is suddenly submitted by an unknown contributor with a pseudonymous email address and no history of contributions of security functionality, this should raise alarm bells. On the other hand, if there is a group of contributors employed by a company with a history of open source contributions and well-reviewed code who submit a large patch, this is probably less troublesome. This is a difficult issue to manage, and there are typically no definite “OK” or “no-go” signs, but the maintainer’s awareness and management of contributors and their contributions is an important point to consider.
  5. expertise – this is the most tricky. You may have a maintainer who is excellent at managing all of the points above, but is just not an expert in certain aspects of the functionality of the contributed code. As a consumer of the package, however, I need to be sure that it is fit for purpose, and that may include, in the case of the security-related package we’re considering, being assured that the correct cryptographic primitives are used, that bounds-checking is enforced on byte streams, that proper key lengths are used or that constant time implementations are provided for particular primitives. This is very, very hard, and the job of maintainer can easily become a full-time one if they are acting as the expert for a large and/or complex project. Indeed, best practice in such cases is to have a team of trusted, experienced experts who work either as co-maintainers or as a senior advisory group for the project. Alternatively, having external people or organisations (such as industry bodies) perform audits of the project at critical junctures – when a major release is due, or when an important vulnerability is patched, for instance – allows the maintainer to share this responsibility. It’s important to note that the project does not become magically “secure” just because it’s open source (see Disbelieving the many eyes hypothesis), but that the community, when it comes together, can significantly improve the assurance that consumers of a project can have in the packages which it produces.
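
As a tiny illustration of the integrity point, here is a sketch of checking that a downloaded artefact matches a published digest. It assumes the sha2 crate; the file name and the expected digest are placeholders, and the published digest has to come from somewhere you already trust – which is, of course, where most of the hard trust questions actually live.

```rust
// Sketch only: verify that a downloaded package matches a published SHA-256 digest.
// Assumes the sha2 crate; the artefact name and expected digest are placeholders.
use sha2::{Digest, Sha256};
use std::fs;

fn main() -> std::io::Result<()> {
    let bytes = fs::read("crypto-lib-1.2.3.tar.gz")?; // placeholder artefact
    let digest = Sha256::digest(&bytes);
    let actual: String = digest.iter().map(|b| format!("{:02x}", b)).collect();

    // Placeholder: in practice this comes from a signed release page, a vendor,
    // or a transparency log (such as the Rekor project mentioned below).
    let expected = "0000000000000000000000000000000000000000000000000000000000000000";

    if actual == expected {
        println!("digest matches: {}", actual);
    } else {
        println!("digest MISMATCH: got {}, expected {}", actual, expected);
    }
    Ok(())
}
```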

Once we consider these areas, we then need to work out how we measure and track each of them. Who is in a position to judge the extent to which any particular maintainer is fulfilling each of the areas? How much can we trust them? These are complex issues, about which much more needs to be written, but I am passionate about exposing the importance of explicit trust in computing, particularly in open source. There is work going on around open source supply chain management – for instance the (at the time of writing) new Project Rekor – but there is lots of work still to be done.

Remember, though: when you take a package – whether library or executable – please consider what you’re consuming, what about it you can trust, and on what assurances that trust is founded.

Security is (only) subjective

What aspects of security does it provide?

This article covers ground covered in more detail within (but is not quite an excerpt from) my forthcoming book on Trust in Computing and the Cloud for Wiley.

In 1985, the US Department of Defense [sic] published the “Orange Book”[1], officially named Trusted Computer System Evaluation Criteria. It was a guide to how to create a “trusted system”, and was hugely influential within the IT and security industry as a whole. Eight years later, in 1993, Dorothy Denning published a devastating critique of the Orange Book called A New Paradigm for Trusted Systems[2]. It is a brilliant step-by-step analysis of why the approach taken by the DoD was fundamentally flawed. Denning starts:

“The current paradigm for trusted computer systems holds that trust is a property of a system. It is a property that can be formally modeled, specified, and verified. It can be designed into a system using a rigorous design methodology.”

Later, she explains why this just doesn’t work in the real world:

“The current paradigm of treating trust as a property is inconsistent with the way trust is actually established in the world. It is not a property, but rather an assessment that is based on experience and shared through networks of people in the world-wide market. It is a declaration made by an observer, rather than a property of the observed.”

Demolishing the idea that trust is an inherent property of a system, and making it relational instead, changed the way that systems designed for security would be considered (and ushered in a new approach by the US Government and associated organisations, known as Common Criteria). Denning was writing about trust, but very similar issues exist around the concept of “security”. Too often, security is considered an inherent or intrinsic property of a system: “it’s secure”, someone will say, or “this fix will secure your computer”. It isn’t, and it won’t.

The first problem with such statements is that it’s not clear what “secure” means. There are a number of properties associated with systems that are relevant to security: three of the ones most often quoted are confidentiality, integrity and availability (which I discuss in more detail in the post The Other CIA: Confidentiality, Integrity and Availability). Specifying which of these you’re interested in removes the temptation just to say that something is “secure”, and if someone says, “it provides security”, we’re now in a position to start asking what that assertion actually means. Which aspects of security does it provide?

I also don’t think it makes sense to say that a system is “confidential” or “available” (there’s no obvious equivalent adjective for integrity – “integral” means something rather different): what we may be able to say is that it exhibits properties associated with confidentiality, integrity and availability, or better, that it has measures associated with it which are designed and intended to provide confidentiality, integrity and availability. These measures can be listed, examined and evaluated, hopefully against well-defined criteria.

This seems like a much better approach: not only have we addressed the suggestion that there is such a thing as “security” that we can apply to a system, but, following Denning, we have also challenged the suggestion that it is inherent to – or in – a system. Instead, we have introduced the alternative approach of describing security-related properties which can be subjected to scrutiny by the users of the system. This allows the type of relational understanding of security that Denning was proposing, but it also raises the possibility of differing parties having different views of the security (or not) of a system, depending on who they are, and how it is going to be used.

It turns out, when you think about it, that this makes a lot of sense. A laptop which provides sufficient confidentiality, integrity and availability protection for the computing needs of my retired uncle may not provide sufficient protections for the uses to which an operative of a government security service might put it[3]. Equally, a system which a telecommunications company runs in a physically protected data centre may well be considered to have appropriate security protections, whereas the same system, attached to a pole somewhere on a residential street, might not. The measures applied to provide the protections associated with the properties (e.g. 128-bit AES encryption for confidentiality) may be objectively specifiable, but the extent to which they provide “security” is not, because they are relative to specific requirements.

One last point, and it’s one which regular readers of my blog will be unsurprised to see: how can you assess the applicability of a system’s security properties to your requirements if it is not open? Open source helps significantly with security. Yes, there are assessment regimes to say that systems meet certain criteria – and sometimes these can be very helpful – but they are generally broad criteria, and difficult to apply to your specific use cases. Equally, most are just a starting point, and many such certified systems will require “exceptions” to be met in order to function in the real world, exceptions which require significant expertise to understand, judge and apply safely (that is, with appropriate levels of risk). If the system you want to use is open, then you, a party who you trust, or the wider community can evaluate the appropriateness of controls and measures, and make an informed decision about whether a system’s security properties are what you need. Without open source, this is impossible.


1 – it had an orange cover.

2 – Denning, Dorothy E. (1993) A New Paradigm for Trusted Systems [online]. Available at: https://www.researchgate.net/publication/234793347_A_New_Paradigm_for_Trusted_Systems [Accessed 3 Apr. 2020]

3 – I’m assuming that my uncle isn’t an operative of a government security service[4].

4 – or at least that his security needs are reduced in retirement[5].

5 – that is, if he has really retired…

Track and trace failure: a systems issue

The problem was not Excel, but that somebody used the wrong tools in a system.

Like many other IT professionals in the UK – and across the world, having spoken to some other colleagues in other countries – I was first surprised and then horrified as I found out more about the failures of the UK testing and track and trace systems. What was most shocking about the failure is not that it was caused by some alleged problem with Microsoft Excel, but that anyone thought this was a problem due to Excel. The problem was not Excel, but that somebody used the wrong tools in a system which was not designed properly, tested properly, or run properly. I have written many words about systems, and one article in particular seems relevant: If it isn’t tested, it doesn’t work. In it, I assert that a system cannot be said to work properly if it has not been tested, as a fully working system requires testing in order to be “working”.

In many software and hardware projects, in order to complete a piece of work, it has to meet one or more of a set of tests which allow it to be described as “done”. These tests may be actual software tests, or documentation, or just checks done by members of the team (other than the person who did the piece of work!), but the list needs to be considered and made part of the work definition. This “done” definition is as much part of the issue being addressed, functionality added or documentation being written as the actual work done itself.
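
To make “actual software tests” concrete for the Rust readers of this blog, here is a minimal, hypothetical sketch of what part of a “done” definition might look like: a small record-counting function and the unit test (run with cargo test) that has to pass before the work can be called complete. The function and its behaviour are mine, purely for illustration.

```rust
// Hypothetical example: a record-counting function and the test that forms part
// of its "done" definition. If the test isn't written and passing, the work isn't done.

/// Counts non-empty, non-header rows in a very simple CSV-like input.
fn count_records(data: &str) -> usize {
    data.lines()
        .skip(1) // skip the header row
        .filter(|line| !line.trim().is_empty())
        .count()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn counts_records_and_ignores_blank_lines() {
        let data = "id,result\n1,negative\n\n2,positive\n";
        assert_eq!(count_records(data), 2);
    }
}
```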

I find it difficult to believe that there was any such definition for the track and trace system. If there was, then it was not, I’m afraid, defined by someone who is an expert in distributed or large-scale systems. This may not have been the fault of the person who chose Excel for the task of recording information, but it is the fault of the person who was in charge of the system, because Excel is not, and never was, a fit application for what it was being used for. It does not have the scalability characteristics, the integrity characteristics or the versioning characteristics required. This is not the fault of Microsoft, any more than it would be the fault of Porsche if a 911T broke down because its owner filled it with diesel fuel, rather than petrol[1]. Any competent systems architect or software engineer, qualified to be creating such a system, would have known this: the only thing that seems possible is that whoever put together the system was unqualified to do so.

There seem to be several options here:

  1. the person putting together the system did not know they were unqualified;
  2. the person putting together the system realised that they were unqualified, but did not feel able to tell anyone;
  3. the person putting together the system realised that they were unqualified, but lied.

In any of the above, where was the oversight? Where was the testing? Where were the requirements? This was a system intended to safeguard the health of millions – millions – of people.

Who can we blame for this? In the end, the government needs to take some large measure of responsibility: they commissioned the system, which means that they should have come up with realistic and appropriate requirements. Requirements of this type may change over the life-cycle of a project, and there are ways to manage this: I recommend a useful book in another article, Building Evolutionary Architectures – for security and for open source. These are not new problems, and they are not unsolved problems: we know how to do this as a profession, as a society.

And then again, should we blame someone? Normally, I’d consider this a question out of scope for this blog, but people may die because of this decision – the decision not to define, design, test and run a system which was fit for purpose. At the very least, there are people who are anxious and worried about whether they have Covid-19, whether they need to self-isolate, whether they may have infected vulnerable friends or family. Blame is a nasty thing, but if it’s about holding people to account, then that’s what should happen.

IT systems are important. Particularly when they involve people’s health, but in many other areas, too: banking, critical infrastructure, defence, energy, even retail and entertainment, where people’s jobs will be at stake. It is appropriate for those of us with a voice to speak out, to remind the IT community that we have a responsibility to society, and to hold those who commission IT systems to account.


1 – or “gasoline” in some geographies.

Rust – my top 7 functions

Rust helpfully provides a set of “prelude” functions.

I’ve written a few articles about Rust now, including, most recently, Rust – my top 7 keywords, in which I promised a follow-up article. The keywords article talked about keywords from the std library, and this time I’m going to look at some functions from the Rust prelude. When you create a file in Rust and then compile it, you can (and will often need to) import external modules, typically with the use or extern keywords. Rust does a good thing for you, however, which is to import a set of useful modules without your even asking. This is known as the standard prelude. As usual, the Rust documentation has good information about this, and the latest version is found here.

Here are a few of my favourite functions from the standard prelude: useful ones to which I keep returning, and some which expose a little about how Rust “thinks” about the world.

  1. clone() – there are times when you need to use a variable somewhere where Rust’s rules of memory management make that difficult. Luckily, where the std::clone::Clone trait is implemented (which is pretty much everywhere), you can copy to a new variable. Don’t do this just to get around Rust’s memory management, which is there to help you, but it can be very useful when you actually need a new copy of something.
  2. format!() – OK, officially this is a macro, rather than a function, but it’s very useful. You probably know and use println!(), which is used to print to stdout: format!() does pretty much the same thing for strings which you don’t immediately want to output.
  3. is_ok() – to be honest, this is just an excuse for me to talk about std::result::Result, which is hugely useful, and allows you to create and then access success (Ok) or failure (Err) results. The is_ok() function will tell you whether what you have is an Ok result (and remember that the “k” is lower case – probably my most frequent syntax error when writing Rust). In order to understand Rust properly, you need to get your head around Result: it’s used extensively, and you should be using it, too.
  4. is_some() – like Result, std::option::Option is something you’re likely to use a lot when you’re writing Rust. Given that there’s no equivalent to the Null that you find in many other languages, what can you do when you don’t have a value generated to return? The answer is that you can use an Option, which you can give a None value: in other cases, you can provide a value within a Some() wrapper. The is_some() function checks whether there is a value – if there is, you can use the unwrap() function to access it (see below, and the sketch after this list). Like Result, get used to using Option: you’ll see it all over the place.
  5. iter() – many different collections can be iterated over, and the iter() function allows you to access all of the values very simply. You may sometimes want to use the related functions into_iter() and iter_mut() (for mutable values, unsurprisingly), but iter() is what you’ll be using the most, and you can chain all sorts of useful functions onto it.
  6. panic!() – there are times when your program gets input, or generates output, which it really shouldn’t. When std::result::Result isn’t good enough, and you can’t propagate errors up through your execution stack, because this isn’t the sort of error that should be handled, you can force your program to stop with panic!() (another macro, if we’re honest), and add an error message to provide more information.
  7. unwrap() – if you’ve got a std::option::Option or a std::result::Result, and you want to access what it contains, then you’ll want to use unwrap(), which will panic if there’s a problem (or expect() if you want to be able to add a specific message).
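
Here’s a minimal sketch pulling several of these together – clone(), format!(), Result with is_ok(), Option with is_some(), iter() and unwrap(). The parse_port() helper and the values are purely illustrative.

```rust
// A sketch pulling several of the prelude items above together: clone(),
// format!(), Result with is_ok(), Option with is_some(), iter() and unwrap().
// The parse_port() helper is purely illustrative.

fn parse_port(input: &str) -> Result<u16, std::num::ParseIntError> {
    // str::parse returns a Result, so callers can use is_ok(), unwrap() or match on it.
    input.trim().parse::<u16>()
}

fn main() {
    let raw = String::from("443");
    let copy = raw.clone(); // a real copy, not just another reference

    let parsed = parse_port(&copy);
    if parsed.is_ok() {
        let msg = format!("parsed port: {}", parsed.unwrap());
        println!("{}", msg);
    }

    // Option: find() returns Some(&value) if anything matches, None otherwise.
    let ports = vec![22u16, 80, 443];
    let found = ports.iter().find(|&&p| p == 443);
    if found.is_some() {
        println!("found port {}", found.unwrap());
    }
}
```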

Another fairly basic article, but if it’s useful for people starting to get their heads around Rust, then I’m happy. I plan to continue looking at some of the more basic language components in Rust and some basic gotchas: keep an eye out.

Taking some time

I’m going to practice what I preach, and not write.

I’m going to practice what I preach, this week, and not write a full article. I’ve had a stressful and busy few weeks, including needing to spend some extra time with the family (nothing scary or earth-shattering – we just needed some family time), and I think the best thing for me to do today is not spend time writing an article. Let me point you instead at some I’ve written in the past.

On self-care:

On security:

On trust:

Keep safe, and look after yourself, dear reader!

15 steps to prepare for (another) lockdown

What steps can we be taking to prepare for what seems likely now – a new lockdown?

The kids are back in school, there are people in shops and restaurants, and traffic is even beginning to get back to something like normal levels. I’m being deployed as a CFR (community first responder) to more incidents, as the ambulance service gets better at assessing the risks to me and patients. And the colds and sneezes are back.

Of course they are: it’s that time of year. And where are they spreading from? Where do they usually spread from? School pupils. Both of mine have picked up minor cold symptoms, but, luckily, nothing suggesting Covid-19. The school they attend is following government advice by strongly recommending that pupils wear masks in communal areas, encouraging social distancing and providing hand sanitiser outside each classroom, to be used on entry. Great! That should limit Covid-19. And it should… but the sore throats, coughing and sneezing started within days of their return to school. I’m no expert, but it seems likely (and many experts agree) that schools will act as transmission vectors, and that the rates of infection of Covid-19 will start rising again. And yes, the UK already has an R figure well above 1.

Apart from ranting about how this was always likely to happen, and how the relevant authorities should have taken more steps to reduce the impact (both true), what steps can we be taking to prepare for what seems likely now – a new lockdown?

Physical steps

There are a number of things that I’ve done or plan to do to prepare. Some of them aren’t because I necessarily expect a full lockdown, but some because, if I feel ill and unable to leave the house, it’s best to be ready.

  • get provisions – what do we need in for food and drink? We should obviously not go overboard on alcohol, but if you like a glass of wine from time to time, get a few bottles in, maybe a nice one for a special occasion. Get dried food and cooking oil in, and stock the freezer with the rest. Oh, and chocolate. Always chocolate.
  • household supplies – remember that run on random items at the beginning of the first lockdown? Let’s avoid that this time: get toilet paper, kitchen roll, cleaning materials and tissues (for when we feel really poorly).
  • work supplies – most of us are used to working at home now, but if you’ve got a dodgy monitor, a printer in need of paper, or a webcam that’s on its last legs, now is a good time to sort them out: there’s a good chance that these might become difficult to get hold of (again).
  • fitness preparations – if the gyms close again, what will you do? Even if we’re allowed outside more for exercise this time round, those warm jogging shorts that you wore in the spring and summer are not what you want to be wearing in the sleet and snow, so buy whatever gear you need for indoor or outdoor use now.
  • get a haircut – or get hold of some hairdressing supplies. Many of us discovered that we or our family members had some skills in this department, but better to get a cut in preparation, right?
  • books – yes, there are alternatives to physical books: you can read on your phone or another device these days. But I like a physical book, and I wish I’d stocked up last time. Go to your friendly neighbourhood book store – they need your business right now – and buy a few books.
  • wood – we live in an old house, and have wood-burning stoves to supplement our heating. Get wood in now to avoid getting cold in the winter!
  • pay the bills – you may want or need some extra luxuries later, as the weather sets in and lockdown takes hold. Get the bills paid up front, so there are no nasty surprises and you can budget a few treats for yourself later.

Psychological steps

Just as important as the physical – more, probably – is psychological preparation. That doesn’t mean that the steps above aren’t important: in fact, they’re vital to allow you to have space to consider the psychological preparation, which is difficult if you’re concerned or unsure about your physical safety and environment.

Prioritise – if you can, work out now what you’re going to prioritise, and when. Sometimes work may come first (barring an emergency), sometimes family, sometimes you. Thinking about this now is a good plan, so that you can set some rules for yourself and for those around you.

Prepare your family – this isn’t just about the priorities you’ve already worked on in the previous point, but also more generally. Many of us struggled with lockdown, and although we might think that it’ll be easier second time round, the very fact that it’s happened again is likely to cause us more stress in some ways.

Sleep – sleep now: bank it while you can! Sleep when lockdown happens, too. This was something which was a surprise to me: how tired I got. Not going out is, it turns out, tiring. This is because of stress – which was a clear outcome of the first lockdown – and stress can make you very tired. So sleep when you can, and don’t just try to “power through”.

List what you can control and what you can’t – a classic stressor is feeling overwhelmed with things that we can’t control. And there will definitely be things that we really can’t – how long it takes, which of my friends get sick, issues such as that. But equally, there are things that we can control: when I stop for a cup of tea (or coffee, I suppose), who I call to catch up with on the phone, what I have for supper. In order to reduce stress, list things that you can control, and which you can’t, and try to accept the latter. Doing so won’t remove all stress, but it should help you manage your response to that stress, which can help you reduce it.

Be ready to feel weak – you will feel sad and depressed and ill and fed up from time to time. This is normal, and human, and it does not make you a failure or a bad employee, family member, friend or person. Accept it, and be ready to move on when you can.

Think of others – other people will be struggling, too: your family, friends, colleagues and neighbours. Spare them a thought, and think how you can help, even if it’s just with a quick text, a family videochat or a kind word from time to time. Being nice to people can make you feel good, too – and if you’re lucky, they’ll reciprocate, so everyone wins twice!

Be ready to put yourself first – sometimes, you need to step back and say “enough”. This isn’t always easy, but it’s sometimes necessary. If you begin to realise that things are coming unstuck, and that you’re going to have to disengage, let others around you know if you can. Don’t say “I hope it’s OK if…” or “I was thinking about, would it be OK for me to…”. Instead, let them know your intentions: “I’m going to need 5 minutes to myself”, or “I need to drop from this meeting for a while”. This won’t always be easy, but if you can prepare them, and yourself, for taking a little time, it’s going to be better for everyone in the end: you, because you will recover (if only for a while), and them, because they’ll get a healthier, more efficient and less stressed you.

Measured and trusted boot

What they give you – and don’t.

Sometimes I’m looking around for a subject to write about, and realise that there’s one which I assume that I’ve covered, but, on searching, discover that I haven’t. One such pair is “measured boot” and “trusted boot” – sometimes, misleadingly, referred to as “secure boot”. There are specific procedures which use these terms with capital letters – e.g. Secure Boot – which I’m going to try to avoid discussing in this post. I’m more interested in the generic processes, and a major potential downfall, than in trying to go into the ins and outs of specifics. What follows is a (heavily edited) excerpt from my forthcoming book on Trust in Computing and the Cloud for Wiley.

In order to understand what measured boot and trusted boot aim to achieve, let’s have a look at the Linux virtualisation stack: the components you run if you want to be using virtual machines (VMs) on a Linux machine. This description is arguably over-simplified, but we’re not interested here in the specifics (as I noted above), but in what we’re trying to achieve. We’ll concentrate on the bottom four layers (at a rather simple level of abstraction): CPU/management engine; BIOS/EFI; Firmware; and Hypervisor, but we’ll also consider a layer just above the CPU/management engine, where we interpose a TPM (a Trusted Platform Module) and some instructions for how to perform one of our two processes. Once the system starts to boot, the TPM is triggered, and then starts its work (alternative roots of trust such as HSMs might also be used, but we will use TPMs, the most common example in this context, as our example).

In both cases, the basic flow starts with the TPM performing a measurement of the BIOS/EFI layer. This measurement involves checking the binary instructions to be carried out by this layer, and then creating a cryptographic hash of the binary image. The hash that’s produced is then stored in one of several “PCR slots” in the TPM. These can be thought of as pieces of memory which can be read later on, either by the TPM for its purposes, or by entities external to the TPM, but which cannot be changed once they have been written. This provides assurances that once a value is written to a PCR by the TPM, it can be considered constant for the lifetime of the system until power-off or reboot.

After measuring the BIOS/EFI layer, the next layer (Firmware) is measured. In this case, the resulting hash is combined with the previous hash (which was stored in the PCR slot) and then itself stored in a PCR slot. The process continues until all of the layers involved in the process have been measured, and the results of the hashes stored. There are (sometimes quite complex) processes to set up the original TPM values (I’ve missed out some of the more low-level steps in the process for simplicity) and to allow (hopefully authorised) changes to the layers for upgrading or security patching, for example. What this process – “measured boot” – allows is for entities to query the TPM after the process has completed, and check whether the values in the PCR slots correspond to the expected values, pre-calculated with “known good” versions of the various layers – that is, pre-checked versions whose provenance and integrity have already been established. Various protocols exist to allow parties external to the system to check the values (e.g. via a network connection) that the TPM attests to being correct: the process of receiving and checking such values from an external system is known as “remote attestation”.
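
As an illustration of that hash-chaining, here is a minimal sketch of the “extend” operation, assuming the sha2 crate for SHA-256. A real TPM does this in hardware, against its own PCR banks and with an event log alongside; the layer contents here are placeholders.

```rust
// A sketch of the "extend" operation described above, assuming the sha2 crate
// for SHA-256. A real TPM does this in hardware; the layer contents are placeholders.
use sha2::{Digest, Sha256};

// New PCR value = hash(previous PCR value || measurement of the next layer).
fn extend(pcr: &[u8], measurement: &[u8]) -> Vec<u8> {
    let mut hasher = Sha256::new();
    hasher.update(pcr);
    hasher.update(measurement);
    hasher.finalize().to_vec()
}

fn main() {
    // A SHA-256 PCR bank starts zeroed at power-on.
    let mut pcr = vec![0u8; 32];

    // Placeholder "layers", measured in boot order.
    let layers: [&[u8]; 3] = [b"bios/efi image", b"firmware image", b"hypervisor image"];
    for layer in layers {
        let measurement = Sha256::digest(layer);
        pcr = extend(&pcr, &measurement);
    }

    let hex: String = pcr.iter().map(|b| format!("{:02x}", b)).collect();
    println!("final PCR value: {}", hex);
    // A verifier compares this against a value pre-calculated from "known good"
    // versions of each layer: change any layer, or the order, and the value changes.
}
```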

This process – measured boot – allows us to find out whether the underpinnings of our system – the lowest layers – are what we think they are, but what if they’re not? Measured boot (unsurprisingly, given the name) only measures, but doesn’t perform any other actions. The alternative, “trusted boot” goes a step further. When a trusted boot process is performed, the process not only measures each value, but also performs a check against a known (and expected!) good value at the same time. If the check fails, then the process will halt, and the booting of the system will fail. This may sound like a rather extreme approach to take to a system, but sometimes it is absolutely the right one. Where the system under consideration may have been compromised – which is one likely inference that you can make from the failure of a trusted boot process – then it is better that it not be available at all than to be running based on flawed expectations.

This is all very well if I’m the owner of the system which is being measured, have checked all of the various components being measured (and the measurements), and so can be happy that what’s being booted is what I want[1]. But what if I’m actually using a system on the cloud, for instance, or any system owned and managed by someone else? In that case, I’m trusting the cloud provider (or owner/manager) with two things:

  1. to do all the measuring correctly, and to report correct results to me;
  2. actually to have built something which I should be trusting in the first place!

This is the problem with the nomenclature “trusted boot”, and, even worse, “secure boot”. Both imply that an absolute, objective property of a system has been established – it is “trusted” or “secure” – when this is clearly not the case. Obviously, it would be unfair to expect the designers of such processes to name them after the failure states – “untrusted boot” or “insecure boot” – but unless I can be very certain that I trust the owner of the system to do step 2 entirely correctly (and in my best interests, as user of the system, rather than theirs, as owner) then we can make no stronger assertions. There is an enormous temptation to take a system which has gone through a trusted boot process and to label it a “trusted system”, where the very best assertion we can make is that the particular layers measured in the measured and/or trusted boot process have been asserted to be those which the process expected to be present. Such a process says nothing at all about the fitness of the layers to provide assurances of behaviour, nor about the correctness (or fitness to provide assurances of behaviour) of any subsequent layers on top of those.

It’s important to note that designers of TPMs are quite clear what is being asserted, and that assertions about trust should be made carefully and sparingly. Unluckily, however, the complexities of systems, the general low level of understanding of trust, and the complexities of context and transitive trust make it very easy for designers and implementors of systems to do the wrong thing, and to assume that any system which has successfully performed a trusted boot process can be considered “trusted”. It is also extremely important to remember that TPMs, as hardware roots of trust, offer us one of the best mechanisms we have for establishing a chain of trust in systems that we may be designing or implementing, and I plan to write an article about them soon.


1 – although this turns out to be much harder to do than you might expect!

Rust – my top 7 keywords

A few useful keywords from the Rust standard library.

I’ve been using Rust for a few months now, writing rather more of it than I expected – though quite a lot of that has been thrown away as I’ve learnt, improved what I’m writing, and taken on some more complex tasks beyond what I’d originally intended. I still love it, and thought that today might be a good day to talk about some of the important keywords that come up again and again in Rust, and provide my personal summary of what they do, why you need to think about how you use them, and anything else that’s useful, particularly for people who are new to Rust, or coming from another language (such as Java – see my previous article on the subject, 5 Rust reflections (from Java)). Without further ado, let’s get going. A good place for further information is always the official Rust documentation – you’ll probably want to start with the std library.

  1. const – you get to declare constants with “const”, and you should. This isn’t rocket science, but do declare with const, and if you’re going to use constants across different modules, then do the right thing and create a lib.rs file (the Rust default) into which you can put all of these, with a nicely named module. I’ve had clashes of const variable names (and values!) across different files in different modules, simply because I was too lazy to do anything other than cut and paste across files, when I could have saved myself lots of work by simply creating a shared module.
  2. let – you don’t always need to declare a variable with a let statement, but your code will be clearer when you do. What’s more, always add the type if you can. Rust will do its very best to guess what it should be, but may not always be able to do so at compile time (in which case the compiler will tell you), or may even not necessarily do what you expect. In the latter case, it’s always simpler for the compiler to complain that the function you’re assigning from (for instance) doesn’t match the declaration than for Rust to try to help you do the wrong thing, only for you to have to spend ages debugging elsewhere.
  3. match – match was new to me, and I love it. It’s not dissimilar to “switch” in other languages, but is used extensively in Rust. It makes for legible code, and Cargo will have a good go at warning you if you do something foolish (such as miss out possible cases). My general rule of thumb, where I’m managing different options or doing branching, is to ask whether I can use match (see the sketch after this list). If I can, I will.
  4. mut – when declaring a variable, if it’s going to change after its initialisation, then you need to declare it mutable. A common mistake is to declare something as mutable when it isn’t changed – but the compiler will warn you about that. If you get a warning from Cargo that a mutable variable isn’t changed when you think it is, then you may wish to check the scope of the variable, and check that you’re using the right version.
  5. return – I actually very rarely use return, which is for returning a value from a function, because it’s usually simpler and clearer to read if you just provide the value (or the function providing the return value) at the end of the function, as the last line. Warning: you will forget, on many occasions, that this last line must not end with a semicolon: if you add one, the compiler won’t be happy.
  6. unsafe – does what it says on the tin: if you want to do things where Rust can’t guarantee memory safety, then you’re going to need to use this keyword. I have absolutely no intention of declaring any of my Rust code unsafe now or at any point in the future: one of the reasons Rust is so friendly is because it stops this sort of hackery. If you really need to do this, think again, think yet again, and then redesign. Unless you’re a seriously low-level systems programmer, avoid.
  7. use – when you want to use an item – struct, variable, function, etc. from another crate, then you need to declare it at the beginning of the block where you’ll be using it. Another common mistake is to do this, but fail to add the crate (preferably with minimum version number) to the Cargo.toml file.
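
Here’s a minimal sketch – the constant, function and values are mine, purely for illustration – pulling together several of these keywords: const, let with an explicit type, mut, match, and a function whose return value is simply its last expression (no return keyword, no trailing semicolon).

```rust
// A sketch pulling together several of the keywords above. The names and logic
// are illustrative only.

const MAX_RETRIES: u32 = 3;

fn describe(attempt: u32) -> &'static str {
    // match rather than an if/else chain: the compiler checks the cases are covered
    match attempt {
        0 => "not yet tried",
        1..=MAX_RETRIES => "retrying",
        _ => "giving up",
    } // no semicolon: this match expression is the function's return value
}

fn main() {
    let mut attempt: u32 = 0; // mut, because we change it below
    while attempt <= MAX_RETRIES + 1 {
        println!("attempt {}: {}", attempt, describe(attempt));
        attempt += 1;
    }
}
```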

This isn’t the most complicated article I’ve ever written, I know, but it’s the sort of article which I would have appreciated finding when I was starting to learn Rust. I plan to create similar articles on key functions and other Rust must-knows: let me know if you have any requests!