Track and trace failure: a systems issue

The problem was not Excel, but that somebody used the wrong tools in a system.

Like many other IT professionals in the UK – and, having spoken to colleagues elsewhere, across the world – I was first surprised and then horrified as I found out more about the failures of the UK testing and track and trace systems. What was most shocking about the failure was not that it was caused by some alleged problem with Microsoft Excel, but that anyone thought the problem was due to Excel at all. The problem was not Excel, but that somebody used the wrong tools in a system which was not designed properly, tested properly, or run properly. I have written many words about systems, and one article in particular seems relevant: If it isn’t tested, it doesn’t work. In it, I assert that a system cannot be said to work properly if it has not been tested, as a fully working system requires testing in order to be “working”.

In many software and hardware projects, a piece of work is only complete when it meets one or more of a set of tests which allow it to be described as “done”. These tests may be actual software tests, or documentation, or just checks done by members of the team (other than the person who did the piece of work!), but the list needs to be considered and made part of the work definition. This definition of “done” is as much a part of the work as the issue being addressed, the functionality added or the documentation written.

I find it difficult to believe that there was any such definition for the track and trace system. If there was, then it was not, I’m afraid, defined by someone who is an expert in distributed or large-scale systems. This may not have been the fault of the person who chose Excel for the task of recording information, but it is the fault of the person who was in charge of the system, because Excel is not, and never was, a fit application for what it was being used for. It does not have the scalability characteristics, the integrity characteristics or the versioning characteristics required. This is not the fault of Microsoft, any more than it would be the fault of Porsche if a 911T broke down because its owner filled it with diesel fuel, rather than petrol[1]. Any competent systems architect or software engineer, qualified to be creating such a system, would have known this: the only conclusion that seems possible is that whoever put together the system was unqualified to do so.

There seem to be several options here:

  1. the person putting together the system did not know they were unqualified;
  2. the person putting together the system realised that they were unqualified, but did not feel able to tell anyone;
  3. the person putting together the system realised that they were unqualified, but lied.

In any of the above, where was the oversight? Where was the testing? Where were the requirements? This was a system intended to safeguard the health of millions – millions – of people.
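The reported trigger – case records silently dropped when data was pushed through an older Excel file format with a hard row limit – is exactly the sort of thing that the most basic ingestion check would have caught. As a purely illustrative sketch (in Rust, with the file name and expected count entirely hypothetical), here is the kind of sanity test a properly designed and tested pipeline would run before accepting a day’s data:

    use std::fs::File;
    use std::io::{BufRead, BufReader};

    fn main() -> std::io::Result<()> {
        // Hypothetical figure and file name, purely for illustration: in a real
        // pipeline the expected count would be supplied by the upstream system.
        let expected: usize = 68_000;

        let reader = BufReader::new(File::open("daily_cases.csv")?);
        // Count data rows, ignoring the header line.
        let records = reader.lines().count().saturating_sub(1);

        if records != expected {
            eprintln!(
                "ERROR: expected {} records but read {} - aborting import",
                expected, records
            );
            std::process::exit(1);
        }

        println!("Record count matches ({}) - proceeding with import", records);
        Ok(())
    }

This is not sophisticated engineering: it is the “does the number of records I received match the number I was sent?” question, asked automatically, every time.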

Who can we blame for this? In the end, the government needs to take some large measure of responsibility: they commissioned the system, which means that they should have come up with realistic and appropriate requirements. Requirements of this type may change over the life-cycle of a project, and there are ways to manage this: I recommend a useful book in another article, Building Evolutionary Architectures – for security and for open source. These are not new problems, and they are not unsolved problems: we know how to do this as a profession, as a society.

And then again, should we blame someone? Normally, I’d consider this a question out of scope for this blog, but people may die because of this decision – the decision not to define, design, test and run a system which was fit for purpose. At the very least, there are people who are anxious and worried about whether they have Covid-19, whether they need to self-isolate, whether they may have infected vulnerable friends or family. Blame is a nasty thing, but if it’s about holding people to account, then that’s what should happen.

IT systems are important. Particularly when they involve people’s health, but in many other areas, too: banking, critical infrastructure, defence, energy, even retail and entertainment, where people’s jobs will be at stake. It is appropriate for those of us with a voice to speak out, to remind the IT community that we have a responsibility to society, and to hold those who commission IT systems to account.


1 – or “gasoline” in some geographies.

They won’t get security right

Save users from themselves: make it difficult to do the wrong thing.

I’m currently writing a book on Trust in computing and the cloud – I’ve mentioned it before – and I confidently expect to reach 50% of my projected word count today, as I’m on holiday, have more time to write it, and got within about 850 words of the goal yesterday. Little boast aside, one of the topics that I’ve been writing about is the need to consider the contexts in which the systems you design and implement will be used.

When we design systems, there’s a temptation – a laudable one, in many cases – to provide all of the features and functionality that anyone could want, to implement all of the requests from customers, to accept every enhancement request that comes in from the community. Why is this? Well, for a variety of reasons, including:

  • we want our project or product to be useful to as many people as possible;
  • we want our project or product to match the capabilities of another competing one;
  • we want to help other people and be seen as responsive;
  • it’s more interesting implementing new features than marking an existing set complete, and settling down to bug fixing and technical debt management.

These are all good – or at least understandable – reasons, but I want to argue that there are times that you absolutely should not give in to this temptation: that, in fact, on every occasion that you consider adding a new feature or functionality, you should step back and think very hard whether your product would be better if you rejected it.

Don’t improve your product

This seems, on the face of it, to be insane advice, but bear with me. One of the reasons is that many techies (myself included) are more interested in getting code out of the door than weighing up alternative implementation options. Another reason is that every opportunity to add a new feature is also an opportunity to deal with technical debt or improve the documentation and architectural information about your project. But the other reasons are to do with security.

Reason 1 – attack surface

Every time that you add a feature, a new function, a parameter on an interface or an option on the command line, you increase the attack surface of your code. Whether by fuzzing, targeted probing or careful analysis of your design, the larger the attack surface of your code, the more opportunities there are for attackers to find vulnerabilities, create exploits and mount attacks on instances of your code. Strange as it may seem, adding options and features for your customers and users can often be doing them a disservice: making them more vulnerable to attacks than they would have been if you had left well enough alone.

If we do not need an all-powerful administrator account after initial installation, then it makes sense to delete it after it has done its job. If logging of all transactions might yield information of use to an attacker, then it should be disabled in production. If older versions of cryptographic functions might lead to protocol attacks, then it is better to compile them out than just to turn them off in a configuration file. All of these lead to reductions in attack surface which ultimately help safeguard users and customers.
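To make the last of those concrete, here is a sketch of what “compiling it out” might look like, using Rust-style feature flags (the feature name and functions are mine, purely for illustration):

    // Hypothetical feature name ("legacy-tls") and function names, for illustration.
    // The legacy code below only exists in the binary at all if it was built with:
    //     cargo build --features legacy-tls

    #[cfg(feature = "legacy-tls")]
    pub fn negotiate_legacy_tls() {
        // ... support for old protocol versions would live here ...
    }

    fn negotiate_modern() {
        // ... current protocol versions only ...
    }

    pub fn negotiate() {
        // The modern path is always present.
        negotiate_modern();

        // When the feature is disabled at build time, this call and the function
        // behind it are absent from the compiled artefact.
        #[cfg(feature = "legacy-tls")]
        negotiate_legacy_tls();
    }

The point is that when the feature is not enabled at build time, the legacy code is simply not there: there is nothing for an attacker – or a well-meaning administrator – to switch back on.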

Reason 2 – saving users from themselves

The other reason is also about helping your users and customers: saving them from themselves. There is a dictum – somewhat unfair – within computing that “users are stupid”. While this overstates the case, it is fairer to note that Murphy’s Law holds in computing as it does everywhere else: “Anything that can go wrong, will go wrong”. Specific to our point, some user somewhere can be counted upon to use the system that you are designing, implementing or operating in ways which are at odds with your intentions. IT security experts, in particular, know that we cannot stop people doing the wrong thing, but where there are opportunities to make it difficult to do the wrong thing, then we should embrace them.
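One concrete way to make it difficult to do the wrong thing is in API design: make the safe behaviour the default, and make the unsafe path something that has to be requested loudly and explicitly. Here is a minimal Rust sketch (the type and method names are mine, not taken from any particular library):

    // Hypothetical builder type: connections verify certificates by default,
    // and turning verification off requires a method whose name is hard to miss.
    pub struct ConnectionBuilder {
        verify_certificates: bool,
    }

    impl ConnectionBuilder {
        pub fn new() -> Self {
            // Safe by default: the caller has to do nothing to get verification.
            ConnectionBuilder { verify_certificates: true }
        }

        /// The deliberately alarming name means that any caller (and any code
        /// reviewer) can see at a glance that a dangerous choice has been made.
        pub fn dangerously_disable_certificate_verification(mut self) -> Self {
            self.verify_certificates = false;
            self
        }

        pub fn connect(self, host: &str) -> Result<(), String> {
            if !self.verify_certificates {
                eprintln!("WARNING: connecting to {} without certificate verification", host);
            }
            // ... real connection logic would go here ...
            Ok(())
        }
    }

Anyone reading the calling code can hardly miss that the unsafe path was chosen, which is exactly the point.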

Not adding features, disabling capabilities and restricting how your product is used might seem counter-intuitive, but if it leads to a safer user experience and fewer vulnerabilities associated with your product or project, then in the end, everyone benefits. And you can use the time to go and write some documentation. Or go to the beach. Enjoy!

Enarx news: WebAssembly + SGX

We can now run a WebAssembly workload in a TEE

Regular readers will know that I’m one of the co-founders of the Enarx project, and that I write about our progress fairly frequently in other articles on this blog.

I won’t spend time going over what the project is about here, as you can read more in those articles, and lots more over at our GitHub repository (https://enarx.io), but the aim of the project is to allow you to run sensitive workloads in Trusted Execution Environments (TEEs) from various silicon vendors. The runtime we’ll be providing is WebAssembly (using the WASI interface), and the TEEs that we’re targeting to start with are AMD’s SEV and Intel’s SGX – though you, the client running the workload, shouldn’t care which is being used at any particular point, as we abstract all of that away.

It’s a complicated project, with lots of moving parts and a fairly complex full-stack architecture. Things have been moving really fast recently, partly due to a number of new contributors to the project (which recently reached 150 stars on our main GitHub repository), and one of the important developments is the production of a new component architecture diagram, included below.

I’ve written before about how important it is to have architectural diagrams for projects (and particularly open source projects), and so I’m really pleased that we have this updated – and much more detailed – version of the diagram. I won’t go into it in detail here, but the only trusted components (from the point of view of Enarx and a client using Enarx) are those in green: the Enarx client agent, the Attestation measurement database and the TEE instance containing the Application, Wasm, Shim, etc., which we call the Keep.

It’s only just over a couple of months since we announced our most recent milestone: running binaries within TEEs, including both SEV and SGX. This was a huge deal for us, but then, at the end of last week, one of the team, Daiki Ueno, managed to get a WebAssembly binary loaded using the Wasm/WASI layers and to run it. In other words, we can now run a WebAssembly workload in a TEE. Over the past 10 minutes, I’ve just managed to replicate that on my own machine (for my own interest, not that I doubted the results!). Circled, below, are the components which are involved in this feat.

I should make it clear that none of these components is complete or stable – they are all still very much in development, and, in fact, even the running test yielded around 40 lines of error messages! – but the fact that we’ve got a significant part of the stack working together is an enormous deal. The Keep loader creates an SGX TEE instance, loads the Shim and the Wasm/WASI components into the Keep and starts them running. The App Loader (one of the Wasm components) loads the Application, and it runs.
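To give a flavour of what a workload can look like, here is a trivial stand-in (my own example, not one of the project’s test binaries): an ordinary Rust program which, compiled for the wasm32-wasi target, produces a WebAssembly binary of the kind the Wasm/WASI layers can load:

    // A deliberately boring workload: no TEE-specific or Enarx-specific code.
    // Build with: rustup target add wasm32-wasi && cargo build --target wasm32-wasi
    fn main() {
        // Under WASI, stdout is provided to the module by the runtime,
        // not directly by the host operating system.
        println!("Hello from a WebAssembly workload");
    }

The interesting part is everything the program does not contain: nothing about SEV, SGX or Keeps – the runtime and the Enarx stack are meant to take care of all of that.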

It’s not just an enormous deal due to the amount of work that it’s taken and the great teamwork that’s been evident throughout – though those are true as well – but also because it means we’re getting much closer to having something that people can try out for themselves. The steps to just replicating the test were lengthy and complex, and my next step is to see if I can make some changes and create my own test, which may be quite a marathon, but I’d like to stress that my technical expertise lies closer to the application layer than the low-level pieces, which means that if I can make this work now (with a fair amount of guidance), then we shouldn’t be too far off being in a position where pretty much anyone with a decent knowledge of the various pieces can do the same.

We’re still quite a way off being able to provide a simple file with 5-10 steps to creating and running your own workload in a Keep, but that milestone suddenly feels like it’s in sight. In the meantime, there’s loads of design work, documentation, infrastructure automation, testing and good old development to be done: fancy joining us?

No security without an architecture

Your diagrams don’t need to be perfect. But they do need to be there.

I attended a virtual demo this week. It didn’t work, but none of us was stressed by that: it was an internal demo, and these things happen. Luckily, the members of the team presenting the demo had lots of information about what it would have shown us, and a particularly good architectural diagram to discuss. We’ve all been in the place where the demo doesn’t work, and all felt for the colleague who was presenting the slidedeck, and on whose screen a message popped up a few slides in, saying “Demo NO GO!” from one of her team members.

After apologies, she asked if we wanted to bail completely, or to discuss the information they had to hand. We opted for the latter – after all, most demos which aren’t foregrounding user experience components don’t show much beyond terminal windows that most of us could fake up in half an hour or so anyway. She answered a couple of questions, and then I piped up with one about security.

This article could have been about the failures in security in a project which was showing an early demo: another example of security being left till late (often too late) in the process, at which point it’s difficult and expensive to integrate. However, it’s not. It was clear that thought had been given to specific aspects of security, both on the network (in transit) and in storage (at rest), and though there was probably room for improvement (and when isn’t there?), a team member messaged me more documentation during the call which allowed me to understand the choices the team had made.

What this article is about is the fact that we were able to have a discussion at all. The slidedeck included an architecture diagram showing all of the main components, with arrows showing the direction of data flows. It was clear, colour-coded to show the provenance of the different components, which were sourced from external projects, which from internal, and which were new to this demo. The people on the call – all technical – were able to see at a glance what was going on, and the team lead, who was providing the description, had a clear explanation for the various flows. Her team members chipped in to answer specific questions or to provide more detail on particular points. This is how technical discussions should work, and there was one thing in particular which pleased me (beyond the fact that the project had thought about security at all!): that there was an architectural diagram to discuss.

There are not enough security experts in the world to go around, which means that not every project will have the opportunity to get every stage of their design pored over by a member of the security community. But when it’s time to share, a diagram is invaluable. I hate to think about the number of times I’ve been asked to look at a project in order to give my thoughts about security aspects, only to find that all that’s available is a mix of code and component documentation, with no explanation of how it all fits together and, worse, no architecture diagram.

When you’re building a project, you and your team are often so into the nuts and bolts that you know how it all fits together, and can hold it in your head, or describe the key points to a colleague. The problem comes when someone needs to ask questions of a different type, or review the architecture and design from a different slant. A picture – an architectural diagram – is a great way to educate external parties (or new members of the project) in what’s going on at a technical level. It also has a number of extra benefits:

  • it forces you to think about whether everything can be described in this way;
  • it forces you to consider levels of abstraction, and what should be shown at what levels;
  • it can reveal assumptions about dependencies that weren’t previously clear;
  • it is helpful to show data flows between the various components;
  • it allows for simpler conversations with people whose first language is not that of your main documentation.

To be clear, this isn’t just a security problem – the same can go for other non-functional requirements such as high availability, data consistency, performance or resilience – but I’m a security guy, and this is how I experience the issue. I’m also aware that I have a very visual mind, and this is how I like to get my head around something new, but even for those who aren’t visually inclined, a diagram at least offers the opportunity to orient yourself and work out where you need to dive deeper into code or execution. I also believe that it’s next to impossible for anybody to consider all the security implications (or any of the higher-order emergent characteristics and qualities) of a system of any significant complexity without architectural diagrams. And that includes the people who designed the system, because no system exists on its own (or there’s no point to it), so you can’t hold all of those pieces in your head for any length of time.

I’ve written before about the book Building Evolutionary Architectures, which does a great job in helping projects think about managing requirements which can morph or change their priority, and which, unsurprisingly, makes much use of architectural diagrams. Enarx, a project with which I’m closely involved, has always had lots of diagrams, and I’m aware that there’s an overhead involved here, both in updating diagrams as designs change and in considering which abstractions to provide for different consumers of our documentation, but I truly believe that it’s worth it. Whenever we introduce new people to the project or give a demo, we ensure that we include at least one diagram – often more – and when we get questions at the end of a presentation, they are almost always preceded with a phrase such as, “could you please go back to the diagram on slide x?”.

I nearly published this article without adding another point: this is part of being “open”. I’m a strong open source advocate, but source code isn’t enough to make a successful project, or even, I would add, to be a truly open source project: your documentation should not just be available to everybody, but accessible to everyone. If you want to get people involved, then providing a way in is vital. But beyond that, I think we have a responsibility (and opportunity!) towards diversity within open source. Providing diagrams helps address four types of diversity (at least!):

  • people whose first language is not the same as that of your main documentation (noted above);
  • people who have problems reading lots of text (e.g. those with dyslexia);
  • people who think more visually than textually (like me!);
  • people who want to understand your project from different points of view (e.g. security, management, legal).

If you’ve ever visited a project on GitHub (for instance), with the intention of understanding how it fits into a larger system, you’ll recognise the sigh of relief you experience when you find a diagram or two on (or easily reached from) the initial landing page.

And so I urge you to create diagrams, both for your benefit, and also for anyone who’s going to be looking at your project in the future. They will appreciate it (and so should you). Your diagrams don’t need to be perfect. But they do need to be there.

Immutability: my favourite superpower

As a security guy, I approve of defence in depth.

I’m a recent but dedicated convert to Silverblue, which I run on my main home laptop and which I’ll be putting onto my work laptop when I’m due a hardware upgrade in a few months’ time.  I wrote an article about Silverblue over at Enable Sysadmin, and over the weekend, I moved the laptop that one of my kids has over to it as well.  You can learn more about Silverblue over at the main Silverblue site, but in terms of usability, look and feel, it’s basically a version of Fedora.  There’s one key difference, however, which is that the operating system is mounted read-only, meaning that it’s immutable.

What does “immutable” mean?  It means that it can’t be changed.  To be more accurate, in a software context, it generally means that something can’t be changed during run-time.

Important digression – constant immutability

I realised as I wrote that final sentence that it might be a little misleading.  Many  programming languages have the concept of “constants”.  A constant is a variable (or set, or data structure) which is constant – that is, not variable.  You can assign a value to a constant, and generally expect it not to change.  But – and this depends on the language you are using – it may be that the constant is not immutable.  This seems to go against common sense[1], but that’s just the way that some languages are designed.  The bottom line is this: if you have a variable that you intend to be immutable, check the syntax of the programming language you’re using and take any specific steps needed to maintain that immutability if required.
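To make the digression concrete, here is a small Rust illustration. Rust is stricter than many languages, but even here a binding that looks constant can hold contents which change (in other languages the equivalent surprise might be, say, a final reference to a mutable collection):

    use std::cell::Cell;

    // A true Rust `const` is fixed at compile time and cannot change.
    const MAX_RETRIES: u32 = 3;

    fn main() {
        // This binding is declared without `mut`, so it looks immutable...
        let counter = Cell::new(0);

        // ...but the Cell provides interior mutability: the value it holds
        // can still be changed at run-time.
        counter.set(counter.get() + 1);

        println!("MAX_RETRIES = {}, counter = {}", MAX_RETRIES, counter.get());
    }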

Operating System immutability

In Silverblue’s case, it’s the operating system that’s immutable.  You install applications in containers (of which more later), using Flatpak, rather than onto the root filesystem.  This means not only that the installation of applications is isolated from the core filesystem, but also that the ability for malicious applications to compromise your system is much reduced.  It’s not impossible[2], but the risk is significantly lower.

How do you update your system, then?  Well, what you do is create a new boot image which includes any updated packages that are needed, and when you’re ready, you boot into that.  Silverblue provides simple tools to do this: it’s arguably less hassle than the standard way of upgrading your system.  This approach also makes it very easy to maintain different versions of an operating system, or installations with different sets of packages.  If you need to test an application in a particular environment, you boot into the image that reflects that environment, and do the testing.  Another environment?  Another image.

We’re more interested in the security properties that this offers us, however.  Not only is it very difficult to compromise the core operating system as a standard user[3], but you are always operating in a known environment, and knowability is very much a desirable property for security, as you can test, monitor and perform forensic analysis from a known configuration.  From a security point of view (let alone what other benefits it delivers), immutability is definitely an asset in an operating system.

Container immutability

This isn’t the place to describe containers (also known as “Linux containers” or, less frequently or accurately these days, “Docker containers”) in detail, but they are basically collections of software that you create as images and then run as workloads on a host server (sometimes known as a “pod”).  One of the great things about containers is that they’re generally very fast to spin up (provision and execute) from an image, and another is that the format of that image – the packaging format – is well-defined, so it’s easy to create the images themselves.

From our point of view, however, what’s great about containers is that you can choose to use them immutably.  In fact, that’s the way they’re generally used: using mutable containers is generally considered an anti-pattern.  The standard (and “correct”) way to use containers is to bundle each application component and required dependencies into a well-defined (and hopefully small) container, and deploy that as required.  The way that containers are designed doesn’t mean that you can’t change any of the software within the running container, but the way that they run discourages you from doing that, which is good, as you definitely shouldn’t.  Remember: immutable software gives better knowability, and improves your resistance to run-time compromise.  Instead, given how lightweight containers are, you should design your application in such a way that if you need to, you can just kill the container instance and replace it with an instance from an updated image.

This brings us to two of the reasons that you should never run containers with root privilege:

  • there’s a temptation for legitimate users to use that privilege to update software in a running container, reducing knowability, and possibly introducing unexpected behaviour;
  • there are many more opportunities for compromise if a malicious actor – human or automated – can change the underlying software in the container.

Double immutability with Silverblue

I mentioned above that Silverblue runs applications in containers.  This means that you have two levels of security provided by default when you run applications on a Silverblue system:

  1. the operating system immutability;
  2. the container immutability.

As a security guy, I approve of defence in depth, and this is a classic example of that property.  I also like the fact that I can control what I’m running – and what versions – with a great deal more ease than if I were on a standard operating system.


1 – though, to be fair, the phrases “programming language” and “common sense” are rarely used positively in the same sentence in my experience.

2 – we generally try to avoid the word “impossible” when describing attacks or vulnerabilities in security.

3 – as with many security issues, once you have sudo or root access, the situation is significantly degraded.

Building Evolutionary Architectures – for security and for open source

Consider the fitness functions, state them upfront, have regular review.

Ford, N., Parsons, R. & Kua, P. (2017) Building Evolutionary Architectures: Support Constant Change. Sebastopol, CA: O’Reilly Media.

https://www.oreilly.com/library/view/building-evolutionary-architectures/9781491986356/

This is my first book review on this blog, I think, and although I don’t plan to make a habit of it, I really like this book, and the approach it describes, so I wanted to write about it.  Initially, this article was simply a review of the book, but as I got into it, I realised that I wanted to talk about how the approach it describes is applicable to a couple of different groups (security folks and open source projects), and so I’ve gone with it.

How, then, did I come across the book?  I was attending a conference a few months ago (DeveloperWeek San Diego), and decided to go to one of the sessions because it looked interesting.  The speaker was Dr Rebecca Parsons, and I liked what she was talking about so much that I ordered this book, whose subject was the topic of her talk, to arrive at home by the time I would return a couple of days later.

Building Evolutionary Architectures is not a book about security, but it deals with security as one application of its approach, and very convincingly.  The central issue that the authors – all employees of Thoughtworks – identify is, simplified, that although we’re good at creating features for applications, we’re less good at creating, and then maintaining, broader properties of systems. This problem is compounded, they suggest, by the fast and ever-changing nature of modern development practices, where “enterprise architects can no longer rely on static planning”.

The alternative that they propose is to consider “fitness functions”, “objectives you want your architecture to exhibit or move towards”.  Crucially, these are properties of the architecture – or system – rather than features or specific functionality.  Tests should be created to monitor the specific functions, but they won’t be your standard unit tests, nor will they necessarily be “point in time” tests.  Instead, they will measure a variety of issues, possibly over a period of time, to let you know whether your system is meeting the particular fitness functions you are measuring.  There’s a lot of discussion of how to measure these fitness functions, but I would have liked even more: from my point of view, it was one of the most valuable topics covered.
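To give a flavour of what this can look like in practice, here is my own sketch (not an example from the book) of a security-flavoured fitness function, expressed as an ordinary automated test so that it is evaluated on every build:

    // A sketch of a security fitness function. The Config type and load_config()
    // are hypothetical stand-ins for however your system expresses its TLS settings.
    #[cfg(test)]
    mod fitness_functions {
        struct Config {
            minimum_tls_version: (u8, u8), // (major, minor): (1, 2) means TLS 1.2
        }

        fn load_config() -> Config {
            // In a real system this would read the deployed configuration.
            Config { minimum_tls_version: (1, 2) }
        }

        #[test]
        fn tls_minimum_version_is_1_2_or_later() {
            let config = load_config();
            assert!(
                config.minimum_tls_version >= (1, 2),
                "fitness function violated: minimum TLS version is TLS {}.{}",
                config.minimum_tls_version.0,
                config.minimum_tls_version.1
            );
        }
    }

The specific check matters less than the shape: the objective is stated once, in executable form, and you find out quickly when the system drifts away from it.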

Frankly, the above might be enough to recommend the book, but there’s more.  They advocate strongly for creating incremental change to meet your requirements (gradual, rather than major changes) and “evolvable architectures”, encouraging you to realise that:

  1. you may not meet all your fitness functions at the beginning;
  2. applications which may have met the fitness functions at one point may cease to meet them later on, for various reasons;
  3. your architecture is likely to change over time;
  4. your requirements, and therefore the priority that you give to each fitness function, will change over time;
  5. even if your fitness functions remain the same, the ways in which you need to monitor them may change.

All of these are, in my view, extremely useful insights for anybody designing and building a system: combining them with architectural thinking is even more valuable.

As is standard for modern O’Reilly books, there are examples throughout, including a worked fake consultancy journey of a particular company with specific needs, leading you through some of the practices in the book.  At times, this felt a little contrived, but the mechanism is generally helpful.  There were times when the book seemed to stray from its core approach – which is architectural, as per the title – into explanations through pseudo code, but these support one of the useful aspects of the book, which is giving examples of what architectures are more or less suited to the principles expounded in the more theoretical parts.  Some readers may feel more at home with the theoretical, others with the more example-based approach (I lean towards the former), but all in all, it seems like an appropriate balance.  Relating these to the impact of “architectural coupling” was particularly helpful, in my view.

There is a useful grounding in some of the advice in Conway’s Law (“Organizations [sic] which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”) which led me to wonder how we could model open source projects – and their architectures – based on this perspective.  There are also (as is also standard these days) patterns and anti-patterns: I would generally consider these a useful part of any book on design and architecture.

Why is this a book for security folks?

The most important thing about this book, from my point of view as a security systems architect, is that it isn’t about security.  Security is mentioned, but is not considered core enough to the book to merit a mention in the appendix.  The point, though, is that the security of a system – an embodiment of an architecture – is a perfect example of a fitness function.  Taking this as a starting point for a project will help you do two things:

  • avoid focussing on features and functionality, and look at the bigger picture;
  • consider what you really need from security in the system, and how that translates into issues such as the security posture to be adopted, and the measurements you will take to validate it through the lifecycle.

Possibly even more important than those two points is that it will force you to consider the priority of security in relation to other fitness functions (resilience, maybe, or ease of use?) and how the relative priorities will – and should – change over time.  A realisation that we don’t live in a bubble, and that our priorities are not always the same as those of other stakeholders in a project, is always useful.

Why is this a book for open source folks?

Very often – and for quite understandable and forgiveable reasons – the architectures of open source projects grow organically at first, needing major overhauls and refactoring at various stages of their lifecycles.  This is not to say that this doesn’t happen in proprietary software projects as well, of course, but the sometimes frequent changes in open source projects’ emphasis and requirements, the ebb and flow of contributors and contributions and the sometimes, um, reduced levels of documentation aimed at end users can mean that features are significantly prioritised over what we could think of as the core vision of the project.  One way to remedy this would be to consider the appropriate fitness functions of the project, to state them upfront, and to have a regular cadence of review by the community, to ensure that they are:

  • still relevant;
  • correctly prioritised at this stage in the project;
  • actually being met.

If any of the above come into question, it’s a good time to consider a wider review by the community, and maybe a refactoring or partial redesign of the project.

Open source projects have – quite rightly – various different models of use and intended users.  One of the happenstances that can negatively affect a project is when it is identified as a possible fit for a use case for which it was not originally intended.  Academic software which is designed for accuracy over performance might not be a good fit for corporate research, for instance, in the same way that a project aimed at home users which prioritises minimal computing resources might not be appropriate for a high-availability enterprise roll-out.  One of the ways of making this clear is by being very clear up-front about the fitness functions that you expect your project to meet – and, vice versa, about the fitness functions you are looking to fulfil when you are looking to select a project.  It is easy to focus on features and functionality, and to overlook the more non-functional aspects of a system, and fitness functions allow us to make some informed choices about how to balance these decisions.

Oh, how I love my TEE (or do I?)

Trusted Execution Environments use chip-level instructions to allow you to create enclaves of higher security

I realised just recently that I’ve not written yet about Trusted Execution Environments (TEEs) on this blog.  This is a surprise, honestly, because TEEs are fascinating, and I spend quite a lot of my professional time thinking – and sometimes worrying – about them.  So what, you may ask, is a TEE?

Let’s look at one of the key use cases first, and then get to what a Trusted Execution Environment is.  A good place to start is the “Cloud”, which, as we all know, is just somebody else’s computer.  What this means is that if you’re running an application (let’s call it a “workload”) in the Cloud – AWS, Azure, whatever – then what you’re doing is trusting somebody else to take the constituent parts of that workload – its code and its data – and run them on their computer.  “Yay”, you may be thinking, “that means that I don’t have to run it in my computer: it’s all good.”  I’m going to take issue with the “all good” bit of that statement.  The problem is that the company – or people within that company – who run your workload on their computer (let’s call it a “host”) can, if they so wish, look inside it, change it, and stop it running.  In other words, they can break all three classic “CIA” properties of security: confidentiality (by looking inside it); integrity (by changing it); and availability (by stopping it running).  This is because the ways that workloads run on hosts – whether in hardware-mediated virtual machines, within containers or on bare-metal – all allow somebody with sufficient privilege on that machine to do all of the bad things I’ve just mentioned.

And these are bad things.  We don’t tend to care about them too much as individuals – because the amount of value a cloud provider would get from bothering to look at our information is low – but as businesses, we really should be worried.

I’m afraid that the problem doesn’t go away if you run your systems internally.  Remember that anybody with sufficient access to hosts can look inside and tamper with your workloads?  Well, are you happy that your sysadmins should all have access to your financial results?  Merger and acquisition details?  Payroll?  Because if you have this kind of data running on your machines on your own premises, then they do have access to all of those.

Now, there are a number of controls that you can put in place to help with this – not least background checks and Acceptable Use Policies – but TEEs aim to solve this problem with technology.  Actually, they only really aim to solve the confidentiality and integrity pieces, so we’ll just have to assume for now that you’re going to be in a position to notice if your sales order process fails to run due to malicious activity (for instance).  Trusted Execution Environments use chip-level instructions to allow you to create enclaves of higher security where processes can execute (and data can be processed) in ways that mean that even privileged users of the host cannot attack their confidentiality or integrity.  To get a little bit technical, these enclaves are memory pages with particular controls on them such that they are always encrypted except when they are actually being processed by the chip.

The two best-known TEE implementations so far are Intel’s SGX and AMD’s SEV (though other silicon vendors are beginning to talk about their alternatives).  Both Intel and AMD are aiming to put these into server hardware and create an ecosystem around their version to make it easy for people to run workloads (or components of workloads) within them.  And the security community is doing what it normally does (and, to be clear, absolutely should be doing), and looking for vulnerabilities in the implementation.  So far, most of the vulnerabilities that have been identified are within Intel’s SGX – though I’m not in a position to say whether that’s because the design and implementation is weaker, or just because the researchers have concentrated on the market leader in terms of server hardware.  It looks like we need to go through a cycle or two of the technologies before the industry is convinced that we have a working design and implementation that provides the levels of security that are worth deploying.  There’s also work to be done to provide sufficiently high quality open source software and drivers to support TEEs for wide deployment.

Despite the hopes of the silicon vendors, it may be some time before TEEs are in common usage, but people are beginning to sit up and take notice, partly because there’s so much interest in moving workloads to the Cloud, but still serious concerns about the security of your sensitive processes and data when they’re there.  This has got to be a good thing, and I think it’s really worth considering how you might start designing and deploying workloads in new ways once TEEs actually do become commonly available.

The “invisible” trade-off? Security.

“For twenty years, people have been leaving security till last.”

Colleague (in a meeting): “For twenty years, people have been leaving security till last.”

Me (in response): “You could have left out those last two words.”

This article will be a short one, and it’s a plea.  It’s also not aimed at my regular readership, because if you’re part of my regular readership, then you don’t need telling.  Many of the articles on this blog, however, are written with the express intention of meeting two criteria:

  1. they should be technically credible[1].
  2. you should be able to show them to your parents or to your manager[2].

I suspect that it’s your manager, this time round, who I’ll be targeting, but I don’t want to make assumptions about your parents’ roles or influence, so let’s leave it open.

The issue I want to address this week is the impact of not placing security firmly at the beginning, middle and end of any system or application design process.  As we all know, security isn’t something that you can bolt onto the end of a project and hope that you’ll be OK.  Equally, if you think about it only at the beginning, you’ll find that by the end, your requirements, use cases, infrastructure or personae will have changed[3], and what you planned at the beginning is no longer fit for purpose.  After all, if you know that your functional requirements will change (and everybody knows this), then why would your non-functional requirements not be subject to the same drift?

The problem is that security, being a non-functional requirement[4], doesn’t get the up-front visibility that it needs.  And, because it’s difficult to do well, and it’s often the responsibility of a non-core team member “flown in” as a consultant or expert for a small percentage of design meetings, security is the area that it’s easy to decide to let slide a bit.  Or a lot.  Or completely.

If there’s a trade-off around features, functionality or resource location, it’s likely to be security, and often, nobody even raises the point that there has been a trade-off: it’s completely invisible (this is one of the reasons Why I love technical debt).  This is also the reason that whenever I look at a system, I try to think “what were the decisions made about security?”, because, too often, no decisions were made about security at all.

So, if you’re a manager[6], and you’re involved with designing a system or application, don’t let security be the invisible trade-off.  I’m not saying that it needs to be the be-all and end-all of the project, but at least ensure that you think about it.  Thank you.


1 – they should be accurate, to be honest, but I also try not to dive deeper into technical topics than is absolutely required for context.

2 – to be clear, this isn’t about making them work- and parent-safe, but about presenting the topics in a manner that is approachable by non-experts.

3 – or, equally likely, all of them.

4 – I don’t mean that security doesn’t function correctly[5], but rather that it’s not one of the key functions of the system or application that’s being designed.

5 – though, now you mention it…

6 – or parent – see above.

Single point of failure

Any failure which completely brings down a system for over 12 hours counts as catastrophic.

Yesterday[1], Gatwick Airport suffered a catastrophic failure. It wasn’t Air Traffic Control, it wasn’t security scanners, it wasn’t even check-in desk software, but the flight information boards. Catastrophic? Well, maybe the functioning of the airport wasn’t catastrophically affected, but the system itself was. For my money, any failure which completely brings down a system for over 12 hours (from 0430 to 1700 BST, reportedly), counts as catastrophic.

The failure has been blamed on damage to a fibre optic cable. It turned out that if this particular component of the system was brought down, then the system failed to operate as expected: it was a single point of failure. Now, in this case, it could be argued that the failure did not have a security impact: this was a resilience problem. Setting aside the fact that resilience and security are often bedfellows[2], many single points of failure absolutely are security issues, as they become obvious points of vulnerability for malicious actors to attack.

A key skill that needs to be grown within IT in general, but security in particular, is systems thinking, as I’ve discussed elsewhere, including in my first post on this blog: Systems security – why it matters. We need more systems engineers, and more systems architects. The role of systems architects, specifically, is to look beyond the single components that comprise a system, and to consider instead the behaviour of the system as a whole. This may mean looking past our first area of focus to consider, for instance, hardware or externally managed systems, and what the impact of their failure, damage or compromise would be on the system’s overall operation.

Single points of failure are particularly awkward.  They crop up in all sorts of places, and they are a very good example of why diversity is important within IT security, and why you shouldn’t trust a single person – including yourself – to be the only person who looks at the security of a system.  My particular biases are towards crypto and software, for instance, so I’m more likely to miss a hardware or network point of failure than somebody with a different background to me.  Not to say that we shouldn’t try to train ourselves to think outside of whatever little box we come from – that’s part of the challenge and excitement of being a systems architect – but an acknowledgement of our own lack of expertise is in itself a realisation of our expertise: if you realise that you’re not an expert, you’re part way to becoming one.

I wanted to finish with an example of a single point of failure that is relevant to security, and exposes a process vulnerability.  The Register has a good write-up of the Foreshadow attack and its impact on SGX, Intel’s Trusted Execution Environment (TEE) capability.  What’s interesting, if the write-up is correct, is that what seems like a small break to a very specific part of the entire security chain means that you suddenly can’t trust anything.  The trust chain is broken, and you have to distrust everything you think you know.  This is a classic security problem – trust is a very tricky set of concepts – and one of the nasty things about it is that it may be entirely invisible to the user that an attack has taken place at all, particularly as the user, at this point, may have no visibility of the chain of trust that has been established – or not – up to the point that they are involved.  There’s a lot more to write about on this subject, but that’s for another day.  For now, if you’re planning to visit an airport, ensure that you have an app on your phone which will tell you your flight departure time and the correct gate.


1 – at time of writing, obviously.

2 – for non-native readers[3], what I mean is that they are often closely related and should be considered together.

3 – and/or those unacquainted with my somewhat baroque language and phrasing habits[4].

4 – I prefer to double-dot when singing or playing Purcell, for instance[5].

5 – this is a very, very niche comment, for which slight apologies.

What’s an attack surface?

“Reduce your attack surface,” they say. But what is it?

“Reduce your attack surface,” they[1] say.  But what is it?  The instruction to reduce your attack surface is one of the principles of IT security, so it must be a Good Thing[tm].  The problem is that it’s not always clear what an attack surface actually is.

I’m going to go for the broadest possible description I can think of, or nearly, because I’m pretty paranoid, and because I’m not convinced that the Wikipedia definition[2] is sufficient[3].  Although I’ll throw in a few examples of how to reduce attack surfaces, the purpose of this post is really to explain what one is, rather than to help protect you – but a good understanding really is required before you start with anything else, so hopefully this will be useful.

So, here’s my start at a definition:

  • The attack surface of a system is the sum of areas where attacks could be launched against it.

That feels a little bit circular – let’s define some terms.  First of all, what’s an “area” in this definition?  Well, I’d say that any particular component of a system may have many points of possible vulnerability – and therefore attack.  The sum of those points is an area – and the sum of the areas of the different components of a system gives us our system’s attack surface.

To understand better, we’re going to have to talk about systems – one of my favourite topics[4] – because I think it’s important to clarify a key difference between the attack surface of a component considered alone, and the area that a component adds when part of a system.  They will not generally be the same.

Here’s an example: you’re deploying an Operating System.  Let’s look at two options for deployment, and compare the attack surfaces.  In both cases, I’m going to take a fairly restricted look at points of vulnerability, excluding, for instance, human factors, as I don’t want to get bogged down in the details.

Deployment one – bare metal

You install your Operating System onto a physical machine, and plug it into the network.  What are some of the attack points?

  • your network connection
  • the physical hardware
  • services which are listening on the network connection
  • connections via USB – keyboard and mouse, for example.

There are more, but this should give us enough to do some comparisons.  I’d generally think of the attack surface as being associated with the physical bounds of the hardware, with the addition of the network port and USB connections.

How can we reduce the attack surface?  Well, we could unplug the network connection – though that might significantly reduce the efficacy of the system! – or we might take steps to reduce the number of services listening on the connection, to reduce the privilege level at which they run, or increase the authentication requirements for connecting to them.  We could reduce our surface area by using a utility such as “usbguard” to restrict USB connections, and, if we’re worried about physical access to the machine, we could put it in a locked cabinet somewhere.  These are all useful and appropriate ways to reduce our system’s attack surface.
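It also helps to enumerate that attack surface rather than guess at it.  Here is a rough, Linux-only Rust sketch (IPv4 TCP only; a real audit would also cover IPv6, UDP, Unix sockets and which processes own them) which lists the ports currently listening by reading /proc/net/tcp:

    use std::fs;

    // Rough, Linux-only sketch: list IPv4 TCP ports currently in the LISTEN
    // state by parsing /proc/net/tcp.
    fn main() -> std::io::Result<()> {
        let table = fs::read_to_string("/proc/net/tcp")?;
        for line in table.lines().skip(1) {
            let fields: Vec<&str> = line.split_whitespace().collect();
            // Field 3 is the socket state; "0A" is TCP_LISTEN.
            if fields.len() > 3 && fields[3] == "0A" {
                // Field 1 is the local address in the form HEXADDR:HEXPORT.
                if let Some(port_hex) = fields[1].split(':').nth(1) {
                    if let Ok(port) = u16::from_str_radix(port_hex, 16) {
                        println!("listening on TCP port {}", port);
                    }
                }
            }
        }
        Ok(())
    }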

Deployment two – a Virtual Machine

In this deployment scenario, we’re going to install the Operating System onto a Virtual Machine (VM), running on a physical host.  What does my attack surface look like now?  Well, that rather depends on how you define your system.  You could, of course, look at the wider system – the VM and the physical host – but for the purposes of this discussion, I’m going to consider that the operation of the Operating System is what we’re interested in, rather than the broader system[6].  So, what does our attack surface look like this time?  Here’s a quick list.

  • your network connection
  • the hypervisor
  • services which are listening on the network connection
  • connections via USB – keyboard and mouse, for example.

You’ll notice that “the physical hardware” is missing from this list, and that’s because it’s been replaced with “the hypervisor”.  This is a little simplistic, for a few reasons, including that the hypervisor is arguably implemented via a combination of software and hardware controls, but it’s certainly different from the entire physical hardware we were talking about before, and in fact, there’s not much you can do from the point of view of the Virtual Machine to secure it, other than recognise its restrictions, so we might want to remove it from our list at this level.

The other entries are also somewhat different from our first scenario, although you might not realise it at first glance.  First, it’s quite likely (though not certain) that your network connection will in fact be a virtual network connection provided by the hosting system, which means that some of the burden of defending it goes to the hosting system.  The same goes for the connections via USB – the hypervisor generally provides “virtual hardware” (via something like qemu, for example), which can be attached to – or removed from – virtual machines.

So, you still have the services which are listening on the network connection, but it’s definitely a different attack surface from the first deployment scenario.

Now, if you take the wider view, then there’s definitely an attack surface at the physical machine level as well, and that needs to be considered – but it’s quite likely that this will be under the control of somebody completely different (such as a Cloud Service Provider – CSP).

Another quick example

When I deploy a webserver (using, for instance, Apache), I’ll need to consider a variety of attack vectors, from authentication to denial of service to storage attacks: these are part of our attack surface.  If I deploy it with a database (e.g. PostgreSQL or MySQL), the attack surface looks different, assuming that I care about the data in the database.  Whereas I might previously have been concerned to ensure that an HTTP “PUT” command didn’t overwrite or scramble a file on my filesystem, a malformed command to my database server could delete or corrupt multiple tables.  On the other hand, I might now be able to lock down some of the functions of my webserver, so that I no longer need to worry about filesystem attacks.  The attack surface of my webserver is different when it’s combined in a system with other components[7].

Why do I want to reduce my attack surface?

Well, this is quite an easy one.  By looking back at my earlier definition, you’ll see that the smaller a system’s attack surface, the fewer points of attack there are available to malicious actors.  That’s got to be a piece of good news.

You will, of course, never be able to reduce your attack surface to zero (see There are no absolutes in security), but the more you reduce (and document, always document!), the better position you’ll be in.  It’s always about raising the bar to make it more difficult for malicious actors to affect you.


1 – the mythical IT Security Community, that’s who.

2 – to give one example.

3 – it only talks about data, and only about software: that’s not broad enough for me.

4 – as long-standing[5] readers of this blog will know.

5 – and long-suffering.

6 – yes, I know we can’t ignore that, but we’ll come back to it, honest.

7 – there are considerations around the attack surface of the database as well, of course.