Formal verification … or Ken Thompson?

“You can’t trust code that you did not totally create yourself” – Ken Thompson.

This article is an edited excerpt from my forthcoming book on Trust in Computing and the Cloud for Wiley.

How can we be sure that the code we’re running does what we think it does? One of the answers – or partial answers – to that question is “formal verification.” Formal verification is an important field of study, applying mathematics to computing, and it aims to start with proofs – at best, with an equivalent level of assurance to that of formal mathematical proofs – of the correctness of algorithms to be implemented in code, to ensure that they perform the operations expected and set forth in a set of requirements. Though implementation of code can often fall down in the actual instructions created by a developer or set of developers – the programming – mistakes are equally possible at the level of the design of the code to be implemented in the first place, and so this must be a minimum step before looking at any actual implementations. What is more, these types of mistakes can be all the harder to spot: even if the developer has introduced no bugs in the work they have done, the implementation will be flawed by virtue of it being incorrectly defined in the first place. It is with an acknowledgement of this type of error, and an intention of reducing or eliminating it, that formal verification starts, but some areas go much further, with methods to examine concrete implementations and make statements about their correctness with regard to the algorithms which they are implementing.

Where we can make these work, they are extremely valuable, and the sorts of places where they are applied are exactly where we would expect: systems where security is paramount, and proving the correctness of cryptographic designs and implementations. Another major focus of formal verification is software for safety systems, where the “correct” operation of the system – by which we mean “as designed and expected” – is vital. Examples might include oil refineries, fire suppression systems, nuclear power station management, aircraft flight systems and electrical grid management – unsurprisingly, given the composition of such systems, formal verification of hardware is also an important field of study. The practical application of formal verification methods to software is, however, more limited than we might like. As Alessandro Abate notes in a paper on formal verification of software:

“Two known shortcomings of standard techniques in formal verification are the limited capability to provide system-level assertions, and the scalability to large, complex models.”

To these shortcomings we can add another, extremely significant one: how sure can you be that what you are running is what you think you are running? Surely that is exactly why we write software ourselves, look at the source, and then compile it under our control? That, certainly, is the basic starting point for software that we care about.

The problem is arguably one of layers and dependencies, and was outlined by Ken Thompson, one of the founders of modern computing, in the lecture he gave at his acceptance of the Turing Award in 1983. It is short, stands as one of the establishing artefacts of computing security, and has weathered the tests of time: I have no hesitation in recommending that all readers of this blog read it: Reflections on Trusting Trust. In it, he describes how careful placing of malicious code in the C standard compiler could lead to vulnerabilities (his specific example is in account login code) which are not only undetectable by those without access to the source code, but also not removable. The final section of the paper is entitled “Moral”, and Thompson starts with these words:

“The moral is obvious. You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code.”

However, as he goes on to point out, there is nothing special about the compiler:

“I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect.”

It is for the reasons noted by Thompson that open source software – and hardware – is so vital to the field of computer security, and to our task of defining and understanding what “trust” means in the context of computing. Just relying on the “open source-ness” of your code is not enough: there is more work to be done in understanding your stack, the community and your requirements. But without the ability to look at the source code of all the layers of software and hardware on which you are running code, you can have only reduced trust that what you are running is what you think you should be running, whether you have performed formal verification on it or not.


Why Enarx is open

It’s not just our coding that we do in the open.

When Nathaniel McCallum and I embarked on the project which is now called Enarx, we made one decision right at the beginning: the code for Enarx would be open source, a stance fully supported by our employer Red Hat (see standard disclaimer). All of it, and for ever. That’s a decision that we’ve not regretted at any point, and it’s something we stand behind. As soon as we had enough code for a demo, and were ready to show it, we created a repository on GitHub and made it public. There’s a very small exception, which is that there are some details of upcoming chip features, shared with us under NDA[1], where publishing any code we write for them would be a breach of the NDA. But where this applies (which is rarely), we are absolutely clear with the various vendors that we intend to make the code open as soon as possible, and lobby them to release details as early as they can (which may be earlier than they might prefer), so that more experts can look over both their designs and our code.

Auditability and trust

This brings us to possibly the most important reasons for making Enarx open source: auditability and trust. Enarx is a security-related project, and I believe passionately not only that security should be done in the open, but that if anybody is actually going to trust their sensitive data, algorithms and workloads to a piece of software, then they want to be in a position where as many experts as possible have looked at it, scrutinised it, criticised it and improved it: whether that is the people running the software, their employees, contractors or (even better) the wider security community. The more people who check the code, the happier you should be to trust it. This is important for any piece of security software, but vital for software such as Enarx which is designed to protect your most sensitive workloads.

Bug-catching

There are bugs in Enarx. I know: I’m writing some of the code[2] and I found one yesterday (which I’d put in), just as I was about to give a demo[3]. It is very, very difficult to write perfect code, and we know that if we make our source open, then more people can help us fix issues.

Commonwealth

For Nathaniel and me, open source is an ethical issue, and we make no apologies for that. I think it’s the same for most, if not all, of the team working on Enarx. This includes a number of Red Hat employees (see standard disclaimer), so it shouldn’t come as a surprise, but we have non-Red Hat contributors from a number of backgrounds, and we feel that Enarx should be a Common Good, and contribute to the commonwealth of intellectual property out there.

More brain power

Making something open source doesn’t just make it easier to fix bugs: it can improve the quality of what you produce in general. The more brain power you have to apply to the problem, the better your chances of making something great – assuming that the brain power is applied efficiently (not always an easy task!). We had a design meeting yesterday where one of the participants said towards the end, “I’m sure I could implement some of this, but I don’t know a huge amount about this topic, and I’m worried that I’m not contributing to this discussion.” In fact, they had been contributing, by asking questions and clarifying some points, and we assured them that we wanted to include experienced, senior developers for their expertise and knowledge, to pull out assumptions and to validate the design, and not because we expected everybody to be experts in all parts of the project. Having bright people around, involved in design and coding, spreads expertise and knowledge, and helps keep the work from becoming an insulated, isolated “ivory tower” construction, understood by few, and almost impossible to validate.

Not just code

It’s not just our coding that we do in the open. We manage our architecture in the open, our design meetings, our protocol design, our design methodology[4], our documentation, our bug-tracking, our chat, our CI/CD processes: all of it is open. The one exception is our vulnerability management process, which needs to have the opportunity for confidential exposure for a limited time.

We also take diversity seriously, and the project contributors are subject to the Contributor Covenant Code of Conduct.

In short, Enarx is an open project. I’m sure we could do better, and we’ll strive for that, but our underlying principles are that open is good in general, and vital for security. If you agree, please come and visit!


1 – Non-Disclosure Agreement.

2 – to the surprise of many of the team, including myself. At least it’s not in Perl.

3 – I fixed it. Admittedly after the demo.

4 – we’ve just moved to a Sprint pattern – the details of which we designed and agreed in the open.

They won’t get security right

Save users from themselves: make it difficult to do the wrong thing.

I’m currently writing a book about Trust in computing and the cloud – I’ve mentioned it before – and I confidently expect to reach 50% of my projected word count today, as I’m on holiday, have more time to write it, and got within about 850 words of the goal yesterday. Little boast aside, one of the topics that I’ve been writing about is the need to consider the contexts in which the systems you design and implement will be used.

When we design systems, there’s a temptation – a laudable one, in many cases – to provide all of the features and functionality that anyone could want, to implement all of the requests from customers, to accept every enhancement request that comes in from the community. Why is this? Well, for a variety of reasons, including:

  • we want our project or product to be useful to as many people as possible;
  • we want our project or product to match the capabilities of another competing one;
  • we want to help other people and be seen as responsive;
  • it’s more interesting implementing new features than marking an existing set complete, and settling down to bug fixing and technical debt management.

These are all good – or at least understandable – reasons, but I want to argue that there are times that you absolutely should not give in to this temptation: that, in fact, on every occasion that you consider adding a new feature or functionality, you should step back and think very hard whether your product would be better if you rejected it.

Don’t improve your product

This seems, on the face of it, to be insane advice, but bear with me. One of the reasons is that many techies (myself included) are more interested in getting code out of the door than in weighing up alternative implementation options. Another reason is that every opportunity to add a new feature is also an opportunity to deal with technical debt or improve the documentation and architectural information about your project. But the other reasons are to do with security.

Reason 1 – attack surface

Every time that you add a feature, a new function, a parameter on an interface or an option on the command line, you increase the attack surface of your code. Whether by fuzzing, targeted probing or careful analysis of your design, the larger the attack surface of your code, the more opportunities there are for attackers to find vulnerabilities, create exploits and mount attacks on instances of your code. Strange as it may seem, adding options and features for your customers and users can often be doing them a disservice: making them more vulnerable to attacks than they would have been if you had left well enough alone.

If we do not need an all-powerful administrator account after initial installation, then it makes sense to delete it after it has done its job. If logging of all transactions might yield information of use to an attacker, then it should be disabled in production. If older versions of cryptographic functions might lead to protocol attacks, then it is better to compile them out than just to turn them off in a configuration file. All of these lead to reductions in attack surface which ultimately help safeguard users and customers.
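
To make the “compile it out” point a little more concrete, here’s a minimal sketch in Rust of how a legacy protocol version might be excluded from the binary entirely, rather than just disabled in configuration. The feature name and function are entirely hypothetical – this illustrates the principle, not code from any particular project.

```rust
// In Cargo.toml (hypothetical):
//
//   [features]
//   default = []        # the legacy code path is not built unless asked for
//   legacy-tls = []

/// Negotiate a protocol version for a new connection. With the default
/// feature set, the legacy arm below does not exist in the compiled binary
/// at all, so it cannot be reached by a downgrade attack or a stray
/// configuration flag.
pub fn negotiate_protocol(client_offer: &str) -> Result<&'static str, &'static str> {
    match client_offer {
        "tls1.3" => Ok("tls1.3"),
        #[cfg(feature = "legacy-tls")]
        "tls1.0" => Ok("tls1.0"), // only compiled in with --features legacy-tls
        _ => Err("unsupported protocol version"),
    }
}

fn main() {
    // Without the legacy-tls feature, an old client offer is simply rejected.
    println!("{:?}", negotiate_protocol("tls1.0"));
}
```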

Reason 2 – saving users from themselves

The other reason is also about helping your users and customers: saving them from themselves. There is a dictum – somewhat unfair – within computing that “users are stupid”. While this is overstating the case somewhat, it is fairer to note that Murphy’s Law holds in computing as it does everywhere else: “Anything that can go wrong, will go wrong”. Specific to our point, some user somewhere can be counted upon to use the system that you are designing, implementing or operating in ways which are at odds with your intentions. IT security experts, in particular, know that we cannot stop people doing the wrong thing, but where there are opportunities to make it difficult to do the wrong thing, then we should embrace them.
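
One common way of making it difficult to do the wrong thing is to design interfaces that default to the safe behaviour and make the unsafe behaviour deliberately noisy to select. Here’s a tiny Rust sketch of the idea – all of the names are hypothetical, and it’s the shape of the API, rather than the specifics, that matters.

```rust
/// A hypothetical connection configuration: the safe choice is the default,
/// and the unsafe choice is deliberately awkward and obvious to select.
pub struct ConnectionConfig {
    verify_tls_certificates: bool,
}

impl ConnectionConfig {
    /// The only constructor: certificate verification is on by default.
    pub fn new() -> Self {
        ConnectionConfig {
            verify_tls_certificates: true,
        }
    }

    /// Turning verification off requires calling a method whose name makes
    /// the risk hard to miss – and gives reviewers something easy to grep for.
    pub fn dangerously_disable_certificate_verification(mut self) -> Self {
        self.verify_tls_certificates = false;
        self
    }
}

fn main() {
    let config = ConnectionConfig::new(); // safe unless someone explicitly opts out
    assert!(config.verify_tls_certificates);
}
```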

Not adding features, disabling capabilities and restricting how your product is used might seem counter-intuitive, but if it leads to a safer user experience and fewer vulnerabilities associated with your product or project, then in the end, everyone benefits. And you can use the time to go and write some documentation. Or go to the beach. Enjoy!

What’s a vulnerability?

Often, it’s the terms which seem familiar that are most confusing

I was discussing a document with some colleagues recently, and wanted to point them at an article I’d written with some definitions of terms like “vulnerability”, “exploit” and “attack”, only to get somewhat annoyed when I discovered that I’d never written it. So this week’s post attempts to remedy that, though I’m going to address them in reverse order, and I’m going to add an extra one: “mitigation”.

The world of IT security is full of lots of terms, some familiar, some less so. Often, it’s the terms which seem familiar that are most confusing, because they may mean something other than what you think. The three that I think it’s important to define here are the ones I noted above (and mitigation, because it’s often used in the same contexts). Before we do that, let’s just define quickly what we’re talking about when we mean security.

CIA

We often talk about three characteristics of a system that we want to protect or maintain in order to safeguard its security: C, I and A.

  • “C” stands for confidentiality. Unauthorised entities should not be able to access information or processes.
  • “I” stands for integrity. Unauthorised entities should not be able to change information or processes.
  • “A” stands for availability. Unauthorised entities should not be able to impact the ability for authorised entities to access information or processes.

I’ve gone into more detail in my article The Other CIA: Confidentiality, Integrity and Availability, but that should keep us going for now. I should also say that there are various other definitions around relevant characteristics that we might want to protect, but CIA gives us a good start.

Mitigation

A mitigation[3] in this context is a technique to reduce the impact of an attack on whichever of C, I or A is (or are) affected.

Attack

An attack[1] is an action or set of actions which affects the C, the I or the A of a system. It’s the breaking of the protections applied to safeguard confidentiality, integrity or availability. An attack is possible through the use of an exploit.

Exploit

An exploit is a mechanism to affect the C, the I or the A of a system, allowing an attack. It is a technique, or set of techniques, employed in an attack, and is possible through the exploiting[2] of a vulnerability.

Vulnerability

A vulnerability is a flaw in a system which allows the breaking of protections applied to the C, the I or the A of a system. It allows attacks, which take place via exploits.

One thing that we should note is that a vulnerability is not necessarily a flaw in software: it may be in hardware or firmware, or be exposed as an emergent characteristic of a system, in which case it’s a design problem. Equally, it may expose a protocol design issue, so even if the software (+ hardware, etc.) is a correct implementation of the protocol, a security vulnerability exists. This is, in fact, very common, particularly where cryptography is involved, as cryptography is hard to do.


1 – a successful one, anyway.

2 – hence the name.

3 – I’ve written more about mitigation in Mitigate or remediate?.

Enarx news: WebAssembly + SGX

We can now run a WebAssembly workload in a TEE

Regular readers will know that I’m one of the co-founders for the Enarx project, and write about our progress fairly frequently. You’ll find some more information in these articles (newest first):

I won’t spend time going over what the project is about here, as you can read more in the articles above, and lots more over at our GitHub repository (https://enarx.io), but the aim of the project is to allow you to run sensitive workloads in Trusted Execution Environments (TEEs) from various silicon vendors. The runtime we’ll be providing is WebAssembly (using the WASI interface), and the TEEs that we’re targeting to start with are AMD’s SEV and Intel’s SGX – though you, the client running the workload, shouldn’t care which is being used at any particular point, as we abstract all of that away.
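
To give a flavour of what that means in practice (this is my own illustration, not Enarx documentation), the workload itself is just ordinary code compiled to WebAssembly for the WASI target – there’s nothing TEE-specific in it at all:

```rust
// A trivial workload: plain Rust, with no Enarx- or TEE-specific code.
// Compiled for the WASI target, it becomes a .wasm binary that a
// WASI-capable runtime can execute:
//
//   rustup target add wasm32-wasi
//   cargo build --release --target wasm32-wasi
fn main() {
    println!("Hello from a workload that doesn't know it's in a TEE!");
}
```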

It’s a complicated project, with lots of moving parts and a fairly complex full-stack architecture. Things have been moving really fast recently, partly due to a number of new contributors to the project (which recently reached 150 stars on our main GitHub repository), and one of the important developments is the production of a new component architecture, included below.

I’ve written before about how important it is to have architectural diagrams for projects (and particularly open source projects), and so I’m really pleased that we have this updated – and much more detailed – version of the diagram. I won’t go into it in detail here, but the only trusted components (from the point of view of Enarx and a client using Enarx) are those in green: the Enarx client agent, the Attestation measurement database and the TEE instance containing the Application, Wasm, Shim, etc., which we call the Keep.

It’s only just over a couple of months since we announced our most recent milestone: running binaries within TEEs, including both SEV and SGX. This was a huge deal for us, but then, at the end of last week, one of the team, Daiki Ueno, managed to get a WebAssembly binary loaded using the Wasm/WASI layers and to run it. In other words, we can now run a WebAssembly workload in a TEE. Over the past 10 minutes, I’ve just managed to replicate that on my own machine (for my own interest, not that I doubted the results!). Circled, below, are the components which are involved in this feat.

I should make it clear that none of these components is complete or stable – they are all still very much in development, and, in fact, even the running test yielded around 40 lines of error messages! – but the fact that we’ve got a significant part of the stack working together is an enormous deal. The Keep loader creates an SGX TEE instance, loads the Shim and the Wasm/WASI components into the Keep and starts them running. The App Loader (one of the Wasm components) loads the Application, and it runs.

It’s not just an enormous deal due to the amount of work that it’s taken and the great teamwork that’s been evident throughout – though those are true as well – but also because it means we’re getting much closer to having something that people can try out for themselves. The steps to just replicating the test were lengthy and complex, and my next step is to see if I can make some changes and create my own test, which may be quite a marathon, but I’d like to stress that my technical expertise lies closer to the application layer than the low-level pieces, which means that if I can make this work now (with a fair amount of guidance), then we shouldn’t be too far off being in a position where pretty much anyone with a decent knowledge of the various pieces can do the same.

We’re still quite a way off being able to provide a simple file with 5-10 steps to creating and running your own workload in a Keep, but that milestone suddenly feels like it’s in sight. In the meantime, there’s loads of design work, documentation, infrastructure automation, testing and good old development to be done: fancy joining us?

In praise of triage

It’s all too easy to prioritise based on the “golfing test”.

Not all bugs are created equal.

Some bugs need fixing now, some bugs can wait. Some bugs are in your implementation, some are in the underlying design. Some bugs will annoy a few customers, some will destroy your business.

Bugs come in all shapes and sizes, and one of the tasks of a product owner, product manager, chief architect – whoever makes the call about where to assign resources – is to decide which ones to address in which order: to prioritise them. The problem is deciding how to prioritise them. It’s all too easy to prioritise based on the “golfing test”: your CEO meets someone on the golf course who mentions that his or her company loves your product, except for one tiny issue. The CEO comes back, and makes it clear that fixing this “major bug” is now your one and only task until it’s done, and your world is turned upside down. You have to fix the bug as quickly as possible, with no thought to the impact it has on the rest of the project, or the immense pile of technical debt that’s just been accrued. You don’t want to live in this world. What, then, is the alternative?

The answer – though it’s only the beginning of the answer – is triage. Triage (from the French for “separating out”) comes from the world of battlefield medicine. When deciding which wounded soldiers to treat, rapid (hopefully objective) assessments are carried out, allowing a quick sorting of each soldier, typically into categories such as “not urgent: wait”, “urgent: treat immediately” and “not saveable: do not treat”. We can apply the same to software bugs in order to decide what to treat (fix) and with what priority. The important thing is not so much the categories – which will vary based on your context – but the assessment criteria, and how they are applied. Here is a list of just some of the possible criteria:

  • likely monetary impact per customer
  • number of customers impacted
  • reputational impact on your organisation
  • ease to fix
  • impact on system security
  • impact on system performance
  • impact on system stability
  • annoyance of the CEO at not being listened to.

We do not, of course, only need to apply one of these: a number of them can be combined with a weighting system, though the more you add, the less clear your priorities will be, and the more likely it is that someone will “put a finger on the scales” – tweak the numbers to give the outcome they want. Another important point about the categories that you decide to apply is that they should be as measurable as you can make them, to allow as objective scoring as possible. I wrote a review of the book Building Evolutionary Architectures a while ago: the methodology adopted there, where you measure and test in order to meet specific criteria, is exactly the sort of approach you should be choosing when designing your triage system.
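
By way of illustration, here’s a minimal sketch in Rust of what a weighted triage score might look like. The criteria, scales and weights are entirely made up – the point is that each criterion is scored on an agreed scale and combined with explicit, recorded weights, so that the prioritisation is repeatable and can be defended later.

```rust
/// An illustrative bug record: each criterion is scored 0.0 - 10.0 on a
/// scale the team has agreed (and written down) in advance.
struct Bug {
    name: &'static str,
    customers_impacted: f64,
    monetary_impact: f64,
    security_impact: f64,
    ease_to_fix: f64, // higher = easier
}

/// Combine the criteria with explicit weights. The weights here are
/// illustrative: agreeing and recording them is part of the triage design.
fn triage_score(bug: &Bug) -> f64 {
    0.3 * bug.customers_impacted
        + 0.3 * bug.monetary_impact
        + 0.3 * bug.security_impact
        + 0.1 * bug.ease_to_fix
}

fn main() {
    let mut bugs = vec![
        Bug { name: "login crash on upgrade", customers_impacted: 8.0, monetary_impact: 6.0, security_impact: 2.0, ease_to_fix: 7.0 },
        Bug { name: "golf-course annoyance", customers_impacted: 1.0, monetary_impact: 1.0, security_impact: 0.0, ease_to_fix: 9.0 },
    ];
    // Highest score first: this is the order in which we'd address them.
    bugs.sort_by(|a, b| triage_score(b).partial_cmp(&triage_score(a)).unwrap());
    for bug in &bugs {
        println!("{}: {:.1}", bug.name, triage_score(bug));
    }
}
```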

This is (ostensibly) a blog about security, and so you might expect me to say that “security always wins”, but that should absolutely not be the approach you take. Security might be the most important category for you (that is, carry the most weight), but you need to understand why that is the case – at this particular time – and what exactly you mean by “security”. The “security of the system” is not an objective measure: in order to mean anything, such a phrase needs to reference measurements that can be made (“resistance to physical tampering”, “resistance to brute force attacks”, “number of PhD students likely to be needed to reverse engineer our ‘secure’ protocol”[1]). More importantly, it may be that at this point in your organisation’s life, the damage done by lack of stability or decreased performance outweighs the impact of a security bug. If that’s the case, then your measurements should encapsulate that information and lead you to prioritise bugs with impact in these categories over security issues[3].

There’s one proviso that I feel I need to put in at this point, and it’s about the power of what, in Agile Methodology terms, is called the Product Owner. This is the person who represents the users of the product/project, and should have final say about the direction of development in terms of features, functionality and, most relevant in this discussion, bug-fixing. As noted above, this may be an architect, product manager or someone enjoying another title, but their role should be clear: they get to call the shots. There are times when this person goes against the evidence provided by the triage, and makes a decision to prioritise a particular bug over others despite the outcome of the measurements. This is typically very painful for the technical team[4], but, when it comes down to it, as the product owner, they get to decide. The technical team – after appropriate warnings and discussion[5] – must be ready to step aside and accept the decision. Such decisions (and related discussions) should be recorded, and the product owner must be ready to stand or fall based on the outcome, but that is their job. Triage is a guide, and there are occasions when there are measurements which cannot be easily made objectively, and which sit outside the expertise or scope of knowledge of the technical team. If this sort of decision keeps being made, and you think you know better, you may have a future in technical product management, where people with a view of both the technical and the business side of technology are much in demand. In the end, though, the product owner will need to justify their decision to management, and if they get it wrong, then they must be ready to take the blame (this is one reason why you should make sure that you’ve recorded the process taken to get to this decision – you don’t want to take the blame for a poor decision which you advised against).

So: go out and design a triage process, be ready to follow it, and be ready to defend it. Oh, and one last point: you might want to buy a set of golf clubs.

—–

1 – this last one is a joke: don’t design your own protocol, or if you do, make it open and have it peer-reviewed[2].

2 – and then throw it away and use an open source implementation of a better, more thoroughly-reviewed one.

3 – much as it pains me to say it.

4 – I’ve been on both sides of these decisions: I know.

5 – often rather heated, in my experience.

What’s a hash function?

It should be computationally implausible to work backwards from the output hash to the input.

Note – many thanks to a couple of colleagues who provided excellent suggestions for improvements to this text, which has been updated to reflect them.

Sometimes I like to write articles about basics in security, and this is one of those times. I’m currently a little over a third of the way through writing a book on trust in computing and the cloud, and ended up creating a section about cryptographic hashes. I thought that it might be a useful (fairly non-technical) article for readers of this blog, so I’ve edited it a little bit and present it here for your delectation, dear readers.

There is a tool in the security practitioner’s repertoire that it is helpful for everyone to understand: cryptographic hash functions. A cryptographic hash function, such as SHA-256 or MD5 (now superseded for cryptographic uses as it’s considered “broken”), takes as input a set of binary data (typically as bytes) and gives an output which is hopefully unique for each set of possible inputs. The length of the output – “the hash” – for any particular hash function is typically the same for any pattern of inputs (for SHA-256, it is 32 bytes, or 256 bits – the clue’s in the name). It should be computationally implausible (cryptographers hate the word “impossible”) to work backwards from the output hash to the input: this is why they are sometimes referred to as “one-way hash functions”. The phrase “hopefully unique” when describing the output is extremely important: if two inputs are discovered that yield the same output, the hash is said to have “collisions”. The reason that MD5 has become deprecated is that it is now trivially possible to find collisions with commercially-available hardware and software systems. Another important property is that even a tiny change in the message (e.g. changing a single bit) should generate a large change to the output (the “avalanche effect”).
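
If you want to see this in action, here’s a small example using Rust and the sha2 crate (my choice of language and library, not anything mandated above): it hashes two messages that differ by a single character, and the two 32-byte outputs look nothing like each other.

```rust
// Requires the sha2 crate (e.g. sha2 = "0.10" in Cargo.toml).
use sha2::{Digest, Sha256};

fn hex_string(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

fn main() {
    // Two inputs differing only by a trailing full stop...
    let hash_a = Sha256::digest(b"The quick brown fox jumps over the lazy dog");
    let hash_b = Sha256::digest(b"The quick brown fox jumps over the lazy dog.");

    // ...both produce a 32-byte (256-bit) hash, and the avalanche effect
    // means the two outputs bear no obvious resemblance to each other.
    println!("{}", hex_string(&hash_a));
    println!("{}", hex_string(&hash_b));
}
```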

What are hash functions used for, and why is the property of lacking collisions so important? The simplest answer to the first question is that hash functions are typically used to ensure that when someone hands you a piece of binary data (and all data in the world of computing can be described in binary format, whether it is text, an executable, a video, an image or a complete database), it is what you expect. Comparing binary data directly is slow and arduous computationally, but hash functions are designed to be very quick. Given two files of several megabytes or gigabytes of data, you can produce hashes of them ahead of time, and defer the comparisons to when you need them[1].

Indeed, given the fact that it is easy to produce hashes of data, there is often no need to have both sets of data. Let us say that you want to run a file, but before you do, you want to check that it really is the file you think you have, and that no malicious actor has tampered with it. You can hash that file very quickly and easily, and as long as you have a copy of what the hash should look like, then you can be fairly certain that you have the file you wanted. This is where the “lack of collisions” (or at least “difficulty in computing collisions”) property of hash functions is important. If the malicious actor can craft a replacement file which shares the same hash as the real file, then the process is essentially useless.

In fact, there are more technical names for the various properties, and what I’ve described above mashes three of the important ones together. More accurately, they are:

  1. pre-image resistance – this says that if you have a hash, it should be difficult to find the message from which it was created, even if you know the hash function used;
  2. second pre-image resistance – this says that if you have a message, it should be difficult to find another message which, when hashed, generates the same hash;
  3. collision resistance – this says that it should be difficult to find any two messages which generate the same hash.

Collision resistance and second pre-image resistance sound like the same property at first glance, but are subtly (but importantly) different. Second pre-image resistance says that if you already have a message, it should be difficult to find another message with a matching hash, whereas collision resistance should make it hard for you to find any two messages which will generate the same hash, and is a much harder property to fulfil in a hash function.

Let us go back to our scenario of a malicious actor trying to exchange a file (with a hash which we can check) with another one. Now, to use cryptographic hashes “in the wild” – out there in the real world beyond the perfectly secure, bug-free implementations populated by unicorns and overflowing with fat-free doughnuts – there are some important and difficult provisos that need to be met. More paranoid readers may already have spotted some of them, in particular:

  1. you need to have assurances that the copy of the hash you have has also not been subject to tampering;
  2. you need to have assurances that the entity performing the hash performs and reports it correctly;
  3. you need to have assurances that the entity comparing the two hashes reports the result of that comparison correctly.

Ensuring that you can meet such assurances is not necessarily an easy task, and is one of the reasons that Trusted Platform Modules (TPMs) are part of many computing systems: they act as a hardware root of trust with capabilities to provide such assurances. TPMs are a useful and important tool for real-world systems, and I plan to write an article on them (similar to What’s an HSM?) in the future.


1 – It’s also generally easier to sign hashes of data, rather than large sets of data themselves – this happens to be important as one of the most common uses of hashes is for cryptographic (“digital”) signatures.

Thunderspy – should I care?

Thunderspy is a nasty attack, but easily prevented.

There’s a new attack out there which is getting quite a lot of attention this week. It’s called Thunderspy, and it uses the Thunderbolt port which is on many modern laptops and other computers to suck data from your machine. I thought that it might be a good issue to cover this week, as although it’s a nasty attack, there are easy ways to defend yourself, some of which I’ve already covered in previous articles, as they’re generally good security practice to follow.

What is Thunderspy?

Thunderspy is an attack on your computer which allows an attacker with moderate resources to get at your data under certain circumstances. The attacker needs:

  • physical access to your machine – not for long (maybe five minutes), but they do need it. This type of attack is sometimes called an “evil maid” attack, as it can be carried out by hotel staff with access to your room;
  • the ability to take your computer apart (a bit) – all we’re talking here is a screwdriver;
  • a little bit of hardware – around $400 worth, according to one source;
  • access to some freely available software;
  • access to another computer at the same time.

There’s one more thing that the attacker needs, and that’s for you to leave your computer on, or in suspend mode. I’ve discussed different power modes before (in 3 laptop power mode options), and mentioned, as well, that leaving your machine in suspend mode is generally a bad idea (in 7 security tips for travelling with your laptop). It turns out I was right.

What’s the bad news?

Well, there’s quite a lot of bad news:

  • lots of machines have Thunderbolt ports (you can find pictures of both the port and connectors on Wikipedia’s Thunderbolt page, in case you’re not sure whether your machine is affected);
  • machines are vulnerable even if you have full disk encryption;
  • Windows machines are vulnerable;
  • Linux machines are vulnerable;
  • Macintosh machines are vulnerable;
  • most machines with a Thunderbolt port from 2011 onwards are vulnerable;
  • although protection is available on some newer machines (from around 2019)
    • the extent of its efficacy is unclear;
    • lots of manufacturers don’t implement it;
  • some protections that you can turn on break USB and other functionality;
  • one variant of the attack breaks Thunderbolt security permanently, meaning that the attacker won’t need to take your computer apart at all for subsequent attacks: they just need physical access to the port whilst your machine is turned on (or in suspend mode).

The worst thing to note is that full disk encryption does not help you if your computer is turned on or in suspend mode.

Note – I’ve been unable to find out whether any Chromebooks have Thunderbolt support. Please check your model’s specifications or datasheet to be certain.

What’s the good news?

The good news is short and sweet: if you turn your computer completely off, or ensure that it’s in Hibernate mode, then it’s not vulnerable. Thunderspy is a nasty attack, but it’s easily prevented.

What should I do?

  1. Turn your computer off when you leave it unattended, even for short amounts of time.

That was easy, wasn’t it? This is best practice anyway, and it turns out that hibernate mode is also OK. What the attacker is looking for is a powered-up, logged-on computer with Thunderbolt: if you can stop them finding a computer that meets those criteria, then you’re fine.

3 open/closed Covid-19 contact tracing questions

All projects are not created equal.

One of the cheering things about the pandemic crisis in which we find ourselves is the vast up-swell of volunteering that we are seeing across the world. We are seeing this equally across the IT sector, and one of the areas where work is being done is in apps to help track Covid-19. Specifically, there is an interest in Covid-19 contact tracing, or tracking, apps for our mobile[0] phones. These aren’t apps which keep an eye on whether you’ve observed lock-down procedures, but which attempt to work out who has been in contact with whom, and work out from that, once we know that one person is infected with Covid-19, what the likely spread of the virus will be.

There are lots of contact tracing initiatives out there, from PEPP-PT from the European Union to Singapore’s TraceTogether, from the University of Washington’s PACT to MIT’s PACT[1]. Google and Apple are – unprecedentedly – working on an app together. There are lots of ways of comparing these apps and projects, but in today’s article, I want to suggest three measures which can help you consider them from the point of view of “openness”. As regular readers of this blog will know, I’m a big fan of open source – not just for software, but for data, management and the rest – and I believe that there’s also a strong correlation here with civil or human rights. These three measures are not too technical, and can help us get a grip on the likelihood that some of the apps (and associated projects) may impinge on privacy and other issues about which we care. I don’t want the data generated from apps that I download onto my phone to be used now or in the future to curtail my, or other people’s, civil or human rights, for blackmail or even for unapproved commercial gain.

1. Open source

Our first question must be: “is the app open source?” If the answer is “no”, then we have no way to know what is being captured, and therefore how it is being used. If the app is closed source, it could be collecting any data from pretty much any measuring device on our phones, including photo, video, audio, Bluetooth, wifi, temperature, GPS or accelerometer. We can try restricting access to these measurements, but such controls have not always been effective, understanding the impact of turning them off is rarely simple, and people frankly rarely bother to check them anyway. Equally bad is the fact that with closed source, you can’t have any idea of how good the security is, nor any chance to criticise and improve it. This is something about which I’ve written many times, including in my articles Disbelieving the many eyes hypothesis and Trust & choosing open source. Luckily, it seems that the majority of contact tracing apps are open source, but please be careful, and reject any which are not.

2. Centralised or distributed

In order to make sense of all the data that these apps collect, there needs to be a centralised[2] store where it can be processed, right? It’s common sense.

Actually, no. Although managing and processing data in one place can be much easier, there are ways to store data in a distributed manner, and allow the sorts of processing needed for contact tracing to take place. It may be more complex, but it also makes it much, much more difficult for governments, corporations or malicious actors to misuse this information. And we should be clear that this will be what happens if the data is made available. Maybe the best governments and the best corporations will be well-behaved by their standards, but a) those are not necessarily the standards that I or others will endorse and b) what about malicious actors and governments and corporations which are not “the best”?

3. Location or proximity tracking

This might seem like another obvious choice: if you want to be finding out who was in contact with whom, then the way to do it is see who was where, and when. GPS tracking – and associated technologies like wifi access point location tracking – combined with easily available time data, would give the ability to work out who was in a particular place at the same time as other people. This is true, but it also provides enormous opportunities for misuse, particularly when the data is held centrally (see above). An alternative is to use sensors like Bluetooth or NFC[3], to allow phones to collect information about other phones (or devices) with which they have been in contact and when. This is more easily anonymised – or pseudonymised – allowing information to be passed to the owners of those phones, but at the same time more difficult to misuse by governments, corporations and malicious actors.
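
To make the proximity-based approach a little more concrete, here’s a minimal Rust sketch of the general rotating-identifier idea used in several of these projects’ published designs: each phone derives short-lived pseudonymous identifiers from a key it keeps locally, and only ever broadcasts the identifiers. This is loosely modelled on those designs, not a faithful implementation of any particular protocol, and it uses the hmac and sha2 crates.

```rust
use hmac::{Hmac, Mac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

/// Derive the pseudonymous identifier broadcast during one time window.
/// Without the daily key, an observer cannot link identifiers from
/// different windows back to the same device.
fn proximity_id(daily_key: &[u8; 32], window_index: u32) -> [u8; 16] {
    let mut mac = HmacSha256::new_from_slice(daily_key).expect("HMAC accepts any key length");
    mac.update(b"proximity-id");
    mac.update(&window_index.to_be_bytes());
    let full = mac.finalize().into_bytes();
    let mut id = [0u8; 16];
    id.copy_from_slice(&full[..16]); // the broadcast identifier need not be the full MAC
    id
}

fn main() {
    let daily_key = [0x42u8; 32]; // in reality, freshly and randomly generated each day
    // The identifier changes every window, so observers can't correlate them.
    for window in 0..3 {
        println!("window {}: {:02x?}", window, proximity_id(&daily_key, window));
    }
}
```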

There are other issues to consider, one of which is that these sensors were not designed for this type of use, and we may be sacrificing accuracy if we choose this option. On the other hand, many interactions between people occur indoors, where GPS is much less effective anyway, and these types of technologies may help.

You could argue that this measurement is not about “openness” in itself, but it is a key indicator to whether the information collected can be used in ways which are far from open.

Conclusion

There are many other questions we can ask about Covid-19 contact tracing apps, some of which are related to openness, and some of which are not. These include:

  • Coverage
    • not all demographics have – or use – phones as much as the rest of the population, including the poor, the elderly, and certain religious groups. How effective will such projects be if they have reduced access to these groups?
    • older devices may have less accurate sensors, or not have some of the capabilities required by the apps. What is more, there may be a correlation between use of these older devices with some of the demographics noted above.
    • some people rarely update the apps on their phones, so even if they load an initial version of an app, newer versions, with functionality or security improvements, are likely to be unequally distributed across the set of devices.
  • Removal – how easy will it be to remove the application fully, what are the consequences of not doing so, and how likely are people to do so anyway[4]?
  • Will use of these apps be mandatory or voluntary? If the former, there are serious concerns about civil or human rights, not to mention the problems noted above about coverage.

All of these questions are important, but not directly related to the question of the “openness” of the apps and projects. However, we have, right now, some great opportunities to work with and influence some really important projects for public health and well-being, and I believe that it is important that we consider the questions I’ve raised about openness before endorsing, installing or using any of the apps that are being created.


0 – or “cell”, if you’re in North America.

1 – yes, they chose the same acronym. Yes, it is confusing.

2 – or, I suppose, “centralized”, depending on your geography.

3 – “Near Field Communication” – the same capability used when you do contactless payment with your phone or credit/debit card.

4 – how many apps do you still have on your phone that you’ve not even opened for 3 months? Yup, me too.

Post-Covid, post-open?

We are inventive, we are used to turning technologies to good.

The world of lockdown to which we’re becoming habituated at the moment has produced some amazing upsides. The number of people volunteering, the resurgence of local community initiatives, the selfless dedication of key workers across the world and the recognition of their sacrifice by the general public are among the most visible. As many regular readers of this blog are likely to be aware, there has also been an outpouring of interest and engagement in software- and hardware-related projects to help, from infection-tracking apps to 3D-printing of PPE[0]. Companies have made training and educational materials available for free, and there are attempts around the world to engage and contribute to the public commonwealth.

Sadly, not all of the news is good. There has been a rise in phishing attacks, and the lack of appropriate or sufficient security in commonly-used apps such as Zoom has become frighteningly evident[1]. There’s an article to write here about the balance between security, usability and cost, but I’m going to save that for another day.

Somewhere in the middle, between the obvious positives and obvious negatives, there are some developments which most of us probably accept as necessary, but which aren’t things that we’d normally welcome. Beyond the obvious restrictions on movement and public gatherings, there are a number of actions which governments, in particular, are taking which have generally negative impacts on human rights and civil liberties, as outlined in this piece by The Guardian. The article lists numerous examples of governments imposing, or considering the imposition of, measures which would normally be quickly attacked by human rights groups, and resisted by most citizens. Despite the headline, which suggests that the article will deal with how difficult these measures will be to remove after the end of the crisis, there is actually little discussion, beyond a note that “[w]hether that surveillance is eventually rolled back will depend on public oversight.”

I think that we need to go beyond just “oversight” and start planning now for public action. In the communities in which I live and work, there is a general expectation that the world – software, management, government, data – is becoming more, not less, open. We are in grave danger of losing that openness even once the need for these government measures diminishes. Governments – who will see the wider intelligence-gathering and control opportunities of these changes – will espouse the view that “we need these measures in place in order to be able to react quickly if the same thing happens again”, and, if we’re not careful, public sentiment, bruised and bloodied by the pandemic, will quietly acquiesce, and we will see improvements in human and civil rights rolled back decades, and damaged further by the availability of cheap, mobile, networked technology.

If we believe that openness is a public good, then we need to think how to counter the arguments which we will hear from governments, and be ready to be vocal – not just with counter-arguments, but with counter-proposals. This pandemic is unlike either of the World Wars of the 20th Century, when a clear ending was marked, and there was the opportunity (sadly denied to many citizens of the former USSR) to regain civil liberties and roll back the restrictions of the war years. Nor is it even like the aftermath of 9/11, the event which has impacted the intelligence and security landscape of the past two decades, where there is (was?) at least a set of (posited) human foes to target. In the case of the Covid-19 pandemic, the “enemy” is amorphous and will be around for decades to come. The measures to combat it – and its successors – will only be slowly reduced, and some will not be.

We need to fight against those measures which are unnecessary, and we need to find alternatives – transparent, public alternatives – to measures which may have some positive effects, but whose overall impact on society and human rights is clearly negative. In an era where big data is becoming pervasive, and the tools to mine it tractable, we need to provide international mechanisms to share and use that data in ways which do not benefit any single government, bloc, or section of society. We are inventive, we are used to turning technologies to good. This is the time we need to do it, and do it quickly. We can make a difference by being open, but we need to start now.


0 – Personal Protection Equipment.

1 – although note that the company is reported to be making improvements to at least one area of concern to some – routing of traffic through China.