Oh, how I love my TEE (or do I?)

Trusted Execution Environments use chip-level instructions to allow you to create enclaves of higher security

I realised just recently that I’ve not written yet about Trusted Execution Environments (TEEs) on this blog.  This is a surprise, honestly, because TEEs are fascinating, and I spend quite a lot of my professional time thinking – and sometimes worrying – about them.  So what, you may ask, is a TEE?

Let’s look at one of the key use cases first, and then get to what a Trusted Execution Environment is.  A good place to start is the “Cloud”, which, as we all know, is just somebody else’s computer.  What this means is that if you’re running an application (let’s call it a “workload”) in the Cloud – AWS, Azure, whatever – then what you’re doing is trusting somebody else to take the constituent parts of that workload – its code and its data – and run them on their computer.  “Yay”, you may be thinking, “that means that I don’t have to run it in my computer: it’s all good.”  I’m going to take issue with the “all good” bit of that statement.  The problem is that the company – or people within that company – who run your workload on their computer (let’s call it a “host”) can, if they so wish, look inside it, change it, and stop it running.  In other words, they can break all three classic “CIA” properties of security: confidentiality (by looking inside it); integrity (by changing it); and availability (by stopping it running).  This is because the ways that workloads run on hosts – whether in hardware-mediated virtual machines, within containers or on bare-metal – all allow somebody with sufficient privilege on that machine to do all of the bad things I’ve just mentioned.

And these are bad things.  We don’t tend to care about them too much as individuals – because the amount of value a cloud provider would get from bothering to look at our information is low – but as businesses, we really should be worried.

I’m afraid that the problem doesn’t go away if you run your systems internally.  Remember how anybody with sufficient access to hosts can look inside and tamper with your workloads?  Well, are you happy that your sysadmins should all have access to your financial results?  Merger and acquisition details?  Payroll?  Because if you have this kind of data running on your machines on your own premises, then they do have access to all of those.

Now, there are a number of controls that you can put in place to help with this – not least background checks and Acceptable Use Policies – but TEEs aim to solve this problem with technology.  Actually, they only really aim to solve the confidentiality and integrity pieces, so we’ll just have to assume for now that you’re going to be in a position to notice if your sales order process fails to run due to malicious activity (for instance).  Trusted Execution Environments use chip-level instructions to allow you to create enclaves of higher security where processes can execute (and data can be processed) in ways that mean that even privileged users of the host cannot attack their confidentiality or integrity.  To get a little bit technical, these enclaves are memory pages with particular controls on them such that they are always encrypted except when they are actually being processed by the chip.

The two best-known TEE implementations so far are Intel’s SGX and AMD’s SEV (though other silicon vendors are beginning to talk about their alternatives).  Both Intel and AMD are aiming to put these into server hardware and create an ecosystem around their version to make it easy for people to run workloads (or components of workloads) within them.  And the security community is doing what it normally does (and, to be clear, absolutely should be doing), and looking for vulnerabilities in the implementations.  So far, most of the vulnerabilities that have been identified are within Intel’s SGX – though I’m not in a position to say whether that’s because the design and implementation are weaker, or just because the researchers have concentrated on the market leader in terms of server hardware.  It looks like we need to go through a cycle or two of the technologies before the industry is convinced that we have a working design and implementation that provides the levels of security that are worth deploying.  There’s also work to be done to provide sufficiently high quality open source software and drivers to support TEEs for wide deployment.
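
If you’re curious whether hardware you already have even advertises one of these technologies, one quick and admittedly rough check on Linux is to look at the CPU feature flags.  Here’s a minimal sketch, assuming a Linux system exposing /proc/cpuinfo; the “sgx” and “sev” flag names are what I’d expect reasonably recent kernels to report, but the absence of a flag may just mean an older kernel or a disabled BIOS option, so treat the result as a hint rather than an answer.

```python
# Rough check for advertised TEE support on Linux via CPU feature flags.
# Assumption: "sgx" (Intel SGX) and "sev" (AMD SEV) appear among the flags in
# /proc/cpuinfo when the kernel and firmware have enabled them.

def tee_flags(path="/proc/cpuinfo"):
    wanted = {"sgx": "Intel SGX", "sev": "AMD SEV"}
    found = set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                found |= {name for flag, name in wanted.items() if flag in flags}
    return found

if __name__ == "__main__":
    supported = tee_flags()
    print("TEE support advertised:", ", ".join(sorted(supported)) or "none found")
```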

Despite the hopes of the silicon vendors, it may be some time before TEEs are in common usage, but people are beginning to sit up and take notice, partly because there’s so much interest in moving workloads to the Cloud, but there are still serious concerns about the security of your sensitive processes and data when they’re there.  This has got to be a good thing, and I think it’s really worth considering how you might start designing and deploying workloads in new ways once TEEs actually do become commonly available.

Equality in volunteering and open source

Volunteering favours the socially privileged

Volunteering is “in”. Lots of companies – particularly tech companies, it seems – provide incentives to employees to volunteer for charities, NGOs and other “not-for-profits”. These incentives range from donation matching to paid volunteer days to matching hours worked for a charity with a cash donation.

Then there are other types of voluntary work: helping out at a local sports club, mowing a neighbour’s lawn or fetching their groceries, and, of course, open source, which we’ll be looking at in some detail. There are countless thousands of projects which could benefit from your time.

Let’s step back first and look at the benefits of volunteering. The most obvious, of course, is the direct benefit to the organisation, group or individual of your time and/or expertise. Then there are the benefits to the wider community. Having people volunteering their time to help out with various groups – particularly those with whom they would have little contact in other circumstances – helps social cohesion and encourages better understanding of differing points of view as you meet people, and not just opinions.

Then there’s the benefit to you. Helping others feels great, looks good on your CV[1], can give you more skills, and make you friends – quite apart from the benefit I mentioned above about helping you to understand differing points of view. On the issue of open source, it’s something that lots of companies – certainly the sorts of companies with which I’m generally involved – are interested in, or even expect to see on your CV. Your contributions to open source projects are visible – unlike whatever you’ve been doing in most other jobs – they can be looked over, they show a commitment and are also a way of gauging your enthusiasm, expertise and knowledge in particular areas. All this seems to make lots of sense, and until fairly recently, I was concerned when I was confronted with a CV which didn’t have any open source contributions that I could check.

The inequality of volunteering

And then I read something by a feminist open sourcer (I’m afraid that I can’t remember who it was[3]), and did a little more digging, and realised that it’s far from that simple. Volunteering is an activity which favours the socially privileged – whether that’s in terms of income, gender, language or any number of other indicators. That’s particularly true for software and open source volunteering.

Let me explain. We’ll start with the gender issue. On average, you’re much less likely to have spare time to be involved in an open source project if you’re a woman, because women, on average, have more responsibilities in the home, and less free time. They are also, globally, less likely to have access to computing resources with which to contribute, due to wage discrepancies. Even beyond that, they are less likely to be welcomed into communities and have their contributions valued, whilst being more likely to attract abuse.

If you are in a low income bracket, you are less likely to have time to volunteer, and again, to have access to the resources needed to contribute.

If your first language is not English, you are less likely to be able to find an accepting project, and more likely to receive abuse for not explaining what you are doing.

If your name reflects a particular ethnicity, you may not be made to feel welcome in some contexts online.

If you are not neurotypical (e.g. you have Asperger’s, are on the autism spectrum, or are dyslexic), you may face problems in engaging in the social activities – online and offline – which are important to full participation in many projects.

The list goes on. There are, of course, many welcoming projects and communities that attempt to address all of these issues, and we must encourage that. Some people who are disadvantaged in terms of some of the privilege-types that I’ve noted above may actually find that open source suits them very well, as their lack of privilege can be hidden online in ways in which it could not be in other settings, and because some communities make a special effort to be welcoming and accepting.

However, if we just assume – that’s unconscious bias, folks – that volunteering, and specifically open source volunteering, is a sine qua non for “serious” candidates for roles, or a foundational required expertise for someone we are looking to employ, then we set a dangerous precedent, and run a very real danger of reinforcing privilege, rather than reducing it.

What can we do?

First, we can make our open source projects more welcoming, and be aware of the problems that those from less privileged groups may face. Second, we must be aware, and make our colleagues aware, that when we are interviewing and hiring, lack of evidence of volunteering is not evidence that the person is not talented, enthusiastic or skilled. Third, and always, we should look for more ways to help those who are less privileged than us to overcome the barriers to accessing not only jobs but also volunteering opportunities which will benefit not only them, but our communities as a whole.


1 – Curriculum vitae[2].

2 – Oh, you wanted the Americanism? It’s “resume” or something similar, but with more accents on it.

3 – a friend reminded me that it might have been this: https://www.ashedryden.com/blog/the-ethics-of-unpaid-labor-and-the-oss-community

Cryptographers arise!

Cryptography is a strange field, in that it’s both concerned with keeping secrets, but also has a long history of being kept secret, as well.  There are famous names from the early days, from Caesar (Julius, that is) to Vigenère, to more recent names like Diffie, Hellman[1], Rivest, Shamir and Adleman.  The trend even more recently has been away from naming cryptographic protocols after their creators, and more to snappy names like Blowfish or less snappy descriptions such as “ECC”.  Although I’m not generally a fan of glorifying individual talent over collective work, this feels like a bit of a pity in some ways.

In fact, over the past 80 years or so, more effort has probably been put into keeping the work of teams in cryptanalysis – the study of breaking cryptography – secret, though there are some famous names from the past like Al-Kindi, Phelippes (or “Phillips”), Rejewski, Turing, Tiltman, Knox and Briggs[2].

Cryptography is difficult.  Actually, let me rephrase that: cryptography is easy to do badly, and difficult to do well.  “Anybody can design a cipher that they can’t break”, goes an old dictum, with the second half of the sentence, “and somebody else can easily break”, being generally left unsaid.  Creation of cryptographic primitives requires significant knowledge of mathematics – some branches of which are well within the grasp of an average high-school student, and some of which are considerably more arcane.  Putting those primitives together in ways that allow you to create interesting protocols for use in the real world doesn’t necessarily require that you understand the full depth of the mathematics of the primitives that you’re using[3], but does require a good grounding in how they should be used, and how they should not be used.  Even then, a wise protocol designer, like a wise cryptographer[4], always gets colleagues and others to review his or her work.  This is one of the reasons that it’s so important that cryptography should be in the public domain, and preferably fully open source.
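
To illustrate the dictum, here’s a deliberately bad example of my own (purely illustrative, please don’t use it for anything): the kind of “cipher” that its author probably can’t break, but that anybody with a passing knowledge of frequency analysis can.  Repeating-key XOR looks opaque at first glance, and is routinely broken as a beginner’s exercise.

```python
# A deliberately weak "cipher": XOR with a short repeating key.
# The output looks like gibberish to its author, but repeating-key XOR leaks
# the key length and key bytes to simple statistical attacks - which is the
# point of the dictum above.

def xor_repeating_key(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = xor_repeating_key(b"attack at dawn", b"k3y")
# The same operation decrypts, which is another hint at how little is going on.
assert xor_repeating_key(ciphertext, b"k3y") == b"attack at dawn"
print(ciphertext.hex())
```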

Why am I writing about this?  Well, partly because I think that, on the whole, the work of cryptographers is undervalued.  The work they do is not only very tricky, but also vital.  We need cryptographers and cryptanalysts to be working in the public realm, designing new algorithms and breaking old (and, I suppose, new) ones.  We should be recognising and celebrating their work.  Mathematics is not standing still, and, as I wrote recently, quantum computing is threatening to chip away at our privacy and secrecy.  The other reason that I’m writing about this is that I think we should be proud of our history and heritage, be inspired to work on important problems, and inspire those around us to work on them, too.

Oh, and if you’re interested in the t-shirt, drop me a line or put something in the comments.


1 – I’m good at spelling, really I am, but I need to check the number of ells and ens in his name every single time.

2 – I know that this is heavily Bletchley-centric: it’s an area of history in which I’m particularly interested.  Bletchley was also an important training ground for some very important women in security – something of which we have maybe lost sight.

3 – good thing, too, as I’m not a mathematician, but I have designed the odd protocol here and there.

4 – that is, any cryptographer who recognises the truth of the dictum I quote above.

Will quantum computing break security?

Do you want J. Random Hacker to be able to pretend that they’re your bank?

Over the past few years, a new type of computer has arrived on the block: the quantum computer.  It’s arguably the sixth type of computer:

  1. humans – before there were artificial computers, people used, well, people.  And people with this job were called “computers”.
  2. mechanical analogue – devices such as the Antikythera mechanism, astrolabes or slide rules.
  3. mechanical digital – in this category I’d count anything that allowed discrete mathematics, but didn’t use electronics for the actual calculation: the abacus, Babbage’s Difference Engine, etc.
  4. electronic analogue – many of these were invented for military uses such as bomb sights, gun aiming, etc.
  5. electronic digital – I’m going to go out on a limb here, and characterise Colossus as the first electronic digital computer[1]: these are basically what we use today for anything from mobile phones to supercomputers.
  6. quantum computers – these are coming, and are fundamentally different to all of the previous generations.

What is quantum computing?

Quantum computing uses concepts from quantum mechanics to allow very different types of calculations to what we’re used to in “classical computing”.  I’m not even going to try to explain, because I know that I’d do a terrible job, so I suggest you try something like Wikipedia’s definition as a starting point.  What’s important for our purposes is to understand that quantum computers use qubits to do calculations, and for quite a few types of mathematical algorithms – and therefore computing operations – they can solve problems much faster than classical computers.

What’s “much faster”?  Much, much faster: orders of magnitude faster.  A calculation that might take years or decades with a classical computer could, in certain circumstances, take seconds.  Impressive, yes?  And scary.  Because one of the types of problems that quantum computers should be good at solving is decrypting encrypted messages, even without the keys.
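
To put a rough (and suitably hedged) shape on “much faster” for one famous case – factoring the large integers that underpin RSA – the best known classical algorithm, the general number field sieve, runs in sub-exponential time in the size of the number, whereas Shor’s algorithm on a quantum computer runs in roughly polynomial time:

```latex
\underbrace{\exp\!\Big(\big(\tfrac{64}{9}\big)^{1/3}(\ln N)^{1/3}(\ln\ln N)^{2/3}\Big)}_{\text{classical (GNFS), approximate}}
\qquad\text{versus}\qquad
\underbrace{O\!\big((\log N)^{3}\big)}_{\text{quantum (Shor), approximate}}
```

For the sizes of numbers we actually use, that’s the difference between “not in your lifetime” and “feasible, once a big enough machine exists”.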

This means that someone with a sufficiently powerful quantum computer should be able to read all of your current and past messages, decrypt any stored data, and maybe fake digital signatures.  Is this a big thing?  Yes.  Do you want J. Random Hacker to be able to pretend that they’re your bank[2]?  Do you want that transaction on the blockchain where you were sold a 10 bedroom mansion in Mayfair to be “corrected” to be a bedsit in Weston-super-Mare[3]?

Some good news

This is all scary stuff, but there’s good news, of various types.

The first is that in order to make any of this work at all, you need a quantum computer with a good number of qubits operating, and this is turning out to be hard[4].  The general consensus is that we’ve got a few years before anybody has a “big” enough quantum computer to do serious damage to classical encryption algorithms.

The second is that, even with a sufficient number of qubits to attack our existing algorithms, you still need even more in order to allow for error correction.

The third is that although there are theoretical models to show how to attack some of our existing algorithms, actually making them work is significantly harder than you or I[5] might expect.  In fact, some of the attacks may turn out to be infeasible, or just take more years to perfect than we’d worried about.

The fourth is that there are clever people out there who are designing quantum-computation resistant algorithms (sometimes referred to as “post-quantum algorithms”) that we can use, at least for new encryption, once they’ve been tested and become widely available.

All-in-all, in fact, there’s a strong body of expert opinion that says that we shouldn’t be overly worried about quantum computing breaking our encryption in the next 5 or even 10 years.

And some bad news

It’s not all rosy, however.  Two issues stick out to me as areas of concern.

  1. People are still designing and rolling out systems which don’t consider the issue.  If you’re coming up with a system which is likely to be in use for ten or more years, or which will be encrypting or signing data which must remain confidential or attributable over those sorts of periods, then you should be considering what impact quantum computing may have on your system.
  2. Some of the new, quantum-computing resistant algorithms are proprietary.  This means that when you and I want to start implementing systems which are designed to be quantum-computing resistant, we’ll have to pay to do so.  I’m a big proponent of open source, and particularly of open source cryptography, and my big worry is that we just won’t be able to open source these things, and worse, that when new protocol standards are created – either de facto or through standards bodies – they will choose proprietary algorithms that exclude the use of open source, whether on purpose, through ignorance, or because few good alternatives are available.

What to do?

Luckily, there are things you can do to address both of the issues above.  The first is to think and plan, when designing a system, about what the impact of quantum computing might be on it.  Often – very often – you won’t actually need to implement anything explicit now (and it could be hard to, given the current state of the art), but you should at least embrace the concept of crypto-agility: designing protocols and systems so that you can swap out algorithms if required[7].
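
To make crypto-agility a little more concrete, here’s a minimal sketch (using only Python’s standard hashlib; the registry and the convention of tagging every stored record with an algorithm identifier are my illustration, not any particular standard): if the algorithm used is recorded alongside the data, a newer algorithm can be swapped in later without breaking verification of older records.

```python
import hashlib

# Crypto-agility sketch: keep an explicit registry of algorithms, and tag every
# stored digest with the identifier that was used, so the algorithm can be
# swapped later without guessing what protected the old data.
HASHES = {
    "sha256": hashlib.sha256,
    "sha3_256": hashlib.sha3_256,  # a later addition; older records still verify
}

def protect(data: bytes, alg: str = "sha3_256") -> dict:
    return {"alg": alg, "digest": HASHES[alg](data).hexdigest()}

def verify(data: bytes, record: dict) -> bool:
    return HASHES[record["alg"]](data).hexdigest() == record["digest"]

record = protect(b"important message")
assert verify(b"important message", record)
print(record)
```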

The second is a call to arms: get involved in the open source movement, and encourage everybody you know who has anything to do with cryptography to rally for open standards and for research into non-proprietary, quantum-computing resistant algorithms.  This is something that’s very much on my to-do list, and an area where pressure and lobbying is just as important as the research itself.


1 – I think it’s fair to call it the first electronic, programmable computer.  I know there were earlier non-programmable ones, and that some claim ENIAC, but I don’t have the space or the energy to argue the case here.

2 – no.

3 – see [2].  Don’t get me wrong, by the way – I grew up near Weston-super-Mare, and it’s got things going for it, but it’s not Mayfair.

4 – and if a quantum physicist says that something’s hard, then, to my mind, it’s hard.

5 – and I’m assuming that neither of us is a quantum physicist or mathematician[6].

6 – I’m definitely not.

7 – and not just for quantum-computing reasons: there’s a good chance that some of our existing classical algorithms may just fall to other, non-quantum attacks such as new mathematical approaches.

The 3 things you need to know about disk encryption

Use software encryption, preferably an open-source and audited solution.

It turns out that somebody – well, lots of people, in fact – failed to implement a cryptographic standard very well.  This isn’t a surprise, I’m afraid, but it’s bad news.  I’ve written before about how important it is to be using disk encryption, but it turns out that the advice I gave wasn’t sufficient, or detailed enough.

Here’s a bit of background.  There are two ways to do disk encryption:

  1. let the disk hardware (and firmware) manage it: HDD (hard disk drive), SSD (solid state drive) and hybrid (a mix of HDD and SSD technologies) manufacturers create drives which have encryption built in.
  2. allow your Operating System (e.g. Linux[1], OSX[2], Windows[3]) to do the job: the O/S will have a little bit of itself on the disk unencrypted, which will allow it to decrypt the rest of the disk (which is encrypted) when provided with a password or key.

You’d think, wouldn’t you, that option 1 would be the safest?  It should be quick, as it’s done in hardware, and well, the companies who manufacture these disks will know what they’re doing, right?

No.

A paper (link opens a PDF file) written by some researchers in the Netherlands reveals some work that they did on several SSD drives to try to work out how good a job had been done on the encryption security.  They are all supposed to have implemented a fairly complex standard from the TCG[4] called Opal, but it seems that none of them did it right.  It turns out that someone with physical access to your hardware can, fairly trivially, decrypt what’s on your drive.  And they can do this without the password that you use to lock it or any associated key(s).  The simple lesson from this is that you shouldn’t trust hardware disk encryption.

So, software disk encryption is OK, then?

Also no.

Well, actually yes, as long as you’re not using Microsoft’s BitLocker in its default mode.  It turns out that BitLocker will just use hardware encryption if the drive it’s using supports it.  In other words, BitLocker defaults to hardware encryption unless you tell it not to do so.

What about other options?  Well, you can tell BitLocker not to use hardware encryption, but only for a new installation: it won’t change on an existing disk.  The best option[5] is to use a software encryption solution which is open source and audited by the wider community.  LUKS is the default for most Linux distributions.  One suggested by the paper’s authors for Windows is VeraCrypt.  Can we be certain that there are no holes or mistakes in the implementation of these solutions?  No, we can’t, but the chances of security issues being found and fixed are much, much higher than for proprietary software[6].
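
If you want to check what you’ve actually got, rather than what you think you asked for, here’s a minimal sketch (assuming a Linux system with cryptsetup installed; the device path is a hypothetical placeholder, and you’ll generally need root to read the device) of testing whether a block device carries a LUKS header – in other words, whether software encryption is really in play:

```python
import subprocess

def is_luks(device: str) -> bool:
    # "cryptsetup isLuks" exits with 0 if the device has a LUKS header,
    # and non-zero otherwise.
    result = subprocess.run(
        ["cryptsetup", "isLuks", device],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

# Hypothetical device path: substitute the partition you actually care about.
print("LUKS header found" if is_luks("/dev/sda2") else "no LUKS header")
```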

What, then, are my recommendations?

  1. Don’t use hardware disk encryption.  It’s been shown to be flawed in many implementations.
  2. Don’t use proprietary software.  For anything, honestly, particularly anything security-related, but specifically not for disk encryption.
  3. If you have to use Windows, and are using BitLocker, run with VeraCrypt on top.



1 – GNU Linux.

2 – I’m not even sure if this is the OS that Macs run anymore, to be honest.

3 – not my thing either, but I’m pretty sure this is what it’s called.  Couldn’t be certain of the version, though.

4 – Trusted Computing Group.

5 – as noted by the paper’s authors, and heartily endorsed by me.

6 – I’m not aware of any problems with Macintosh-based implementations, but open source is just better – read the article linked from earlier in the sentence.

On being acquired – a personal view

It’s difficult to think of a better fit than IBM.

First off, today is one of those days when I need to point you at the standard disclaimer that the views expressed in this post are my own, and not necessarily those of my employers.  That said, I think that many of them probably align, but better safe than sorry[1].  Another note: I believe that all of the information in this article is public knowledge.

The news came out two days ago (last Sunday, 2018-10-28) that Red Hat, my employer, is being acquired by IBM for $34bn.  I didn’t know about the deal in advance (I’m not that exalted within the company hierarchy, which is probably a good thing, as all those involved needed to keep very tight-lipped about it, and that would have been hard), so the first intimation I got was when people started sharing stories from various news sites on internal chat discussions.  They (IBM) are quite clear about the fact that they are acquiring us for the people, which means that each of us (including me!) is worth around $2.6m, based on our current headcount.  Sadly, I don’t think it works quite like this, and certainly nobody has (yet) offered to pay me that amount[2].  IBM have also said that they intend to keep Red Hat operating as a separate entity within IBM.

How do I feel?  My initial emotion was shock.  It’s always a surprise when you get news that you weren’t expecting, and the message that we’d carried for a long time was that Red Hat would attempt to keep ploughing its own furrow[3] for as long as possible.  But I’d always known that, as a public company, we were available to be bought, if the money was good enough.  It appears on this occasion that it was.  And that emotion turned to interest as to what was going to happen next.

And do you know what?  It’s difficult to think of a better fit than IBM.  I’m not going to enumerate the reasons that I feel that other possible acquirers would have been worse, but here are some of the reasons that IBM, at least in this arrangement, is good:

  • they “get” open source, and have a long history of encouraging its use;
  • they seem to understand that Red Hat has a very distinctive culture, and want to encourage that, post-acquisition;
  • they have a hybrid cloud strategy and products, Red Hat has a hybrid cloud strategy and products: they’re fairly well-aligned;
  • we’re complementary in a number of sectors and markets;
  • they’re a much bigger player than us, and suddenly, we’ll have access to more senior people in new and exciting companies.

What about the impact on me, though?  Well, IBM takes security seriously.  IBM has some fantastic research and academic connections.  The group in which I work has some really bright and interesting people in it, and it’s difficult to imagine IBM wanting to break it up.  A number of the things I’m working on will continue to align with both Red Hat’s direction and IBM’s.  The acquisition will take up to a year to complete – assuming no awkward regulatory hurdles along the way – and not much is going to change in the day-to-day.  Except that I hope to get even better access to my soon-to-be-colleagues working in similar fields to me, but within IBM.

Will there be issues along the way?  Yes.  Will there be uncertainty?  Yes.  But do I trust that the leadership within Red Hat and IBM have an honest commitment to making things work in a way that will benefit Red Hatters?  Yes.

And am I looking to jump ship?  Oh, no.  Far too much interesting stuff to be doing.  We’ve got an interesting few months and years ahead of us.  My future looked red, until Sunday night.  Then maybe blue.  But now I’m betting on something somewhere between the two: go Team Purple.


1 – because, well, lawyers, the SEC, etc., etc.

2 – if it does, then, well, could somebody please contact me?

3 – doing its own thing independently.

6 types of attack: learning from Supermicro, State Actors and silicon

… it could have happened, and it could be happening now.

Last week, Bloomberg published a story detailing how Chinese state actors had allegedly forced employees of Supermicro (or companies subcontracting to them) to insert a small chip – the silicon in the title – into motherboards destined for Apple and Amazon.  The article talked about how an investigation into these boards had uncovered this chip and the steps that Apple, Amazon and others had taken.  The story was vigorously denied by Supermicro, Apple and Amazon, but that didn’t stop Supermicro’s stock price from tumbling by over 50%.

I have heard strong views expressed by people with expertise in the topic on both sides of the argument: that it probably didn’t happen, and that it probably did.  One side argues that the denials by Apple and Amazon, for instance, might have been impacted by legal “gagging orders” from the US government.  An opposing argument suggests that the Bloomberg reporters might have confused this story with a similar one that occurred a few months ago.  Whether this particular story is correct in every detail, or a fabrication – intentional or unintentional – is not what I’m interested in at this point.  What I’m interested in is not whether it did happen in this instance: the clear message is that it could have happened, and it could be happening now.

I’ve written before about State Actors, and whether you should worry about them.  There’s another question which this story brings up, which is possibly even more germane: what can you do about it if you are worried about them?  This breaks down further into two questions:

  • how can I tell if my systems have been compromised?
  • what can I do if I discover that they have?

The first of these is easily enough to keep us occupied for now[1], so let’s spend some time on that.  First, let’s define six types of compromise, think about how they might be carried out, and then consider the questions above for each:

  • supply-chain hardware compromise;
  • supply-chain firmware compromise;
  • supply-chain software compromise;
  • post-provisioning hardware compromise;
  • post-provisioning firmware compromise;
  • post-provisioning software compromise.

There isn’t space in this article to go into detail on each of these types of attack, so I’ll provide an overview of each instead[2].

Terms

  • Supply-chain – all of the steps up to when you start actually running a system.  From manufacture through installation, including vendors of all hardware components and all software, OEMs, integrators and even shipping firms that have physical access to any pieces of the system.  For all supply-chain compromises, the key question is the extent to which you, the owner of a system, can trust every single member of the supply chain[3].
  • Post-provisioning – any point after which you have installed the hardware, put all of the software you want on it, and started running it: the time during which you might consider the system “under your control”.
  • Hardware – the physical components of a system.
  • Software – software that you have installed on the system and over which you have some control: typically the Operating System and application software.  The amount of control depends on factors such as whether you use proprietary or open source software, and how much of it is produced, compiled or checked by you.
  • Firmware – special software that controls how the hardware interacts with the standard software on the machine, the hardware that comprises the system, and external systems.  It is typically provided by hardware vendors, and its operation is opaque to owners and operators of the system.

Compromise types

See the table at the bottom of this article for a short summary of the points below.

  1. Supply-chain hardware – there are multiple opportunities in the supply chain to compromise hardware, but the harder they are to detect, the more difficult they are to perform.  The attack described in the Bloomberg story would be extremely difficult to detect, but the addition of a keyboard logger to a keyboard just before delivery (for instance) would be correspondingly simpler.
  2. Supply-chain firmware – of all the options, this has the best return on investment for an attacker.  Assuming good access to an appropriate part of the supply chain, inserting firmware that (for instance) impacts network performance or leaks data over a wifi connection is relatively simple.  The difficulty in detection comes from the fact that although it is possible for the owner of the system to check that the firmware is what they think it is, what that measurement confirms is only that the vendor has told them what they have supplied (there’s a sketch of this kind of check after this list).  So the “medium” rating relates only to firmware that was implanted by members of the supply chain who did not source the original firmware: otherwise, it’s “high”.
  3. Supply-chain software – by this, I mean software that comes installed on a system when it is delivered.  Some organisations will insist on “clean” systems being delivered to them[4], and will install everything from the Operating System upwards themselves.  This means that they basically now have to trust their Operating System vendor[5], which is maybe better than trusting other members of the supply chain to have installed the software correctly.  I’d say that it’s not too simple to mess with this in the supply chain, if only because checking isn’t too hard for the legitimate members of the chain.
  4. Post-provisioning hardware – this is where somebody with physical access to your hardware – after it’s been set up and is running – inserts or attaches hardware to it.  I nearly gave this a “high” rating for difficulty below, assuming that we’re talking about servers, rather than laptops or desktop systems, as one would hope that your servers are well-protected, but the ease with which attackers have shown that they can typically get physical access to systems using techniques like social engineering, means that I’ve downgraded this to “medium”.  Detection, on the other hand, should be fairly simple given sufficient resources (hence the “medium” rating), and although I don’t believe anybody who says that a system is “tamper-proof”, tamper-evidence is a much simpler property to achieve.
  5. Post-provisioning firmware – when you patch your Operating System, it will often also patch firmware on the rest of your system.  This is generally a good thing to do, as patches may provide security, resilience or performance improvements, but you’re stuck with the same problem as with supply-chain firmware that you need to trust the vendor: in fact, you need to trust both your Operating System vendor and their relationship with the firmware vendor.
  6. Post-provisioning software – is it easy to compromise systems via their Operating System and/or application software?  Yes: this we know.  Luckily – though depending on the sophistication of the attack – there are generally good tools and mechanisms for detecting such compromises, including behavioural monitoring.
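
Since I mentioned firmware measurement above, here’s a very small sketch of the kind of check an owner can do: hash a firmware image you’ve extracted and compare it against the value the vendor publishes.  The file name and the vendor hash are hypothetical placeholders and, as noted in the list, a match only confirms that the image is what the vendor claims to have shipped, not that the vendor’s firmware is itself trustworthy.

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file so that large firmware images don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical inputs: a dumped firmware image and the vendor's published hash.
measured = sha256_of("firmware-dump.bin")
vendor_published = "replace-with-the-vendor's-published-sha256-value"
print("matches vendor claim" if measured == vendor_published else "MISMATCH: investigate")
```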

Table


Compromise type               Attacker difficulty   Detection difficulty
Supply-chain hardware         High                  High
Supply-chain firmware         Low                   Medium
Supply-chain software         Medium                Medium
Post-provisioning hardware    Medium                Medium
Post-provisioning firmware    Medium                Medium
Post-provisioning software    Low                   Low

Conclusion

What are your chances of spotting a compromise on your system?  I would argue that they are generally pretty much in line with the difficulty of performing the attack in the first place: with the glaring exception of supply-chain firmware.  We’ve seen attacks of this type, and they’re very difficult to detect.  The good news is that there is some good work going on to help detection of these types of attacks, particularly in the world of Linux[7] and open source.  In the meantime, I would argue our best forms of defence are currently:

  • for supply-chain: build close relationships, use known and trusted suppliers.  You may want to restrict as much as possible of your supply chain to “friendly” regimes if you’re worried about State Actor attacks, but this is very hard in the global economy.
  • for post-provisioning: lock down your systems as much as possible – both physically and logically – and use behavioural monitoring to try to detect deviations from what you expect them to be doing (there’s a toy sketch of the idea below).
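
As a toy illustration of the behavioural monitoring point above (the baseline set and the process names are hypothetical, and real tooling watches far more than process names), here’s a sketch that compares what is currently running against a baseline recorded when the system was known to be good, and flags anything unexpected:

```python
import subprocess

def running_processes() -> set:
    # "ps -eo comm=" prints one process name per line, with no header.
    out = subprocess.run(["ps", "-eo", "comm="], capture_output=True, text=True)
    return set(out.stdout.split())

def unexpected(baseline: set) -> set:
    return running_processes() - baseline

# Hypothetical baseline, captured when the system was known to be in a good state.
baseline = {"systemd", "sshd", "nginx", "ps"}
print("unexpected processes:", unexpected(baseline) or "none")
```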

1 – I’ll try to write something on this other topic in a different article.

2 – depending on interest, I’ll also consider a series of articles to go into more detail on each.

3 – how certain are you, for instance, that your delivery company won’t give your own government’s security services access to the boxes containing your equipment before they deliver them to you?

4 – though see above: what about the firmware?

5 – though you can always compile your own Operating System if you use open source software[6].

6 – oh, you didn’t compile your compiler yourself?  All bets off, then…

7 – yes, “GNU Linux”.