There are no absolutes in security

There is no “secure”.

Let’s stop using the word “secure”. There is no “secure” in IT.

I know that sounds crazy, but it’s true.

Sometimes, when I speak to colleagues and customers, there will be non-technical or non-security people there, and they ask how to get a secure system. So I explain how I’d make a system secure. It goes a bit like this.

  1. Remove any non-critical USB connections: in particular external or “thumb” drives.
  2. Turn off all Bluetooth.
  3. Turn off all wifi.
  4. Remove any network cables.
  5. Remove any other USB connections, including mouse or keyboard.
  6. Disconnect any monitors.
  7. Disconnect any other cables that are connected to the system.
  8. Yes, that includes the power cable.
  9. Now take out any hard drives – SSD, HDD or other.
  10. Destroy them. My preferred method is to gouge tracks in all spinning media, break the heads, bash all pieces with a hammer and then throw them into Mount Doom, but any other volcano[1] will do. Thermite lances are probably acceptable. You should do the same with all other components that you removed in earlier steps.
  11. Destroy the motherboard, including all chips and RAM.
  12. Tip all remaining pieces down a well.
  13. Pour concrete down the well.[2]
  14. You now probably have a system which is about as secure as you’re going to get.

Yes, it’s a bit extreme, but the point is that all of the components there are possible threat vectors or information leakage channels.

Can we design and operate a system where we manage and mitigate the risks of threats and information leakage? Yes. That’s where we improve the security of a system. Is that a secure system? No, it’s not. What we’ve done is raise the bar, but we’ve not made it absolutely secure.

Part of the problem is that there’s just no way, these days[4], that any single person can be certain of the security of all parts of a system: there are just too many of them, and they are too complex. You may understand the application layer, but what about the virtualisation layer, for instance? I presented a simplified layer diagram in my post Isolationism a few months back, in which I listed the host as the bottom layer, but that was, of course, just asking for trouble. Along came Meltdown and Spectre, and now it’s clear (as if we didn’t know it already) that you should never ignore the fact that you can’t even trust the silicon you’re running on to do the thing you think it ought.

None of this, however, stops people and companies telling you that they’ll “secure your perimeter”, or provide you with “secure systems”. And it annoys me[5]. “We’ll help you secure your perimeter” isn’t too bad, but anything that suggests that you can have “secure systems” smacks to me of marketing – bad marketing.

So here you go: please stop using the word “secure” as an unqualified adjective or verb. We’re grown-ups, now, and we know it’s not real. So let’s not pretend.

Now – where was that well-cover? I need to deal with little Tommy.


1 – terrestrial/Middle Earth. I’m not sure about volcano temperatures on other planets or in the Undying Lands across the Western Sea.

2 – it should probably therefore be a disused well. Check there are no animals down there first[3]. In fact, before you throw anything down there.

3 – what’s that, Lassie? Little Tommy’s down the well? Well, I wonder whether little Tommy is waiting for us to throw the components down there so that he can do bad things. Bad Tommy.

4 – I’d like to think that maybe there was, once, in the distant past, but I’m probably kidding myself.

5 – you might be surprised at the number of things that annoy me[6].

6 – unless you’re my wife, in which case you probably won’t be[7].

7 – surprised. Or, in fact, reading this article.

3 tests for NOT moving to blockchain 

How to tell when you can avoid the hype.

So, there’s this thing called “blockchain” which is quite popular…

You know that already, of course.  I keep wondering if we’ve hit “peak hype” for blockchain and related technologies yet, but so far there’s no sign of it.  As usual for this blog, when I’m talking about blockchain, I’m going to include DLTs – Distributed Ledger Technologies – which are, by some tight definitions of the term, not really blockchains at all.  I’m particularly interested, from a professional point of view, in permissioned blockchains.  You can read more about how that’s defined in my previous post Is blockchain a security topic? – the key point here is that I’m interested in business applications of blockchain beyond cryptocurrency[1].

And, if the hype is to be believed – and some of it probably should be[2] – then there is an almost infinite set of applications for blockchain.  That’s probably correct, but that doesn’t mean that they’re all good applications for blockchain.  Some, in fact, are likely to be very bad applications for blockchain.

The hype associated with blockchain, however, means that businesses are rushing to embrace this new technology[3] without really understanding what they’re doing.  The drivers towards this move are arguably three-fold:

  1. you can, if you try, turn almost any multi-user application which stores data into a blockchain-enabled application;
  2. there are lots of conferences and “gurus” telling people that if they don’t embrace blockchain now, they’ll go out of business within six months[4];
  3. it’s not easy technology to understand fully, and lots of the proponents “on-the-ground” within organisations are techies.

I want to unpack that last statement before I get a hail of trolls flaming me[5].  I have nothing against techies – I’m one myself – but one of our characteristics tends to be enormous enthusiasm about new things (“shinies”) that we understand, but whose impact on the business we don’t always fully grok[6]. That’s not always a positive for business leaders.

The danger, then, is that the confluence of those three drivers may lead to businesses deciding to start moving to blockchain applications without fully understanding whether that’s a good idea.  I wrote in another previous post (Blockchain: should we all play?) about some tests that you can apply to decide when a process is a good fit for blockchain and when it’s not.  They were useful, but the more I think about it, the more I’m convinced that we need some simple tests to tell us when we should definitely not move a process or an application to a blockchain.  I present my three tests.  If your answer to any of these questions is “yes”, then you almost certainly don’t need a blockchain.

Test 1 – does it have a centralised controller or authority?

If the answer is “yes”, then you don’t need a blockchain.

If, for instance, you’re selling, I don’t know, futons, and you have a single ordering system, then you have a single authority for deciding when to send out a futon.  You almost certainly don’t need to make this a blockchain.  If you are a purveyor of content that has to pass through a single editorial and publishing process, then you almost certainly don’t need to make this a blockchain.

The lesson is: blockchains really don’t make sense unless the tasks required in the process execution – and the trust associated with those tasks – are distributed between multiple entities.

Test 2 – could it work fine with a standard database?

If the answer to this question is “yes”, then you don’t need a blockchain.

This question and the previous one are somewhat intertwined, but don’t need to be.  There are applications where you have distributed processes but need to store information centrally, or centralised authorities but distributed data, so the answer to one question may be “yes” while the other is “no”.  But if the answer to this question is “yes”, then use a standard database.

Databases are good at what they do, they are cheaper in terms of design and operation than running a blockchain or distributed ledger, and we know how to make them work.  Blockchains are about letting everybody[8] see and hold data, but the overheads can be high, and the implications costly.

Test 3 – is adoption going to be costly, or annoying, to some stakeholders?

If the answer to this question is “yes”, then you don’t need a blockchain.

I’ve heard assertions that blockchains always benefit all users.  This is patently false.  If you are creating an application for a process, and changing the way that your stakeholders interact with you and it, you need to consider whether that change is in their best interests.  It’s very easy to create and introduce an application, blockchain or not, which reduces business friction for the owner of the process, but increases it for other stakeholders.

If I make engine parts for the automotive industry, it may benefit me immensely to be able to track and manage the parts on a blockchain.  I may be able to see at a glance who’s supplied what, when, and the quality of the steel used in the ball-bearings.  On the other hand, if I’m a ball-bearing producer, and I have an established process which works for the forty companies to whom I sell ball-bearings, then adopting a new process for just one of them, with associated changes to my method of work, new systems and new storage and security requirements is unlikely to be in my best interests: it’s going to be both costly and annoying.
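
Pulling the three tests together: below is a minimal, purely illustrative sketch in Python of how you might encode them as a quick pre-project checklist.  The function and argument names are invented for this post – it isn’t a real decision tool, just the logic of the three questions.

    # Illustrative sketch only: the three "you probably don't need a blockchain" tests.
    def needs_blockchain(central_authority: bool,
                         standard_db_would_work: bool,
                         costly_for_some_stakeholders: bool) -> bool:
        """Return False if any of the three tests answers "yes"."""
        if central_authority:               # Test 1
            return False
        if standard_db_would_work:          # Test 2
            return False
        if costly_for_some_stakeholders:    # Test 3
            return False
        # Passing all three doesn't prove you *do* need a blockchain -
        # it just means you haven't ruled one out yet.
        return True

    # Example: the futon shop with a single ordering system.
    print(needs_blockchain(True, True, False))   # -> False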

Conclusion

Tests are guidelines: they’re not set in stone.  One of these tests looks like a technical test (the database one), but it’s really as much about business roles and responsibilities as the other two.  All of them, hopefully, can be used as a counter-balance to the three drivers I mentioned.



1 – which, don’t get me wrong, is definitely interesting and a business application – it’s just not what I’m going to talk about in this post.

2 – the trick is knowing which bits.  Let me know if you work out how, OK?

3 – it’s actually quite a large set of technologies, to be honest.

4 – which is patently untrue, unless the word “they” refers there to the conferences and gurus, in which case it’s probably correct.

5 – which may happen anyway due to my egregious mixing of metaphors.

6 – there’s a word to love.  I’ve put it in to exhibit my techie credentials[7].

7 – and before you doubt them, yes, I’ve read the book, in both cut and uncut versions.

8 – within reason.

Moving to DevOps, what’s most important? 

Technology, process or culture? (Clue: it’s not the first two)

You’ve been appointed the DevOps champion in your organisation: congratulations.  So, what’s the most important issue that you need to address?

It’s the technology – tools and the toolchain – right?  Everybody knows that unless you get the right tools for the job, you’re never going to make things work.  You need integration with your existing stack – though whether you go with tight or loose integration will be an interesting question – a support plan (vendor, 3rd party or internal), and a bug-tracking system to go with your source code management system.  And that’s just the start.

No!  Don’t be ridiculous: it’s clearly the process that’s most important.  If the team doesn’t agree on how stand-ups are run, who participates, the frequency and length of the meetings, and how many people are required for a quorum, then you’ll never be able to institute a consistent, repeatable working pattern.

In fact, although both the technology and the process are important, there’s a third component which is equally important, but typically even harder to get right: culture.  Yup, it’s that touchy-feely thing that we techies tend to struggle with[1].

Culture

I was visiting a medium-sized government institution a few months ago (not in the UK, as it happens), and we arrived a little early to meet the CEO and CTO.  We were ushered into the CEO’s office and waited for a while as the two of them finished participating in the daily stand-up.  They apologised for being a minute or two late, but far from being offended, I was impressed.  Here was an organisation where the culture of participation was clearly infused all the way up to the top.

Not that culture can be imposed from the top – nor can you rely on it percolating up from the bottom[3] – but these two C-level execs were not only modelling the behaviour they expected from the rest of their team, but also seemed, from the brief discussion we had about the process afterwards, to be truly invested in it.  If you can get management to buy into the process – and to be seen to buy in – you are at least less likely to have problems with other groups finding plausible excuses to keep their distance and getting away with it.

So let’s say that management believes that you should give DevOps a go.  Where do you start?

Developers, tick?[5]

Developers may well be your easiest target group.  Developers are often keen to try new things, and to find ways to move things along faster, so they are often the group that can be expected to adopt new technologies and methodologies.  DevOps has arguably been mainly driven by the development community. But you shouldn’t assume that all developers will be keen to embrace this change.  For some, the way things have always been done – your Rick Parfitts of dev, if you will[7] – is fine.  Finding ways to help them work efficiently in the new world is part of your job, not just theirs.  If you have superstar developers who aren’t happy with change, you risk alienating them and losing them if you try to force them into your brave new world.  What’s worse, if they dig their heels in, you risk the adoption of your DevSecOps vision being compromised when they explain to their managers that things aren’t going to change if it makes their lives more difficult and reduces their productivity.

Maybe you’re not going to be able to move all the systems and people to DevOps immediately.  Maybe you’re going to need to choose which apps to start with, and who will be your first DevOps champions.  Maybe it’s time to move slowly.

Not maybe: definitely

No – I lied.  You’re definitely going to need to move slowly.  Trying to change everything at once is a recipe for disaster.

This goes for all elements of the change – which people to choose, which technologies to choose, which applications to choose, which user base to choose, which use cases to choose – bar one.  For all of those elements, if you try to move everything in one go, you will fail.  You’ll fail for a number of reasons.  You’ll fail for reasons I can’t imagine, and, more importantly, for reasons you can’t imagine, but some of the reasons will include:

  • people – most people – don’t like change;
  • technologies don’t like change (you can’t just switch and expect everything to work still);
  • applications don’t like change (things worked before, or at least failed in known ways: you want to change everything in one go?  Well, they’ll all fail in new and exciting[9] ways);
  • users don’t like change;
  • use cases don’t like change.

The one exception

You noticed that, above, I wrote “bar one”, when discussing which elements you shouldn’t choose to change all in one go?  Well done.

What’s that exception?  It’s the initial team.  When you choose your initial application to change, and you’re thinking about choosing the team to make that change, select the members carefully, and select a complete set.  This is important.  If you choose just developers, just test folks, just security folks, just ops folks, or just management – if you leave out any functional group from your list – then you won’t actually have proved anything at all.  Well, you might have proved to a small section of your community that it kind of works, but you’ll have missed out on a trick.  And that trick is that if you choose keen people from across your functional groups, it’s much harder to fail.

Say that your first attempt goes brilliantly.  How are you going to convince other people to replicate your success and adopt DevOps?  Well, the company newsletter, of course.  And that will convince how many people, exactly?  Yes, that number[12].  If, on the other hand, you have team members from across the functional parts of the organisation, then when you succeed, they’ll tell their colleagues, and you’ll get more buy-in next time.

If, conversely, it fails, well, if you’ve chosen your team wisely, and they’re all enthusiastic, and know that “fail often, fail fast” is good, then they’ll be ready to go again.

So you need to choose enthusiasts from across your functional groups.  They can work on the technologies and the process, and once that’s working, it’s the people who will create that cultural change.  You can just sit back and enjoy.  Until the next crisis, of course.


1 – OK, you’re right.  It should be “with which we techies tend to struggle”[2]

2 – you thought I was going to qualify that bit about techies struggling with touchy-feely stuff, didn’t you?  Read it again: I put “tend to”.  That’s the best you’re getting.

3 – is percolating a bottom-up process?  I don’t drink coffee[4], so I wouldn’t know.

4 – do people even use percolators to make coffee anymore?  Feel free to let me know in the comments. I may pretend interest if you’re lucky.

5 – for US readers (and some other countries, maybe?), please substitute “check” for “tick” here[6].

6 – for US techie readers, feel free to perform “s/tick/check/;”.

7 – this is a Status Quo[8] reference for which I’m extremely sorry.

8 – for Millennial readers, please consult your favourite online reference engine or just roll your eyes and move on.

9 – for people who say, “but I love excitement”, try being on call at 2am on a Sunday morning at the end of the quarter when your Chief Financial Officer calls you up to ask why all of last month’s sales figures have been corrupted with the letters “DEADBEEF”[10].

10 – for people not in the know, this is a string often used by techies as test data because a) it’s non-numerical; b) it’s numerical (in hexadecimal); c) it’s easy to search for in debug files and d) it’s funny[11].

11 – though see [9].

12 – it’s a low number, is all I’m saying.

Q: when is a backdoor not a backdoor?

An encryption backdoor isn’t the same as a house backdoor: the metaphor is faulty.

A: when you’re a politician.

I’m getting pretty bored of having to write about this, to be honest. I’ve blogged twice already on encryption backdoors.

But our politicians keep wanting us to come up with them, as the Register helpfully points out – thanks, both the UK Prime Minister and FBI Director.

I feel sorry for their advisers, because all of the technical folks I’ve ever spoken to within both the UK and US Establishments[1] absolutely understand that what’s being asked for by these senior people really isn’t plausible.

I really do understand the concern that the politicians have. They see a messaging channel which bad people may use to discuss bad things, and they want to stop those bad things. This is a good thing, and part of their job. The problem starts when they think “it’s like a phone: we have people who can tap phones”. Those who are more technologically savvy may even think, “it’s like email, and we can read email.” And in the old days[3], before end-to-end encryption, they weren’t far wrong.

The problem now is that many apps these days set up a confidential (encrypted) link between the two ends of the connection. And they do it in a way which means that nobody except the initiators of the two ends of the connection can read it. And they use strong encryption, which means that there’s no easy way for anyone[4] to break it.

This means that it’s difficult for anyone to read the messages. So what can be done about it, then? Well, if you’re a politician, the trend is to tell the providers of these popular apps to provide a backdoor to let you, the “good people” in.

Oh, dear.

I believe that the problem here isn’t really that politicians are stupid, because I honestly don’t think that they are[5]. The problem is with metaphor. Metaphors are dangerous, because humans need them to get a handle on an aspect of something which is unfamiliar, but once they’ve latched on to a particular metaphor, they assume that all the other aspects of the thing to which the metaphor refers are the same.

An encryption backdoor isn’t the same as a house backdoor: the metaphor is faulty[6].

The key[7] similarity is that in order to open up your house backdoor, you need a key. That key gives you entry to the house, and it also allows any other person you give that key access to it, as well. So far, so good.

Here’s where it gets bad, though. I’m going to simplify things a little here, but let’s make some points.

  1. When you give somebody the key to your house backdoor, it’s not easily copied just because someone else happens to see it. In the electronic world, if you see the key once, you have it.
  2. The cost of copying an electronic key is basically zero once you have it. If one person decides to share the key indiscriminately, then the entire Internet has it.
  3. Access to a house backdoor lets you see what’s in the house at that particular moment. Access to an electronic backdoor lets you look at whatever the contents of the house were all the way up to the time the lock was changed, if you’ve taken copies (which is often easy).
  4. And here’s the big one. When you create a backdoor, you’re creating a backdoor for every house, and not just one. Let’s say that I’m a house builder. I’m very, very prolific, and I build thousands of houses a week. And I put the same lock in the backdoor of every house that I build. Does that make sense? No, it doesn’t. But that’s what the politicians are asking for. There’s a small illustrative sketch of this just after the list.
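
To see why the metaphor breaks down so badly at point 4, here’s a minimal sketch in Python. It assumes the third-party cryptography package (pip install cryptography); the houses, the key and the scenario are entirely invented for illustration, and this is emphatically not how anyone should manage keys.

    # Illustrative sketch only (assumes the "cryptography" package is installed).
    # One key baked into every "house": seeing it once means having it, forever.
    from cryptography.fernet import Fernet

    backdoor_key = Fernet.generate_key()      # the builder fits the same lock everywhere

    # Thousands of houses (three here), all locked with the same backdoor key.
    houses = {
        f"house-{i}": Fernet(backdoor_key).encrypt(f"contents of house {i}".encode())
        for i in range(3)
    }

    # Copying the electronic key costs nothing - no locksmith required.
    leaked_copy = bytes(backdoor_key)

    # Anyone holding the leaked copy can now open every house, past and present.
    for name, contents in houses.items():
        print(name, Fernet(leaked_copy).decrypt(contents).decode())

A physical backdoor key takes a locksmith and some time to duplicate; the electronic one above was copied in a single line, and it opened every house fitted with that lock.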

So, the metaphor breaks down. Any talk about “skeleton keys” is an attempt to reestablish the metaphor. Which is broken.

What’s the lesson here? We should explain to politicians that backdoors are a metaphor, and that the metaphor only goes so far. Explain that clever people – clever, good people – don’t believe that what they (the politicians) think should be done is actually possible, and then move on to work that can be done. Because they’re right: there are bad people out there, doing bad things, and we need to address that. But not this way.


1 – the capital “E” is probably important here. In the UK, at least, “establishment” can mean pub[2].

2 – and people in pubs, though they may start out clued up, tend to get less clever as the evening goes on, though they may think, for a while, that they’re becoming more clever. This is in my (very) limited experience, obviously.

3 – 10 years ago? Not very long ago, to be honest.

4 – well, who’s owning up, anyway.

5 – mostly.

6 – or Fawlty, for John Cleese fans.

7 – ooh, look what I did there.

If it isn’t tested, it doesn’t work

Testing isn’t just coming up with tests for desired use cases.

Huh.  Shouldn’t that title be “If it isn’t tested, it’s not going to work”?

No.

I’m asserting something slightly different here – in fact, two things.  The first can be stated thus:

“In order for a system to ‘work’ correctly, and to defined parameters, test cases for all plausible conditions must be documented, crafted – and passed – before the system is considered to ‘work’.”

The second is a slightly more philosophical take on the question of what a “working system” is:

“An instantiated system – including software, hardware, data and wetware[1] components – may be considered to be ‘working’ if both its current state, and all known plausible future states from the working state have been anticipated, documented and appropriately tested.”

Let’s deal with these one by one, starting with the first[3].

Case 1 – a complete test suite

I may have given away the basis for my thinking by the phrasing in the subtitle above.  What I think we should be doing, when we’re designing a system, is ensuring that we have a test case for every plausible condition.  I considered “possible” here, but I think that may be going too far: for most systems, for instance, you don’t need to worry too much about meteor strikes.  This is an extension of the Agile methodology dictum: “a feature is not ‘done’ until it has a test case, and that test case has been passed.”  Each feature should be based on a use case, and a feature is considered correctly implemented when the test cases that are designed to test that feature are all correctly passed.

It’s too easy, however, to leave it there.  Defining features is, well, not easy, but something we know how to do.  “When a user enters a valid username/password combination, the splash-screen should appear.”  “When a file has completed writing, a tick should appear on the relevant icon.”  “If a user cancels the transaction, no money should be transferred between accounts.”  The last is a good one, in that it deals with an error condition.  In fact, that’s the next step beyond considering test cases for features that implement functionality to support actions that are desired: considering test cases to manage conditions that arise from actions that are undesired.

The problem is that many people, when designing systems, only consider one particular type of undesired action: accidental, non-malicious action.  This is the reason that you need to get security folks[4] in when you’re designing your system, and the related test cases.  In order to ensure that you’re reaching all plausible conditions, you need to consider intentional, malicious actions.  A system which has not considered these, and been tested against them, cannot, in my opinion, truly be said to be “working”.
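
As a concrete, if deliberately simplistic, illustration, here’s what a test suite might look like once you add undesired and malicious conditions alongside the happy path.  This is a sketch in Python using pytest; the transfer() function and its behaviour are invented for the example – the point is the shape of the suite, not the code.

    # Illustrative sketch only: a toy transfer() function and the three kinds of
    # test case discussed above - desired use, accidental error, malicious input.
    import pytest

    def transfer(amount, balance):
        """Stand-in for 'the feature': return the new balance after a withdrawal."""
        if isinstance(amount, bool) or not isinstance(amount, (int, float)):
            raise TypeError("amount must be a number")
        if amount <= 0 or amount > balance:
            raise ValueError("invalid amount")
        return balance - amount

    def test_desired_use_case():             # the feature works as specified
        assert transfer(10, 100) == 90

    def test_accidental_error_condition():   # non-malicious mistake: overdrawing
        with pytest.raises(ValueError):
            transfer(200, 100)

    def test_malicious_input():              # intentional abuse: injection-style input
        with pytest.raises(TypeError):
            transfer("10; DROP TABLE accounts", 100)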

Case 2 – the bigger systems picture

I write fairly frequently[5] about the importance of systems and systems thinking, and one of the interesting things about a system, from my point of view, is that it’s arguably not really a system until it’s up and running: “instantiated”, in the language I used in my definition above.

Case 1 dealt, basically, with test cases and the development cycle.  That, by definition, is before you get to a fully instantiated system: one which is operating in the environment for which it was designed – you really, really hope – and is in situ.  Part of it may be quiescent, and that is hopefully as designed, but it is instantiated.

A system has a current state; it has a set of defined (if not known[7]) past states; and a set of possible future states that it can reach from there.  Again, I’m not going to insist that all possible states should be considered, for the same reasons I gave above, but I think that we do need to talk about all known plausible future states.

These types of conditions won’t all be security-related.  Many of them may be more appropriately thought of as to do with assurance or resilience.  But if you don’t get the security folks in, and early in the planning process, then you’re likely to miss some.

Here’s how it works.  If I am a business owner, and I am relying on a system to perform the tasks for which it was designed, then I’m likely to be annoyed if some IT person comes to me and says “the system isn’t working”.  However, if the answer to my question, “and did it fail due to something we had considered in our design and deployment of the system?”, is “yes”, then I’m quite likely to move beyond annoyed to a state which, if we’re honest, the IT person could easily have considered, nay predicted, and which is closer to “incandescent” than “contented”[8].

Because if we’d considered a particular problem  – it was “known”, and “plausible” – then we should have put in place measures to deal with it. Some of those will be preventative measures, to stop the bad thing happening in the first place, and others will be mitigations, to deal with the effects of the bad thing that happened.  And there may also be known, plausible states for which we may consciously decide not to prepare.  If I’m a small business owner in Weston-super-mare[9], then I may be less worried about industrial espionage than if I’m a multi-national[10].  Some risks aren’t worth the bother, and that’s fine.

To be clear: the mitigations that we prepare won’t always be technical.  Let’s say that we come up with a scenario where an employee takes data from the system on a USB stick and gives it to a competitor.  It may be that we can’t restrict all employees from using USB sticks with the system, so we have to rely on legal recourse if that happens.  If, in that case, we call in the relevant law enforcement agency, then the system is working as designed if that was our plan to deal with this scenario.

Another point is that not all future conditions can be reached from the current working state, and if they can’t, then it’s fair to decide not to deal with them.  Once a TPM is initialised, for instance, taking it back to its factory state basically requires resetting it, so any system which is relying on it will also have been reset.

What about the last bit of my definition?  “…[A]nticipated, documented and appropriately tested.”  Well, you can’t test everything fully.  Consider that the following scenarios are all known and plausible for your system:

  • a full power-down for your entire data centre;
  • all of your workers are incapacitated by a ’flu virus;
  • your main sysadmin is kidnapped;
  • an employee takes data from the system on a USB stick and gives it to a competitor.

You’re really not going to want to test all of these.  But you can at least perform paper exercises to consider what steps you should take, and also document them.  You might ensure that you know which law enforcement agency to call, and what the number is, for instance, instead of actually convincing an employee to leak information to a competitor and then having them arrested[11].

Conclusion

Testing isn’t just coming up with tests for desired use cases.  It’s not even good enough just to prepare for accidental undesired use cases on top of that.  We need to consider malicious use cases, too.   And testing in development isn’t good enough either: we need to test with live systems, in situ.  Because if we don’t, something, somewhere, is going to go wrong.

And you really don’t want to be the person telling your boss that, “well, we thought it might, but we never tested it.”



1 – “wetware” is usually defined as human components of a system (as here), but you might have non-human inputs (from animals or aliens), or even from fauna[2], I suppose.

2 – “woodware”?

3 – because I, for one, need a bit of a mental run-up to the second one.

4 – preferably the cynical, suspicious types.

5 – if not necessarily regularly: people often confuse the two words.  A regular customer may only visit once a year, but always does it on the same day, whereas a frequent customer may visit on average once a week, but may choose a different day each week.[6]

6 – how is this relevant?  It’s not.

7 – yes, I know: Schrödinger’s cat, quantum effects, blah, blah.

8 – Short version: if the IT person says “it broke, and it did it in a way we had thought of before”, then I’m going to be mighty angry.

9 – I grew up nearby.  Windy, muddy, donkeys.

10 – which might, plausibly, also be based in Weston-super-mare, though I’m not aware of any.

11 – this is, I think, probably at least bordering on the unethical, and might get you in some hot water with your legal department, and possibly some other interested parties[12].

12 – your competitor might be pleased, though, so there is that.

Security patching and vaccinations: a surprising link

Learning from medicine, but recognising differences.

I’ve written a couple of times before about patching, and in one article (“The Curious Incident of the Patch in the Night-Time“), I said that I’d return to the question of how patches and vaccinations are similar.  Given the recent flurry of patching news since Meltdown and Spectre, I thought that now would be a good time to do that.

Now, one difference that I should point out up front is that nobody believes that applying security patches to your systems will give them autism[1].  Let’s counter that with the first obvious similarity, though: patching your systems makes them resistant to attacks based on particular vulnerabilities.  Equally, a particular patch may provide resistance to multiple types of attack of the same family, as do some vaccinations.  Also similarly, as new attacks emerge – or bacteria or viruses change and evolve – new patches are likely to be required to deal with the problem.

We shouldn’t overplay the similarities, of course.  Just because some types of malware are referred to as “viruses” doesn’t mean that their method of attack, or the mechanisms by which computer systems defend against them, are even vaguely alike[2].  Computer systems don’t have complex immune systems which adapt and learn how to deal with malware[3].  On the other hand, there are also lots of different types of vulnerability for which patches are efficacious, but which are very different from bacterial or virus attacks: a buffer overflow attack or SQL injection, for instance.  So, it’s clearly possible to over-egg this pudding[4].  But there is another similarity that I do think is worth drawing, though it’s not perfect.

There are some systems which, for whatever reason, are actually quite risky to patch.  The business risk associated with patching them might be down to a number of factors, including:

  • projected downtime as the patch is applied and system rebooted is unacceptable;
  • side effects of the patch (e.g. performance impact) are too severe;
  • risk of the system not rebooting after patch application is too high;
  • other components of the system (e.g. hardware or other software) may be incompatible with the patch.

In these cases, a decision may be made that the business risk of patching the system outweighs the business risk of leaving it unpatched. Alternatively, it may be that you are running some systems which are old and outdated, and for which there is no patch available.

Here’s where there’s another surprising similarity with vaccinations.  There are, in any human population, individuals for whom the dangers of receiving a vaccination may outweigh the benefits.  The reasons for this are different from the computer case, and are generally down to weakened immune systems and/or poor health.  However, it turns out that as the percentage of a human population[6] that is vaccinated rises, the threat to the unvaccinated individuals reduces, as there are fewer infection vectors from whom those individuals can receive the infection.

We need to be careful with how closely we draw the analogy here, because we’re on shaky ground if we go too far, but there are types of system vulnerability – particularly malware – for which this is true for computer systems.  If you patch all the systems that you can, then the number of possible “jump-off” points for malware will reduce, meaning that the unpatched systems are less likely to be affected.  To a lesser degree, it’s probably true that as unsophisticated attackers notice that a particular attack vector is diminishing, they’ll ignore it and move to something else.  Over-stretching this thread, however, is particularly dangerous: a standard approach for any motivated attacker is to attempt attack vectors which are “old”, but to which unpatched systems are likely to be vulnerable.

Another difference is that in the computing world, attacks never die off.  In the biological world, although stockpiles are maintained, for various reasons[7], of viruses and bacteria which are extinct in the general population, some pathogens will die out over time.  In the world of IT, pretty much every vulnerability ever discovered will have been documented somewhere, whether there still exists an “infected” system or not, and so is still available for re-use or re-purposing.

What is the moral of this article?  Well, it’s this: even if you are unable to patch all of your systems, it’s still worth patching as many of them as you can.  It’s also worth working out which low-risk systems you can patch immediately, and which will require more business analysis before being patched in a second or third round.  It’s probably worth keeping a list of these somewhere.  Even better, you can maintain lists of high-, medium- and low-risk systems – both in terms of business risk and infection vulnerability – and use these to inform your patching, both automatic and manual.  But, dear reader: do patch.
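
As a very rough sketch of what “keeping a list” might look like in practice – the system names, risk buckets and the policy itself are all invented for illustration – here’s the idea in Python:

    # Illustrative sketch only: a risk-tiered patching list with made-up systems.
    # "patch_risk" is the business risk of patching; "exposure" is how exposed it is.
    systems = [
        {"name": "build-server-01",  "patch_risk": "low",    "exposure": "high"},
        {"name": "hr-database",      "patch_risk": "medium", "exposure": "medium"},
        {"name": "legacy-scada-box", "patch_risk": "high",   "exposure": "high"},
    ]

    ROUND = {"low": 1, "medium": 2, "high": 3}   # crude policy: low-risk systems patch first

    for s in sorted(systems, key=lambda s: ROUND[s["patch_risk"]]):
        print(f"round {ROUND[s['patch_risk']]}: {s['name']} (exposure: {s['exposure']})")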


1 – if you believe that – or, in fact, if you believe that vaccinations give children autism – then you’re reading the wrong blog.  I seriously suggest that you go elsewhere (and read some proper science on the subject).

2 – pace the attempts of Hollywood CGI departments to make us believe that they’re exactly the same.

3 – though this is obviously an interesting research area.

4 – “overextend this analogy”.  The pudding metaphor is a good one though, right?[5]

5 – and I like puddings, as my wife (and my waistline) will testify.

6 – or, come to think of it, animal (I’m unclear on flora).

7 – generally, one hopes, philanthropic.

Meltdown and Spectre: thinking about embargoes and disclosures

The technical details behind Meltdown and Spectre are complex and fascinating – and the possible impacts wide-ranging and scary.  I’m not going to go into those here, though there are some excellent articles explaining them.  I’d point readers in particular at the following URLs (which both resolve to the same page, so there’s no need to follow both):

I’d also recommend this article on the Red Hat blog, written by my colleague Jon Masters: What are Meltdown and Spectre? Here’s what you need to know.  It includes a handy cut-out-and-keep video[1] explaining the basics.   Jon has been intimately involved in the process of creating and releasing mitigations and fixes to Meltdown and Spectre, and is very well-placed to talk about it.

All that is interesting and important, but I want to talk about the mechanics and process behind how a vulnerability like this moves from discovery through to fixing.  Although similar processes exist for proprietary software, I’m most interested in open source software, so that’s what I’m going to describe.


Step 1 – telling the right people

Let’s say that someone has discovered a vulnerability in a piece of open source software.  There are a number of things they can do.  The first is to ignore it.  This is no good, and isn’t interesting for our story, so we’ll ignore it in turn.  The second is to keep it to themselves or try to make money out of it.  Let’s assume that the discoverer is a good type of person[2], so isn’t going to do this[3].  A third is to announce it on the project mailing list.  This might seem like a sensible thing to do, but it can actually be very damaging to the project and the community at large.  The problem with this is that the discoverer has just let all the Bad Folks[tm] know about a vulnerability which hasn’t been fixed, and which they can now exploit.  Even if the discoverer submits a patch, it needs to be considered and tested, and it may be that the fix they’ve suggested isn’t the best approach anyway.  For a serious security issue, the project maintainers are going to want to have a really good look at it in order to decide what the best fix – or mitigation – should be.

It’s for this reason that many open source projects have a security disclosure process.  Typically, there’s a closed mailing list to which you can send suspected vulnerabilities, often something like “security@projectname.org“.  Once someone has emailed that list, that’s where the process kicks in.

Step 2 – finding a fix

Now, before that email got to the list, somebody had to decide that they needed a list in the first place, so let’s assume that you, the project leader, have already put a process of some kind in place.  There are a few key decisions to make, of which the most important are probably:

  • are you going to keep it secret[5]?  It’s easy to default to “yes” here, for the reasons that I mentioned above, but the number of times you actually put in place restrictions on telling people about the vulnerability – usually referred to as an embargo, given its similarity to a standard news embargo – should be very, very small.  This process is going to be painful and time-consuming: don’t overuse it.  Oh, and your project’s meant to be about open source software, yes?  Default to that.
  • who will you tell about the vulnerability?  We’re assuming that you ended up answering “yes” to the previous bullet, and that you don’t want everybody to know, and also that this is a closed list.  So, who’s the most important set of people?  Well, most obvious is the set of project maintainers, and key programmers and testers – of which more below.  These are the people who will get the fix completed, but there are two other constituencies you might want to consider: major distributors and consumers of the project.  Why these people, and who are they?  Well, distributors might be OEMs, Operating System Vendors or ISVs who bundle your project as part of their offering.  These may be important because you’re going to want to ensure that people who will need to consume your patch can do so quickly, and via their preferred method of updating.  What about major consumers?  Well, if an organisation has a deployed estate of hundreds or thousands of instances of a project[6], then the impact of having to patch all of those at short notice – let alone the time required to test the patch – may be very significant, so they may want to know ahead of time, and they may be quite upset if they don’t.  This privileging of major over rank-and-file users of your project is politically sensitive, and causes much hand-wringing in the community.  And, of course, the more people you tell, the more likely it is that a leak will occur, and for news of the vulnerability to get out before you’ve had a chance to fix it properly.
  • who should be on the fixing team?  Just because you tell people doesn’t mean that they need to be on the team.  By preference, you want people who are both security experts and also know your project software inside-out.  Good luck with this.  Many projects aren’t security projects, and will have no security experts attached – or a few at most.  You may need to call people in to help you, and you may not even know which people to call in until you get a disclosure in the first place.
  • how long are you going to give yourself?  Many discoverers of vulnerabilities want their discovery made public as soon as possible.  This could be for a variety of reasons: they have an academic deadline; they don’t want somebody else to beat them to disclosure; they believe that it’s important to the community that the project is protected as soon as possible; they’re concerned that other people have found – and are exploiting – the vulnerability; they don’t trust the project to take the vulnerability seriously and are concerned that it will just be ignored.  This last reason is sadly justifiable, as there are projects which don’t react quickly enough, or don’t take security vulnerabilities seriously, and there’s a sorry history of proprietary software vendors burying vulnerabilities and pretending they’ll just go away[7].  A standard period of time before disclosure is 2-3 weeks, but as projects get bigger and more complex, balancing that against issues such as pressure from those large-scale users to give them more time to test, holiday plans and all the rest becomes important.  Having an agreed time and sticking to it can be vital, however, if you want to avoid deadline slip after deadline slip (there’s a small illustrative sketch of deadline-tracking after this list).  There’s another interesting issue, as well, which is relevant to the Meltdown and Spectre vulnerabilities – you can’t just patch hardware.  Where hardware is involved, the fixes may involve multiple patches to multiple projects, and not just standard software but microcode as well: this may significantly increase the time needed.
  • what incentives are you going to provide?  Some large projects are in a position to offer bug bounties, but these are few and far between.  Most disclosers want – and should expect to be granted – public credit when the fix is finally provided and announced to the public at large.  This is not to say that disclosers should necessarily be involved in the wider process that we’re describing: this can, in fact, be counter-productive, as their priorities (around, for instance, timing) may be at odds with the rest of the team.
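
Since deadline slip is the classic failure mode here, here’s a tiny sketch in Python of what “having an agreed time and sticking to it” might mean in practice.  The dates, the two-week period and the one-extension policy are all invented for illustration – your project will have its own numbers.

    # Illustrative sketch only: track an agreed embargo so it can't slip indefinitely.
    from datetime import date, timedelta

    report_date = date(2018, 1, 3)        # when the vulnerability was reported
    embargo_period = timedelta(weeks=2)   # the period agreed with the discloser
    max_extensions = 1                    # agree this up front, then stick to it

    disclosure_date = report_date + embargo_period
    extensions_used = 0

    def request_extension(days: int) -> date:
        """Grant at most the agreed number of extensions, then refuse."""
        global disclosure_date, extensions_used
        if extensions_used >= max_extensions:
            raise RuntimeError(f"no more extensions: disclose on {disclosure_date}")
        extensions_used += 1
        disclosure_date += timedelta(days=days)
        return disclosure_date

    print("planned disclosure:", request_extension(7))   # one extension: fine
    # A second call would raise, rather than letting the deadline slip again.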

There’s another thing you might want to consider, which is “what are we going to do if this information leaks early?”  I don’t have many good answers for this one, as it will depend on numerous factors, such as how close you are to a fix, how major the problem is, and whether anybody in the press picks up on it.  You should definitely consider it, though.

Step 3 – external disclosure

You’ve come up with a fix?  Well done.  Everyone’s happy?  Very, very well done[8].  Now you need to tell people.  But before you do that, you’re going to need to decide what to tell them.  There are at least three types of information that you may well want to prepare:

  • technical documentation – by this, I mean project technical documentation.  Code snippets, inline comments, test cases, wiki information, etc., so that when people who code on your project – or code against it – need to know what’s happened, they can look at this and understand.
  • documentation for techies – there will be people who aren’t part of your project, but who use it, or are interested in it, who will want to understand the problem you had, and how you fixed it.  This will help them to understand the risk to them, or to similar software or systems.  Being able to explain to these people is really important.
  • press, blogger and family documentation – this is a category that I’ve just come up with.  Certain members of the press will be quite happy with “documentation for techies”, but, particularly if your project is widely used, many non-technical members of the press – or technical members of the press writing for non-technical audiences – are going to need something that they can consume and which will explain in easy-to-digest snippets a) what went wrong; b) why it matters to their readers; and c) how you fixed it.  In fact, on the whole, they’re going to be less interested in c), and probably more interested in a hypothetical d) & e) (which are “whose fault was it and how much blame can we heap on them?” and “how much damage has this been shown to do, and can you point us at people who’ve suffered due to it?” – I’m not sure how helpful either of these is).  Bloggers may also be useful in spreading the message, so considering them is good.  And generally, I reckon that if I can explain a problem to my family, who aren’t particularly technical, then I can probably explain it to pretty much anybody I care about.

Of course, you’re now going to need to coordinate how you disseminate all of these types of information.  Depending on how big your project is, your channels may range from your project code repository through your project wiki through marketing groups, PR and even government agencies[9].


Conclusion

There really is no “one size fits all” approach to security vulnerability disclosures, but it’s an important consideration for any open source software project, and one of those that can be forgotten as a project grows, until suddenly you’re facing a real use case.  I hope this overview is interesting and useful, and I’d love to hear thoughts and stories about examples where security disclosure processes have worked – and when they haven’t.


2 – because only good people read this blog, right?

3 – some government security agencies have a policy of collecting – and not disclosing – security vulnerabilities for their own ends.  I’m not going to get into the rights or wrongs of this approach.[4]

4 – well, not in this post, anyway.

5 – or try, at least.

6 – or millions or more – consider vulnerabilities in smartphone software, for instance, or cloud providers’ install bases.

7 – and even, shockingly, of using legal mechanisms to silence – or try to silence – the discloser.

8 – for the record, I don’t believe you: there’s never been a project – let alone an embargo – where everyone was happy.  But we’ll pretend, shall we?

9 – I really don’t think I want to be involved in any of these types of disclosure, thank you.