The technical details behind Meltdown and Spectre are complex and fascinating – and the possible impacts wide-ranging and scary. I’m not going to go into those here, though there are some excellent articles explaining them. I’d point readers in particular at the following URLs (which both resolve to the same page, so there’s no need to follow both): https://meltdownattack.com/ and https://spectreattack.com/.
I’d also recommend this article on the Red Hat blog, written by my colleague Jon Masters: What are Meltdown and Spectre? Here’s what you need to know. It includes a handy cut-out-and-keep video[1] explaining the basics. Jon has been intimately involved in the process of creating and releasing mitigations and fixes to Meltdown and Spectre, and is very well-placed to talk about it.
All that is interesting and important, but I want to talk about the mechanics and process behind how a vulnerability like this moves from discovery through to fixing. Although similar processes exist for proprietary software, I’m most interested in open source software, so that’s what I’m going to describe.
Step 1 – telling the right people
Let’s say that someone has discovered a vulnerability in a piece of open source software. There are a number of things they can do. The first is to ignore it. This is no good, and isn’t interesting for our story, so we’ll ignore it in turn. The second is to keep it to themselves or try to make money out of it. Let’s assume that the discoverer is a good type of person[2], so isn’t going to do this[3]. The third is to announce it on the project mailing list. This might seem like a sensible thing to do, but it can actually be very damaging to the project and the community at large. The problem is that the discoverer has just let all the Bad Folks[tm] know about a vulnerability which hasn’t been fixed – and which they can now exploit. Even if the discoverer submits a patch, it needs to be considered and tested, and it may be that the fix they’ve suggested isn’t the best approach anyway. For a serious security issue, the project maintainers are going to want to have a really good look at it in order to decide what the best fix – or mitigation – should be.
It’s for this reason that many open source projects have a security disclosure process. Typically, there’s a closed mailing list to which you can send suspected vulnerabilities, often something like “security@projectname.org”. Once someone has emailed that list, that’s where the process kicks in.
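As a concrete illustration, some projects publish exactly this kind of process in a SECURITY.md file at the top of their repository. Here’s a minimal, hypothetical sketch (the address is the placeholder from above, and the response times and wording are invented for illustration, not taken from any particular project):

```
# Security Policy (hypothetical SECURITY.md)

Please do NOT report suspected security vulnerabilities via public
issues, pull requests or the project mailing list.

Instead, email a description of the problem, steps to reproduce it and
any proof-of-concept code to:

    security@projectname.org

We will aim to acknowledge your report within a few working days.
Please allow us a reasonable embargo period to prepare, test and
distribute a fix before you disclose publicly; we will credit you in
the advisory unless you ask us not to.
```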
Step 2 – finding a fix
Now, before that email got to the list, somebody had to decide that they needed a list in the first place, so let’s assume that you, the project leader, have already put a process of some kind in place. There are a few key decisions to make, of which the most important are probably:
- are you going to keep it secret[5]? It’s easy to default to “yes” here, for the reasons that I mentioned above, but the number of times you actually put in place restrictions on telling people about the vulnerability – usually referred to as an embargo, given its similarity to a standard news embargo – should be very, very small. This process is going to be painful and time-consuming: don’t overuse it. Oh, and your project’s meant to be about open source software, yes? Default to that.
- who will you tell about the vulnerability? We’re assuming that you ended up answering “yes” to the previous bullet, and that you don’t want everybody to know, and also that this is a closed list. So, who’s the most important set of people? Well, most obvious is the set of project maintainers, and key programmers and testers – of which more below. These are the people who will get the fix completed, but there are two other constituencies you might want to consider: major distributors and consumers of the project. Why these people, and who are they? Well, distributors might be OEMs, Operating System Vendors or ISVs who bundle your project as part of their offering. These may be important because you’re going to want to ensure that people who will need to consume your patch can do so quickly, and via their preferred method of updating. What about major consumers? Well, if an organisation has a deployed estate of hundreds or thousands of instances of a project[6], then the impact of having to patch all of those at short notice – let alone the time required to test the patch – may be very significant, so they may want to know ahead of time, and they may be quite upset if they don’t. This privileging of major over rank-and-file users of your project is politically sensitive, and causes much hand-wringing in the community. And, of course, the more people you tell, the more likely it is that a leak will occur, and that news of the vulnerability will get out before you’ve had a chance to fix it properly.
- who should be on the fixing team? Just because you tell people doesn’t mean that they need to be on the team. By preference, you want people who are security experts and who also know your project’s software inside out. Good luck with this. Many projects aren’t security projects, and will have no security experts attached – or a few at most. You may need to call people in to help you, and you may not even know which people to call in until you get a disclosure in the first place.
- how long are you going to give yourself? Many discoverers of vulnerabilities want their discovery made public as soon as possible. This could be for a variety of reasons: they have an academic deadline; they don’t want somebody else to beat them to disclosure; they believe that it’s important to the community that the project is protected as soon as possible; they’re concerned that other people have found – and/or are exploiting – the vulnerability; they don’t trust the project to take the vulnerability seriously and are concerned that it will just be ignored. This last reason is sadly justifiable, as there are projects who don’t react quickly enough, or don’t take security vulnerabilities seriously, and there’s a sorry history of proprietary software vendors burying vulnerabilities and pretending they’ll just go away[7]. A standard period of time before disclosure is 2-3 weeks, but as projects get bigger and more complex, balancing that against issues such as pressure from those large-scale users to give them more time to test, holiday plans and all the rest becomes important. Having an agreed time and sticking to it can be vital, however, if you want to avoid deadline slip after deadline slip. There’s another interesting issue, as well, which is relevant to the Meltdown and Spectre vulnerabilities – you can’t just patch hardware. Where hardware is involved, the fixes may involve multiple patches to multiple projects, and not just standard software but microcode as well: this may significantly increase the time needed.
- what incentives are you going to provide? Some large projects are in a position to offer bug bounties, but these are few and far between. Most disclosers want – and should expect to be granted – public credit when the fix is finally provided and announced to the public at large. This is not to say that disclosers should necessarily be involved in the wider process that we’re describing: this can, in fact, be counter-productive, as their priorities (around, for instance, timing) may be at odds with the rest of the team.
There’s another thing you might want to consider, which is “what are we going to do if this information leaks early?” I don’t have many good answers for this one, as it will depend on numerous factors such as how close you are to a fix, how major the problem is, and whether anybody in the press picks up on it. You should definitely consider it, though.
Step 3 – external disclosure
You’ve come up with a fix? Well done. Everyone’s happy? Very, very well done[8]. Now you need to tell people. But before you do that, you’re going to need to decide what to tell them. There are at least three types of information that you may well want to prepare:
- technical documentation – by this, I mean project technical documentation: code snippets, inline comments, test cases, wiki information, etc., so that when people who code on your project – or code against it – need to know what’s happened, they can look at this and understand it (there’s a small, hypothetical sketch of what I mean after this list).
- documentation for techies – there will be people who aren’t part of your project, but who use it, or are interested in it, who will want to understand the problem you had, and how you fixed it. This will help them to understand the risk to them, or to similar software or systems. Being able to explain to these people is really important.
- press, blogger and family documentation – this is a category that I’ve just come up with. Certain members of the press will be quite happy with “documentation for techies”, but, particularly if your project is widely used, many non-technical members of the press – or technical members of the press writing for non-technical audiences – are going to need something that they can consume and which will explain in easy-to-digest snippets a) what went wrong; b) why it matters to their readers; and c) how you fixed it. In fact, on the whole, they’re going to be less interested in c), and probably more interested in a hypothetical d) & e) (which are “whose fault was it and how much blame can we heap on them?” and “how much damage has this been shown to do, and can you point us at people who’ve suffered due to it?” – I’m not sure how helpful either of these is). Bloggers may also be useful in spreading the message, so considering them is good. And generally, I reckon that if I can explain a problem to my family, who aren’t particularly technical, then I can probably explain it to pretty much anybody I care about.
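To make that first category a little more concrete, here’s the sort of artefact I have in mind: a small regression test committed alongside a (completely invented) parsing fix, so that future contributors can see what the vulnerable behaviour was and be confident it stays fixed. The function, the bug and the test are all hypothetical, and written in Python purely for illustration.

```python
# Hypothetical example of "project technical documentation" for a fix:
# the fixed parsing function plus regression tests that record the
# vulnerable behaviour. Everything here is invented for illustration.

import struct

import pytest


class RecordError(ValueError):
    """Raised when an input record is malformed."""


def parse_record(data: bytes) -> bytes:
    """Parse a record: a 2-byte big-endian length field, then payload.

    The (invented) vulnerability: the old code trusted the length field
    and read past the end of short buffers. The fix is the explicit
    bounds check below.
    """
    if len(data) < 2:
        raise RecordError("record too short for length header")
    (length,) = struct.unpack(">H", data[:2])
    payload = data[2:]
    if length > len(payload):  # the security fix: never trust the header
        raise RecordError("declared length exceeds payload")
    return payload[:length]


def test_rejects_length_larger_than_payload():
    # Regression test: the record claims 16 bytes but supplies only 2.
    with pytest.raises(RecordError):
        parse_record(b"\x00\x10ab")


def test_accepts_well_formed_record():
    # A record that claims 2 bytes and supplies exactly 2 still parses.
    assert parse_record(b"\x00\x02ab") == b"ab"
```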
Of course, you’re now going to need to coordinate how you disseminate all of these types of information. Depending on how big your project is, your channels may range from your project code repository and wiki through marketing groups and PR, and even to government agencies[9].
Conclusion
There really is no “one size fits all” approach to security vulnerability disclosures, but it’s an important consideration for any open source software project, and one of those things that can be forgotten as a project grows, until suddenly you’re facing a real use case. I hope this overview is interesting and useful, and I’d love to hear thoughts and stories about examples where security disclosure processes have worked – and when they haven’t.
2 – because only good people read this blog, right?
3 – some government security agencies have a policy of collecting – and not disclosing – security vulnerabilities for their own ends. I’m not going to get into the rights or wrongs of this approach.[4]
4 – well, not in this post, anyway.
5 – or try, at least.
6 – or millions or more – consider vulnerabilities in smartphone software, for instance, or cloud providers’ install bases.
7 – and even, shockingly, of using legal mechanisms to silence – or try to silence – the discloser.
8 – for the record, I don’t believe you: there’s never been a project – let alone an embargo – where everyone was happy. But we’ll pretend, shall we?
9 – I really don’t think I want to be involved in any of these types of disclosure, thank you.