I’m turning off your security.

“Don’t worry, I know what I’m doing.”

Today’s security story is people turning security off.  For me, the fact that it’s even a story is the story.  This particular story is covered in The Register, who explain (to nobody’s surprise) that some of the patches to fix issues identified in CPUs (think Spectre, Meltdown, etc.) can actually slow down the applications running on them.  The problem is that, in some cases, they don’t slow them down a little bit, but rather a lot.  By which I mean up to 50%.  And if you’ve bought expensive hardware – or rented it [1] – then you’d generally prefer it to run your applications/programs/workloads quickly, rather than at half the speed they might otherwise manage.

And so you turn off the security patches.  Your decision: fine.

No, stop: this isn’t what has happened.

The mythical “you”, the person running the workload, isn’t the person who makes the decision, in most cases, because it’s been made for you.  This is the real story.

Linus Torvalds, and a bunch of other experts in the Linux kernel[2], have decided that although the patch that could make your workloads secure is available, the functionality that does it should be “off” by default.  They reason – quite correctly, in my opinion – that the vast majority of people running workloads won’t easily be able to turn this functionality on themselves.

They also reason – again, correctly, in my opinion – that most people will care more about how quickly their workloads run than about how secure they are.  I’m not happy about this, but that’s the way it is.
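If you want to check where a particular machine stands, recent Linux kernels do at least expose the status of these mitigations.  Here’s a minimal sketch – in Python, and assuming a kernel new enough to provide /sys/devices/system/cpu/vulnerabilities – which just prints what’s mitigated and what isn’t on the box it runs on.  It won’t turn anything on or off for you: that still means kernel boot parameters and a reboot.

```python
#!/usr/bin/env python3
# Minimal sketch: report the kernel's CPU vulnerability/mitigation status.
# Assumes a Linux kernel recent enough to expose
# /sys/devices/system/cpu/vulnerabilities (roughly 4.15 onwards).

from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def report_mitigations():
    if not VULN_DIR.is_dir():
        print("This kernel doesn't expose vulnerability information here.")
        return
    for entry in sorted(VULN_DIR.iterdir()):
        # Each file holds a short status line, e.g. "Mitigation: PTI"
        # or "Vulnerable".
        print(f"{entry.name}: {entry.read_text().strip()}")

if __name__ == "__main__":
    report_mitigations()
```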

What I worry about is the final step in the logic behind the decision.  I’m going to quote Linus:

“Have you seen any actual realistic attacks for normal human users?” he asked. “Things where the kernel should actually care? The JavaScript thing is for the browser to fix up, not for the kernel to say ‘now everything should run up to 50 per cent slower.'”

I get the reasoning behind this, but I don’t like it.  To give some context, somebody came up with an example attack which could compromise certain workloads, and Linus pointed out that there are better ways to fix that attack than fixing it in the kernel.  My concerns are two-fold:

  1. although there may be better places to fix that particular attack, a kernel-level fix is likely to fix an entire class of attacks, meaning better protection for users who are using any application which might include an attack vector.
  2. pointing out that there haven’t been any attacks yet not only ignores the fact that there is a future out there[3] but also points malicious actors in the direction of a likely attack vector.

Now, I know that the more dedicated malicious actors are already looking for these things, but do we really need to advertise?

What’s my fix?

I don’t have one, or at least not an easy one.

Somebody, somewhere, needs to decide whether security is turned on or off.  What I’d honestly like to see is an easier set of controls to allow people to turn on or off security, and to understand the trade-offs when they do that.  The problems with that are:

  • the trade-offs are often much more complex than just “fast and insecure” or “slow and secure”, and are really difficult to explain.
  • in order to make a sensible decision about trade-offs, people need to understand risk.  And people are awful at understanding risk.

And there’s a “chicken and egg problem”[7] here: people won’t understand risk until they are offered the chance to make decisions, but there’s little incentive to offer them complex decisions unless they understand risk.

My plea?  Where possible, expose risk, and explain what it is.  And if you’re turning off security-related functionality, make it easy to turn back on for those who need it.


1 – a quick heads-up: this is what “deploying to the cloud” actually is.

2 – what sits at the bottom of many of the workloads that are running in servers.

3 – hopefully.  If the Three Minute Warning[4] sounds while you’re reading this, you may wish to duck and cover.  You can come back to it later[6].

4 – “… sounds like this …”[5].

5 – 80s reference.

6 – or not.  See [3].

7 – for non-native English readers, this means “a problem where the solution requires two pieces, both of which are dependent on each other”.

Do you know what’s lurking on your system?

Every utility, every library, every executable … increases your attack surface.

I had a job once which involved designing hardening procedures for systems that we were going to be using for security-related projects.  It was fascinating.  This was probably 15 years ago, and not only were guides a little thin on the ground for the Linux distribution I was using, but what we were doing was quite niche.  At first, I think I’d assumed that I could just write a script to close down a few holes that originated from daemons[1] that had been left running for no reason: httpd, sendmail, stuff like that.  It did involve that, of course, but then I realised there was more to do, and started to dive down the rabbit hole.
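As a taster of the sort of thing that first script was trying to catch, here’s a minimal sketch.  It assumes a Linux box with the usual /proc interfaces, and it only shows which TCP ports are being listened on, not which daemons own them (ss or netstat will tell you that) – but it’s a start when you’re wondering what’s been left running.

```python
#!/usr/bin/env python3
# Minimal sketch: list TCP ports in LISTEN state by parsing /proc/net/tcp*.
# Assumes a Linux system; this only shows ports, not the daemons behind them.

from pathlib import Path

TCP_LISTEN = "0A"  # socket state code for LISTEN in /proc/net/tcp

def listening_ports():
    ports = set()
    for name in ("/proc/net/tcp", "/proc/net/tcp6"):
        path = Path(name)
        if not path.exists():
            continue
        for line in path.read_text().splitlines()[1:]:  # skip the header row
            fields = line.split()
            local_address, state = fields[1], fields[3]
            if state == TCP_LISTEN:
                # The port is the hex value after the colon in the local address.
                ports.add(int(local_address.split(":")[1], 16))
    return ports

if __name__ == "__main__":
    for port in sorted(listening_ports()):
        print(f"listening on TCP port {port}")
```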

So, after those daemons, I looked at users and groups.  And then at file systems, networking, storage.  I left the two scariest pieces to last, for different reasons.  The first was the kernel.  I ended up hand-crafting a kernel, removing anything that I thought it was unlikely we’d need, and then restarting several times when I discovered that the system wouldn’t boot because the things I thought I understood were more … esoteric than I’d realised.  I’m not a kernel developer, and this was a salutary lesson in quite how skilled those folks are.  At least, at the time I was doing it, there was less code, and fewer options, than there are today.  On the other hand, I was having to hack back to a required state, and there are more cut-down kernels and systems to start with than there were back then.

The other piece that I left for last was just pruning the installed Operating System applications and associated utilities.  Again, there are cut-down options that are easier to use now than then, but I also had some odd requirements – I believe that we needed Java, for instance, which has, or had … well, let’s say a lot of dependencies.  Most modern Linux distributions[3] start off by installing lots of pieces so that you can get started quickly, without having to worry about trying to work out dependencies for every piece of external software you want to run.

This is understandable, but we need to realise when we do this that we’re making a usability/security trade-off[5].  Every utility, every library, every executable that you add to a system increases your attack surface, and increases the likelihood of vulnerabilities.
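To give a very rough feel for how big that surface gets, here’s a small sketch which counts the executables in a few standard binary directories and flags anything with the setuid or setgid bits set – those being the ones attackers tend to find most interesting.  The directory list is an assumption on my part, so treat it as a starting point rather than an audit.

```python
#!/usr/bin/env python3
# Minimal sketch: count executables in some common binary directories and
# flag any with the setuid/setgid bits set. The directory list is an
# assumption: adjust it for your own system.

import stat
from pathlib import Path

BIN_DIRS = ["/usr/bin", "/usr/sbin", "/usr/local/bin", "/usr/local/sbin"]

def audit(dirs=BIN_DIRS):
    total, flagged = 0, []
    for d in dirs:
        directory = Path(d)
        if not directory.is_dir():
            continue
        for entry in directory.iterdir():
            try:
                st = entry.stat()
            except OSError:
                continue
            # Count regular files with any execute bit set.
            if stat.S_ISREG(st.st_mode) and st.st_mode & 0o111:
                total += 1
                if st.st_mode & (stat.S_ISUID | stat.S_ISGID):
                    flagged.append(str(entry))
    return total, flagged

if __name__ == "__main__":
    count, suspicious = audit()
    print(f"{count} executables found")
    for path in sorted(suspicious):
        print(f"setuid/setgid: {path}")
```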

The problem isn’t just that you’re introducing vulnerabilities, but that once they’re there, they tend to stay there.  Not just in code that you need, but, even worse, in code that you don’t need.  It’s a rare but praiseworthy hacker[6] who spends time going over old code removing dependencies that are no longer required.  It’s a boring, complex task, and it’s usually just easier to leave the cruft[7] where it is and ship a slightly bigger package for the next release.

Sometimes, code is refactored and stripped: most frequently, security-related code. This is a very Good Thing[tm], but it turns out that it’s far from sufficient.  The reason I’m writing this post is because of a recent story in The Register about the “beep” command.  This command used the little speaker that was installed on most PC-compatible motherboards to make a little noise.  It was a useful little utility back in the day, but is pretty irrelevant now that most motherboards don’t ship with the relevant hardware.  The problem[8] is that installing and using the beep command on a system allows information to be leaked to users who lack the relevant permissions.  This can be a very bad thing.  There’s a good, brief overview here.

Now, “beep” isn’t installed by default on the distribution I’m using on the system on which I’m writing this post (Fedora 27), though it’s easily installable from one of the standard repositories that I have enabled.  Something of a relief, though it’s not a likely attack vector for this machine anyway.

What, though, do I have installed on this system that is vulnerable, and which I’d never have thought to check?  Forget all of the kernel parameters which I don’t need turned on, but which have been enabled by the distribution for ease of use across multiple systems.  Forget the apps that I’ve installed and use every day.  Forget, even, the apps that I installed once to try, and then neglected to remove.  What about the apps that I didn’t even know were there, and which I never realised might have an impact on the security posture of my system?  I don’t know, and have little way to find out.
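There is at least a starting point, though: on an RPM-based distribution such as the Fedora system I mentioned above, you can list everything that’s installed and start working through it.  This is a minimal sketch, assuming the rpm command is available; dpkg-based systems have their own equivalents.

```python
#!/usr/bin/env python3
# Minimal sketch: list installed packages on an RPM-based system (e.g. Fedora),
# sorted by name, so you can review what's actually there.
# Assumes the `rpm` command is available on the PATH.

import subprocess

def installed_packages():
    result = subprocess.run(
        ["rpm", "-qa", "--qf", "%{NAME}\n"],
        capture_output=True, text=True, check=True,
    )
    return sorted(set(result.stdout.splitlines()))

if __name__ == "__main__":
    packages = installed_packages()
    print(f"{len(packages)} packages installed")
    for name in packages:
        print(name)
```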

This system doesn’t run business-critical operations.  When I first got it and installed the Operating System, I decided to err towards usability, and to be ready to trash[9] it and start again if I had problems with compromise.  But that’s not the case for millions of other systems out there.  I urge you to consider what you’re running on a system, what’s actually installed on it, and what you really need.  Patch what you need, remove what you don’t.   It’s time for a Spring clean[10].


1 – I so want to spell this word dæmons, but I think that might be taking my Middle English obsession too far[2].

2 – I mentioned that I studied Middle English, right?

3 – I’m most interested in (GNU) Linux here, as I’m a strong open source advocate and because it’s the Operating System that I know best[4].

4 – oh, and I should disclose that I work for Red Hat, of course.

5 – as with so many things in our business.

6 – the good type, or I’d have said “cracker”, probably.

7 – there’s even a word for it, see?

8 – beyond a second order problem that a suggested fix seems to have made the initial problem worse…

9 – physically, if needs be.

10 – In the Northern Hemisphere, at least.

Security patching and vaccinations: a surprising link

Learning from medicine, but recognising differences.

I’ve written a couple of times before about patching, and in one article (“The Curious Incident of the Patch in the Night-Time”), I said that I’d return to the question of how patches and vaccinations are similar.  Given the recent flurry of patching news since Meltdown and Spectre, I thought that now would be a good time to do that.

Now, one difference that I should point out up front is that nobody believes that applying security patches to your systems will give them autism[1].  Let’s counter that with the first obvious similarity, though: patching your systems makes them resistant to attacks based on particular vulnerabilities.  Equally, a particular patch may provide resistance to multiple types of attack of the same family, as do some vaccinations.  Also similarly, as new attacks emerge – or bacteria or viruses change and evolve – new patches are likely to be required to deal with the problem.

We shouldn’t overplay the similarities, of course.  Just because some types of malware are referred to as “viruses” doesn’t mean that their method of attack, or the mechanisms by which computer systems defend against them, are even vaguely alike[2].  Computer systems don’t have complex immune systems which adapt and learn how to deal with malware[3].  On the other hand, there are also lots of types of vulnerability for which patches are efficacious, but which are very different from bacterial or viral attacks: a buffer overflow attack or SQL injection, for instance.  So, it’s clearly possible to over-egg this pudding[4].  But there is another similarity that I do think is worth drawing, though it’s not perfect.

There are some systems which, for whatever reason, are actually quite risky to patch.  The business risk associated with patching them might be down to a number of factors, including:

  • projected downtime while the patch is applied and the system rebooted is unacceptable;
  • side effects of the patch (e.g. performance impact) are too severe;
  • risk of the system not rebooting after patch application is too high;
  • other components of the system (e.g. hardware or other software) may be incompatible with the patch.

In these cases, a decision may be made that the business risk of patching the system outweighs the business risk of leaving it unpatched. Alternatively, it may be that you are running some systems which are old and outdated, and for which there is no patch available.

Here’s where there’s another surprising similarity with vaccinations.  There are, in any human population, individuals for whom the dangers of receiving a vaccination may outweigh the benefits.  The reasons for this are different from the computer case, and are generally down to weakened immune systems and/or poor health.  However, it turns out that as the percentage of a human population[6] that is vaccinated rises, the threat to the unvaccinated individuals reduces, as there are fewer infection vectors from whom those individuals can receive the infection.

We need to be careful with how closely we draw the analogy here, because we’re on shaky ground if we go too far, but there are types of system vulnerability – particularly malware – for which the same effect holds.  If you patch all the systems that you can, then the number of possible “jump-off” points for malware is reduced, meaning that the unpatched systems are less likely to be affected.  To a lesser degree, it’s probably true that as unsophisticated attackers notice that a particular attack vector is diminishing, they’ll ignore it and move to something else.  Over-stretching this line of argument, however, is particularly dangerous: a standard approach for any motivated attacker is to attempt attack vectors which are “old”, but to which unpatched systems are likely to be vulnerable.

Another difference is that in the computing world, attacks never die off.  Though stockpiles of viruses and bacteria which are extinct in the general population are maintained for various reasons[7], some pathogens do die out over time.  In the world of IT, pretty much every vulnerability ever discovered will have been documented somewhere, whether there still exists an “infected” system or not, and so is still available for re-use or re-purposing.

What is the moral of this article?  Well, it’s this: even if you are unable to patch all of your systems, it’s still worth patching as many of them as you can.  It’s also worth considering whether there are some low-risk systems that you can patch immediately, because they require less business analysis, leaving the others for a second or third round of patching.  It’s probably worth keeping a list of these somewhere.  Even better, you can maintain lists of high-, medium- and low-risk systems – both in terms of business risk and infection vulnerability – and use this to inform your patching, both automatic and manual.  But, dear reader: do patch.
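How you maintain those lists is up to you – a spreadsheet will do – but even something as simple as the following sketch captures the idea.  It’s purely illustrative: the hostnames and risk categories are invented, and any real decision would need rather more than one line of logic.

```python
#!/usr/bin/env python3
# Illustrative sketch only: group systems by business risk and infection
# vulnerability, then decide which to patch automatically and which need
# further analysis first. Hostnames and categories here are invented examples.

from dataclasses import dataclass

@dataclass
class System:
    name: str
    business_risk: str   # "high", "medium" or "low"
    infection_risk: str  # "high", "medium" or "low"

INVENTORY = [
    System("build-server-01", business_risk="low", infection_risk="high"),
    System("db-primary", business_risk="high", infection_risk="medium"),
    System("kiosk-frontend", business_risk="low", infection_risk="low"),
]

def patching_plan(systems):
    """Low business risk: patch now. Everything else: queue for analysis."""
    patch_now = [s for s in systems if s.business_risk == "low"]
    needs_analysis = [s for s in systems if s.business_risk != "low"]
    return patch_now, needs_analysis

if __name__ == "__main__":
    now, later = patching_plan(INVENTORY)
    print("Patch immediately:", [s.name for s in now])
    print("Needs business analysis first:", [s.name for s in later])
```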


1 – if you believe that – or, in fact, if you believe that vaccinations give children autism – then you’re reading the wrong blog.  I seriously suggest that you go elsewhere (and read some proper science on the subject).

2 – pace the attempts of Hollywood CGI departments to make us believe that they’re exactly the same.

3 – though this is obviously an interesting research area.

4 – “overextend this analogy”.  The pudding metaphor is a good one though, right?[5]

5 – and I like puddings, as my wife (and my waistline) will testify.

6 – or, come to think of it, animal (I’m unclear on flora).

7 – generally, one hopes, philanthropic.