Today’s security story is people turning security off. For me, the fact that it’s even a story is the story. This particular story is covered in The Register, which explains (to nobody’s surprise) that some of the patches to fix issues identified in CPUs (think Spectre, Meltdown, etc.) can actually slow down the applications running on them. The problem is that, in some cases, they don’t slow them down a little bit, but rather a lot – by which I mean up to 50%. And if you’ve bought expensive hardware – or rented it – then you’d generally prefer that it run your applications/programs/workloads quickly, rather than at half the speed they might otherwise manage.
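If you’re curious where your own system stands, you can ask the running kernel directly. A quick sketch, assuming a Linux kernel recent enough (roughly 4.15 onwards) to expose the sysfs vulnerabilities interface:

```shell
# Ask the kernel which CPU vulnerabilities it knows about and whether
# each one is currently mitigated. On systems without this sysfs
# interface (older kernels, non-Linux), we just say so.
vulns=/sys/devices/system/cpu/vulnerabilities
if [ -d "$vulns" ]; then
  grep -r . "$vulns"   # prints one "file:status" line per vulnerability
else
  echo "No $vulns on this system"
fi
```

Each line reports a status such as “Mitigation: …”, “Vulnerable”, or “Not affected”, which is about as close as most of us get to seeing these decisions made visible.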
And so you turn off the security patches. Your decision: fine.
No, stop: this isn’t what has happened.
The mythical “you”, the person running the workload, isn’t – in most cases – the person who makes the decision, because it’s already been made for you. This is the real story.
Linus Torvalds, and a bunch of other experts in the Linux kernel community, have decided that although the patch that could make your workloads secure is available, the functionality that provides it should be “off” by default. They reason – quite correctly, in my opinion – that the vast majority of people running workloads won’t easily be able to turn this functionality on themselves.
They also reason – again, correctly, in my opinion – that most people will care more about how quickly their workloads run than about how secure they are. I’m not happy about this, but that’s the way it is.
What I worry about is the final step in the logic behind the decision. I’m going to quote Linus:
I get the reasoning behind this, but I don’t like it. To give some context, somebody came up with an example attack which could compromise certain workloads, and Linus points out that there are better ways to fix this attack than fixing it in the kernel. My concerns are two-fold:
- although there may be better places to fix that particular attack, a kernel-level fix is likely to fix an entire class of attacks, meaning better protection for users who are using any application which might include an attack vector.
- pointing out that there haven’t been any attacks yet not only ignores the fact that there is a future out there but also points malicious actors in the direction of a likely attack vector.
Now, I know that the more dedicated malicious actors are already looking for these things, but do we really need to advertise?
What’s my fix?
I don’t have one, or at least not an easy one.
Somebody, somewhere, needs to decide whether security is turned on or off. What I’d honestly like to see is an easier set of controls to allow people to turn security on or off, and to understand the trade-offs when they do so. The problems with that are:
- the trade-offs are often much more complex than just “fast and insecure” or “slow and secure”, and are really difficult to explain.
- in order to make a sensible decision about trade-offs, people need to understand risk. And people are awful at understanding risk.
And there’s a “chicken and egg problem” here: people won’t understand risk until they are offered the chance to make decisions, but there’s little incentive to offer them complex decisions unless they understand risk.
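For what it’s worth, more recent kernels do provide at least a coarse version of such a control: a single `mitigations=` boot parameter (documented in the kernel’s kernel-parameters.txt) that groups the optional CPU-vulnerability mitigations together. A sketch of the recognised values:

```shell
# Kernel command-line entries (e.g. appended to GRUB_CMDLINE_LINUX).
# One parameter covers the optional CPU-vulnerability mitigations:
mitigations=off         # everything off: full speed, no protection
mitigations=auto        # the default: mitigate, but leave SMT enabled
mitigations=auto,nosmt  # mitigate, and disable SMT where it is a risk
```

It’s a single on/off-ish switch, though – it does nothing to solve the harder problem of explaining the trade-offs to the person flipping it.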
My plea? Where possible, expose risk, and explain what it is. And if you’re turning off security-related functionality, make it easy to turn back on for those who need it.
1 – a quick heads-up: this is what “deploying to the cloud” actually is.
2 – what sits at the bottom of many of the workloads that are running in servers.
3 – hopefully. If the Three Minute Warning sounds while you’re reading this, you may wish to duck and cover. You can come back to it later.
4 – “… sounds like this …”.
5 – 80s reference.
6 – or not. See .
7 – for non-native English readers, this means “a problem where the solution requires two pieces, both of which are dependent on each other”.