Will quantum computing break security?

Do you want J. Random Hacker to be able to pretend that they’re your bank?

Over the past few years, a new type of computer has arrived on the block: the quantum computer.  It’s arguably the sixth type of computer:

  1. humans – before there were artificial computers, people used, well, people.  And people with this job were called “computers”.
  2. mechanical analogue – devices such as the Antikythera mechanism, astrolabes or slide rules.
  3. mechanical digital – in this category I’d count anything that allowed discrete mathematics, but didn’t use electronics for the actual calculation: the abacus, Babbage’s Difference Engine, etc.
  4. electronic analogue – many of these were invented for military uses such as bomb sights, gun aiming, etc.
  5. electronic digital – I’m going to go out on a limb here, and characterise Colossus as the first electronic digital computer[1]: these are basically what we use today for anything from mobile phones to supercomputers.
  6. quantum computers – these are coming, and are fundamentally different to all of the previous generations.

What is quantum computing?

Quantum computing uses concepts from quantum mechanics to allow very different types of calculation from those we’re used to in “classical computing”.  I’m not even going to try to explain, because I know that I’d do a terrible job, so I suggest you try something like Wikipedia’s definition as a starting point.  What’s important for our purposes is to understand that quantum computers use qubits to do calculations, and that for quite a few types of mathematical algorithms – and therefore computing operations – they can solve problems much faster than classical computers.

What’s “much faster”?  Much, much faster: orders of magnitude faster.  A calculation that might take years or decades with a classical computer could, in certain circumstances, take seconds.  Impressive, yes?  And scary.  Because one of the types of problems that quantum computers should be good at solving is decrypting encrypted messages, even without the keys.

This means that someone with a sufficiently powerful quantum computer should be able to read all of your current and past messages, decrypt any stored data, and maybe fake digital signatures.  Is this a big thing?  Yes.  Do you want J. Random Hacker to be able to pretend that they’re your bank[2]?  Do you want that transaction on the blockchain where you were sold a 10 bedroom mansion in Mayfair to be “corrected” to be a bedsit in Weston-super-Mare[3]?

Some good news

This is all scary stuff, but there’s good news, of various types.

The first is that in order to make any of this work at all, you need a quantum computer with a good number of qubits operating, and this is turning out to be hard[4].  The general consensus is that we’ve got a few years before anybody has a “big” enough quantum computer to do serious damage to classical encryption algorithms.

The second is that, even with a sufficient number of qubits to attack our existing algorithms, you still need even more in order to allow for error correction.

The third is that although there are theoretical models to show how to attack some of our existing algorithms, actually making them work is significantly harder than you or I[5] might expect.  In fact, some of the attacks may turn out to be infeasible, or just take more years to perfect than we’d feared.

The fourth is that there are clever people out there who are designing quantum-computation resistant algorithms (sometimes referred to as “post-quantum algorithms”) that we can use, at least for new encryption, once they’ve been tested and become widely available.

All-in-all, in fact, there’s a strong body of expert opinion that says that we shouldn’t be overly worried about quantum computing breaking our encryption in the next 5 or even 10 years.

And some bad news

It’s not all rosy, however.  Two issues stick out to me as areas of concern.

  1. People are still designing and rolling out systems which don’t consider the issue.  If you’re coming up with a system which is likely to be in use for ten or more years, or which will be encrypting or signing data which must remain confidential or attributable over those sorts of periods, then you should be considering what the possible impact of quantum computing may have on your system.
  2. Some of the new, quantum-computing resistant algorithms are proprietary.  This means that when you and I want to start implementing systems which are designed to be quantum-computing resistant, we’ll have to pay to do so.  I’m a big proponent of open source, and particularly of open source cryptography, and my big worry is that we just won’t be able to open source these things, and worse, that when new protocol standards are created – either de facto or through standards bodies – they will choose proprietary algorithms that exclude the use of open source, whether on purpose, through ignorance, or because few good alternatives are available.

What to do?

Luckily, there are things you can do to address both of the issues above.  The first is to think and plan, when designing a system, about what the impact of quantum computing might be on it.  Often – very often – you won’t actually need to implement anything explicit now (and it could be hard to, given the current state of the art), but you should at least embrace the concept of crypto-agility: designing protocols and systems so that you can swap out algorithms if required[7].
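To make that last point a touch more concrete, here’s a minimal sketch of crypto-agility in Python, using the standard library’s hashlib purely as a stand-in for whatever signature or encryption primitives you actually care about.  The idea is the indirection: callers name an algorithm identifier instead of hard-coding a primitive, so a vetted post-quantum replacement can be slotted in later.  The registry and the “pq-hash-tbd” placeholder are illustrative, not real algorithm names.

    # Sketch only: algorithm choice is data, not code, so it can be swapped.
    import hashlib

    # Registry mapping protocol-level identifiers to implementations.
    # "pq-hash-tbd" stands in for whatever vetted post-quantum primitive
    # you eventually adopt - it is not a real algorithm.
    DIGESTS = {
        "sha2-256": lambda data: hashlib.sha256(data).hexdigest(),
        "sha3-256": lambda data: hashlib.sha3_256(data).hexdigest(),
        # "pq-hash-tbd": lambda data: ...,
    }

    def digest(message: bytes, algorithm: str = "sha2-256") -> str:
        """Hash a message with whichever algorithm the protocol negotiated."""
        try:
            return DIGESTS[algorithm](message)
        except KeyError:
            raise ValueError(f"unsupported algorithm: {algorithm}")

    # Store or transmit the identifier alongside the digest, so that old
    # data can still be verified after the default algorithm changes.
    print(digest(b"hello", "sha3-256"))

The design choice that matters is that the algorithm identifier travels with the data: that’s what lets you change the default without orphaning everything you’ve already signed or encrypted.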

The second is a call to arms: get involved in the open source movement, and encourage everybody you know who has anything to do with cryptography to rally for open standards and for research into non-proprietary, quantum-computing resistant algorithms.  This is something that’s very much on my to-do list, and an area where pressure and lobbying is just as important as the research itself.


1 – I think it’s fair to call it the first electronic, programmable computer.  I know there were earlier non-programmable ones, and that some claim ENIAC, but I don’t have the space or the energy to argue the case here.

2 – no.

3 – see [2].  Don’t get me wrong, by the way – I grew up near Weston-super-Mare, and it’s got things going for it, but it’s not Mayfair.

4 – and if a quantum physicist says that something’s hard, then, to my mind, it’s hard.

5 – and I’m assuming that neither of us is a quantum physicist or mathematician[6].

6 – I’m definitely not.

7 – and not just for quantum-computing reasons: there’s a good chance that some of our existing classical algorithms may just fall to other, non-quantum attacks such as new mathematical approaches.

Mitigate or remediate?

What’s the difference between mitigate and remediate?

I very, very nearly titled this article “The ‘aters gonna ‘ate”, and then thought better of it. This is a rare event, and I put it down to the extreme temperatures that we’re having here in the UK at the moment[1].

What prompted this article was reading something online today where I saw the word mitigate, and thought to myself, “When did I start using that word? It’s not a normal one to drop into conversation. And what about remediate? What’s the difference between mitigate and remediate? In fact, how well could I describe the difference between the two?” Both are quite jargon-y words, so, in the spirit of my recent article Jargon – a force for good or ill?, here’s my attempt at describing the difference, while also pointing out how important both are – along with a couple of other “-ate” words.

Let’s work backwards.

Remediate

Remediation is a set of actions to get things back the way they should be. It’s the last step in the process of recovery from an attack or other failure. The re- prefix here is the give-away: like resetting and reconciliation. When you’re remediating, there may be an expectation that you’ll be returning your systems to the same state they were in before, say, a power failure, but that’s not necessarily the case. What you should be focussing on is the service you’re providing, rather than the system(s) that are providing it. A set of steps for remediation might require you to replace your database with another one from a completely different provider[3], to add a load-balancer and to change all of your hardware, but you’re still remediating the problem that hit you. At the very least, if you’ve suffered an attack, you should make sure that you plug any hole that allowed the attacker in to start with[4].

Mitigate

Mitigation isn’t about making things better – it’s about reducing the impact of an attack or failure. Mitigation is the first set of steps you take when you realise that you’ve got a problem. You could argue that these things are connected, but mitigation isn’t about returning the service to normal (remediation), but about taking steps to reduce the impact of an attack. Those mitigations may be external – adding load-balancers to help deal with a DDoS attack, maybe – or internal – shutting down systems that have been actively compromised.

In fact, I’d argue that some mitigations might, quite properly, have an adverse effect on the service: there may be short-term reductions in availability to ensure that long-term remediations can be performed.  Planning for this is vitally important, as I’ve discussed previously, in Service degradation: actually a good thing.

The other -ates: update and operate

As I promised above, there are a couple of other words that are part of your response to an attack – or possible attack. The first is update. Updating is one of the key measures that you can put in place to reduce the chance of a successful attack. It’s the step you take before mitigation – because, if you’re lucky, you won’t need mitigation, because you’ll be immune from attack[5].

The second of these is operate. Operation is your normal state: it’s where you want to be. But operation doesn’t mean that you can just sit back and feel secure: you need to be keeping an eye on what’s going on, planning upgrades, considering mitigations and preparing for remediations. We too often think of the operate step as our Happy Place, where all is rosy, and we can sit back and watch the daisies grow. As DevOps (and DevSecOps) is teaching us, this is absolutely not the case: operate is very much a dynamic state, and not a passive one.


1 – 30C (~86F) is hot for the UK. Oh, and most houses (ours included) don’t have air conditioning[2].

2 – and I work from an office in the garden. Direct sunlight through the mainly glass wall is a blessing in the winter. Less so in the summer.

3 – hopefully open source, of course.

4 – hint: patch, patch, patch.

5 – well, that attack, at least. You’re never immune from all attacks: sorry.

7 steps to security policy greatness

… we’ve got a good chance of doing the right thing, and keeping the auditors happy.

Security policies are something that everybody knows they should have, but which are deceptively easy to get wrong.  A set of simple steps to check when you’re implementing them is something that I think it’s important to share.  You know when you come up with a set of steps and you suddenly realise that you’ve got a couple of vowels in there and you think, “wow, I can make an acronym!”?[1] This was one of those times.  The problem was that when I looked at the different steps, I decided that DDEAVMM doesn’t actually have much of a ring to it.  I’ve clearly still got work to do before I can name major public sector projects, for instance[2].  However, I still think it’s worth sharing, so let’s go through them in order.  Order, as for many sets of steps, is important here.

I’m going to give an example and walk through the steps for clarity.  Let’s say that our CISO, in his or her infinite wisdom, has decided that they don’t want anybody outside our network to be able to access our corporate website via port 80.  This is the policy that we need to implement.

1. Define

The first thing I need is a useful definition.  We nearly have that from the request[3] noted above, when our CISO said “I don’t want anybody outside our network to be able to access our corporate website via port 80”.  So, let’s make that into a slightly more useful definition.

“Access to the host mycorporate.network.com on port 80 must be blocked to all hosts outside the 10.0.x.x to 10.3.x.x network range.”

I’m assuming that we already know that our main internal network is within 10.0.x.x to 10.3.x.x.  It’s not exactly a machine readable definition, but actually, that’s the point: we’re looking for a definition which is clear and human understandable.  Let’s assume that we’re happy with this, at least for now.

2. Design

Next, we need to design a way to implement this.  There are lots of ways of doing this – from iptables[4] to a full-blown, commercially supported firewall[6] – and I’m not fluent in any of them these days, so I’m going to assume that somebody who is better equipped than I has created a rule or set of rules to implement the policy defined in step 1.
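Purely for illustration – and as a sketch to hand to that better-equipped somebody, not a finished design – a few lines of Python using the standard library’s ipaddress module can at least check the arithmetic behind the definition (10.0.x.x to 10.3.x.x collapses to 10.0.0.0/14) and print candidate iptables-style rules for review.  The chain and flags are the common iptables ones, but treat the output as a starting point only.

    # Sketch only: derive the internal range from the step 1 definition and
    # print candidate rules for a human to review.
    import ipaddress

    start = ipaddress.IPv4Address("10.0.0.0")
    end = ipaddress.IPv4Address("10.3.255.255")

    # summarize_address_range() collapses the range into CIDR blocks;
    # 10.0.x.x to 10.3.x.x is exactly 10.0.0.0/14.
    internal = list(ipaddress.summarize_address_range(start, end))
    print("Internal range:", [str(net) for net in internal])

    for net in internal:
        # Allow the internal range to reach port 80, then drop everyone else.
        print(f"iptables -A INPUT -p tcp --dport 80 -s {net} -j ACCEPT")
    print("iptables -A INPUT -p tcp --dport 80 -j DROP")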

We also need to define some tests – we’ll come back to these in step 5.

3. Evaluate

But we need to check.  What if they’ve mis-implemented it?  What if they misunderstood the design requirement?  It’s good practice – in fact, it’s vital – to do some evaluation of the design to ensure it’s correct.  For this rather simple example, it should be pretty easy to check by eye, but we might want to set up a test environment to evaluate that it meets the policy definition or take other steps to evaluate its correctness.  And don’t forget: we’re not checking that the design does what the person/team writing it thinks it should do: we’re checking that it meets the definition.  It’s quite possible that at this point we’ll realise that the definition was incorrect.  Maybe there’s another subnet – 10.5.x.x, for instance – that the security policy designer didn’t know about.  Or maybe the initial request wasn’t sufficiently robust, and our CISO actually wanted to block all access on any port other than 443 (for HTTPS), which should be allowed.  Now is a very good time to find that out.

4. Apply

We’ve ascertained that the design does what it should do – although we may have iterated a couple of times on exactly “what it should do” means – so now we can implement it.  Whether that’s ssh-ing into a system, uploading a script, using some sort of GUI or physically installing a new box, it’s done.

Excellent: we’re finished, right?

No, we’re not.  The problem is that often, that’s exactly what people think.  Let’s move to our next step: arguably the most important, and the most often forgotten or ignored.

5. Validate

I really care about this one: it’s arguably the point of this post.  Once you’ve implemented a security policy, you need to validate that it actually does what it’s supposed to do.  I’ve written before about this, in my post If it isn’t tested, it doesn’t work, and it’s absolutely true of security policy implementations.  You need to check all of the different parts of the design.  This, you might hope, would be really easy, but even in the simple case that we’re working with, there are lots of tests you should be doing.  Let’s say that we took the two changes mentioned in step 3.  Here are some tests that I would want to be doing, with the expected result:

  • FAIL: access port 80 from an external IP address
  • FAIL: access port 8080 from an external IP address
  • FAIL: access port 80 from a 10.4.x.x address
  • PASS: access port 443 from an external IP address
  • UNDEFINED: access port 80 from a 10.5.x.x address
  • UNDEFINED: access port 80 from a 10.0.x.x address
  • UNDEFINED: access port 443 from a 10.0.x.x address
  • UNDEFINED: access port 80 from an external IP address but with a VPN connection into 10.0.x.x

Of course, we’d want to be performing these tests on a number of other ports, and from a variety of IP addresses, too.

What’s really interesting about the list is the number of expected results that are “UNDEFINED”.  Unless we have a specific requirement, we just can’t be sure what’s expected.  We can guess, but we can’t be sure.  Maybe we don’t care?  I particularly like the last one, because the result we get may lead us much deeper into our IT deployment than we might expect.

The point, however, is that we need to check that the actual results meet our expectations, and maybe even define some new requirements if we want to remove some of the “UNDEFINED”.  We may be fine to leave some of the expected results as “UNDEFINED”, particularly if they’re out of scope for our work or our role.  Obviously, if the results don’t meet our expectations, then we also need to make some changes and apply them and then re-validate.  We also need to record the final results.
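If you wanted to automate some of these checks, a rough sketch might look like the following Python, which simply attempts TCP connections and compares the outcome with the expectation.  One big caveat: it has to be run from each source network in turn (external, 10.4.x.x and so on), because it can’t fake its own source address.  The host name is the example from step 1, and the expectations are illustrative.

    # Sketch only: try each port and compare the result with the policy's
    # expectation. Run it once from each source network in the test plan.
    import socket

    TARGET = "mycorporate.network.com"

    # (port, expected): "PASS" means the connection should succeed, "FAIL"
    # means it should be refused or time out.
    TESTS = [
        (80, "FAIL"),
        (8080, "FAIL"),
        (443, "PASS"),
    ]

    def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:          # covers refusals, timeouts and DNS failures
            return False

    for port, expected in TESTS:
        actual = "PASS" if can_connect(TARGET, port) else "FAIL"
        status = "ok" if actual == expected else "MISMATCH - investigate"
        print(f"port {port}: expected {expected}, got {actual} -> {status}")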

When we’ve got more complex security policy – multiple authentication checks, or complex SDN[7] routing rules – then our tests are likely to be much, much more complex.

6. Monitor

We’re still not done.  Remember those results we got in our previous tests?  Well, we need to monitor our system and see if there’s any change.  We should do this on a fairly frequent basis.  I’m not going to say “regular”, because regular practices can lead to sloppiness, and also leave windows of opportunity open to attackers.  We also need to perform checks whenever we make a change to a connected system.  Oh, if I had a dollar for every time I’ve heard “oh, this won’t affect system X at all.”…[8]

One interesting point is that we should also note when results whose expected value remains “UNDEFINED” change.  This may be a sign that something in our system has changed, it may be a sign that a legitimate change has been made in a connected system, or it may be a sign of some sort of attack.  It may not be quite as important as a change in one of our expected “PASS” or “FAIL” results, but it certainly merits further investigation.
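One way to make this practical is to record the step 5 results as a baseline, then have a scheduled job re-run the checks and flag any drift – including drift in results we’d left as “UNDEFINED”.  Here’s a sketch; the baseline file name and the run_tests() helper are purely illustrative.

    # Sketch only: diff a fresh run of the step 5 checks against a recorded
    # baseline and flag anything that has changed - even the UNDEFINED ones.
    import json

    def compare(baseline_path: str, current_results: dict) -> list:
        with open(baseline_path) as f:
            baseline = json.load(f)   # e.g. {"port 80 from external": "FAIL", ...}
        changes = []
        for test, old in baseline.items():
            new = current_results.get(test, "MISSING")
            if new != old:
                changes.append(f"{test}: was {old}, now {new} - investigate")
        return changes

    # current = run_tests()           # hypothetical: re-run the step 5 harness
    # for line in compare("policy-baseline.json", current):
    #     print(line)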

7. Mitigate

Things will go wrong.  Some of them will be our fault, some of them will be our colleagues’ fault[10], some of them will be accidental, or due to hardware failure, and some will be due to attacks.  In all of these cases, we need to act to mitigate the failure.  We are in charge of this policy, so even if the failure is out of our control, we want to make sure that mitigating mechanisms are within our control.  And once we’ve completed the mitigation, we’re going to have to go back at least to step 2 and redesign our implementation.  We might even need to go back to step 1 and redefine what our definition should be.

Final steps

There are many other points to consider – among the most important are the question of responsibility, touched on in step 7 (and particularly important during holiday seasons), along with special circumstances and decommissioning – but if we can keep these steps in mind when we’re implementing – and running – security policies, we’ve got a good chance of doing the right thing, and keeping the auditors happy, which is always worthwhile.


1 – I’m really sure it’s not just me.  Ask your colleagues.

2 – although, back when Father Ted was on TV, a colleague of mine and I came up with a database result which we named a “Fully Evaluated Query”.  It made us happy, at least.

3 – if it’s from the CISO, and he/she is my boss, then it’s not a request, it’s an order, but you get the point.

4 – which would be my first port[5] of call, but might not be the appropriate approach in this context.

5 – sorry, that was unintentional.

6 – just because it’s commercially supported doesn’t mean it has to be proprietary: open source is your friend, boys and girls, ladies and gentlemen.

7 – Software-Defined Networking.

8 – I’m going to leave you hanging, other than to say that, at current exchange rates, and assuming it was US dollars I was collecting, then I’d have almost exactly 3/4 of that amount in British pounds[9].

9 – other currencies are available, but please note that I’m not currently accepting bitcoin.

10 – one of the best types.

Happy 4th of July – from Europe

If I were going to launch a cyberattack on the US, I would do it on the 4th July.

There’s a piece of received wisdom from the years of the Cold War that if the Russians[1] had ever really wanted to start a land war in Europe, they would have done it on Christmas Day, when all of the US soldiers in Germany were partying and therefore unprepared for an attack.  I’m not sure if this is actually fair – I’m sure that US commanders had considered this eventuality – but it makes for a good story.

If I were going to launch a cyberattack on the US, I would do it on the 4th of July.  Now, to be entirely clear, I have no intention of performing any type of attack – cyber or not – on our great ally across the Pond[2]: not today (which is actually the 3rd July) or tomorrow.  Quite apart from anything else, I’m employed by[5] a US company, and I also need to travel to the US on business quite frequently.  I’d prefer to be able to continue both these activities without undue attention from the relevant security services.

The point, however, is that the 4th of July would be a good time to do it.  How do I know this?  I know it because it’s one of my favourite holidays.  This may sound strange to those of you who follow or regularly read this blog, who will know – from my spelling, grammar and occasional snide humour[6] – that I’m a Brit, live in the UK, and am proud of my Britishness.  The 4th of July is widely held, by residents and citizens of the USA, to be a US holiday, and, specifically, one where they get to cock a snook at the British[7].  But I know, and my European colleagues know – in fact, I suspect that the rest of the world outside the US knows – that if you are employed by, are partners of, or otherwise do business with the US, then the 4th of July is a holiday for you as well.

It’s the day when you don’t get emails.  Or phone calls.  There are no meetings arranged.

It’s the day when you can get some work done.  Sounds a bit odd for a holiday, but that’s what most of us do.

Now, I’m sure that, like the US military in the Cold War, some planning has taken place, and there is a phalanx of poor, benighted sysadmins ready to ssh into servers around the US in order to deal with any attacks that come in and battle with the unseen invaders.  But I wonder if there are enough of them, and I wonder whether the senior sysadmins, the really experienced ones who are most likely to be able to repulse the enemy, haven’t ensured that it’s their junior colleagues who are the ones on duty so that they – the senior ones – can get down to some serious barbecuing and craft beer consumption[8].  And I wonder what the chances are of getting hold of the CISO or CTO when urgent action is required.

I may be being harsh here: maybe everything’s completely under control across all organisations throughout the USA, and nobody will take an extra day or two of holiday this week.  In fact, I suspect that many sensible global organisations – even those based in the US – have ensured that they’ve readied Canadian, Latin American, Asian or European colleagues to deal with any urgent issues that come up.  I really, really hope so.  For now, though, I’m going to keep my head down and hope that the servers I need to get all that work done on my favourite holiday stay up and responsive.

Oh, and roll on Thanksgiving.


1 – I suppose it should really be “the Soviet Union”, but it was also “the Russians”: go figure.

2 – the Atlantic ocean – this is British litotes[3].

3 – which is, like, a million times better than hyperbole[4].

4 – look them up.

5 – saying “I work for” sets such a dangerous precedent, don’t you think?

6 – litotes again.

7 – they probably don’t cock a snook, actually, as that’s quite a British phrase.

8 – I’m assuming UNIX or Linux sysadmins: therefore most likely bearded, and most likely craft beer drinkers.  Your stereotypes may vary.

 

What are they attacking me for?

There are three main types of motivations: advantages to them; disadvantages to us; resources.

I wrote an article a few weeks ago called What’s a State Actor, and should I care?, and a number of readers asked if I could pull apart a number of the pieces that I presented there into separate discussions[1].  One of those pieces was the question of who is actually likely to attack me.

I presented a brief list thus:

  • insiders
  • script-kiddies
  • competitors
  • trouble-makers
  • hacktivists
  • … and more.

One specific “more” that I mentioned was State Actors.  If you look around, you’ll find all manner of lists.  Other attacker types that I didn’t mention in my initial list include:

  • members of organised crime groups
  • terrorists
  • “mercenary” hackers.

I suspect that you could come up with more supersets or subsets if you tried hard enough.

This is all very well, but what’s the value in knowing who’s likely to attack you in the first place[3]?  There’s a useful dictum: “No system is secure against a sufficiently resourced and motivated attacker.”[5]  This gives us a starting point, because it causes us to ask the question

  • what motivates the attacker?

In other words: what do they want to achieve?  What, in fact, are they trying to do or get when they attack us?  This is the core theme of this article.

There are three main types of motivations:

  1. advantages to them
  2. disadvantages to us
  3. resources.

There is overlap between the three, but I think that they are sufficiently separate to warrant separate discussion.

Advantages to them

Any successful attack is arguably a disadvantage to us, the attacked, but that does not mean that the primary motivation of an attacker is necessarily to cause harm.  There are a number of other common motivations, including:

  • reputation or “bragging rights” – a successful attack may well be used to prove the skills of an attacker to other parties.
  • information to share – sometimes attackers wish to gain information about our systems to share with others, whether for gain or to enhance their reputation (see above).  Such attacks may be painted as security research, but if they occur outside an ethical framework (such as that provided by academic institutions) and without consent, it is difficult to consider them anything other than hostile.
  • information to keep – attackers may gain information and keep it for themselves for later use, either against our systems or against similarly configured systems elsewhere.
  • practice/challenge – there are attacks which are undertaken solely to practice techniques or as a personal challenge (where an external challenge is made, I would categorise them under “reputation”).  Harmless as this motivation may seem to some parts of the community, such attacks still cause damage and require mitigation, and should be considered hostile.
  • for money – some attacks are undertaken at the request of others, with the primary motivation of the attacker being money or other material recompense (though the motivation of the party commissioning the attack is likely to be one of the others listed)[6].

Disadvantages to us

Attacks which focus on causing negative impact to the individual or organisation attacked can be listed in the following categories:

  • business impact – impact to the normal functioning of the organisation or individual attacked: causing orders to be disrupted, processes to be slowed, etc.
  • financial impact – direct impact to the financial functioning of the attacked party: fraud, for instance.
  • reputational impact – there have been many attacks where the intention has clearly been to damage the reputation of the attacked party.  Whether it is leaking information about someone’s use of a dating website, disseminating customer information or simply replacing text or images on a corporate website, the intention is the same: to damage the standing of those being attacked.  Such damage may be indirect – for instance if an attacker were to cause the failure of an oil pipeline, affecting the reputation of the owner or operator of that pipeline.
  • personal impact – subtly different from reputational or business impact, this is where the attack intends to damage the self-esteem of an individual, or their ability to function professionally, physically, personally or emotionally.  This could cover a wide range of attacks such as “doxxing” or use of vulnerabilities in insulin pumps.
  • ecosystem impact – this type of motivation is less about affecting the ability of the individual or organisation to function normally, and more about affecting the ecosystem that exists around it.  Impacting the quality control checks of a company that made batteries might impact the ability of a mobile phone company to function, for instance, or attacking a water supply might impact the ability of a fire service to respond to incidents.

Resources

The motivations for some attacks may be partly or solely to get access to resources.  These resources might include:

  • financial resources – by getting access to company accounts, attackers might be able to purchase items for themselves or others or otherwise defraud the company.
  • compute resources – access to compute resources can lead to further attacks or be used for purposes such as cryptocurrency mining.
  • storage resources – attackers may wish to store illegal or compromising material on others’ systems.
  • network resources – access to network resources allows attackers to launch attacks elsewhere or to stream information with little traceability.
  • human resources – access to some systems may allow human resources to be deployed in ways unintended by the party being attacked: deploying police officers to a scene a long distance away from a planned physical attack, for instance.
  • physical resources – access to some systems may also allow physical resources to be deployed in ways unintended by the party being attacked: sending ammunition to the wrong front in a war, might, for example, lead to military force becoming weakened.

Conclusion

It may seem unimportant to consider the motivations of those attacking us, but if we can understand what it is that they are looking for, we can decide what we should defend, and sometimes what types of defence we should put in place.  As always, I welcome comments on this article: I’m sure that I’ve missed out some points, or misrepresented others, so please do get in touch and let me know your thoughts.


1 – I considered this a kind and polite way of saying “you stuffed too much into a single article: what were you thinking?”[2]

2 – and I don’t necessarily disagree.

3 – unless you’re just trying to scare senior management[4].

4 – which may be enjoyable, but is ultimately likely to backfire if you’re doing it without evidence and for a good reason.

5 – I made a (brief) attempt to track the origins of this phrase: I’m happy to attribute if someone can find the original.

6 – hat-tip to Reddit user poopin for spotting that I’d missed this one out.

Do you know what’s lurking on your system?

Every utility, every library, every executable … increases your attack surface.

I had a job once which involved designing hardening procedures for systems that we were going to be using for security-related projects.  It was fascinating.  This was probably 15 years ago, and not only were guides a little thin on the ground for the Linux distribution I was using, but what we were doing was quite niche.  At first, I think I’d assumed that I could just write a script to close down a few holes that originated from daemons[1] that had been left running for no reason: httpd, sendmail, stuff like that.  It did involve that, of course, but then I realised there was more to do, and started to dive down the rabbit hole.

So, after those daemons, I looked at users and groups.  And then at file systems, networking, storage.  I left the two scariest pieces to last, for different reasons.  The first was the kernel.  I ended up hand-crafting a kernel, removing anything that I thought it was unlikely we’d need, and then restarting several times when I discovered that the system wouldn’t boot because the things I thought I understood were more … esoteric than I’d realised.  I’m not a kernel developer, and this was a salutary lesson in quite how skilled those folks are.  At least, at the time I was doing it, there was less code, and fewer options, than there are today.  On the other hand, I was having to hack back to a required state, and there are more cut-down kernels and systems to start with than there were back then.

The other piece that I left for last was just pruning the installed Operating System applications and associated utilities.  Again, there are cut-down options that are easier to use now than then, but I also had some odd requirements – I believe that we needed Java, for instance, which has, or had … well, let’s say a lot of dependencies.  Most modern Linux distributions[3] start off by installing lots of pieces so that you can get started quickly, without having to worry about trying to work out dependencies for every piece of external software you want to run.

This is understandable, but we need to realise when we do this that we’re making a usability/security trade-off[5].  Every utility, every library, every executable that you add to a system increases your attack surface, and increases the likelihood of vulnerabilities.

The problem isn’t just that you’re introducing vulnerabilities, but that once they’re there, they tend to stay there.  Not just in code that you need, but, even worse, in code that you don’t need.  It’s a rare, but praiseworthy hacker[6] who spends time going over old code removing dependencies that are no longer required.  It’s a boring, complex task, and it’s usually just easier to leave the cruft[7] where it is and ship a slightly bigger package for the next release.

Sometimes, code is refactored and stripped: most frequently, security-related code. This is a very Good Thing[tm], but it turns out that it’s far from sufficient.  The reason I’m writing this post is because of a recent story in The Register about the “beep” command.  This command used the little speaker that was installed on most PC-compatible motherboards to make a little noise.  It was a useful little utility back in the day, but is pretty irrelevant now that most motherboards don’t ship with the relevant hardware.  The problem[8] is that installing and using the beep command on a system allows information to be leaked to users who lack the relevant permissions.  This can be a very bad thing.  There’s a good, brief overview here.

Now, “beep” isn’t installed by default on the distribution I’m using on the system on which I’m writing this post (Fedora 27), though it’s easily installable from one of the standard repositories that I have enabled.  Something of a relief, though it’s not a likely attack vector for this machine anyway.

What, though, do I have installed on this system that is vulnerable, and which I’d never have thought to check?  Forget all of the kernel parameters which I don’t need turned on, but which have been enabled by the distribution for ease of use across multiple systems.  Forget the apps that I’ve installed and use every day.  Forget, even, the apps that I installed once to try and then neglected to remove.  What about the apps that I didn’t even know were there, and which I never realised might have an impact on the security posture of my system?  I don’t know, and have little way to find out.
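On an RPM-based system such as the Fedora install mentioned above, one crude starting point – a sketch, not a hardening tool – is to dump the installed package list and diff it against a hand-maintained list of packages you know you actually need: everything left over is a candidate for investigation or removal.  The “needed-packages.txt” file is hypothetical – you’d have to curate it yourself.

    # Sketch only: flag installed packages that aren't on a curated
    # "I actually need this" list.
    import subprocess

    def installed_packages() -> set:
        # rpm -qa with a query format that prints just the package names.
        out = subprocess.run(
            ["rpm", "-qa", "--qf", "%{NAME}\n"],
            capture_output=True, text=True, check=True,
        )
        return set(out.stdout.split())

    def needed_packages(path: str = "needed-packages.txt") -> set:
        with open(path) as f:
            return {line.strip() for line in f
                    if line.strip() and not line.startswith("#")}

    for name in sorted(installed_packages() - needed_packages()):
        print(f"installed but not on the needed list: {name}")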

This system doesn’t run business-critical operations.  When I first got it and installed the Operating System, I decided to err towards usability, and to be ready to trash[9] it and start again if I had problems with compromise.  But that’s not the case for millions of other systems out there.  I urge you to consider what you’re running on a system, what’s actually installed on it, and what you really need.  Patch what you need, remove what you don’t.   It’s time for a Spring clean[10].


1 – I so want to spell this word dæmons, but I think that might be taking my Middle English obsession too far[2].

2 – I mentioned that I studied Middle English, right?

3 – I’m most interested in (GNU) Linux here, as I’m a strong open source advocate and because it’s the Operating System that I know best[4].

4 – oh, and I should disclose that I work for Red Hat, of course.

5 – as with so many things in our business.

6 – the good type, or I’d have said “cracker”, probably.

7 – there’s even a word for it, see?

8 – beyond a second order problem that a suggested fix seems to have made the initial problem worse…

9 – physically, if needs be.

10 – In the Northern Hemisphere, at least.

Confessions of an auditor

Moving to a positive view of security auditing.

So, right up front, I need to admit that I’ve been an auditor.  A security auditor.  Professionally.  It’s time to step up and be proud, not ashamed.  I am, however, dialled into a two-day workshop on compliance today and tomorrow, so this is front and centre of my thinking right at the moment.  Hence this post.

Now, I know that not everybody has an entirely … positive view of auditors.  And particularly security auditors[1].  They are, for many people, a necessary evil[2].  The intention of this post is to try to convince you that auditing[3] is, or can and should be, a Force for Good[tm].

The first thing to address is that most people have auditors in because they’re told that they have to, and not because they want to.  This is often – but not always – associated with regulatory requirements.  If you’re in the telco space, the government space or the financial services space, for instance, there are typically very clear requirements for you to adhere to particular governance frameworks.  In order to be compliant with the regulations, you’re going to need to show that you meet the frameworks, and in order to do that, you’re going to need to be audited.  For pretty much any sensible auditing regime, that’s going to require an external auditor who’s not employed by or otherwise accountable to the organisation being audited.

The other reason that audit may happen is when a C-level person (e.g. the CISO) decides that they don’t have a good enough idea of exactly what the security posture of their organisation – or specific parts of it – is like, so they put in place an auditing regime.  If you’re lucky, they choose an industry-standard regime, and/or one which is actually relevant to your organisation and what it does.  If you’re unlucky, … well, let’s not go there.

I think that both of the reasons above – for compliance and to better understand a security posture – are fairly good ones.  But they’re just the first order reasons, and the second order reasons – or the outcomes – are where it gets interesting.

Sadly, one of the key outcomes of auditing seems to be blame.  It’s always nice to have somebody else to blame when things go wrong, and if you can point to an audited system which has been compromised, or a system which has failed audit, then you can start pointing fingers.  I’d like to move away from this.

Some positive suggestions

What I’d like to see would be a change in attitude towards auditing.  I’d like more people and organisations to see security auditing as a net benefit to their organisations, their people, their systems and their processes[4].  This requires some changes to thinking – changes which many organisations have, of course, already made, but which more could make.  These are the second order reasons that we should be considering.

  1. Stop tick-box[5] auditing.  Too many audits – particularly security audits – seem to be solely about ticking boxes.  “Does this product or system have this feature?”  Yes or no.  That may help you pass your audit, but it doesn’t give you a chance to go further, and think about really improving what’s going on.  In order to do this, you’re going to need to find auditors who actually understand the systems they’re looking at, and are trained to do something beyond tick-box auditing.  I was lucky enough to be encouraged to audit in this broader way, alongside a number of tick-boxes, and I know that the people I was auditing always preferred this approach, because they told me that they had to do the other type, too – and hated it.
  2. Employ internal auditors.  You may not be able to get approval for actual audit sign-off if you use internal auditors, so internal auditors may have to operate alongside – or before – your external auditors, but if you find and train people who can do this job internally, then you get to have people with deep knowledge (or deeper knowledge, at least, than the external auditors) of your systems, people and processes looking at what they all do, and how they can be improved.
  3. Look to be proactive, not just reactive.  Don’t just pick or develop products, applications and systems to meet audit, and don’t wait until the time of the audit to see whether they’re going to pass.  Auditing should be about measuring and improving your security posture, so think about posture, and how you can improve it instead.
  4. Use auditing for risk-based management.  Last, but not least – far from least – think about auditing as part of your risk-based management planning.  An audit shouldn’t be something you do, pass[6] and then put away in a drawer.  You should be pushing the results back into your governance and security policy model, monitoring, validation and enforcement mechanisms.  Auditing should be about building, rather than destroying – however often it feels like it.

 


1 – you may hear phrases like “a special circle of Hell is reserved for…”.

2 – in fact, many other people might say that they’re an unnecessary evil.

3 – if not auditors.

4 – I’ve been on holiday for a few days: I’ve maybe got a little over-optimistic while I’ve been away.

5 – British usage alert: for US readers, you’ll probably call this a “check-box”.

6 – You always pass every security audit first time, right?