On Friday, 29th November 2019, Jack Merritt and Saskia Jones were killed in a terrorist attack. A number of members of the public (some with improvised weapons) and of the emergency services acted with great heroism. I wanted to mention the names of the victims and to praise those involved in stopping the attacker before mentioning his name: Usman Khan. The victims and the attacker were taking part in an offender rehabilitation conference to help offenders released from prison to reintegrate into society: Khan had been sentenced to 16 years in prison for terrorist offences.
There’s an important formula that everyone involved in risk – and given that IT security is all about mitigating risk, that’s anyone involved in security – should know. It’s usually expressed thus:
Risk = likelihood x impact
Likelihood is sometimes expressed as “probability”, and impact as “consequence” or “loss” – I’ve seen some other variants as well – but the version above is generally sufficient for most purposes.
Using the formula
How should you use the formula? Well, it’s most useful for comparing risks and deciding how to mitigate them. Humans are terrible at calculating risk, and any tools that help them are good. In order to use this formula correctly, you want to compare risks over the same time period. You could say that almost any eventuality may come to pass over the lifetime of the universe, but comparing the risk of losing broadband access to the risk of your lead developer quitting for another company between the Big Bang and the eventual heat death of the universe is probably not going to give you much actionable information.
Let’s look at the two variables that we need to have in order to calculate risk. We’ll start with the impact, because I want to devote most of this article to the other part: likelihood.
Impact is the damage that will be done if the risk materialises. In a business context, you might look at the risk of your order system being brought down for a week by malicious attackers. You might calculate that you would lose £15,000 in orders. On top of that, there might be a loss of reputation, which you might value at £30,000, and fixing the problem might cost a further £10,000. Add these together, and the impact is £55,000.
What’s the likelihood? Well, remember that we need to consider a particular time period. What you choose will depend on what you’re interested in, but a classic use is for budgeting, and so the length of time considered is often a year. “What is the likelihood of my order system being brought down for a week by malicious attackers over the next twelve months?” is the question you want to ask. If you decide that it’s 0.005 (or 0.5%), then your risk is calculated thus:
Risk = 0.005 x 55,000
Risk = 275
The units don’t really matter, because what you want to do is compare risks. If the risk of your order system being brought down through hardware failure is higher (say 500), then you should probably balance the amount of resources you assign to mitigate these risks accordingly.
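The comparison above can be sketched in a few lines of code. All the figures are the hypothetical ones from the example, not real estimates:

```python
# Sketch of the risk comparison above; figures are the hypothetical
# ones from the text, not real estimates.

def risk(likelihood: float, impact: float) -> float:
    """Risk = likelihood x impact, over a fixed period (here, one year)."""
    return likelihood * impact

# Impact of the order system being down for a week:
impact = 15_000 + 30_000 + 10_000  # lost orders + reputation + fix = £55,000

attack_risk = risk(0.005, impact)  # malicious attack: 0.5% annual likelihood
hardware_risk = 500                # hardware failure, as given above

print(attack_risk)  # 275.0
# The units don't matter; only the comparison does:
if hardware_risk > attack_risk:
    print("put more resources into mitigating hardware failure")
```

Note that the hardware-failure risk of 500 is taken directly as a risk value, as in the text; over the same year and impact, it would imply a likelihood of roughly 0.9%.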
Time, reputation, trust and risk
What I’m interested in, however, is a set of rather more complicated risks: those associated with human behaviour. I’m very interested in trust, and one of the interesting things about trust is how we decide to trust people. One way is by their reputation: if someone keeps behaving well over a long period, then we tend to trust them more – or if badly, then to trust them less. If we trust someone more, our calculation of risk will be strongly based on that trust, because our view of the likelihood of behaviour at odds with that person’s reputation is informed by it.
This makes sense: in the absence of perfect information about humans, their motivations and intentions, our view of risk must be based on something, and reputation is actually a fairly good measure for that. We might say that the likelihood of a customer defaulting on payment terms reduces year by year as we start to think of them as a “trusted customer”. As the likelihood reduces, we may decide to increase the amount we lend to them – and thereby the impact of defaulting – to keep the risk about the same, year on year.
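The “keep the risk about the same” adjustment can be made concrete by inverting the formula: for a fixed target risk, the tolerable impact (here, the credit limit) is the target risk divided by the likelihood. The numbers below are invented purely for illustration:

```python
# Invented illustration: as our estimate of a customer's default
# likelihood falls year on year, we can raise their credit limit
# (the impact of a default) while holding the risk constant.

def credit_limit(target_risk: float, default_likelihood: float) -> float:
    """Invert risk = likelihood x impact to get the tolerable impact."""
    return target_risk / default_likelihood

TARGET_RISK = 100  # arbitrary units; only comparisons matter

for year, likelihood in enumerate([0.02, 0.01, 0.005], start=1):
    print(f"year {year}: limit {credit_limit(TARGET_RISK, likelihood):,.0f}")
# year 1: limit 5,000
# year 2: limit 10,000
# year 3: limit 20,000
```

Halving the assumed likelihood of default doubles the credit we can extend at the same risk – which is exactly the year-on-year pattern described above.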
The risk here is what is sometimes called “playing the long game”. Humans sometimes manipulate their reputation, or build one up, in order to perform an action once they have gained trust. Online sellers may make lots of “good” sales in order to get a 5-star rating over time, only to wait and then make a set of “bad” sales, where they don’t ship goods at all, and simply pocket the money. Or they may make many small sales in order to build up a good reputation, and then use that reputation to make one big sale which they have no intention of fulfilling. Online selling sites are wise to some of these tricks, and have algorithms to try to protect buyers (in fact, the same behaviour can be used by buyers against sellers in some cases), but these are not perfect.
I’d like to come back to the London Bridge attack. In this case, it seems likely that the attacker bided his time over many years, behaving well, and raising his “reputation” among those who knew him – the prison staff, parole board, rehabilitation conference organisers, etc. – so that he had the opportunity to perform one major action at odds with that reputation. The heroism of those around him stopped him being as successful as he may have hoped, but still at the cost of two innocent lives and several serious injuries.
There is no easy way to deal with such issues. We need reputation, and we need to allow people to show that they have changed and can be integrated into society, but when we make risk calculations based on reputation in any sphere, we should take care to consider whether actors are playing a long game, and what the possible ramifications would be if they were to act at odds with that reputation.
I noted above that humans are bad at calculating risk, and to follow our example of the non-defaulting customer, one mistake would be to increase the credit we give to that customer beyond what the increase in reputation justifies: actually accepting a higher risk than we would have done previously, because we consider them trustworthy. If we do this, we’ve ceased to use the risk formula, and have started to act irrationally. Don’t do that.
1 – OK, then: “us”.
2 – I’m writing this in the lead up to a UK General Election, and it occurs to me that we actually don’t apply this to most of our politicians.