Why Chatham House Rulez for security

Security sometimes requires sharing – but not attribution

In June 1927, someone had a brilliant idea.  Or, at least, that’s when the idea was first codified, at a meeting of the Royal Institute of International Affairs at Chatham House in London.  The idea was this: all attendees of the meeting could quote comments made at the meeting, but they weren’t allowed to say who had made the comment.

This became known as the Chatham House Rule, and the most recent incarnation is defined thus:

When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.

This is brilliantly clever.  It allows at least two things:

  1. for the sharing of information which might be sensitive to a particular entity when associated with that entity, but which is still useful when applied without that attribution;
  2. for the sharing of views or opinions which, when associated with a particular person or organisation, might cause wider issues or problems.

The upshot of this is that if somebody (say, Person A) values the expertise, opinion and experience of another person (say, Person B), then they can share that other person’s views with people who may not know Person B, or whose views on Person B may be biased by their background or associations.  This is a form of transitive trust, and situations where transitive trust is made explicit are, in my opinion, to be lauded (such trust relationships are too often implicit, rather than explicit).

The Chatham House Rule and open source

What has this got to do with open source, though?  My answer is: a lot.

Security is one of those areas which can have an interesting relationship with open source.  I’m passionately devoted to the principle that openness is vital to security, but there are times when this is difficult.  Two areas in particular stand out: the first is to do with data, and the second is to do with perceived expertise.

Why data is difficult

While we all (hopefully) want to ensure that all our security-related code is open source, the same cannot be said for data.  There is absolutely a place for open data – citizen-related data is the most obvious, e.g. bus timetables, town planning information – and there’s data that we’d like to be more open, but not if it can be traced to particular entities – aggregated health information is great, but people aren’t happy about their personal health records being exposed.  The same goes for financial data – aggregated information about people’s spending and saving habits is extremely useful, but I, for one, don’t want my bank records revealed to all and sundry.

Moving specifically to security, what about data such as the number of cyber-attacks – successful and unsuccessful – against companies?  The types of attack that were most successful?  The techniques that were used to mitigate them?  All of these are vastly useful to the wider community, and there’s a need to share them more widely.  We’re seeing some initiatives to allow this already, and aggregation of this data is really important.

There comes a time, however, when particular examples are needed.  And as soon as you have somebody stand up and say “This is what happened to us”, then they’re likely to be in trouble from a number of directions, which may include: their own organisation, their lawyers, their board, their customers and future attackers, who can use that information to their advantage.  This is where the Chatham House Rule can help: it allows experts to give their view and be listened to without so much danger from the parties listed above.

It also allows for other people to say “we hadn’t thought of that”, or “we’re not ready for that” or similar without putting their organisations – or their reputations – on the line.  Open source needs this, and there are times when those involved in open source security, in particular, need to be able to share the information they know in a way which doesn’t put their organisations in danger.

Why expertise is difficult

Another area of difficulty is expertise, or more specifically, trust in expertise.  Most organisations aim for a meritocratic approach – or say they do – at least within that organisation.  But the world is full of bias, particularly between organisations.  I may be biased against views held or expressed by a particular organisation, just because of its past history and my interactions with that company, but it is quite possible that there are views held and expressed by individuals from that company which, if separated from their attribution, I might take seriously.  I may be biased against a particular person, based on my previous interactions with him/her, or just on my underlying prejudices.  It only takes one person who does not hold my biases – and who personally trusts the organisation, or even just the person, expressing those views – to represent them to me in a way that allows me to process and value them myself, gaining valuable insight from them.  The Chatham House Rule can allow that to happen.

In fact, the same goes for intra-organisation biases: maybe product management isn’t interested in the views of marketing, but what if there are important things to learn from within that department, that product management can’t hear because of that bias?  The Chatham House Rule allows an opportunity to get past that.

To return to open source, many contributors are employed by a particular organisation, and it can be very difficult for them to express opinions around open source when that organisation may not hold the same views, however carefully they try to separate themselves from the official line.  Even more importantly, in terms of security, it may very well be that they can bring insights which are relevant to a particular security issue which their company is not happy about being publicly known, but which could benefit one or more open source projects.  To be clear: I’m not talking, here, about exposing information which is specifically confidential, but about sharing information with the permission of the organisation, within specific constraints.

More on open source

There are all sorts of biases within society, and open source is, alas, not without its own.  When a group of people gets to know each other well, however, it is often the case that members of that group can forge a respect for each other which goes beyond gender, age, academic expertise, sexuality, race or the like.  This is a perfect opportunity for meetings under the Chatham House Rule: it gives this group the chance to discuss and form opinions which can be represented to their peers – or the rest of the world – without having to worry so much about any prejudices or biases that might be aimed at particular members.

Finally – a note of caution

The Chatham House Rule provides a great opportunity to share expertise and knowledge, but there is also a danger that it can allow undue weight to be given to anecdotes.  Stories are a great way of imparting information, but without data to back them up, they are not as trustworthy as they might be.  The fact that the Chatham House Rule inhibits external attribution does not mean that due diligence should not be applied within such a meeting to ensure that information is backed up by data.

Are my messages safe? No, but…

“Are any of these messaging services secure?”

Today brought another story about the insecurity of a messenger app, and by a brilliant coincidence, I’m listening to E.L.O.’s “Secret Messages” as I start to compose this post. This article isn’t, however, about my closet 70s musical tastes[1], but about the messages you send from your mobile phone, tablet or computer to friends, families and colleagues, and how secure they are.

There are loads of options out there for messaging services, with some of the better-known including WhatsApp, Facebook Messenger, Google Chat, Signal and Telegram. Then there’s good old SMS. First question first: do I use any of these myself? Absolutely. I also indulge in Facebook, LinkedIn and Twitter. Do I trust these services? Let’s get back to this question later.

A more pressing question might be: “are any of these messaging services secure?” It turns out that this is a really simple question to answer: of course they’re not. No service is “secure”: it’s a key principle of IT security that there is no “secure”. This may sound like a glib – and frankly unhelpful – answer, but it’s not supposed to be. Once you accept that there is no perfectly secure system, you’re forced to consider what you are trying to achieve, and what risks you’re willing to take. This is a recurring theme of this blog, so regular readers shouldn’t be surprised.

Most of the popular messaging services can be thought of as consisting of at least seven components. Let’s assume that Alice is sending a message from her phone to Bob’s phone. Here’s what the various components might look like:

  1. Alice’s messenger app
  2. Alice’s phone
  3. Communications channel Alice -> server
  4. Server
  5. Communications channel server -> Bob
  6. Bob’s phone
  7. Bob’s messenger app

Each of these is a possible attack surface: combined, they make up the attack surface for what we can think of as the Alice <-> Bob messaging system.
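To make that decomposition a little more concrete, here’s a minimal sketch (in Python, with entirely illustrative component and owner names – these are my assumptions, not a description of any particular service) of how you might model the seven components and walk the combined attack surface:

```python
from dataclasses import dataclass

@dataclass
class Component:
    """One element of the Alice <-> Bob messaging path."""
    name: str
    secured_by: str  # who is primarily responsible for its security (assumed)

# The seven components described above, in path order.
ATTACK_SURFACE = [
    Component("Alice's messenger app", "app vendor / Alice"),
    Component("Alice's phone", "Alice / handset vendor"),
    Component("channel: Alice -> server", "app vendor / network operator"),
    Component("server", "service provider"),
    Component("channel: server -> Bob", "app vendor / network operator"),
    Component("Bob's phone", "Bob / handset vendor"),
    Component("Bob's messenger app", "app vendor / Bob"),
]

# The combined attack surface is the union of all of these: a compromise of
# any single component may be enough to compromise the message.
for component in ATTACK_SURFACE:
    print(f"{component.name:28} secured (or not) by: {component.secured_by}")
```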

Let’s start in the middle, with the server. For Alice and Bob to be happy with the security of the system for their purposes, they must be happy that this server is sufficiently secure to cope with whatever risks they need to address. So, it may be that they trust that the server (which will be run, ultimately, by fallible and possibly subornable humans who also are subject to legal jurisdiction(s)) is not vulnerable. Not vulnerable to whom? Hacktivists? Criminal gangs? Commercial competitors? State actors? Legal warrants from the server’s jurisdiction? Legal warrants from Alice or Bob’s jurisdiction(s)? The likelihood of successful defence against each of these varies, and the risk posed to Alice and Bob by each is also different, and needs to be assessed, even if that assessment is “we can ignore this”.

Each of the other components is subject to similar questions. For the communication channels, we will assume that they’re encrypted, but we have to be sure that the cryptography and cryptographic protocols have been correctly implemented, and that all keys are appropriately protected by all parties. The messaging apps must be up to date, well designed and well implemented. Obviously, if they’re open source, you have a much, much better chance of being sure of the security of both apps (never, ever use cryptography or protocols which have not been open sourced and peer reviewed: just don’t). The phones on which the software is running must also be uncompromised – not to mention protected by Alice and Bob from physical tampering and delivered new to them from the manufacturer with no vulnerabilities[2].

How sure are Alice and Bob of all of the answers to all of these questions? The answer, I would submit, is pretty much always going to be “not completely”. Does this mean that Alice and Bob should not use messaging services? Not necessarily. But it does mean that they should consider what messages they should exchange via any particular messaging service. They might be happy to arrange a surprise birthday party for a colleague, but not to exchange financial details of a business deal. They might be happy to schedule a trip to visit a Non-Governmental Organisation to discuss human rights, but not to talk about specific cases over the messaging service.

This is the view that I take: I consider what information I’m happy to transfer over or store on messaging services and social media platforms. There are occasions where I may be happy to pass sensitive data across messaging services, but break the data up between different services (using “different channels” in the relevant parlance): using one service for a username and another for the associated password, for instance. I still need to be careful about shared components: the two phones in the example above might qualify, but I’ve reduced the shared attack surface, and therefore the risk. I’m actually more likely to require that the password is exchanged over a phone call, and if I’m feeling particularly paranoid, I’ll use a different phone to receive that call.
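The paragraph above describes the simplest form of channel separation – username over one service, password over another. A slightly stronger variant of the same idea (my illustration, not something the services themselves offer) is to split a single secret into two shares, for example with XOR, so that neither channel carries anything useful on its own. A minimal sketch:

```python
import secrets

def split_secret(secret: bytes) -> tuple[bytes, bytes]:
    """Split a secret into two shares; both are needed to reconstruct it."""
    share_a = secrets.token_bytes(len(secret))                # random pad
    share_b = bytes(x ^ y for x, y in zip(share_a, secret))   # pad XOR secret
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    """Recombine the two shares to recover the original secret."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))

password = b"correct horse battery staple"
share_a, share_b = split_secret(password)

# Send share_a over one channel (say, a messaging service) and share_b over
# another (a different service, or a phone call): an attacker who compromises
# only one channel learns nothing about the password.
assert combine(share_a, share_b) == password
```

The shared components – the two phones in the example above – remain the weak point, which is exactly why receiving the second share on a different device reduces the risk further.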

My advice, therefore, is this:

  1. Keep your devices and apps up to date;
  2. Evaluate the security of your various messaging service options;
  3. Consider the types of information that you’ll be transferring and/or storing;
  4. Think about the risks you’re willing to accept;
  5. Select the appropriate option on a case-by-case basis;
  6. Consider using separate channels where particularly sensitive data can be split for added security.

1 – I’m also partial to 1920s jazz and a bit of bluegrass, as it happens.

2 – yeah, right.

Announcing Enarx

If I’ve managed the process properly, this article should be posting at almost exactly the time that we show a demo at Red Hat Summit 2019 in Boston.  That demo, to be delivered by my colleague Nathaniel McCallum, will be of an early incarnation of Enarx, a project that a few of us at Red Hat have been working on for a few months now, and which we’re ready to start announcing to the world.  We have code, we have a demo, we have a github repository, we have a logo: what more could a project want?  Well, people – but we’ll get to that.

What’s the problem?

When you run software (a “workload”) on a system (a “host”) on the cloud or on your own premises, there are lots and lots of layers.  You often don’t see those layers, but they’re there.  Here’s an example of the layers that you might see in a standard cloud virtualisation architecture.  The different colours represent different entities that “own”  different layers or sets of layers.

[Figure: classic cloud virtualisation architecture]

Here’s a similar diagram depicting a standard cloud container architecture.  As before, each different colour represents a different “owner” of a layer or set of layers.

[Figure: cloud container architecture]

These owners may be of very different types, from hardware vendors to OEMs to Cloud Service Providers (CSPs) to middleware vendors to Operating System vendors to application vendors to you, the workload owner.  And for each workload that you run, on each host, the exact list of layers is likely to be different.  And even when they’re the same, the versions of the layer instances may be different, whether it’s a different BIOS version, a different bootloader, a different kernel version or whatever else.

Now, in many contexts, you might not worry about this and your Cloud Service Provider goes out of its way to abstract these layers and their version details away from you.  But this is a security blog, for security people, and that means that anybody who’s reading this probably does care.

The reason we care is not just the different versions and the different layers, but the number of different things – and different entities – that we need to trust if we’re going to be happy running any sort of sensitive workload on these types of stacks.  I need to trust every single layer, and the owner of every single layer, not only to do what they say they will do, but also not to be compromised.  This is a big stretch when it comes to running my sensitive workloads.
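To put a rough number on that, here’s a small sketch – the layer names and owner pairings are invented for illustration, not a description of any particular provider’s stack – that counts how many distinct layers and owners a workload owner ends up trusting in a classic virtualisation architecture:

```python
# Illustrative only: these layer/owner pairings are assumptions, not a
# description of any specific cloud stack.
classic_virt_stack = [
    ("CPU and firmware",      "hardware vendor"),
    ("BIOS / bootloader",     "OEM"),
    ("hypervisor",            "cloud service provider (CSP)"),
    ("host kernel",           "operating system vendor"),
    ("guest kernel",          "operating system vendor"),
    ("middleware / runtime",  "middleware vendor"),
    ("application",           "workload owner (you)"),
]

# Today, running a sensitive workload means implicitly trusting every layer
# *and* every owner of every layer - not to mention every version of each.
owners = {owner for _layer, owner in classic_virt_stack}
print(f"layers to trust: {len(classic_virt_stack)}")
print(f"owners to trust: {len(owners)}")
for owner in sorted(owners):
    print(f"  - {owner}")
```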

What’s Enarx?

Enarx is a project which is trying to address this problem of having to trust all of those layers.  We made the decision that we wanted to allow people running workloads to be able to reduce the number of layers – and owners – that they need to trust to the absolute minimum.  We plan to use Trusted Execution Environments (“TEEs” – see Oh, how I love my TEE (or do I?)), and to provide an architecture that looks a little more like this:

[Figure: reduced-layer architecture]

In a world like this, you have to trust the CPU and firmware, and you need to trust some middleware – of which Enarx is part – but you don’t need to trust all of the other layers, because we will leverage the capabilities of the TEE to ensure the integrity and confidentiality of your application.  The Enarx project will provide attestation of the TEE, so that you know you’re running on a true and trusted TEE, and will provide open source, auditable code to help you trust the layer directly beneath your application.
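For a flavour of what “attestation of the TEE” means in practice, here’s a deliberately simplified sketch of the general pattern – to be clear, the names and report fields below are hypothetical, illustrating the flow rather than the Enarx API:

```python
# Hypothetical names throughout: this illustrates the general TEE attestation
# pattern (ask the TEE for a signed report, verify it, only then release the
# workload), not Enarx's actual interfaces.

TRUSTED_VENDOR_KEYS = {"cpu-vendor-root-key"}    # assumed trust anchors
EXPECTED_MEASUREMENT = "sha256:0123abcd..."      # assumed known-good value

def verify_report(report: dict) -> bool:
    """Check that the attestation report is signed by a key we trust and
    that the measured TEE contents match what we expect to be loaded."""
    return (report.get("signing_key") in TRUSTED_VENDOR_KEYS
            and report.get("measurement") == EXPECTED_MEASUREMENT)

def should_deploy(report: dict) -> bool:
    """Only release the (encrypted) workload to a host whose TEE attests
    correctly, so it is only ever decrypted inside an attested TEE."""
    return verify_report(report)

good_report = {"signing_key": "cpu-vendor-root-key",
               "measurement": "sha256:0123abcd..."}
bad_report = {"signing_key": "unknown-key",
              "measurement": "sha256:0123abcd..."}

assert should_deploy(good_report)
assert not should_deploy(bad_report)
```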

The initial code is out there – working on AMD’s SEV TEE at the moment – and enough of it works now that we’re ready to tell you about it.

Making sure that your application meets your own security requirements is down to you.  🙂

How do I find out more?

Easiest is to visit the Enarx github: https://github.com/enarx.

We’ll be adding more information there – it’s currently just code – but bear with us: there are only a few of us on the project at the moment. A blog is on the list of things we’d like to have, but I thought I’d start here for now.

We’d love to have people in the community getting involved in the project.  It’s currently quite low-level, and requires quite a lot of knowledge to get running, but we’ll work on that.  You will need some specific hardware to make it work, of course.  Oh, and if you’re an early-boot or low-level KVM hacker, we’re particularly interested in hearing from you.

I will, of course, respond to comments on this article.

 

Security at Red Hat Summit

And a little teaser on my session…

I don’t often talk about my job specifically, but I’m very proud to be employed by Red Hat, working as Chief Security Architect, a role based in the Office of the CTO[1], and sometimes it’s the right time to talk about job-related stuff.  Next week is our annual Summit, and this year it’s in Boston[2], starting on Tuesday, 2019-05-07.  If you’re coming – great!  If you’re thinking about coming – please do!  And if you’re not able to come, then rest assured that many of the sessions will be recorded so that you can watch them in the future[3].

There is going to be a lot going on at Summit this year: including, I suspect, some big announcements[4].  There will also be lots of hands-on sessions, which are always extremely popular, and a number of excellent sessions and other activities around Diversity and Inclusion, a topic about which I’m extremely passionate.  As always, though, security is a big topic at Summit, and there are 50 security topic sessions listed in the agenda[5] (here’s the session catalog[ue]):

  • 26 breakout sessions
  • 11 instructor-led labs
  • 7 mini-sessions
  • 4 birds-of-a-feather sessions (“BOFs”)
  • 2 theatre sessions

These include sessions by partners and customers, as well as by Red Hatters themselves.

Many of my colleagues in OCTO will be presenting sessions in the “Emerging Technology” track, as will I.  My session is entitled “Security: Emerging technologies and open source”, and on Tuesday, at 1545 (3.45pm) I’ll be co-presenting it with my (non-OCTO) colleague Nathaniel McCallum.  The abstract is this:

What are some of the key emerging security technologies, and what impact will they have on the open source world? And what impact could open source have on them?

In this session, we’ll look at a handful of up-and-coming hardware and software technologies—from trusted execution environments to multi-party computation—and discuss the strategic impact we can expect them to have on our world. While individual technologies will be discussed (and you can expect a sneak peek demo of one of them), the focus of this session is not a deep-dive on any of them, but rather an architectural, strategic, and business view.

I’m trying to ensure that when I talk about all of these cool technologies, I talk about why open source is important to them, and/or why they are important to open source.

Here’s the particularly exciting bit, though: what’s not clear from the abstract – as it’s a late addition – is that Nathaniel plans to present a demo.  I can’t go into details at the moment, partly because we’re keeping it as a surprise, and partly because exactly what is demoed will depend on what Nathaniel’s frantic coding manages to achieve before Tuesday afternoon.  It’s one of the early results from a project we’re running, and I can tell you: a) that it involves TEEs (trusted execution environments); and b) that it’s really exciting.  I’m hoping that we can soon make more of a noise about it, and our Summit session is the start of that.

I’m hoping that the description above will be enough to convince you to attend Summit, but in case it isn’t, bear in mind the following:

  1. there will be keynotes from Jim Whitehurst (Red Hat CEO), Satya Nadella (Microsoft CEO) and Ginni Rometty (IBM CEO)
  2. the Summit party will feature Neon Trees[6].

There are lots of other great reasons to come as well, and if you do, please track me down and say hello: it’s always great to meet readers of this blog.  See you in Boston next week!


1 – “OCTO” – which, I guess, makes me one of the Octonauts.

2 – the picture at the top of this article is of Fenway Park, a place in Boston where they play baseball, which is like cricket, only quicker.  And you’re allowed to chuck the ball.

3 – in case, for any crazy reason, you’d like to see me speaking at last year’s Summit, here’s a link to the session: Getting strategic about security

4 – this should not be interpreted as a “forward-looking statement”, as I’m not privy to any particular definite decisions as to any such announcements.  Sorry – legal stuff…

5 – I’m indebted to my colleague Lucy Kerner, who’s organised and documented many of the security pieces, and from whom I have stolen copied gratefully reused much of the information in this article.

6 – I’ve only just clocked this, and my elder daughter is going to be very, very jealous when she gets back from school to discover this information.

Trust you? I can’t trust myself.

Cognitive biases are everywhere.

William Gibson’s book Virtual Light includes a bar which goes by the name of “Cognitive Dissidents”.  I noticed this last night when I was reading in bed, and it seemed apposite, because I wanted to write about cognitive bias, and the fact that I’d noticed it so strikingly was, I realised, an example of exactly that: in this case, The Frequency Illusion, or The Baader-Meinhof Effect.  Cognitive biases are everywhere, and there are far, far more of them than you might expect.

The problem is that we think of ourselves as rational beings, and it’s quite clear from decades – in some cases, centuries – of research that we’re anything but.  We’re very likely to tell ourselves that we’re rational, and it’s such a common fallacy that The Illusion of Validity (another cognitive bias) will help us believe it.  Cognitive biases are, according to Wikipedia, “systematic patterns of deviation from norm or rationality in judgment” or put maybe more simply, “our brains managing to think things which seem sensible, but aren’t.”[1]

The Wikipedia entry above gives lots of examples of cognitive bias – lots and lots of examples – and I’m far from being an expert in the field.  The more I think about risk and how we consider risk, however, the more I’m convinced that we – security professionals and those with whom we work – need to have a better understanding of our own cognitive biases and those of the people around us.  We like to believe that we make decisions and recommendations rationally, but it’s clear from the study of cognitive bias that:

  1. we generally don’t; and
  2. even if we do, we shouldn’t expect those to whom we present them to consider them entirely rationally.

I should be clear, before we continue, that there are opportunities for abuse here.  There are techniques beloved of advertisers and the media to manipulate our thinking to their ends, which we could use to our advantage to try to manipulate others.  One example is the Framing Effect.  If you want your management not to fund a new anti-virus product because you have other ideas for the earmarked funding, you might say:

  • “Our current product is 80% effective!”

Whereas if you do want them to fund it, you might say:

  • “Our current product is 20% ineffective!”

People react in different ways, depending on how the same information is presented, and the way the two statements above are framed aims to manipulate your listeners towards the outcome you want.  So don’t do this – and, more importantly, look out for vendors[2] who are doing this, and call them out on it.  Here, then, are three of the more obvious cognitive biases that you may come across:

  • Irrational escalation or Sunk cost fallacy – this is the tendency for people to keep throwing money or resources at a project, vendor or product when it’s clear that it’s no longer worth it, with the rationale that to stop spending money (or resources) now would waste what has already been spent – when it’s actually already gone.  This often comes over as misplaced pride, or people just not wanting to let go of a pet project because they’ve become attached to it, but it’s really dangerous for security, because if something clearly isn’t effective, we should be throwing it out, not sending good money after bad.
  • Normalcy bias – this is the refusal to address a risk because it’s never happened before, and is an interesting one in security, for the simple reason that so many products and vendors are trying to make us do exactly that.  What needs to happen here is that a good risk analysis needs to be performed, and then measures put in place to deal with those which are actually high priority, not those which may not happen, or which don’t seem likely at first glance.
  • Observer-expectancy effect – this is when people who are looking for particular results find it, because they have (consciously or unconsciously) misused the data.  This is common in situations such as those where there is a belief that a particular attack or threat is likely, and the data available (log files, for instance) are used in a way which confirms this expectation, rather than analysed and presented in ways which are more neutral.

I intend to address more specific cognitive biases in future articles, tying them even more closely to security concerns – if you have any particular examples or war stories, I’d love to hear them.


1 – my words

2 – or, I suppose, underhand colleagues…

AI, blockchain and security – huh? Ahh!

There’s a place for the security community to be more involved in AI and blockchain.

Next week, I’m flying to the US on Tuesday, heading back on Thursday evening, arriving back in Heathrow on Friday morning, and heading (hopefully after a shower) to Cyber Security & Cloud Expo Global to present a session and then, later in the afternoon, to participate in a panel.  One of the reasons that I’m bothering to post this here is that I have a feeling that I’m going to be quite tired by the end of Friday, and it’s probably a good idea to get at least a few thoughts in order for the panel session up front[1].  This is particularly the case as there’s some holiday happening between now and then, and if I’m lucky, I’ll be able to forget about everything work-related in the meantime.

The panel has an interesting title.  It’s “How artificial intelligence and blockchain are the battlegrounds for the next security wars”.  You could say that this is a buzzwordy attempt to shoehorn security into a session about two unrelated topics, but the more I think about it, the more I’m glad that the organisers have put this together.  It seems to me that although AI and blockchain are huge topics, have their own conference circuits[2] and garner huge amounts of interest from the press, there’s a danger that people whose main job and professional focus is security won’t be the most engaged in this debate.  To turn that around a bit, what I mean is that although there are people within the AI and blockchain communities considering security, I think that there’s a place for people within the security community to be more involved in considering AI and blockchain.

AI and security

“What?” you may say.  “Have you not noticed all of the AI-enabled products being sold by vendors in the security community?”

To which I reply, “pfft.”

It would be rude[3] to suggest that none of those security products have any “Artificial Intelligence” actually anywhere near them.  However, it seems to me (and I’m not alone) that most of those products are actually employing much more basic algorithms which are more accurately portrayed as “Machine Learning”.  And that’s if we’re being generous.

More importantly, however, I think that what really needs discussing here is security in a world where AI (or, OK, ML) is the key element of a product or service, or in a world where AI/ML is a defining feature of how we live at least part of our lives.  In other words, what’s the impact of security when we have self-driving cars, AI-led hiring practices and fewer medical professionals performing examinations and diagnoses?  What does “the next battleground” mean in this context?  Are we fighting to keep these systems from overstepping their mark, or fighting to stop malicious actors from compromising or suborning them to their ends?  I don’t know, and that’s part of why I’m looking forward to my involvement on the panel.

Blockchain and security

Blockchain is the other piece, of course, and there are lots of areas for us to consider here, too.  Three that spring to my mind are:

Whether you believe that blockchain(s) is(are) going to take over the world or not, it’s a vastly compelling topic, and I think it’s important for it to be discussed outside its “hype bubble” in the context of a security conference.

Conclusion

While I’m disappointed that I’m not going to be able to see some of the other interesting folks speaking at different times of the conference, I’m rather looking forward to the day that I will be there.  I’m also somewhat relieved to have been able to use this article to consider some of the points that I suspect – and hope! – will be brought up in the panel.  As always, I welcome comments and thoughts around areas that I might not have considered (or just missed out).  And last, I’d be very happy to meet you at the conference if you’re attending.


1 – I’ve already created the slides for the talk, I’m pleased to say.

2 – my, do they have their own conference circuits…

3 – not to mention possibly actionable.  And possibly even incorrect.

3 tips to avoid security distracti… Look: squirrel!

Executive fashions change – and not just whether shoulder-pads are “in”.

There are many security issues to worry about as an organisation or business[1].  Let’s list some of them:

  • insider threats
  • employee incompetence
  • hacktivists
  • unpatched systems
  • patched systems that you didn’t test properly
  • zero-day attacks
  • state actor attacks
  • code quality
  • test quality
  • operations quality
  • underlogging
  • overlogging
  • employee-owned devices
  • malware
  • advanced persistent threats
  • data leakage
  • official wifi points
  • unofficial wifi points
  • approved external access to internal systems via VPN
  • unapproved external access to internal systems via VPN
  • unapproved external access to internal systems via backdoors
  • junior employees not following IT-mandated rules
  • executives not following IT-mandated rules

I could go on: it’s very, very easy to find lots of things that should concern us.  And it’s particularly confusing if you just go around finding lots of unconnected things which are entirely unrelated to each other and aren’t even of the same type[2]. I mean: why list “code quality” in the same list as “executives not following IT-mandated rules”?  How are you supposed to address issues which are so diverse?

And here, of course, is the problem: this is what organisations and businesses do have to address.  All of these issues may present real risks to the functioning (or at least continued profitability) of the organisations[3].  What are you supposed to do?  How are you supposed to keep track of all these things?

The first answer that I want to give is “don’t get distracted”, but that’s actually the final piece of advice, because it doesn’t really work unless you’ve already done some work up front.  So what are my actual answers?

1 – Perform risk analysis

You’re never going to be able to give your entire attention to everything, all the time: that’s not how life works.  Nor are you likely to have sufficient resources to be happy that everything has been made as secure as you would like[4].  So where do you focus your attention and apply those precious, scarce resources?  The answer is that you need to consider what poses the most risk to your organisation.  The classic way to do this is to use the following formula:

Risk = Likelihood x Impact

This looks really simple, but sadly it’s not, and there are entire books and companies dedicated to the topic.  Impact may be to reputation, to physical infrastructure, to system up-time, to employee morale, or to one of hundreds of other items.  The difficulty of assessing the likelihood may range from simple (“the failure rate on this component is once every 20 years, and we have 500 of them”[5]) to extremely difficult (“what’s the likelihood of our CFO clicking on a phishing attack?”[6]).  Once it’s complete, however, for all the various parts of the business you can think of – and get other people from different departments in to help, as they’ll think of different risks, I can 100% guarantee – then you have an idea of what needs the most attention.  (For now: because you need to repeat this exercise on a regular basis, considering changes to risk, to your business and to the threats themselves.)
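As a sketch of what that prioritisation might look like once you have even rough numbers – and the figures below are invented purely for illustration – the arithmetic itself is trivial; the hard part is agreeing the inputs:

```python
# Invented figures, purely to illustrate Risk = Likelihood x Impact.
# likelihood: rough probability of occurrence per year; impact: cost in £k.
risks = {
    "unpatched internet-facing system exploited": (0.30,  500),
    "senior executive falls for phishing":        (0.20,  800),
    "unofficial wifi point on internal network":  (0.10,  200),
    "data centre flood":                          (0.01, 2000),
}

ranked = sorted(
    ((likelihood * impact, name) for name, (likelihood, impact) in risks.items()),
    reverse=True,
)

for score, name in ranked:
    print(f"risk score {score:6.1f}  {name}")
# The highest scores are where attention and resources go first - and the
# exercise needs repeating regularly, as likelihoods, impacts and the
# business itself all change.
```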

2 – Identify and apply measures

You have a list of risks.  What to do?  Well, a group of people – and this is important, as one person won’t have a good enough view of everything – needs to sit[7] down and work out what measures to put in place to try to reduce or at least mitigate the various risks.  The amount of resources that the organisation should be willing to apply to this will vary from risk to risk, and should generally be proportional to the risk being addressed, but the measures won’t always be of the same kind.  This is another reason why having different people involved is important.  For example, one risk that you might be able to mitigate by spending £50,000 (around the same amount in US dollars) on a software solution might be equally well addressed by a physical barrier and a sign costing a few hundred pounds.  On the other hand, the business may decide that some risks should not be mitigated against directly, but rather insured against.  Others may require training regimes and investment in t-shirts.

Once you’ve identified what measures are appropriate, and how much they are going to cost, somebody’s going to need to find money to apply them.  Again, it may be that not all of the risks are mitigated: it may just be too expensive.  But the person who makes that decision should be someone senior – someone senior enough to take the flak should the risk come home to roost[8].

Then you apply your measures, and, wherever possible, you automate them and their reporting.  If something is triggered, or logged, you then know:

  1. that you need to pay attention, and maybe apply some more measures;
  2. that the measure was at least partially effective;
  3. that you should report to the business how good a job you – and all those involved – have done.
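A minimal sketch of what automating a measure and its reporting might look like – the measure, the threshold and the log format here are all assumptions for the sake of illustration:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("measures")

MAX_FAILED_LOGINS_PER_HOUR = 20     # assumed threshold, purely illustrative

def check_failed_logins(count_last_hour: int) -> bool:
    """One automated measure: flag an unusual number of failed logins, and
    log the result either way, so that reporting happens automatically."""
    triggered = count_last_hour > MAX_FAILED_LOGINS_PER_HOUR
    log.info("measure=failed-logins time=%s count=%d triggered=%s",
             datetime.now(timezone.utc).isoformat(), count_last_hour, triggered)
    return triggered

if check_failed_logins(count_last_hour=37):
    # A trigger tells you (1) to pay attention and perhaps add measures,
    # (2) that the measure is at least partially effective, and (3) gives
    # you something concrete to report back to the business.
    log.warning("failed-login measure triggered: investigate and report")
```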

3 – Don’t get distracted

My final point is where I’ve been trying to go with this article all along: don’t get distracted.  Distractions come in many flavours, but here are three of the most dangerous.

  1. A measure was triggered, and you start paying all of your attention to that measure, or the system(s) that it’s defending.  If you do this, you will miss all of the other attacks that are going on.  In fact, here’s your opportunity to look more broadly and work out whether there are risks that you’d not considered, and attacks that are coming in right now, masked by the one you have noticed.
  2. You assume that the most expensive measures are the ones that require the most attention, and ignore the others.  Remember: the amount of resources you should be ready to apply to each risk should be proportional to the risk, but the amount actually applied may not be.  Check that the barrier you installed still works and that the sign is still legible – and if not, then consider whether you need to spend that £50,000 on software after all.  Also remember that just because a risk is small, that doesn’t mean that it’s zero, or that the impact won’t be high if it does happen.
  3. Executive fashions change – and not just whether shoulder-pads are “in” or whether the key to the boardroom bathroom is now electronic: executives (like everybody else) are bombarded with information.  The latest concern that your C-levels read about in the business section, or hear about from their buddies on the golf course[9], may well require consideration, but you need to ensure that it’s considered in exactly the same way as all of the other risks that you addressed in the first step.  You need to be firm about this – both with the executive(s) and with yourself, because although I identified this as an executive risk, the same goes for the rest of us: humans generally find it much easier to focus on the new, shiny thing in front of them than on the familiar and the mundane.

Conclusion

You can’t know everything, and you probably won’t be able to cover everything, either, but having a good understanding of risk – and maintaining your focus in the event of distractions – means that at least you’ll be covering and managing what you do know about, and will be ready to address new risks as they arrive.


1 – let’s be honest: there are lots if you’re a private individual, too, but that’s for another day.

2 – I did this on purpose, annoying as it may be to some readers. Stick with it.

3 – not to mention the continued employment of those tasked with stopping these issues.

4 – note that I didn’t write “everything has been made secure”: there is no “secure”.

5 – and, to be picky, this isn’t as simple as it looks either: does that likelihood increase or decrease over time, for instance?

6 – did I say “extremely difficult”?  I meant to say…

7 – you can try standing, but you’re going to get tired: this is not a short process.

8 – now that, ladles and gentlespoons, is a nicely mixed metaphor, though I did stick with an aerial theme.

9 – this is a gross generalisation, I know: not all executives play golf.  Some of them play squash/racketball instead.