“Unlawful, void, and of no effect”

The news from the UK is amazing today: the Supreme Court has ruled that the Prime Minister has failed to “prorogue” Parliament – in other words, that the members of the House of Commons and the House of Lords are still in session. The words in the title come from the judgment that they have just handed down.

I’m travelling this week, and wasn’t expecting to write a post today, but this triggered a thought in me: what provisions are in place in your organisation to cope with abuses of power and possible illegal actions by managers and executives?

Having a whistle-blowing policy and an independent appeals process is vital. This is true for all employees, but it is particularly important to have specific rules in place for employees who are involved in such areas as compliance and implementations involving regulatory requirements. Robust procedures protect not only an employee who finds themself in a difficult position, but, in the long view, the organisation itself. They can also act as a deterrent to managers and executives considering actions which, in the absence of such procedures, might well go unreported.

Such procedures are not enough on their own – they fall into the category of “necessary, but not sufficient” – and a culture of ethical probity also needs to be encouraged. But without such a set of procedures, your organisation is at real risk.

How to be a no-shame generalist

There is no shame in being a generalist, and knowing when you need to consult a specialist.

There comes a time in any person’s life[1] when they realise that they’re not going to be able to do all the things they might like to do to a high level of expertise.  I used to kid myself that I could do anything if I tried hard enough and practised enough, but then I tried juggling.  It turns out that I’m never going to be able to juggle.  Not just juggle expertly.  I mean juggle at all.  My trying to juggle – with only one ball, let alone more than one – is so amusing that my family realised years ago that it was a great party trick.  “Daddy,” they’ll say, “show everyone your juggling.  It’s really funny.”  “But I can’t juggle,” I retort.  “Yes,” they respond, “that’s what’s funny[2].”

I’m also never going to be able to draw or do any art with any competence.

Or play any racquet sport with any level of skill.

Or do any gardening, painting or DIY-based household jobs with any degree of expertise[3].

Some people will retort that any old fool can be taught to do x activity (usually, it’s juggling, actually), but not only do I not believe this, but also, to be honest, there just isn’t enough time in the day to learn all the things I’d kind of like to try.

What has all this to do with security?

Specialism and education

Well, I’ve posted before that I’m a systems person, and the core of thinking about systems is that you need to look at the big picture.  In order to do that, you need to be a generalist.  There’s a phrase[5] in English: “Jack of all trades, master of none”, which is often used to condemn those who know a little about many things and are seen to dabble in them without a full understanding of any of them.  Interestingly, this version may be an abbreviation of the original, more positive:

Jack of all trades, master of none,
though oftentimes better than master of one.

The core inference, though, is that generalists aren’t as useful as specialists.  I don’t believe this.

In many educational systems, there’s a tendency to push students towards narrower and narrower fields of study.  For some, this is just what is needed, but for others – “systems people”, “synthesists” and “generalists” – this isn’t the best way to harness their talents, at least in the long term.  We need people who can see the big picture, who can take a wider view, and look beyond a single blocking issue to realise that the answer to a problem may not be a better implementation of an authentication library, but a change in the authorisation mechanism being used at the component level, for instance.

There are dangers to following this approach too far, however:

  1. it can lead to disparagement of specialists and their skills, even to a distrust of experts;
  2. it can lead to arrogance on the part of generalists.

We see the first in desperately concerning trends such as politicians thinking they know more than economists or climate scientists, anti-vaxxers ignoring the benefits of vaccination, and idiocy around chem-trails, flat-earth beliefs and moon landing conspiracies.  It happens in the world of work, as well, I’m sad to say.  There is a particular type of MBA recipient, for instance, who believes that the completion of the course and award of the degree confers on them some sort of superhuman ability to know what is best for all organisations in all circumstances[6].

Specialise first

To come back to the world of security, my recommendation is that even if you know that your skills and interests are leading you to a career as a generalist, then you need to become a specialist first, in at least one area.  You may not become an expert in that field, but you need to know it well.  Better still, strive for at least a level of competence in several fields – an ability to converse knowledgeably with true experts and to understand at least why they are making the choices and recommendations that they are.

And that leads us to the key point here: if you become a generalist, you need to acknowledge your lack of expertise: it must become your modus operandi, your métier, your way of working.  You need to recognise that your strength lies not in knowing many things, but in knowing what you don’t know, and when it is time to call in the specialists.

I’m not a cryptographer, but I know enough about cryptography to realise when it’s time to call in an expert.  I’m not an expert on legal issues around cryptography, either, but know when to call on a lawyer.  Nor am I an expert on block storage, blockchain consensus, quantum key exchange protocols, CPU scheduling or compression algorithms.  The same will go for many areas which I may be called on to touch as part of my job.  I hope to have enough training and expertise within related fields – or the ability to gain it – to be able to ask sensible questions, but sometimes even that won’t be true, and the best (and most productive) interaction will be to say “I don’t know about this: please explain it to me, or at least tell me what the options are.”  This seems to me to be particularly important for security folks: there are so many overlapping disciplines, and getting one piece wrong means that your defence in depth strategy just got a whole lot shallower.

Being too lazy to look things up, too arrogant to listen to others or too short-sighted to realise that there are areas in which we are not expert – these are things of which we should be ashamed.

But there is no shame in being a generalist, and knowing when you need to consult a specialist.


1 – I’m extrapolating horribly here, but it’s true for me so I’m assuming it’s a universal truth.

2 – apparently the look on my face, and the things I do with my tongue, are a sight to behold.

3 – I’m constantly trying to convince my wife of these, and although she’s sceptical about some, we’re now agreed that I shouldn’t be allowed access to any power tools again if we want to avoid further trips to the Accident and Emergency department at the hospital[4].

4 – it’s not only power tools.  I once nearly removed my foot with a wallpaper stripper.  I still have the scar nearly 25 years later.

5 – somewhat gendered, for which I apologise.

6 – disclaimer – I have an MBA, and met many talented and humble people on my course (and have met many since) who don’t suffer from this predicament.

My 7 rules for remote-work sanity

If I need to get out of my office, I’ll take the dog for a walk

I work remotely, and have done, on and off, for a good percentage of the past 10-15 years.  I’m lucky that I’m in a role where this suits my responsibilities, and in a company – Red Hat – that is set up for it.  Not all roles – those with many customer onsite meetings, or those with a major service component – are suited to remote working, of course, but it’s clear that an increasing number of organisations are considering having at least some of their workers doing so remotely.

I’ve carefully avoided using the phrase either “working from home” or “working at home” above.  I’ve seen discussion that the latter gives a better “vibe” for some reason, but it’s not accurate for many remote workers.  In fact, it doesn’t describe my role perfectly, either.  My role is remote, in that I have no company-provided “base” – with chair, desk, meeting rooms, phone, Internet access, etc. – but I don’t spend all of my time at home.  I spend maybe one and a half weeks a month, on average, travelling – to attend or speak at conferences, to have face-to-face (“F2F”) meetings, etc..  During these times, I’m generally expected to be contactable and to keep at least vaguely up-to-date on email – though the exact nature of the activities in which I’m engaged, and the urgency of the contacts and email, may increase or reduce my engagement.

Open source

One of the reasons that I can work remotely is that I work for a company that works with open source software.  I’m currently involved in a very exciting project called Enarx (which I first announced on this blog).  We have contributors in Europe and the US – and interest from further abroad.  Our stand-ups are all virtual, and we default to turning on video.  At least two of our regulars will participate from a treadmill, while I will typically stand at my desk.  We use github for all of our code (it’s all open source, of course), and there’s basically no reason for us to meet in person very often.  We try to celebrate together – agreeing to get cake, wherever we are, to mark special occasions, for instance – and have laptop stickers to brand ourselves and help team unity.  We have a shared chat and IRC channel, and spend a lot of time communicating via different channels.  We’re still quite a small team, but it works for now.  If you’re looking for more tips about how to manage, coordinate and work in remote teams, particularly around open source projects, you’ll find lots of information at the brilliant Opensource.com.

The environment

When I’m not travelling around the place, I’m based at home.  There, I have a commute – depending on weather conditions – of around 30-45 seconds, which is generally pretty bearable.  My office is separate from the rest of the house (set in the garden), and outfitted with an office chair, desk, laptop dock, monitor, webcam, phone, keyboard and printer: these are the obvious work-related items in the room.

Equally important, however, are the other accoutrements that make for a good working environment.  These will vary from person to person, but I also have:

  • a Sonos, attached to an amplifier and good speakers
  • a sofa, often occupied by my dog, and sometimes one of the cats
  • a bookshelf, where the books which aren’t littering the floor reside
  • tea-making facilities (I’m British – this is important)
  • a fridge, filled with milk (for the tea), beer and wine (don’t worry: I don’t drink these during work hours, and it’s more that the fridge is good for “overflow” from our main kitchen one)
  • wide-opening windows and blinds for the summer (we have no air-conditioning: I’m British, remember?)
  • underfloor heating and a wood-burning stove for the winter (the former to keep the room above freezing until I get the latter warmed up)
  • a “NUC” computer and monitor for activities that aren’t specifically work-related
  • a few spiders.

What you have will depend on your work style, but these “non-work-related” items are important (bar the spiders, possibly) to my comfort and work practice.  For instance, I often like to listen to music to help me concentrate; I often sit on the sofa with the dog/cats to read long documents; and without the fridge and tea-making facilities, I might as well be American[1].

My rules

How does it work, then?  Well, first of all, most of us like human contact from time to time.  Some remote workers will rent space in a shared work environment, and work there most of the time: they prefer an office environment, or don’t have a dedicated space for working at home.  Others will mainly work in coffee shops, or on their boat[2], or may spend half of the year in the office, and the other half working from a second home.  Whatever you do, finding something that works for you is important.  Here’s what I tend to do, and why:

  1. I try to have fairly rigid work hours – officially (and as advertised on our intranet for the information of colleagues), I work 10am-6pm UK time.  This gives me a good overlap with the US (where many of my colleagues are based), and time in the morning to go for a run or a cycle and/or to walk the dog (see below).  I don’t always manage these times, but when I flex in one direction, I attempt to pull some time back the other way, as otherwise I know that I’ll just work ridiculous hours.
  2. I ensure that I get up and have a cup of tea – in an office environment, I would typically be interrupted from time to time by conversations, invitations to get tea, physical meetings in meeting rooms, lunch trips, etc..  This doesn’t happen at home, so it’s important to keep moving, or you’ll be stuck at your desk for 3-4 hours at a time, frequently.  This isn’t good for your health, nor, often, for your productivity (and I enjoy drinking tea).
  3. I have an app which tells me when I’ve been inactive – this is new for me, but I like it.  If I’ve basically not moved for an hour, my watch (could be phone or laptop) tells me to do some exercise.  It even suggests something, but I’ll often ignore that, and get up for some tea, for instance[3].
  4. I use my standing desk’s up/down capability – I try to vary my position through the day from standing to sitting and back again.  It’s good for posture, and keeps me more alert.
  5. I walk the dog – if I just need to get out of my office and do some deep thinking (or just escape a particularly painful email thread!), I’ll take the dog for a walk.  Even if I’m not thinking about work for all of the time, I know that it’ll make me more productive, and if it’s a longish walk, I’ll make sure that I compensate with extra time spent working (which is always easy).
  6. I have family rules – the family knows that when I’m in my office, I’m at work.  They can message me on my phone (which I may ignore), or may come to the window to see if I’m available, but if I’m not, I’m not.  Emergencies (lack of milk for tea, for example) can be negotiated on a case-by-case basis.
  7. I go for tea (and usually cake) at a cafe – sometimes, I need to get into a different environment, and have a chat with actual people.  For me, popping into the car for 10 minutes and going to a cafe is the way to do this.  I’ve found one which makes good cakes (and tea).

These rules don’t describe my complete practice, but they are an important summary of what I try to do, and what keeps me (relatively) sane.  Your rules will be different, but I think it’s really important to have rules, and to make it clear to yourself, your colleagues, your friends and your family, what they are.  Remote working is not always easy, and requires discipline – but that discipline is, more often than not, in giving yourself some slack, rather than making yourself sit down for eight hours a day.


1 – I realise that many people, including many of my readers, are American.  That’s fine: you be you.  I actively like tea, however (and know how to make it properly, which seems to be an issue when I visit).

2 – I know a couple of these: lucky, lucky people!

3 – can you spot a pattern?

Why Chatham House Rulez for security

Security sometimes requires sharing – but not attribution

In June 1927, someone had a brilliant idea.  Or, at least, that’s when the idea was first codified, at a meeting of the Royal Institute of International Affairs at Chatham House in London.  The idea was this: all attendees of the meeting could quote comments made at the meeting, but they weren’t allowed to say who had made the comment.

This became known as the Chatham House Rule, and the most recent incarnation is defined thus:

When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.

This is brilliantly clever.  It allows at least two things:

  1. for the sharing of information which might be sensitive to a particular entity when associated with that entity, but which is still useful when applied without that attribution;
  2. for the sharing of views or opinions which, when associated with a particular person or organisation, might cause wider issues or problems.

The upshot of this is that if somebody (say, Person A) values the expertise, opinion and experience of another person (say, Person B), then they can share that other person’s views with people who may not know Person B, or whose views on Person B may be biased by their background or associations.  This is a form of transitive trust, and situations where transitive trust is made explicit are, in my opinion, to be lauded (such trust relationships are too often implicit, rather than explicit).

The Chatham House Rule and open source

What has this got to do with open source, though?  My answer is: a lot.

Security is one of those areas which can have an interesting relationship with open source.  I’m passionately devoted to the principle that open-ness is vital to security, but there are times when this is difficult.  The first is to do with data, and the second is to do with perceived expertise.

Why data is difficult

While we all (hopefully) want to ensure that all our security-related code is open source, the same cannot be said for data.  There is absolutely a place for open data – citizen-related data is the most obvious, e.g. bus timetables, town planning information – and there’s data that we’d like to be more open, but not if it can be traced to particular entities – aggregated health information is great, but people aren’t happy about their personal health records being exposed.  The same goes for financial data – aggregated information about people’s spending and saving habits is extremely useful, but I, for one, don’t want my bank records revealed to all and sundry.

Moving specifically to security, what about data such as the number of cyber-attacks – successful and unsuccessful – against companies?  The types that were most successful?  The techniques that were used to mitigate them?  All of these are vastly useful to the wider community, and there’s a need to share them more widely.  We’re seeing some initiatives to allow this already, and aggregation of this data is really important.

There comes a time, however, when particular examples are needed.  And as soon as you have somebody stand up and say “This is what happened to us”, then they’re likely to be in trouble from a number of directions, which may include: their own organisation, their lawyers, their board, their customers and future attackers, who can use that information to their advantage.  This is where the Chatham House Rule can help: it allows experts to give their view and be listened to without so much danger from the parties listed above.

It also allows for other people to say “we hadn’t thought of that”, or “we’re not ready for that” or similar without putting their organisations – or their reputations – on the line.  Open source needs this, and there are times when those involved in open source security, in particular, need to be able to share the information they know in a way which doesn’t put their organisations in danger.

Why expertise is difficult

Another area of difficulty is expertise, or more specifically, trust in expertise.  Most organisations aim for a meritocratic approach – or say they do – at least within that organisation.  But the world is full of bias, particularly between organisations.  I may be biased against views held or expressed by a particular organisation, just because of their past history and my interactions with that company, but it is quite possible that there are views held and expressed by individuals from that company which, if separated from their attribution, I might take seriously.  I may be biased against a particular person, based on my previous interactions with him/her, or just on my underlying prejudices.  It only needs one person who does not hold my biases to represent those views, as long as they personally trust the organisation, or even just the person, expressing them, to allow me to process and value those views myself, gaining valuable insight from them.  The Chatham House Rule can allow that to happen.

In fact, the same goes for intra-organisation biases: maybe product management isn’t interested in the views of marketing, but what if there are important things to learn from within that department, that product management can’t hear because of that bias?  The Chatham House Rule allows an opportunity to get past that.

To return to open source, many contributors are employed by a particular organisation, and it can be very difficult for them to express opinions around open source when that organisation may not hold the same views, however carefully they try to separate themselves from the official line.  Even more important, in terms of security, it may very well be that they can bring insights which are relevant to a particular security issue which their company is not happy about being publicly known, but which could benefit one or more open source projects.  To be clear: I’m not talking, here, about exposing information which is specifically confidential, but about sharing information with the permission of the organisation, but within specific constraints.

More on open source

There are all sorts of biases within society, and open source is, alas, not without its own.  When a group of people gets to know each other well, however, it is often the case that members of that group can forge a respect for each other which goes beyond gender, age, academic expertise, sexuality, race or the like.  This is a perfect opportunity for meetings under the Chatham House Rule: it gives this group the chance to discuss and form opinions which can be represented to their peers – or the rest of the world – without having to worry so much about any prejudices or biases that might be aimed at particular members.

Finally – a note of caution

The Chatham House Rule provides a great opportunity to share expertise and knowledge, but there is also a danger that it can allow undue weight to be given to anecdotes.  Stories are a great way of imparting information, but without data to back them up, they are not as trustworthy as they might be.  The fact that the Chatham House Rule inhibits external attribution does not mean that due diligence should not be applied within such a meeting to ensure that information is backed up by data.

Are my messages safe? No, but…

“Are any of these messaging services secure?”

Today brought another story about insecurity of a messenger app, and by a brilliant coincidence, I’m listening to E.L.O.’s “Secret Messages” as I start to compose this post. This article isn’t, however, about my closet 70s musical tastes[1], but about the messages you send from your mobile phone, tablet or computer to friends, families and colleagues, and how secure they are.

There are loads of options out there for messaging services, with some of the better-known including WhatsApp, Facebook Messenger, Google Chat, Signal and Telegram. Then there’s good old SMS. First question first: do I use any of these myself? Absolutely. I also indulge in Facebook, LinkedIn and Twitter. Do I trust these services? Let’s get back to this question later.

A more pressing question might be: “are any of these messaging services secure?” It turns out that this is a really simple question to answer: of course they’re not. No service is “secure”: it’s a key principle of IT security that there is no “secure”. This may sound like a glib – and frankly unhelpful – answer, but it’s not supposed to be. Once you accept that there is no perfectly secure system, you’re forced to consider what you are trying to achieve, and what risks you’re willing to take. This is a recurring theme of this blog, so regular readers shouldn’t be surprised.

Most of the popular messaging services can be thought of as consisting of at least seven components. Let’s assume that Alice is sending a message from her phone to Bob’s phone. Here’s what the various components might look like:

  1. Alice’s messenger app
  2. Alice’s phone
  3. Communications channel Alice -> server
  4. Server
  5. Communications channel server -> Bob
  6. Bob’s phone
  7. Bob’s messenger app

Each of these is a possible attack surface: combined, they make up the attack surface for what we can think of as the Alice <-> Bob messaging system.

Let’s start in the middle, with the server. For Alice and Bob to be happy with the security of the system for their purposes, they must be happy that this server is sufficiently secure to cope with whatever risks they need to address. So, it may be that they trust that the server (which will be run, ultimately, by fallible and possibly subornable humans who also are subject to legal jurisdiction(s)) is not vulnerable. Not vulnerable to whom? Hacktivists? Criminal gangs? Commercial competitors? State actors? Legal warrants from the server’s jurisdiction? Legal warrants from Alice or Bob’s jurisdiction(s)? The likelihood of successful defence against each of these varies, and the risk posed to Alice and Bob by each is also different, and needs to be assessed, even if that assessment is “we can ignore this”.

Each of the other components is subject to similar questions. For the communication channels, we will assume that they’re encrypted, but we have to be sure that the cryptography and cryptographic protocols have been correctly implemented, and that all keys are appropriately protected by all parties. The messaging apps must be up to date, well designed and well implemented. Obviously, if they’re open source, you have a much, much better chance of being sure of the security of both apps (never, ever use cryptography or protocols which have not been open sourced and peer reviewed: just don’t). The phones on which the software is running must also be uncompromised – not to mention protected by Alice and Bob from physical tampering and delivered new to them from the manufacturer with no vulnerabilities[2].
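
To make the shape of that exercise concrete, here is a minimal sketch in Python of walking the component list against a set of possible threat actors. Everything in it – the component names, the actors, the placeholder assessment – is illustrative only, not a real analysis, and certainly not a recommendation of any particular scoring scheme:

    # Minimal, illustrative sketch of the "attack surface x threat actor" exercise.
    # All names and ratings are invented; a real assessment needs real analysis.

    COMPONENTS = [
        "Alice's messenger app",
        "Alice's phone",
        "channel: Alice -> server",
        "server",
        "channel: server -> Bob",
        "Bob's phone",
        "Bob's messenger app",
    ]

    THREAT_ACTORS = [
        "hacktivists",
        "criminal gangs",
        "commercial competitors",
        "state actors",
        "legal warrants",
    ]

    def assess(component: str, actor: str) -> str:
        # Placeholder: this is where the actual hard thinking would go.
        # Even "we can ignore this" should be a conscious, recorded decision.
        if actor in ("state actors", "legal warrants") and component == "server":
            return "assess further"
        return "accept / ignore"

    for component in COMPONENTS:
        for actor in THREAT_ACTORS:
            print(f"{component:26} vs {actor:22}: {assess(component, actor)}")

The point of writing it down, even this crudely, is that every cell in the matrix becomes an explicit decision rather than an unexamined assumption.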

How sure are Alice and Bob of all of the answers to all of these questions? The answer, I would submit, is pretty much always going to be “not completely”. Does this mean that Alice and Bob should not use messaging services? Not necessarily. But it does mean that they should consider what messages they should exchange via any particular messaging service. They might be happy to arrange a surprise birthday party for a colleague, but not to exchange financial details of a business deal. They might be happy to schedule a trip to visit a Non-Governmental Organisation to discuss human rights, but not to talk about specific cases over the messaging service.

This is the view that I take: I consider what information I’m happy to transfer over or store on messaging services and social media platforms. There are occasions where I may be happy to pass sensitive data across messaging services, but break the data up between different services (using “different channels” in the relevant parlance): using one service for a username and another for the associated password, for instance. I still need to be careful about shared components: the two phones in the example above might qualify, but I’ve reduced the shared attack surface, and therefore the risk. I’m actually more likely to require that the password is exchanged over a phone call, and if I’m feeling particularly paranoid, I’ll use a different phone to receive that call.
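
The “different channels” idea can be pushed a little further than sending the username one way and the password another. Here is a minimal sketch – my own illustration, not a description of what any messaging service does – of a simple two-share XOR split, where neither channel on its own carries anything useful and both shares are needed to recover the secret:

    import secrets

    def split(secret: bytes) -> tuple[bytes, bytes]:
        """Split a secret into two shares; either share alone is random noise."""
        share_a = secrets.token_bytes(len(secret))                # one-time pad
        share_b = bytes(x ^ y for x, y in zip(share_a, secret))   # pad XOR secret
        return share_a, share_b

    def combine(share_a: bytes, share_b: bytes) -> bytes:
        """XOR the two shares back together to recover the original secret."""
        return bytes(x ^ y for x, y in zip(share_a, share_b))

    password = b"correct horse battery staple"
    a, b = split(password)
    # Send share 'a' over one service and share 'b' over another: an attacker
    # now has to compromise both channels (or a shared component, such as the
    # receiving phone) to learn the password.
    assert combine(a, b) == password

The usual caveat applies: if both shares end up on the same compromised device, the split has bought you nothing, which is exactly the shared-component point above.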

My advice, therefore, is this:

  1. Keep your devices and apps up to date;
  2. Evaluate the security of your various messaging service options;
  3. Consider the types of information that you’ll be transferring and/or storing;
  4. Think about the risks you’re willing to accept;
  5. Select the appropriate option on a case-by-case basis;
  6. Consider using separate channels where particularly sensitive data can be split for added security.

1 – I’m also partial to 1920s jazz and a bit of bluegrass, as it happens.

2 – yeah, right.

Security at Red Hat Summit

And a little teaser on my session…

I don’t often talk about my job specifically, but I’m very proud to be employed by Red Hat, working as Chief Security Architect, a role based in the Office of the CTO[1], and sometimes it’s the right time to talk about job-related stuff.  Next week is our annual Summit, and this year it’s in Boston[2], starting on Tuesday, 2019-05-07.  If you’re coming – great!  If you’re thinking about coming – please do!  And if you’re not able to come, then rest assured that many of the sessions will be recorded so that you can watch them in the future[3].

There is going to be a lot going on at Summit this year: including, I suspect, some big announcements[4].  There will also be lots of hands-on sessions, which are always extremely popular, and a number of excellent sessions and other activities around Diversity and Inclusion, a topic about which I’m extremely passionate.  As always, though, security is a big topic at Summit, and there are 50 security topic sessions listed in the agenda[5] (here’s the session catalog[ue]):

  • 26 breakout sessions
  • 11 instructor-led labs
  • 7 mini-sessions
  • 4 birds-of-a-feather sessions (“BOFs”)
  • 2 theatre sessions

These include sessions by partners and customers, as well as by Red Hatters themselves.

Many of my colleagues in OCTO will be presenting sessions in the “Emerging Technology” track, as will I.  My session is entitled “Security: Emerging technologies and open source”, and on Tuesday, at 1545 (3.45pm) I’ll be co-presenting it with my (non-OCTO) colleague Nathaniel McCallum.  The abstract is this:

What are some of the key emerging security technologies, and what impact will they have on the open source world? And what impact could open source have on them?

In this session, we’ll look at a handful of up-and-coming hardware and software technologies—from trusted execution environments to multi-party computation—and discuss the strategic impact we can expect them to have on our world. While individual technologies will be discussed (and you can expect a sneak peek demo of one of them), the focus of this session is not a deep-dive on any of them, but rather an architectural, strategic, and business view.

I’m trying to ensure that when I talk about all of these cool technologies, I talk about why open source is important to them, and/or why they are important to open source.

Here’s the particularly exciting bit, though: what’s not clear from the abstract – as it’s a late addition – is that Nathaniel plans to present a demo.  I can’t go into details at the moment, partly because we’re keeping it as a surprise, and partly because exactly what is demoed will depend on what Nathaniel’s frantic coding manages to achieve before Tuesday afternoon.  It’s one of the early results from a project we’re running, and I can tell you: a) that it involves TEEs (trusted execution environments); and b) that it’s really exciting.  I’m hoping that we can soon make more of a noise about it, and our Summit session is the start of that.

I’m hoping that the description above will be enough to convince you to attend Summit, but in case it isn’t, bear in mind the following:

  1. there will be keynotes from Jim Whitehurst (Red Hat CEO), Satya Nadella (Microsoft CEO) and Ginni Rometty (IBM CEO)
  2. the Summit party will feature Neon Trees[6].

There are lots of other great reasons to come as well, and if you do, please track me down and say hello: it’s always great to meet readers of this blog.  See you in Boston next week!


1 – “OCTO” – which, I guess, makes me one of the Octonauts.

2 – the picture at the top of this article is of Fenway Park, a place in Boston where they play baseball, which is like cricket, only quicker.  And you’re allowed to chuck the ball.

3 – in case, for any crazy reason, you’d like to see me speaking at last year’s Summit, here’s a link to the session: Getting strategic about security

4 – this should not be interpreted as a “forward-looking statement”, as I’m not privy to any particular definite decisions as to any such announcements.  Sorry – legal stuff…

5 – I’m indebted to my colleague Lucy Kerner, who’s organised and documented much of the security pieces, and from whom I have stolen copied gratefully reused much of the information in this article.

6 – I’ve only just clocked this, and my elder daughter is going to be very, very jealous when she gets back from school to discover this information.

Trust you? I can’t trust myself.

Cognitive biases are everywhere.

William Gibson’s book Virtual Light includes a bar which goes by the name of “Cognitive Dissidents”.  I noticed this last night when I was reading in bed, and it seemed apposite, because I wanted to write about cognitive bias, and the fact that I’d noticed it so strikingly was, I realised, an example of exactly that: in this case, The Frequency Illusion, or The Baader-Meinhof Effect.  Cognitive biases are everywhere, and there are far, far more of them than you might expect.

The problem is that we think of ourselves as rational beings, and it’s quite clear from decades – in some cases, centuries – of research that we’re anything but.  We’re very likely to tell ourselves that we’re rational, and it’s such a common fallacy that The Illusion of Validity (another cognitive bias) will help us believe it.  Cognitive biases are, according to Wikipedia, “systematic patterns of deviation from norm or rationality in judgment” or put maybe more simply, “our brains managing to think things which seem sensible, but aren’t.”[1]

The Wikipedia entry above gives lots of examples of cognitive bias – lots and lots of examples – and I’m far from being an expert in the field.  The more I think about risk and how we consider risk, however, the more I’m convinced that we – security professionals and those with whom we work – need to have a better understanding of our own cognitive biases and those of the people around us.  We like to believe that we make decisions and recommendations rationally, but it’s clear from the study of cognitive bias that:

  1. we generally don’t; and
  2. even if we do, we shouldn’t expect those to whom we present them to consider them entirely rationally.

I should be clear, before we continue, that there are opportunities for abuse here.  There are techniques beloved of advertisers and the media to manipulate our thinking to their ends which we could use to our advantage and to try to manipulate others.  One example is The Framing Effect.  If you want your management not to fund a new anti-virus product because you have other ideas for the earmarked funding, you might say:

  • “Our current product is 80% effective!”

Whereas if you do want them to fund it, you might say:

  • “Our current product is 20% ineffective!”

People react in different ways, depending on how the same information is presented, and the way the two statements above are framed aims to manipulate your listeners towards the outcome you want.  So, don’t do this, and more importantly, look for vendors[2] who are doing this, and call them out on it.  Here, then, are three of the more obvious cognitive biases that you may come across:

  • Irrational escalation or Sunk cost fallacy – this is the tendency for people to keep throwing money or resources at a project, vendor or product when it’s clear that it’s no longer worth it, with the rationale that to stop spending money (or resources) now would waste what has already been spent – when it’s actually already gone.  This often comes over as misplaced pride, or people just not wanting to let go of a pet project because they’ve become attached to it, but it’s really dangerous for security, because if something clearly isn’t effective, we should be throwing it out, not sending good money after bad.
  • Normalcy bias – this is the refusal to address a risk because it’s never happened before, and is an interesting one in security, for the simple reason that so many products and vendors are trying to make us do exactly that.  What needs to happen here is that a good risk analysis needs to be performed, and then measures put in place to deal with those which are actually high priority, not those which may not happen, or which don’t seem likely at first glance.
  • Observer-expectancy effect – this is when people who are looking for particular results find them, because they have (consciously or unconsciously) misused the data.  This is common in situations such as those where there is a belief that a particular attack or threat is likely, and the data available (log files, for instance) are used in a way which confirms this expectation, rather than analysed and presented in ways which are more neutral – the sketch after this list gives a toy illustration.
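
To show what I mean by that last one, here is a deliberately toy sketch – invented log lines, invented categories, no real data – contrasting an expectation-confirming query with a more neutral summary of the same logs:

    from collections import Counter

    # Invented log lines, purely for illustration.
    logs = [
        "failed login from 10.0.0.4",
        "failed login from 10.0.0.4",
        "disk quota exceeded on /var",
        "failed login from 192.168.1.9",
        "certificate expired on mail gateway",
        "disk quota exceeded on /var",
        "disk quota exceeded on /var",
    ]

    # Expectation-confirming analysis: only count the attack you already believe in.
    brute_force = [line for line in logs if "failed login" in line]
    print(f"'Evidence' of brute-forcing: {len(brute_force)} events")

    # More neutral analysis: summarise every category and let the data speak.
    categories = Counter(line.split(" on ")[0].split(" from ")[0] for line in logs)
    for event, count in categories.most_common():
        print(f"{event}: {count}")

Both analyses run over exactly the same data; only the first was designed to confirm a pre-existing belief.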

I intend to address more specific cognitive biases in future articles, tying them even more closely to security concerns – if you have any particular examples or war stories, I’d love to hear them.


1 – my words

2 – or, I suppose, underhand colleagues…