Humans and (being bad at) trust

Why “signing parties” were never a good idea.

I went to a party recently, and it reminded me of quite how bad humans are at trust. It was a work “mixer”, and an attempt to get people who didn’t know each other well to chat and exchange some information. We were each given two cards to hang around our necks: one on which to write our own name, and the other on which we were supposed to collect the initials of those to whom we spoke (in their own hand). At the end of the event, the plan was to hand out rewards whose value was related to the number of initials collected. Pens/markers were provided.

I gamed the system by standing by the entrance, giving out the cards, controlling the markers and ensuring that everybody signed my card, hence ending up with easily the largest number of initials of anyone at the party. But that’s not the point. Somebody – a number of people, in fact – pointed out the similarities between this and “key signing parties”, and that got me thinking. For those of you not old enough – or not security-geeky enough – to have come across these, they were events which were popular in the late nineties and early parts of the first decade of the twenty-first century[1] where people would get together, typically at a tech show, and sign each other’s PGP keys. PGP keys are an interesting idea whereby you maintain a public-private key pair which you use to sign emails, assert your identity, etc., in the online world. In order for this to work, however, you need to establish that you are who you say you are, and to do that, you need to convince someone else of this fact.

There are two easy ways to do this:

  1. meet someone IRL[2], get them to validate your public key, and sign it with theirs;
  2. have someone who knows the person from step 1 agree that they can probably trust you too, because the person from step 1 trusts you, and they trust that person.

This is a form of trust based on reputation, and it turns out that it is a terrible model for trust. Let’s talk about some of the reasons for it not working. There are four main ones:

  • context
  • decay
  • transitive trust
  • peer pressure.

Let’s evaluate these briefly.

Context

I can’t emphasise this enough: trust is always, always contextual (see “What is trust?” for a quick primer). When people signed other people’s key-pairs, all they should really have been saying was “I believe that the identity of this person is as stated”, but signatures and encryption based on these keys were (and are) frequently misused to make statements about, or claim access to, capabilities that were not necessarily related to identity.

I lay some of the fault for this at the door of US alcohol consumption policy. Many (US) Americans use their driving licence/license as a form of authorisation: I am over this age, and am therefore entitled to purchase alcohol. It was designed to prove that they were authorised to drive, and nothing more than that, but you can now get a US driving licence to prove your age even if you can’t drive, and it can be used, for instance, as security identification for getting on aircraft at airports. This is crazy, but partly explains why there is such confusion between identification, authentication and authorisation.
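To make the distinction a little more concrete, here’s a minimal Python sketch (entirely mine, not drawn from any real identity system: the names are made up and the threshold is the US drinking age) of why identification, authentication and authorisation answer three different questions:

```python
from dataclasses import dataclass

@dataclass
class Credential:
    holder: str              # identification: who the document says this is
    birth_year: int          # an attribute the document attests to
    licensed_to_drive: bool  # the capability the document was designed to prove

def authenticate(cred: Credential, presented_by: str) -> bool:
    """Authentication: is the person presenting the credential its holder?"""
    return cred.holder == presented_by

def may_drive(cred: Credential) -> bool:
    """Authorisation in the context the credential was designed for."""
    return cred.licensed_to_drive

def may_buy_alcohol(cred: Credential, current_year: int) -> bool:
    """A quite different context: here the licence is only being borrowed
    as evidence of age, and has nothing to do with driving at all."""
    return current_year - cred.birth_year >= 21

licence = Credential(holder="Alice", birth_year=1990, licensed_to_drive=False)
print(authenticate(licence, "Alice"))     # True: it really is Alice's licence
print(may_drive(licence))                 # False: she never learned to drive
print(may_buy_alcohol(licence, 2019))     # True: an entirely separate question
```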

Decay

Trust, as I’ve noted before in many articles, decays. Just because I trust you now (within a particular context) doesn’t mean that I should trust you in the future (in that or any other context). Mechanisms exist within the PGP framework to expire keys, but it was (I believe) typical for someone to re-sign a new set of keys just because they’d signed the previous set. If they were only being used for identity, then that’s probably OK – most people rarely change their identity, after all – but, as explained above, these key pairs were often used more widely.
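If you wanted to model that decay, even crudely, it might look something like this little Python sketch (the exponential model and the half-life figure are arbitrary assumptions of mine, not anything defined by PGP):

```python
def decayed_trust(initial_trust: float, years_since_verified: float,
                  half_life_years: float = 2.0) -> float:
    """A toy exponential decay of a trust score in the range [0, 1]."""
    return initial_trust * 0.5 ** (years_since_verified / half_life_years)

print(round(decayed_trust(0.9, 0), 3))  # 0.9   - verified just now
print(round(decayed_trust(0.9, 4), 3))  # 0.225 - four years on: time to re-check,
                                        #         not just to re-sign out of habit
```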

Transitive trust

This is the whole “trusting someone because I trust you” problem. Again, if this were only about identity, then I’d be less worried, but given people’s lack of ability to specify context, and their equal inability to communicate that to others, the “fuzziness” of the trust relationships being expressed was only going to increase with the level of transitiveness, reducing the efficacy of the system as a whole.
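Here’s a rough Python sketch of that fuzziness (the multiplicative model and the numbers are illustrative assumptions of mine, not a real trust metric), showing how quickly confidence drops as the chain gets longer:

```python
def chain_confidence(link_confidences):
    """Confidence in the far end of a chain, assuming each link is
    independent and less than fully certain."""
    result = 1.0
    for confidence in link_confidences:
        result *= confidence
    return result

# I trust Bob 0.9 (for identity), Bob trusts Carol 0.8, Carol trusts Dave 0.8 -
# and each of those judgements was made in a slightly different context.
print(round(chain_confidence([0.9]), 3))            # 0.9
print(round(chain_confidence([0.9, 0.8]), 3))       # 0.72
print(round(chain_confidence([0.9, 0.8, 0.8]), 3))  # 0.576 - well below any single link
```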

Peer pressure

Honestly, this occurred to me due to my gaming of the system, as described in the second paragraph at the top of this article. I remember meeting people at events and agreeing to endorse their key-pairs basically because everybody else was doing it. I didn’t really know them, though (I hope) I had at least heard of them (“oh, you’re Denny’s friend, I think he mentioned you”), and I certainly shouldn’t have been signing their key-pairs. I am certain that I was not the only person to fall into this trap, and it’s a trap because humans are generally social animals[3], and they like to please others. There was ample opportunity for people to game the system much more cynically than I did at the party, and I’d be surprised if this didn’t happen from time to time.

Stepping back a bit

To be fair, it is possible to run a model like this properly. It’s possible to avoid all of these by insisting on proper contextual trust (with multiple keys for different contexts), by re-evaluating trust relationships on a regular basis, by being very careful about trusting people just due to their trusting someone else (or refusing to do so at all), and by refusing just to agree to trust someone because you’ve met them and they “seem nice”. But I’m not aware of anyone – anyone – who kept to these rules, and it’s why I gave up on this trust model over a decade ago. I suspect that I’m going to get some angry comments from people who assert that they used (and use) the system properly, and I’m sure that there are people out there who did and do: but as a widespread system, it was only going to work if the large majority of all users treated it correctly, and given human nature and failings, that never really happened.

I’m also not suggesting that we have many better models – but we really, really need to start looking for some, as this is important, and difficult stuff.


1 – I refuse to refer to these years as the “aughts”.

2 – In Real Life – this used to be an actual distinction from online life.

3 – even a large enough percentage of IT folks to make this a problem.

Breaking the security chain(s)

Your environment is n-dimensional – your trust must be, too.

One of the security principles by which we[1] live[2] is that security is only as strong as the weakest link in a chain.  That link is variously identified as:

  • your employees
  • external threat actors
  • all humans
  • lack of training
  • cryptography
  • logging
  • anti-virus
  • auditing capabilities
  • the development lifecycle
  • waterfall methodology
  • passwords
  • any other authentication mechanisms
  • electrical wiring
  • hurricanes
  • earthquakes
  • and pestilence.

Actually, I don’t think I’ve ever seen the last one mentioned, but it’s only a matter of time.  However, very rarely does anybody bother to identify exactly which chain is being broken when the weakest link splinters into a thousand pieces.

There are a number of candidates that spring to mind:

  1. your application flow.  This is rather an old-fashioned way of thinking of applications: that a program is started, goes through a set of actions, and then terminates.  Thinking more broadly about it, though, any action which causes an application to behave in unexpected or unintended ways is a possible security flaw, whether that is a monolithic application, a set of microservices or an app on a mobile device.
  2. your software stack.  Depending on how you think about your stack, there are likely to be at least 5, maybe a dozen or maybe even scores of layers in your software stack (for an example with just a few simple layers, see Announcing Enarx).  However you think about it, you need to trust all of those layers to do what you expect them to do.  If one of them is compromised, malicious, or just poorly implemented or maintained, then you have a security issue.
  3. your hardware stack.  There was a time, barely five years ago, when most people (excepting us[1], of course) assumed that hardware did what we thought it was supposed to do, all of the time.  In fact, we should all have known better, given the Clipper Chip and the Pentium bug (to name just two famous examples), but with Spectre, Meltdown and a growing realisation that hardware isn’t as trustworthy as was previously thought, everybody needs to decide exactly what security they can trust in which components.
  4. your operational processes.  You can have the best software and hardware in the world, but if you don’t maintain it and operate it properly, it’s going to be full of holes.  Failing to invest in operations, monitoring, logging, auditing and the rest leaves you wide open.
  5. your supply chain. There’s a growing understanding in the industry that our software and hardware supply chains are possible points of failure[3].  Whether your vendor is entirely proprietary (in which case their security is largely opaque) or open source (in which case you’ve got a chance to be able to see what’s going on), errors or maliciousness in the supply chain can scupper any hopes you had of security for your deployment.
  6. your software and hardware lifecycle.  Developing software?  Patching it?  Upgrading hardware (or software)?  Unit testing?  We all know that a failure to manage the lifecycle of our environment can lead to security problems.

The point I’m trying to make above is that there’s no single chain.  Your environment is n-dimensional – your trust must be, too.  If you don’t think about all of these contexts – and there will be more beyond the half-dozen that I’ve just noted – then you can’t have a good chance of managing security in your environment.  I honestly don’t think that there’s any single weakest link in the chain, because there are always multiple chains in play: our job is to think about as many of them as possible, and then manage and mitigate the risks associated with each.
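To illustrate the point, here’s a trivial Python sketch: the chain names echo the list above, but the individual links and their scores are invented purely for illustration.

```python
# Several chains, not one: each chain is only as strong as its weakest link,
# and the environment as a whole is bounded by the weakest chain you have
# (or haven't) thought about.
chains = {
    "application flow": {"input validation": 0.7, "session handling": 0.9},
    "software stack": {"kernel": 0.9, "crypto library": 0.8, "web framework": 0.6},
    "hardware stack": {"CPU": 0.8, "firmware": 0.5},
    "operational processes": {"patching": 0.6, "logging": 0.7, "monitoring": 0.4},
}

def weakest_link(chain):
    """Return the (link, score) pair with the lowest score in one chain."""
    return min(chain.items(), key=lambda item: item[1])

for name, chain in chains.items():
    link, score = weakest_link(chain)
    print(f"{name}: weakest link is {link} ({score})")

print("overall bound:", min(weakest_link(c)[1] for c in chains.values()))
```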


1 – the mythical “IT security community”.

2 – you’re right: “which we live by” would sound much more natural.

3 – and a growing industry to try to provide fixes.

“Unlawful, void, and of no effect”

The news from the UK is amazing today: the Supreme Court has ruled that the Prime Minister has failed to “prorogue” Parliament – in other words, that the members of the House of Commons and the House of Lords are still in session. The words in the title come from the judgment that they have just handed down.

I’m travelling this week, and wasn’t expecting to write a post today, but this triggered a thought in me: what provisions are in place in your organisation to cope with abuses of power and possible illegal actions by managers and executives?

Having a whistle-blowing policy and an independent appeals process is vital. This is true for all employees, but specific rules for employees who are involved in areas such as compliance and implementations with regulatory requirements are particularly important. Robust procedures protect not only an employee who finds themself in a difficult position but, in the long view, the organisation itself. They can also act as a deterrent to managers and executives considering actions which, in the absence of such procedures, would be likely to go unreported.

Such procedures are not enough on their own – they fall into the category of “necessary, but not sufficient” – and a culture of ethical probity also needs to be encouraged. But without such a set of procedures, your organisation is at real risk.

How to be a no-shame generalist

There is no shame in being a generalist, and knowing when you need to consult a specialist.

There comes a time in any person’s life[1] when they realise that they’re not going to be able to do all the things they might like to do to a high level of expertise.  I used to kid myself that I could do anything if I tried hard enough and practised enough, but then I tried juggling.  It turns out that I’m never going to be able to juggle.  Not just juggle expertly.  I mean juggle at all.  My trying to juggle – with only one ball, let alone more than one – is so amusing that my family realised years ago that it was a great party trick.  “Daddy,” they’ll say, “show everyone your juggling.  It’s really funny.”  “But I can’t juggle,” I retort.  “Yes,” they respond, “that’s what’s funny[2].”

I’m also never going to be able to draw or do any art with any competence.

Or play any racquet sport with any level of skill.

Or do any gardening, painting or DIY-based household jobs with any degree of expertise[3].

Some people will retort that any old fool can be taught to do x activity (usually, it’s juggling, actually), but not only do I not believe this, but also, to be honest, there just isn’t enough time in the day to learn all the things I’d kind of like to try.

What has all this to do with security?

Specialism and education

Well, I’ve posted before that I’m a systems person, and the core of thinking about systems is that you need to look at the big picture.  In order to do that, you need to be a generalist.  There’s a phrase[5] in English: “Jack of all trades, master of none”, which is often used to condemn those who know a little about many things and are seen to dabble in them without a full understanding of any of them.  Interestingly, this version may be an abbreviation of the original, more positive:

Jack of all trades, master of none,
though oftentimes better than master of one.

The core inference, though, is that generalists aren’t as useful as specialists.  I don’t believe this.

In many educational systems, there’s a tendency to push students towards narrower and narrower fields of study.  For some, this is just what is needed, but for others – “systems people”, “synthesists” and “generalists” – this isn’t the best way to harness their talents, at least in the long term.  We need people who can see the big picture, who can take a wider view, and look beyond a single blocking issue to realise that the answer to a problem may not be a better implementation of an authentication library, but a change in the authorisation mechanism being used at the component level, for instance.

There are dangers to following this approach too far, however:

  1. it can lead to disparagement of specialists and their skills, even to a distrust of experts;
  2. it can lead to arrogance on the part of generalists.

We see the first in desperately concerning trends such as politicians thinking they know more than economists or climate scientists, anti-vaxxers ignoring the benefits of vaccination, and idiocy around chem-trails, flat-earth beliefs and moon landing conspiracies.  It happens in the world of work, as well, I’m sad to say.  There is a particular type of MBA recipient, for instance, who believes that the completion of the course and award of the degree confers on them some sort of superhuman ability to know what is best for all organisations in all circumstances[6].

Specialise first

To come back to the world of security, my recommendation is that even if you know that your skills and interests are leading you to a career as a generalist, then you need to become a specialist first, in at least one area.  You may not become an expert in that field, but you need to know it well.  Better still, strive for at least a level of competence in several fields – an ability to converse knowledgeably with true experts and to understand at least why they are making the choices and recommendations that they are.

And that leads us to the key point here: if you become a generalist, you need to acknowledge your lack of expertise: it must become your modus operandi, your métier, your way of working.  You need to recognise that your strength is not in your knowing many things, but in knowing what you don’t know, and when it is time to call in the specialists.

I’m not a cryptographer, but I know enough about cryptography to realise when it’s time to call in an expert.  I’m not an expert on legal issues around cryptography, either, but know when to call on a lawyer.  Nor am I an expert on block storage, blockchain consensus, quantum key exchange protocols, CPU scheduling or compression algorithms.  The same will go for many areas which I may be called on to touch as part of my job.  I hope to have enough training and expertise within related fields – or the ability to gain it – to be able to ask sensible questions, but sometimes even that won’t be true, and the best (and most productive) interaction will be to say “I don’t know about this: please explain it to me, or at least tell me what the options are.”  This seems to me to be particularly important for security folks: there are so many overlapping disciplines, and getting one piece wrong means that your defence in depth strategy just got a whole lot shallower.

Being too lazy to look things up, too arrogant to listen to others or too short-sighted to realise that there are areas in which we are not expert are things of which we should be ashamed.

But there is no shame in being a generalist, and knowing when you need to consult a specialist.


1 – I’m extrapolating horribly here, but it’s true for me so I’m assuming it’s a universal truth.

2 – apparently the look on my face, and the things I do with my tongue, are a sight to behold.

3 – I’m constantly trying to convince my wife of these, and although she’s sceptical about some, we’re now agreed that I shouldn’t be allowed access to any power tools again if we want to avoid further trips to the Accident and Emergency department at the hospital[4].

4 – it’s not only power tools.  I once nearly removed my foot with a wallpaper stripper.  I still have the scar nearly 25 years later.

5 – somewhat gendered, for which I apologise.

6 – disclaimer – I have an MBA, and met many talented and humble people on my course (and have met many since) who don’t suffer from this predicament.

Open source and – well, bad people


For most people writing open source, it – open source software – seems like an unalloyed good.  You write code, and other nice people, like you, get to add to it, test it, document it and use it.  Look what good it can do to the world!  Even the Organisation-Formerly-Known-As-The-Evil-Empire has embraced open source software, and is becoming a happy and loving place, supporting the community and both espousing and proselytising the Good Thing[tm] that is open source.  Many open source licences are written in such a way that it’s well-nigh impossible for an organisation to make changes to open source and profit from it without releasing the code they’ve changed.  The very soul of open source – the licence – is doing our work for us: improving the world.

And on the whole, I’d agree.  But when I see uncritical assumptions being peddled – about anything, frankly – I start to wonder.  Because I know, and you know, when you think about it, that not everybody who uses open source is a good person.  Crackers (that’s “bad hackers”) use open source.  Drug dealers use open source.  People traffickers use open source.  Terrorists use open source.  Maybe some of them contribute patches and testing and documentation – I suppose it’s even quite likely that a few actually do – but they are, by pretty much anyone’s yardstick, not good people.  These are the sorts of people you probably shrug your shoulders about and say, “well, there’s only a few of them compared to all the others, and I can live with that”.  You’re happy to continue contributing to open source because many more people are helped by it than harmed by it.  The alternative – not contributing to open source – would fail to help as many people, and so the first option is the lesser of two evils and should be embraced. This is, basically, a utilitarian argument – the one popularised by John Stuart Mill: “classical utilitarianism”[1].  This is sometimes described as:

“Actions are right in proportion as they tend to promote overall human happiness.”

I certainly hope that open source does tend to promote overall human happiness.  The problem is that criminals are not the only people who will be using open source – your open source – code.  There will be businesses whose practices are shady, governments that  oppress their detractors, police forces that spy on the citizens they watch.  This is your code, being used to do bad things.

But what even are bad things?  This is one of the standard complaints about utilitarian philosophies – it’s difficult to define objectively what is good, and, by extension, what is bad.  We (by which I mean law-abiding citizens in most countries) may be able to agree that people trafficking is bad, but there are many areas that we could call grey[2]:

  • tobacco manufacturers;
  • petrochemical and fracking companies;
  • plastics manufacturers;
  • organisations who don’t support LGBTQ+ people;
  • gun manufacturers.

There’s quite a range here, and that’s intentional.  Also the last example is carefully chosen. One of the early movers in what would become the open source movement is Eric Raymond (known to one and all by his initials “ESR”), who is a long-standing supporter of gun rights[3].  He has, as he has put it, “taken some public flak in the hacker community for vocally supporting firearms rights”.  For ESR, “it’s all about freedom”.  I disagree, although I don’t feel the need to attack him for it.  But it’s clear that his view about what constitutes good is different to mine.  I take a very liberal view of LGBTQ+ rights, but I know people in the open source community who wouldn’t take the same view.  Although we tend to characterise the open source community as liberal, this has never been a good generalisation.  According to the Jargon File (later published as “The Hacker’s Dictionary”), the politics of the average hacker are:

Vaguely liberal-moderate, except for the strong libertarian contingent which rejects conventional left-right politics entirely. The only safe generalization is that hackers tend to be rather anti-authoritarian; thus, both conventional conservatism and ‘hard’ leftism are rare. Hackers are far more likely than most non-hackers to either (a) be aggressively apolitical or (b) entertain peculiar or idiosyncratic political ideas and actually try to live by them day-to-day.

This may be somewhat out of date, but it still feels as though this description would resonate with many of the open source community who self-consciously consider themselves part of that community.  Still, it’s clear that we, as a community, are never going to be able to agree on what counts as a “good use” of open source code by a “good” organisation.  Even if we could, the chances of anybody being able to create a set of licences that would stop the people that might be considered bad are fairly slim.

I still think, though, that I’m not too worried.  I think that we can extend the utilitarian argument to say that the majority of use of open source software would be considered good by most open source contributors, or at least that the balance of “good” over “bad” would be generally considered to lean towards the good side. So – please keep contributing: we’re doing good things (whatever they might be).


1 – I am really not an ethicist or a philosopher, so apologies if I’m being a little rough round the edges here.

2 – you should be used to this by now: UK spelling throughout.

3 – “Yes, I cheerfully refer to myself as a gun nut.” – Eric’s Gun Nut Page

First aid – are you ready?

Your using the defibrillator is the best chance that the patient has of surviving.

Disclaimer: I am not a doctor, nor a medical professional. I will attempt not to give specific medical or legal advice in this article: please check your local medical and legal professionals before embarking on any course of action about which you are unsure.

This is, generally, a blog about security – that is, information security or cybersecurity – but I sometimes blog about other things. This is one of those articles. It’s still about security, if you will – the security and safety of those around you. Here’s how it came about: I recently saw a video on LinkedIn about a restaurant manager performing Abdominal Thrusts (it’s not called the Heimlich Manoeuvre any more due to trademarking) on a choking customer, quite possibly saving his life.

And I thought: I’ve done that.

And then I thought: I’ve performed CPR, and used a defibrillator, and looked after people who were drunk or concussed, and helped people having a diabetic episode, and encouraged a father to apply an epipen[1] to a confused child suffering from anaphylactic shock, and comforted a schoolchild who had just had an epileptic fit, and attended people in more than one car crash (typically referred to as an “RTC”, or “Road Traffic Collision” in the UK these days[2]).

And then I thought: I should tell people about these stories. Not to boast[3], but because if you travel a lot, or you commute to work, or you have a family, or you work in an office, or you ever go out to a party, or you play sports, or engage in hobby activities, or get on a plane or train or boat or drive anywhere, then there’s a decent chance that you may come across someone who needs your help, and it’s good – very good – if you can offer them some aid. It’s called “First Aid” for a reason: you’re not expected to know everything, or fix everything, but you’re the first person there who can provide aid, and that’s the best the patient can expect until professionals arrive.

Types of training

There are a variety of levels of first aid training that might be appropriate for you. These include:

  • family and children focussed;
  • workplace first aid;
  • hobby, sports and event first aid;
  • ambulance and local health service support and volunteering.

There’s an overlap between all of these, of course, and what you’re interested in, and what’s available to you, will vary based on your circumstances and location. There may be other constraints such as age and physical ability or criminal background checks: these will definitely be dependent on your location and individual context.

I’m what’s called, in the UK, a Community First Responder (CFR). We’re given some specific training to help provide emergency first aid in our communities. What exactly you do depends on your local ambulance trust – I’m with the East of England Ambulance Service Trust, and I have a kit with items to allow basic diagnosis and treatment which includes:

  • a defibrillator (AED) and associated pads, razors[4], shears[5], etc.
  • a tank of oxygen and various masks
  • some airway management equipment whose name I can never remember
  • glucogel for diabetic treatment
  • a pulse oximeter for heart rate and blood oxygen saturation measurement
  • gloves
  • bandages, plasters[6]
  • lots of forms to fill in
  • some other bits and pieces.

I also have a phone and a radio (not all CFRs get a radio, but our area is rural and has particularly bad mobile phone reception).

I’m on duty as I type this – I work from home, and my employer (the lovely Red Hat) is cool with my attending emergency calls in certain circumstances – and could be called out at any moment to an emergency in about a 10 mile/15km radius. Among the call-outs I’ve attended are cardiac arrests (“heart attacks”), fits, anaphylaxis (extreme allergic reactions), strokes, falls, diabetics with problems, drunks with problems, major bleeding, patients with difficulty breathing or chest pains, sepsis, and lots of stuff which is less serious (and which has maybe been misreported). The plan is that if it’s considered a serious condition and it looks like I can get there before an ambulance, or if the crew is likely to need more hands to help (for treating a full cardiac arrest, a good number of people can really help), then I get dispatched. I drive my own car, I’m not allowed sirens or lights, I’m not allowed to break the speed limit or go through red lights and I don’t attend road traffic collisions. I volunteer whatever hours fit around my job and broader life, I don’t get paid, and I provide my own fuel and vehicle insurance. I get anywhere from zero to four calls a day (but most often zero or one).

There are volunteers in other fields who attend events, provide sports or hobby first aid (I did some scuba diving training a while ago), and there are all sorts of types of training for workplace first aid. Most workplaces will have designated first aiders who can be called on if there’s a problem.

The minimum to know

The people I’ve just noted above – the trained ones – won’t always be available. Sometimes, you – with no training – will be the first on scene. In most jurisdictions, if you attempt first aid, the law will look kindly on you, even if you don’t get it all perfect[7]. In some jurisdictions, there’s actually an expectation that you’ll step in. What should you know? What should you do?

Here’s my view. It’s not the view of a professional, and it doesn’t take into account everybody’s circumstances. Again, it’s my view, and it’s that you should consider enough training to be able to cope with two of the most common – and serious – medical emergencies.

  1. Everybody should know how to deal with a choking patient.
  2. Everybody should know how to do CPR (Cardiopulmonary resuscitation) – chest compressions, at minimum, but with artificial respiration if you feel confident.

In the first of these cases, if someone is choking, and they continue to fail to breathe, they will die.

In the second of these cases, if someone’s heart has stopped beating, they are dead. Doing nothing means that they stay that way. Doing something gives them a chance.

There are videos and training available on the Internet, or provided by many organisations.

The minimum to try

If you come across somebody who is in cardiac arrest, call the emergency services. Dispatch someone (if you’re not alone) to try to find a defibrillator (AED) – the emergency services call centre will often help with this, or there’s an app called “GoodSam” which will locate one for you.

Use the defibrillator.

They are designed for untrained people. You open it up, and it will talk to you. Do what it says.

Even if you don’t feel confident giving CPR, use a defibrillator.

I have used a defibrillator. They are easy to use.

Use that defibrillator.

The defibrillator is not the best chance that the patient has of surviving: your using the defibrillator is the best chance that the patient has of surviving.

Conclusion

Providing first aid for someone in a serious situation doesn’t always work. Sometimes people die. In fact, in the case of a cardiac arrest (heart attack), the percentage of times that CPR is successful is low – even in a hospital setting, with professionals on hand. If you have tried, you’ve given them a chance. It is not your fault if the outcome isn’t perfect. But if you hadn’t tried, there was no chance.

Please respect and support professionals, as well. They are often busy and concerned, and may not have the time to thank you, but your help is appreciated. We are lucky, in our area, that the huge majority of EEAST ambulance personnel are very supportive of CFRs and others who help out in an emergency.

If this article has been interesting to you, and you are considering taking some training, then get to the end of the post, share it via social media(!), and then search online for something appropriate to you. There are many organisations who will provide training – some for free – and many opportunities for volunteering. You know that if a member of your family needed help, you would hope that somebody was capable and willing to provide it.

Final note – if you have been affected by anything in this article, please find some help, whether professional or just with friends. Many of the medical issues I’ve discussed are distressing, and self care is important (it’s one of the things that EEAST takes seriously for all its members, including its CFRs).


1 – a special adrenaline-administering device (don’t use somebody else’s – they’re calibrated pretty carefully to an individual).

2 – calling it an “accident” suggests it was no-one’s fault, when often, it really was.

3 – well, maybe a little bit.

4 – to shave hairy chests – no, really.

5 – to cut through clothing. And nipple chains, if required. Again, no, really.

6 – “Bandaids” for our US cousins.

7 – please check your local jurisdiction’s rules on this.

16 ways in which security folks are(n’t) like puppies

Following the phenomenal[1] success of my ground-breaking[2] article 16 ways in which users are(n’t) like kittens, I’ve decided to follow up with a yet more inciteful[3] article on security folks. I’m using the word “folks” to encompass all of the different types of security people whom “normal”[4] people think of as “those annoying people who are always going on about that security stuff, and always say ‘no’ when we want to do anything interesting, important, urgent or business-critical”. I think you’ll agree that “folks” is a more accessible shorthand term, and now that I’ve made it clear who we’re talking about, we can move away from that awkwardness to the important[5] issue at hand.

As with my previous article on cats, I’d like my readers to pretend that this is a carefully researched article, and not one that I hastily threw together at short notice because I got up a bit late today.

Note 1: in an attempt to make security folks seem a little bit useful and positive, I’ve sorted the answers so that the ones where security folks turn out actually to share some properties with puppies appear at the end. But I know that I’m not really fooling anyone.

Note 2: the picture (credit: Miriam Bursell) at the top of this article is of my lovely basset hound Sherlock, who’s well past being a puppy. But any excuse to post a picture of him is fair game in my book. Or on my blog.

Research findings

Hastily compiled table

Property | Security folks | Puppies
Completely understand and share your priorities | No | No
Everybody likes them | No | Yes
Generally fun to be around | No | Yes
Generally lovable | No | Yes
Feel just like a member of the family | No | Yes
Always seem very happy to see you | No | Yes
Are exactly who you want to see at the end of a long day | No | Yes
Get in the way a lot when you’re in a hurry | Yes | Yes
Make a lot of noise about things you don’t care about | Yes | Yes
Don’t seem to do much most of the time | Yes | Yes
Constantly need cleaning up after | Yes | Yes
Forget what you told them just 10 minutes ago | Yes | Yes
Seem to spend much of their waking hours eating or drinking | Yes | Yes
Wake you up at night to deal with imaginary attackers | Yes | Yes
Can turn bitey and aggressive for no obvious reason | Yes | Yes
Have tickly tummies | Yes[6] | Yes

1 – relatively.

2 – no, you’re right: this is just hype.

3 – this is almost impossible to prove, given quite how uninciteful the previous one was.

4 – i.e., non-security.

5 – well, let’s pretend.

6 – well, I know I do.

Why Chatham House Rulez for security

Security sometimes requires sharing – but not attribution

In June 1927, someone had a brilliant idea.  Or, at least, that’s when the idea was first codified, at a meeting of the Royal Institute of International Affairs at Chatham House in London.  The idea was this: all attendees of the meeting could quote comments made at the meeting, but they weren’t allowed to say who had made the comment.

This became known as the Chatham House Rule, and the most recent incarnation is defined thus:

When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.

This is brilliantly clever.  It allows at least two things:

  1. for the sharing of information which might be sensitive to a particular entity when associated with that entity, but which is still useful when applied without that attribution;
  2. for the sharing of views or opinions which, when associated with a particular person or organisation, might cause wider issues or problems.

The upshot of this is that if somebody (say, Person A) values the expertise, opinion and experience of another person (say, Person B), then they can share that other person’s views with people who may not know Person B, or whose views on Person B may be biased by their background or associations.  This is a form of transitive trust, and situations where transitive trust is made explicit are, in my opinion, to be lauded (such trust relationships are too often implicit, rather than explicit).

The Chatham House Rule and open source

What has this got to do with open source, though?  My answer is: a lot.

Security is one of those areas which can have an interesting relationship with open source.  I’m passionately devoted to the principle that open-ness is vital to security, but there are times when this is difficult.  The first is to do with data, and the second is to do with perceived expertise.

Why data is difficult

While we all (hopefully) want to ensure that all our security-related code is open source, the same cannot be said for data.  There is absolutely a place for open data – citizen-related data is the most obvious, e.g. bus timetables, town planning information – and there’s data that we’d like to be more open, but not if it can be traced to particular entities – aggregated health information is great, but people aren’t happy about their personal health records being exposed.  The same goes for financial data – aggregated information about people’s spending and saving habits is extremely useful, but I, for one, don’t want my bank records revealed to all and sundry.

Moving specifically to security, what about data such as the number of cyber-attacks – successful and unsuccessful – against companies?  The types that were most successful?  The techniques that were used to mitigate?  All of these are vastly useful to the wider community, and there’s a need to share them more widely.  We’re seeing some initiatives to allow this already, and aggregation of this data is really important.

There comes a time, however, when particular examples are needed.  And as soon as you have somebody stand up and say “This is what happened to us”, then they’re likely to be in trouble from a number of directions, which may include: their own organisation, their lawyers, their board, their customers and future attackers, who can use that information to their advantage.  This is where the Chatham House Rule can help: it allows experts to give their view and be listened to without so much danger from the parties listed above.

It also allows for other people to say “we hadn’t thought of that”, or “we’re not ready for that” or similar without putting their organisations – or their reputations – on the line.  Open source needs this, and there are times when those involved in open source security, in particular, need to be able to share the information they know in a way which doesn’t put their organisations in danger.

Why expertise is difficult

Another area of difficulty is expertise, or more specifically, trust in expertise.  Most organisations aim for a meritocratic approach – or say they do – at least within that organisation.  But the world is full of bias, particularly between organisations.  I may be biased against views held or expressed by a particular organisation, just because of their past history and my interactions with that company, but it is quite possible that there are views held and expressed by individuals from that company which, if separated from their attribution, I might take seriously.  I may be biased against a particular person, based on my previous interactions with him/her, or just on my underlying prejudices.  It only needs one person who does not hold my biases to represent those views, as long as they personally trust the organisation, or even just the person, expressing them, to allow me to process and value those views myself, gaining valuable insight from them.  The Chatham House Rule can allow that to happen.

In fact, the same goes for intra-organisation biases: maybe product management isn’t interested in the views of marketing, but what if there are important things to learn from within that department, that product management can’t hear because of that bias?  The Chatham House Rule allows an opportunity to get past that.

To return to open source, many contributors are employed by a particular organisation, and it can be very difficult for them to express opinions around open source when that organisation may not hold the same views, however carefully they try to separate themselves from the official line.  Even more important, in terms of security, it may very well be that they can bring insights which are relevant to a particular security issue which their company is not happy about being publicly known, but which could benefit one or more open source projects.  To be clear: I’m not talking, here, about exposing information which is specifically confidential, but about sharing information with the permission of the organisation, but within specific constraints.

More on open source

There are all sorts of biases within society, and open source is, alas, not without its own.  When a group of people gets to know each other well, however, it is often the case that members of that group can forge a respect for each other which goes beyond gender, age, academic expertise, sexuality, race or the like.  This is a perfect opportunity for meetings under the Chatham House Rule: it gives this group the chance to discuss and form opinions which can be represented to their peers – or the rest of the world – without having to worry so much about any prejudices or biases that might be aimed at particular members.

Finally – a note of caution

The Chatham House Rule provides a great opportunity to share expertise and knowledge, but there is also a danger that it can allow undue weight to be given to anecdotes.  Stories are a great way of imparting information, but without data to back them up, they are not as trustworthy as they might be.  The fact that the Chatham House Rule inhibits external attribution does not mean that due diligence should not be applied within such a meeting to ensure that information is backed up by data.

Are my messages safe? No, but…

“Are any of these messaging services secure?”

Today brought another story about insecurity of a messenger app, and by a brilliant coincidence, I’m listening to E.L.O.’s “Secret Messages” as I start to compose this post. This article isn’t, however, about my closet 70s musical tastes[1], but about the messages you send from your mobile phone, tablet or computer to friends, families and colleagues, and how secure they are.

There are loads of options out there for messaging services, with some of the better-known including WhatsApp, Facebook Messenger, Google Chat, Signal and Telegram. Then there’s good old SMS. First question first: do I use any of these myself? Absolutely. I also indulge in Facebook, LinkedIn and Twitter. Do I trust these services? Let’s get back to this question later.

A more pressing question might be: “are any of these messaging services secure?” It turns out that this is a really simple question to answer: of course they’re not. No service is “secure”: it’s a key principle of IT security that there is no “secure”. This may sound like a glib – and frankly unhelpful – answer, but it’s not supposed to be. Once you accept that there is no perfectly secure system, you’re forced to consider what you are trying to achieve, and what risks you’re willing to take. This is a recurring theme of this blog, so regular readers shouldn’t be surprised.

Most of the popular messaging services can be thought of as consisting of at least seven components. Let’s assume that Alice is sending a message from her phone to Bob’s phone. Here’s what the various components might look like:

  1. Alice’s messenger app
  2. Alice’s phone
  3. Communications channel Alice -> server
  4. Server
  5. Communications channel server -> Bob
  6. Bob’s phone
  7. Bob’s messenger app

Each of these is a possible attack surface: combined, they make up the attack surface for what we can think of as the Alice <-> Bob messaging system.

Let’s start in the middle, with the server. For Alice and Bob to be happy with the security of the system for their purposes, they must be happy that this server is sufficiently secure to cope with whatever risks they need to address. So, it may be that they trust that the server (which will be run, ultimately, by fallible and possibly subornable humans who also are subject to legal jurisdiction(s)) is not vulnerable. Not vulnerable to whom? Hacktivists? Criminal gangs? Commercial competitors? State actors? Legal warrants from the server’s jurisdiction? Legal warrants from Alice or Bob’s jurisdiction(s)? The likelihood of successful defence against each of these varies, and the risk posed to Alice and Bob by each is also different, and needs to be assessed, even if that assessment is “we can ignore this”.

Each of the other components is subject to similar questions. For the communication channels, we will assume that they’re encrypted, but we have to be sure that the cryptography and cryptographic protocols have been correctly implemented, and that all keys are appropriately protected by all parties. The messaging apps must be up to date, well designed and well implemented. Obviously, if they’re open source, you have a much, much better chance of being sure of the security of the software (never, ever use cryptography or protocols which have not been open sourced and peer reviewed: just don’t). The phones on which the software is running must also be uncompromised – not to mention protected by Alice and Bob from physical tampering and delivered new to them from the manufacturer with no vulnerabilities[2].

How sure are Alice and Bob of all of the answers to all of these questions? The answer, I would submit, is pretty much always going to be “not completely”. Does this mean that Alice and Bob should not use messaging services? Not necessarily. But it does mean that they should consider what messages they should exchange via any particular messaging service. They might be happy to arrange a surprise birthday party for a colleague, but not to exchange financial details of a business deal. They might be happy to schedule a trip to visit a Non-Governmental Organisation to discuss human rights, but not to talk about specific cases over the messaging service.
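If you wanted to make that sort of judgement slightly more explicit, a back-of-the-envelope Python sketch might look like this (every component score is a made-up illustration, not an assessment of any real service):

```python
# Trust scores, in [0, 1], for the seven components of a hypothetical service.
service = {
    "Alice's app": 0.8,
    "Alice's phone": 0.7,
    "channel: Alice -> server": 0.9,
    "server": 0.5,
    "channel: server -> Bob": 0.9,
    "Bob's phone": 0.6,
    "Bob's app": 0.8,
}

def path_trust(components):
    """The path is no more trustworthy than its least trusted component."""
    return min(components.values())

def ok_to_send(sensitivity, components):
    """sensitivity in [0, 1]: 0 = surprise party plans, 1 = specific case details."""
    return sensitivity <= path_trust(components)

print(ok_to_send(0.3, service))  # True  - birthday party planning
print(ok_to_send(0.8, service))  # False - financial details of a business deal
```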

This is the view that I take: I consider what information I’m happy to transfer over or store on messaging services and social media platforms. There are occasions where I may be happy to pass sensitive data across messaging services, but break the data up between different services (using “different channels” in the relevant parlance): using one service for a username and another for the associated password, for instance. I still need to be careful about shared components: the two phones in the example above might qualify, but I’ve reduced the shared attack surface, and therefore the risk. I’m actually more likely to require that the password is exchanged over a phone call, and if I’m feeling particularly paranoid, I’ll use a different phone to receive that call.

My advice, therefore, is this:

  1. Keep your devices and apps up to date;
  2. Evaluate the security of your various messaging service options;
  3. Consider the types of information that you’ll be transferring and/or storing;
  4. Think about the risks you’re willing to accept;
  5. Select the appropriate option on a case by case basis;
  6. Consider using separate channels where particularly sensitive data can be split for added security.

1 – I’m also partial to 1920’s Jazz and a bit of Bluegrass, as it happens.

2 – yeah, right.

Trust you? I can’t trust myself.

Cognitive biases are everywhere.

William Gibson’s book Virtual Light includes a bar which goes by the name of “Cognitive Dissidents”.  I noticed this last night when I was reading in bed, and it seemed apposite, because I wanted to write about cognitive bias, and the fact that I’d noticed it so strikingly was, I realised, an example of exactly that: in this case, The Frequency Illusion, or The Baader-Meinhof Effect.  Cognitive biases are everywhere, and there are far, far more of them than you might expect.

The problem is that we think of ourselves as rational beings, and it’s quite clear from decades – in some cases, centuries – of research that we’re anything but.  We’re very likely to tell ourselves that we’re rational, and it’s such a common fallacy that The Illusion of Validity (another cognitive bias) will help us believe it.  Cognitive biases are, according to Wikipedia, “systematic patterns of deviation from norm or rationality in judgment” or put maybe more simply, “our brains managing to think things which seem sensible, but aren’t.”[1]

The Wikipedia entry above gives lots of examples of cognitive bias – lots and lots of examples – and I’m far from being an expert in the field.  The more I think about risk and how we consider risk, however, the more I’m convinced that we – security professionals and those with whom we work – need to have a better understanding of our own cognitive biases and those of the people around us.  We like to believe that we make decisions and recommendations rationally, but it’s clear from the study of cognitive bias that:

  1. we generally don’t; and
  2. even if we do, we shouldn’t expect those to whom we present them to consider them entirely rationally.

I should be clear, before we continue, that there are opportunities for abuse here.  There are techniques beloved of advertisers and the media to manipulate our thinking to their ends which we could use to our advantage and to try to manipulate others.  One example is the The Framing Effect.  If you want your management not to fund a new anti-virus product because you have other ideas for the earmarked funding, you might say:

  • “Our current product is 80% effective!”

Whereas if you do want them to fund it, you might say:

  • “Our current product is 20% ineffective!”

People react in different ways, depending on how the same information is presented, and the way the two statements above are framed aims to manipulate your listeners to the outcome you want.  So, don’t do this, and more important, look for vendors[2] who are doing this, and call them out on it.  Here, then, are three of the more obvious cognitive biases that you may come across:

  • Irrational escalation or Sunk cost fallacy – this is the tendency for people to keep throwing money or resources at a project, vendor or product when it’s clear that it’s no longer worth it, with the rationale that to stop spending money (or resources) now would waste what has already been spent – when it’s actually already gone.  This often comes over as misplaced pride, or people just not wanting to let go of a pet project because they’ve become attached to it, but it’s really dangerous for security, because if something clearly isn’t effective, we should be throwing it out, not sending good money after bad.
  • Normalcy bias – this is the refusal to address a risk because it’s never happened before, and is an interesting one in security, for the simple reason that so many products and vendors are trying to make us do exactly that.  What needs to happen here is that a good risk analysis needs to be performed, and then measures put in place to deal with those which are actually high priority, not those which may not happen, or which don’t seem likely at first glance.
  • Observer-expectancy effect – this is when people who are looking for particular results find them, because they have (consciously or unconsciously) misused the data.  This is common in situations such as those where there is a belief that a particular attack or threat is likely, and the data available (log files, for instance) are used in a way which confirms this expectation, rather than analysed and presented in ways which are more neutral.

I intend to address more specific cognitive biases in future articles, tying them even more closely to security concerns – if you have any particular examples or war stories, I’d love to hear them.


1 – my words

2 – or, I suppose, underhand colleagues…