Humans and (being bad at) trust

Why “signing parties” were never a good idea.

I went to a party recently, and it reminded me of quite how bad humans are at trust. It was a work “mixer”: an attempt to get people who didn’t know each other well to chat and exchange some information. We were each given two cards to hang around our necks: one on which to write our own name, and the other on which we were supposed to collect the initials of those to whom we spoke (in their own hand). At the end of the event, the plan was to hand out rewards whose value was related to the number of initials collected. Pens/markers were provided.

I gamed the system by standing by the entrance, giving out the cards, controlling the markers and ensuring that everybody signed my card, hence ending up with easily the largest number of initials of anyone at the party. But that’s not the point. Somebody – a number of people, in fact – pointed out the similarities between this and “key signing parties”, and that got me thinking. For those of you not old enough – or not security-geeky enough – to have come across these, they were events, popular in the late nineties and the early part of the first decade of the twenty-first century[1], where people would get together, typically at a tech show, and sign each other’s PGP keys. PGP keys are an interesting idea whereby you maintain a public-private key pair which you use to sign emails, assert your identity, etc., in the online world. In order for this to work, however, you need to establish that you are who you say you are: you need to convince someone of this fact.

There are two easy ways to do this:

  1. meet someone IRL[2], get them to validate your public key, and sign it with theirs;
  2. have someone who knows the person you met in step 1 agree that they can probably trust you, because the person in step 1 did, and they trust that person.

This is a form of trust based on reputation, and it turns out that it is a terrible model for trust. Let’s talk about some of the reasons it doesn’t work. There are four main ones:

  • context
  • decay
  • transitive trust
  • peer pressure.

Let’s evaluate these briefly.

Context

I can’t emphasise this enough: trust is always, always contextual (see “What is trust?” for a quick primer). When people signed other people’s key-pairs, all they should really have been saying was “I believe that the identity of this person is as stated”, but signatures and encryption based on these keys was (and is) frequently misused to make statements about, or claim access to, capabilities that were not necessarily related to identity.
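To make this concrete, here’s a minimal sketch in Python of what an explicitly contextual endorsement might look like (the class and the context labels are entirely my own invention, purely for illustration): a signature vouching only for identity simply can’t be presented as backing an authorisation decision.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Endorsement:
    """A signature over someone's key, made within an explicit context."""
    signer: str
    subject_key: str
    context: str  # e.g. "identity", "code-signing", "authorisation"

def usable_for(endorsement: Endorsement, context: str) -> bool:
    # An endorsement means something only within the context in which it
    # was made: "I vouch for this identity" says nothing about what the
    # key's owner should be allowed to do.
    return endorsement.context == context

sig = Endorsement(signer="alice", subject_key="bob-pubkey", context="identity")
print(usable_for(sig, "identity"))       # True
print(usable_for(sig, "authorisation"))  # False: wrong context
```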

I lay some of the fault for this at the door of US alcohol consumption policy. Many (US) Americans use their driving licence/license as a form of authorisation: I am over this age, and am therefore entitled to purchase alcohol. It was designed to prove that they were authorised to drive, and nothing more than that, but you can now get a US driving licence to prove your age even if you can’t drive, and it can be used, for instance, as security identification for getting on aircraft at airports. This is crazy, but partly explains why there is such confusion between identification, authentication and authorisation.

Decay

Trust, as I’ve noted before in many articles, decays. Just because I trust you now (within a particular context) doesn’t mean that I should trust you in the future (in that or any other context). Mechanisms exist within the PGP framework to expire keys, but it was (I believe) typical for someone to re-sign a new set of keys just because they’d signed the previous set. If the keys were only being used for identity, then that’s probably OK – most people rarely change their identity, after all – but, as explained above, these key pairs were often used more widely.
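As a sketch of what taking decay seriously might look like (again Python, and again entirely illustrative), a trust record can carry an expiry, and renewal can be made to require a fresh evaluation rather than a rubber stamp:

```python
from datetime import datetime, timedelta, timezone

class TimedTrust:
    """A trust assertion that decays: valid only within a window, and
    renewable only after a fresh evaluation, not an automatic re-sign."""

    def __init__(self, subject: str, context: str, lifetime: timedelta):
        self.subject = subject
        self.context = context
        self.expires = datetime.now(timezone.utc) + lifetime

    def is_current(self) -> bool:
        return datetime.now(timezone.utc) < self.expires

    def renew(self, re_evaluated: bool, lifetime: timedelta) -> None:
        # Refuse the anti-pattern described above: re-signing a new set
        # of keys just because you signed the previous set.
        if not re_evaluated:
            raise ValueError("re-evaluate the relationship before renewing")
        self.expires = datetime.now(timezone.utc) + lifetime

t = TimedTrust("bob-pubkey", "identity", timedelta(days=365))
print(t.is_current())  # True, for a year: after that, trust has decayed
```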

Transitive trust

This is the whole “trusting someone because I trust you” problem. Again, if this were only about identity, then I’d be less worried, but given people’s lack of ability to specify context, and their equal inability to communicate that to others, the “fuzziness” of the trust relationships being expressed was only going to increase with the level of transitiveness, reducing the efficacy of the system as a whole.
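One way to see why this matters is a toy model of how confidence collapses along a chain of endorsements. The numbers below are plucked from the air – the point is the shape of the curve, not the values:

```python
def chain_confidence(hops, attenuation=0.8):
    """Toy model: each hop contributes its own confidence, multiplied by
    an attenuation factor for the context lost in relaying the trust."""
    confidence = 1.0
    for direct_confidence in hops:
        confidence *= direct_confidence * attenuation
    return confidence

# Each party is individually "pretty sure" (0.9) about the next...
print(chain_confidence([0.9]))            # one hop:    ~0.72
print(chain_confidence([0.9, 0.9, 0.9]))  # three hops: ~0.37
# ...but three hops later, the overall confidence is little better
# than a coin toss.
```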

Peer pressure

Honestly, this occurred to me due to my gaming of the system, as described in the second paragraph at the top of this article. I remember meeting people at events and agreeing to endorse their key-pairs basically because everybody else was doing it. I didn’t really know them, though (I hope) I had at least heard of them (“oh, you’re Denny’s friend, I think he mentioned you”), and I certainly shouldn’t have been signing their key-pairs. I am certain that I was not the only person to fall into this trap, and it’s a trap because humans are generally social animals[3], and they like to please others. There was ample opportunity for people to game the system much more cynically than I did at the party, and I’d be surprised if this didn’t happen from time to time.

Stepping back a bit

To be fair, it is possible to run a model like this properly. It’s possible to avoid all of these by insisting on proper contextual trust (with multiple keys for different contexts), by re-evaluating trust relationships on a regular basis, by being very careful about trusting people just due to their trusting someone else (or refusing to do so at all), and by refusing just to agree to trust someone because you’ve met them and they “seem nice”. But I’m not aware of anyone – anyone – who kept to these rules, and it’s why I gave up on this trust model over a decade ago. I suspect that I’m going to get some angry comments from people who assert that they used (and use) the system properly, and I’m sure that there are people out there who did and do: but as a widespread system, it was only going to work if the large majority of all users treated it correctly, and given human nature and failings, that never really happened.

I’m also not suggesting that we have many better models – but we really, really need to start looking for some, as this is important, and difficult stuff.


1 – I refuse to refer to these years as the “aughts”.

2 – In Real Life – this used to be an actual distinction from being online.

3 – even a large enough percentage of IT folks to make this a problem.

Breaking the security chain(s)

Your environment is n-dimensional – your trust must be, too.

One of the security principles by which we[1] live[2] is that security is only as strong as the weakest link in a chain.  That link is variously identified as:

  • your employees
  • external threat actors
  • all humans
  • lack of training
  • cryptography
  • logging
  • anti-virus
  • auditing capabilities
  • the development lifecycle
  • waterfall methodology
  • passwords
  • any other authentication mechanisms
  • electrical wiring
  • hurricanes
  • earthquakes
  • and pestilence.

Actually, I don’t think I’ve ever seen the last one mentioned, but it’s only a matter of time.  However, very rarely does anybody bother to identify exactly what the chain is that is being broken when the weakest link splinters into a thousand pieces.

There are a number of candidates that spring to mind:

  1. your application flow.  This is rather an old-fashioned way of thinking of applications – a program is started, goes through a set of actions, and then terminates – but, thinking more broadly about it, any action which causes an application to behave in unexpected or unintended ways is a possible security flaw, whether that is a monolithic application, a set of microservices or an app on a mobile device.
  2. your software stack.  Depending on how you think about your stack, there are likely to be at least five, maybe a dozen, or maybe even scores of layers in your software stack (for an example with just a few simple layers, see Announcing Enarx).  However you think about it, you need to trust all of those layers to do what you expect them to do.  If one of them is compromised, malicious, or just poorly implemented or maintained, then you have a security issue.
  3. your hardware stack.  There was a time, barely five years ago, when most people (excepting us[1], of course) assumed that hardware did what we thought it was supposed to do, all of the time.  In fact, we should all have known better, given the Clipper Chip and the Pentium bug (to name just two famous examples), but with Spectre, Meltdown and a growing realisation that hardware isn’t as trustworthy as was previously thought, everybody needs to decide exactly what security they can trust in which components.
  4. your operational processes.  You can have the best software and hardware in the world, but if you don’t maintain it and operate it properly, it’s going to be full of holes.  Failing to invest in operations, monitoring, logging, auditing and the rest leaves you wide open.
  5. your supply chain. There’s a growing understanding in the industry that our software and hardware supply chains are possible points of failure[3].  Whether your vendor is entirely proprietary (in which case their security is largely opaque) or open source (in which case you’ve got a chance to be able to see what’s going on), errors or maliciousness in the supply chain can scupper any hopes you had of security for your deployment.
  6. your software and hardware lifecycle.  Developing software?  Patching it?  Upgrading hardware (or software)?  Unit testing?  We all know that a failure to manage the lifecycle of our environment can lead to security problems.

The point I’m trying to make above is that there’s no single chain.  Your environment is n-dimensional – your trust must be, too.  If you don’t think about all of these contexts – and there will be more beyond the half-dozen that I’ve just noted – then you can’t have a good chance of managing security in your environment.  I honestly don’t think that there’s any single weakest link in the chain, because there are always multiple chains in play: our job is to think about as many of them as possible, and then manage and mitigate the risks associated with each.
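As a toy sketch of the point (the chains, links and scores are all invented for illustration): instead of hunting for the single weakest link, you identify the weakest link in each of the chains you know about, and work from there.

```python
# Toy sketch: there is no single chain, so find the weakest link in
# *each* of them. All chains, links and scores are invented.
chains = {
    "application flow": {"input validation": 0.6, "session handling": 0.8},
    "software stack":   {"kernel": 0.9, "TLS library": 0.7, "web app": 0.5},
    "supply chain":     {"vendor audit": 0.5, "signed packages": 0.8},
    "operations":       {"patching": 0.4, "logging": 0.8, "monitoring": 0.6},
}

for name, links in chains.items():
    weakest, score = min(links.items(), key=lambda link: link[1])
    print(f"{name}: weakest link is {weakest!r} (score {score})")
```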


1 – the mythical “IT security community”.

2 – you’re right: “which we live by” would sound much more natural.

3 – and a growing industry to try to provide fixes.

“Unlawful, void, and of no effect”

The news from the UK is amazing today: the Supreme Court has ruled that the Prime Minister has failed to “prorogue” Parliament – in other words, the members of the House of Commons and the House of Lords are still in session. The words in the title come from the judgment that they have just handed down.

I’m travelling this week, and wasn’t expecting to write a post today, but this triggered a thought in me: what provisions are in place in your organisation to cope with abuses of power and possible illegal actions by managers and executives?

Having a whistle-blowing policy and an independent appeals process is vital. This is true for all employees, but specific rules for employees who are involved in areas such as compliance and implementations involving regulatory requirements are particularly important. Robust procedures protect not only an employee who finds themself in a difficult position but, in the long view, the organisation itself. They can also act as a deterrent to managers and executives considering actions which might, in the absence of such procedures, go unreported.

Such procedures are not enough on their own – they fall into the category of “necessary, but not sufficient” – and a culture of ethical probity also needs to be encouraged. But without such a set of procedures, your organisation is at real risk.

Turtles – and chains of trust

There’s a story about turtles that I want to tell.

One of the things that confuses Brits is that many Americans[1] don’t know the difference between tortoises and turtles[2], whereas we (who have no species of either type native to our shores) seem to have no problem differentiating them[3].  This is the week when Americans[1] like to bash us Brits over the little revolution they had a couple of centuries ago, so I don’t feel too bad about giving them a little hassle about this.

As it happens, there’s a story about turtles that I want to tell. It’s important to security folks, to the extent that you may hear a security person just say “turtles” to a colleague in criticism of a particular scheme, which will just elicit a nod of agreement: they don’t like it. There are multiple versions of this story[4]: here’s the one I tell:

A learned gentleman[5] is giving a public lecture.  He has talked about the main tenets of modern science, such as the atomic model, evolution and cosmology.  At the end of the lecture, an elderly lady comes up to him.

“Young man,” she says.

“Yes,” says he.

“That was a very interesting lecture,” she continues.

“I’m glad you enjoyed it,” he replies.

“You are, however, completely wrong.”

“Really?” he says, somewhat taken aback.

“Yes.  All that rubbish about the Earth hovering in space, circling the sun.  Everybody knows that the Earth sits on the back of a turtle.”

The lecturer, spotting a hole in her logic, replies, “But madam, what does the turtle sit on?”

The elderly lady looks at him with a look of disdain.  “What a ridiculous question!  It’s turtles all the way down, of course!”

The problem with the elderly lady’s assertion, of course, is one of infinite regression: there has to be something at the bottom. The reason that this is interesting to security folks is that they know that systems need to have a “bottom turtle” at some point.  If you are to trust a system, it needs to sit on something: this is typically called the “TCB”, or Trusted Compute Base, and, in most cases, needs to be rooted in hardware.  Even saying “rooted in hardware” is not enough: exactly what hardware you trust, and where, depends on a number of factors, including what you know about your hardware supply chain; what you feel about motherboards; what your security posture is; how realistic it is that State Actors might try to attack you; how deeply you want to delve into the hardware stack; and, ultimately, just how paranoid you are.

Principles for chains of trust

When you are building a system in which you need to have some trust, you will typically talk about the chain of trust, from the bottom up.  This idea of a chain of trust is very important, and very pervasive, within security.  It allows for some important principles (there’s a minimal sketch of such a chain after the list):

  • there has to be a root of trust somewhere (the “bottom turtle”);
  • the chain is only as strong as its weakest link (and attackers will find it);
  • be explicit about each of the links in the chain;
  • realise that some of the links in the chain may change (e.g. if software is updated);
  • be aware that once you have lost trust in a chain, you need to rebuild it from at least the layer below the one in which you have lost trust;
  • simple chains (with no “joins” with other chains of trust) are much, much simpler to validate and monitor than more complex ones.
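Here’s a minimal sketch of such a chain, using the third-party Python `cryptography` package (the three-link structure and all the names are mine, for illustration): each link signs the public key of the next, and verification is explicit about the root – the bottom turtle – and fails cleanly at whichever link breaks.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def raw(public_key) -> bytes:
    return public_key.public_bytes(Encoding.Raw, PublicFormat.Raw)

# The bottom turtle, plus two links above it.
root = Ed25519PrivateKey.generate()
intermediate = Ed25519PrivateKey.generate()
leaf = Ed25519PrivateKey.generate()

# Each link in the chain is (public key, signature by the link below).
chain = [
    (raw(intermediate.public_key()), root.sign(raw(intermediate.public_key()))),
    (raw(leaf.public_key()), intermediate.sign(raw(leaf.public_key()))),
]

def verify_chain(root_public_key, chain) -> bool:
    current = root_public_key  # be explicit about the root of trust
    for public_bytes, signature in chain:
        try:
            current.verify(signature, public_bytes)
        except InvalidSignature:
            return False  # trust lost: rebuild from the layer below this one
        current = Ed25519PublicKey.from_public_bytes(public_bytes)
    return True

print(verify_chain(root.public_key(), chain))  # True
```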

Software/hardware systems are not the only place in which you will encounter chains of trust: in fact, you come across them every time you make a TLS[6] connection to a web site (you know: that green padlock icon in the address bar).  In this case, there’s a chain (sometimes short, sometimes long) of certificates from a “root CA” (a trusted party that your browser knows about) down to the organisation (or department or sub-organisation) running the web site to which you’re connecting.  Assuming that each link in the chain trusts the next link to be who they say they are, the chain of signatures (turned into a certificate) can be checked, to give an indication, at least, that the site you’re visiting isn’t a spoof one by somebody pretending to be, for example, your bank.  In this case, the bottom turtle is the root CA[7], and its manifestation in the chain of trust is its root certificate.
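You can watch this particular chain of trust being checked with nothing more than Python’s standard library (the host below is just an example): the handshake only completes if the server’s certificate chains up to a root certificate in the local trust store.

```python
import socket
import ssl

host = "example.com"  # any TLS-enabled site will do
context = ssl.create_default_context()  # loads the system's root CAs

with socket.create_connection((host, 443)) as sock:
    # wrap_socket() performs the handshake, walking the certificate
    # chain up to a root CA the local trust store knows about; if the
    # chain doesn't check out, this raises ssl.SSLCertVerificationError.
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("issued to:", dict(item[0] for item in cert["subject"]))
        print("issued by:", dict(item[0] for item in cert["issuer"]))
```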

And chains of trust aren’t restricted to the world of IT, either: supply chains care a lot about chains of trust.  Can you be sure that the diamond in the ring you bought from your local jewellery store, who got it from an artisan goldsmith, who got it from a national diamond chain, did not originally come from a “blood diamond” nation?  Can you be sure that the replacement part for your car, which you got from your local independent dealership, is an original part, and can the manufacturer be sure of the quality of the materials they used?  Blockchains are offering some interesting ways to help track these sorts of supply chains, and can even be applied to supply chains in software.

Chains of trust are everywhere we look.  Some are short, and some are long.  In most cases, there will be a need to employ transitive trust – I need to believe that whoever created my browser checked the root CA, just as you need to believe that your local dealership verified that the replacement part came from the right place – because the number of links that we can verify ourselves is typically low.  This may be due to a variety of factors, including time, expertise and visibility.  But the more we are aware of the fact that there is a chain of trust in any particular situation, the more we can make conscious decisions about the amount of trust we should put in it, rather than making assumptions about the safety, security or validation of something we are buying or using.


1 – citizens of the US of A.

2 – have a look on a stock photography site like Pixabay if you don’t believe me.

3 – tortoises are land-based, turtles are aquatic, I believe.

4 – Wikipedia has a good article explaining both the concept and the story’s etymology.

5 – the genders of the protagonists are typically as I tell, which tells you a lot about the historical context, I’m afraid.

6 – this used to be “SSL”, but if you’re still using SSL, you’re in trouble: it’s got lots of holes in it!

7 – or is it?  You could argue that the HSM that (hopefully) houses the root CA, or the processes that protect it, could be considered the bottom turtle.  For the purposes of this discussion, however, the extent of the “system” is the certificate chain and its signers.

Trust & choosing open source

Your impact on open source can be equal to that of others.

A long time ago, in a standards body far, far away, I was involved in drafting a document about trust and security. That document rejoices in the name ETSI GS NFV-SEC 003: Network Functions Virtualisation (NFV); NFV Security; Security and Trust Guidance[1], and section 5.1.6.3[2] talks about “Transitive trust”.  Convoluted and lengthy as the document is, I’m very proud of it[3], and I think it tackles a number of very important issues including (unsurprisingly, given the title) a number of issues around trust.  It defines transitive trust thus:

“Transitive trust is the decision by an entity A to trust entity B because entity C trusts it.”

It goes on to disambiguate transitive trust from delegated trust, where C knows about the trust relationship.
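The distinction is easy to miss, so here’s a minimal sketch of it (all names and structures are mine, purely illustrative): in the transitive case, C’s trust in B is simply observed and reused by A, without C’s knowledge; in the delegated case, C knowingly issues a statement for A to rely on.

```python
# All names and structures are illustrative.
trust_store = {"C": {"B"}}  # C trusts B

def transitive_trust(truster: str, subject: str, via: str) -> bool:
    # Transitive: A decides to trust B purely because 'via' (C) trusts B.
    # C is not consulted, and may never know.
    return subject in trust_store.get(via, set())

delegations = set()

def delegate(issuer: str, relying_party: str, subject: str) -> None:
    # Delegated: C knowingly issues a statement to A about B.
    delegations.add((issuer, relying_party, subject))

print(transitive_trust("A", "B", via="C"))  # True - and C never knew
delegate("C", "A", "B")
print(("C", "A", "B") in delegations)       # True - C is aware this time
```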

At no point in the document does it mention open source software.  To be fair, we were trying to be even-handed and show no favour towards any type of software or vendors – many of the companies represented on the standards body were focused on proprietary software – and I wasn’t even working for Red Hat at the time.

My move to Red Hat, and, as it happens, generally away from the world of standards, has led me to think more about open source.  It’s also led me to think more about trust, and how people decide whether or not to use open source software in their businesses, organisations and enterprises.  I’ve written, in particular, about how, although open source software is not ipso facto more secure than proprietary software, the chances of it being more secure, or made more secure, are higher (in Disbelieving the many eyes hypothesis).

What has this to do with trust, specifically transitive trust?  Well, I’ve been doing more thinking about how open source and trust are linked together, and distributed trust is a big part of it.  Distributed trust and blockchain are often talked about in the same breath, and I’m glad, because I think that all too often we fall into the trap of ignoring the fact that there are definitely trust relationships associated with blockchain – they are just often implicit, rather than well-defined.

What I’m interested in here, though, is the distributed, transitive trust as a way of choosing whether or not to use open source software.  This is, I think, true not only when talking about non-functional properties such as the security of open source but also when talking about the software itself.  What are we doing when we say “I trust open source software”?  We are making a determination that enough of the people who have written and tested it have similar requirements to mine, and that their expertise, combined, is such that the risk to my using the software is acceptable.

There’s actually a lot going on here, some of which is very interesting:

  • we are trusting architects and designers to design software to meet our use cases and requirements;
  • we are trusting developers to implement code well, to those designs;
  • we are trusting developers to review each other’s code;
  • we are trusting documentation folks to document the software correctly;
  • we are trusting testers to write, run and check tests which are appropriate to my use cases;
  • we are trusting those who deploy the code to run it in ways which are similar to my use cases;
  • we are trusting those who deploy the code to report bugs;
  • we are trusting those who receive bug reports to fix them as expected.

There’s more, of course, but that’s definitely enough to get us going.  Of course, when we choose to use proprietary software, we’re trusting people to do all of that, but in this case, the trust relationship is much clearer, and much tighter: if I don’t get what I expect, I can choose another vendor, or work with the original vendor to get what I want.

In the case of open source software, it’s all more nebulous: I may be able to identify at least some of the entities involved (designers, software engineers and testers, for example), but the amount of power that I as a consumer of the software have over their work is likely to be low.  There’s a weird almost-paradox here, though: you can argue that for proprietary software vendors, my power over the direction of the software is higher (I’m paying them or not paying them), but my direct visibility into what actually goes on, and my ability to ensure that I get what I want is reduced when compared to the open source case.

That’s because, for open source, I can be any of the entities outlined above.  I – or those in my organisation – can be architect, designer, document writer, tester, and certainly deployer and bug reporter.  When you realise that your impact on open source can be equal to that of others, the distributed trust becomes less transitive.  You understand that you have a say – equal to that of all the other entities – in the creation, maintenance, requirements and quality of the software which you are running, and then you become part of a network of trust relationships which are distributed, but at less of a remove than that which you’ll experience when buying proprietary software.

Why, then, would anybody buy or license open source software from a vendor?  Because that way, you can address other risks – around support, patching, training, etc. – whilst still enjoying the benefits of the distributed trust network that I’ve outlined above.  There’s a place for those who consume directly from the source, but it doesn’t meet the risk appetite of all software consumers – including those who are involved in the open source community themselves.

Trust is a complex issue, and the ways in which we trust other things and other people is complex, too (you’ll find a bit of an introduction in Of different types of trust), but I think it’s desperately important that we examine and try to understand the sorts of decisions we make, and why we make them, in order to allow us to make informed choices around risk.


1 – if you’ve not been involved in standards creation, this may fill you with horror, but if you have been so involved, this sort of title probably feels normal.  You may need help.

2 – see 1.

3 – I was one of two “rapporteurs”, or editors of the document, and wrote a significant part of it, particularly the sections around trust.

Of different types of trust

What is doing the trusting, and what does the word actually even mean?

As you may have noticed if you regularly read this blog, it’s not uncommon for me to talk about trust.  In fact, one of the earliest articles that I posted – over two years ago, now – was entitled “What is trust?”.  I started thinking about this topic seriously nearly twenty years ago, specifically when thinking about peer to peer systems, and how they might establish trust relationships, and my interest has continued since, with a particular fillip during my time on the Security Working Group for ETSI NFV[1], where we had some very particular issues that we wanted to explore and I had the opportunity to follow some very interesting lines of thought.  More recently, I introduced Enarx, whose main starting point is that we want to reduce the number of trust relationships that you need to manage when you deploy software.

It was around the time of the announcement that I realised quite how much of my working life I’ve spent thinking and talking about trust, and:

  • how rarely most other people seem to have done the same;
  • how little literature there is on the subject; and
  • how interested people often are to talk about it when it comes up in a professional setting.

I’m going to clarify the middle bullet point in a minute, but let me get to my point first, which is this: I want to do a lot more talking about trust via this blog, with the possible intention of writing a book[2] on the subject.

Here’s the problem, though.  When you use the word trust, people think that they know what you mean.  It turns out that they almost never do.  Let’s try to tease out some of the reasons for that by starting with four fairly innocuous, simple-looking statements:

  1. I trust my brother and my sister.
  2. I trust my bank.
  3. My bank trusts its IT systems.
  4. My bank’s IT systems trust each other.

When you make four statements like this, it quickly becomes clear that something different is going on in each case.  I stand by my definition of trust and the three corollaries, as expressed in “What is trust?”.  I’ll restate them here in case you can’t be bothered to follow the link (and sketch them in code after the list):

  • “Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation.”
  • My first corollary: “Trust is always contextual.”
  • My second corollary: “One of the contexts for trust is always time.”
  • My third corollary: “Trust relationships are not symmetrical.”
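The definition and its corollaries translate quite directly into a data structure. Here’s a sketch in Python (the field names are mine): the relationship is directional, carries its context explicitly, and expires.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class TrustRelationship:
    """Trust per the definition above: an assurance held by one entity
    that another will perform particular actions to an expectation."""
    truster: str           # who holds the assurance...
    trustee: str           # ...about whom (not symmetrical: corollary 3)
    context: str           # corollary 1: trust is always contextual
    expectation: str       # the particular actions expected
    valid_until: datetime  # corollary 2: one context is always time

me_to_bank = TrustRelationship(
    truster="me",
    trustee="my-bank",
    context="current accounts",
    expectation="hold my money and return it on demand",
    valid_until=datetime(2026, 1, 1),
)
# Note that this implies no TrustRelationship(truster="my-bank",
# trustee="me", ...): the bank's trust in me is a separate relationship,
# with its own context, expectation and expiry.
```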

These all hold true for each of the statements above – although they may not be self-evident in the rather bald way that I’ve put them.  What’s more germane to the point I want to make today, however, and hopefully obvious to you, dear reader[4], is that the word “trust” signifies something very different in each of the four statements.

  • Case 1 – my trusting my brother and sister.  This is about trust between individual humans – specifically my trust relationship to my brother, and my trust relationship to my sister.
  • Case 2 – my trusting my bank.  This is about trust between an individual and an organisation: specifically a legal entity with particular services and structure.
  • Case 3 – the bank trusting its IT systems.  This is about an organisation trusting IT systems, and it suddenly feels like we’ve moved into a very different place from the initial two cases.  I would argue that there’s a huge difference between the first and second case as well, actually, but we are often lulled into a false sense of equivalence because when we interact with a bank, it’s staffed by people, and also has many of the legal protections afforded to an individual[5]. There are still humans in this case, though, in that one assumes that it is the intention of certain humans who represent the bank to have a trust relationship with certain IT systems.
  • Case 4 – the IT systems trusting each other.  We’re really not in Kansas anymore with this statement[6].  There are no humans involved in this set of trust relationships, unless you’re attributing agency to specific systems, and if so, which? What, then, is doing the trusting, and what does the word actually even mean?

It’s clear, then, that we can’t just apply the same word, “trust” to all of these different contexts and assume that it means the same thing in each case.  We need to differentiate between them.

I stated, above, that I intended to clarify my statement about the lack of literature around trust.  Actually, there’s lots and lots of literature around trust, but it deals almost exclusively with cases 1 and 2 above.  This is all well and good, but we spend so much time talking about trust with regards to systems (IT or computer systems) that we deserve, as a community, some clarity about what we mean, what assumptions we’re making, and what the ramifications of those assumptions are.

That, then, is my mission.  It’s certainly not going to be the only thing that I write about on this blog, but when I do write about trust, I’m going to try to set out my stall and add some better definition and clarification to what I – and we – are talking about.


0 – apropos of nothing in particular, I often use pixabay for my images.  This is one of the suggestions if you search on “trust”, but what exactly is going on here?  The child is trusting the squirrel thing to do what?  Not eat its nose?  Not stick its claws up its left nostril?  I mean, really?

1 – ETSI is a telco standards body, NFV is “Network Function Virtualisation”.

2 – which probably won’t just consist of a whole bunch of these articles in a random order, with the footnotes taken out[3].

3 – because, if nothing else, you know that I’m bound to keep the footnotes in.

4 – I always hope that there’s actually more than one of you, but maybe it’s just me, the solipsist, writing for a world conjured by my own brain.

5 – or it may do, depending on your jurisdiction.

6 – I think I’ve only been to Kansas once, actually.

Are my messages safe? No, but…

“Are any of these messaging services secure?”

Today brought another story about the insecurity of a messenger app, and by a brilliant coincidence, I’m listening to E.L.O.’s “Secret Messages” as I start to compose this post. This article isn’t, however, about my closet 70s musical tastes[1], but about the messages you send from your mobile phone, tablet or computer to friends, families and colleagues, and how secure they are.

There are loads of options out there for messaging services, with some of the better-known including WhatsApp, Facebook Messenger, Google Chat, Signal and Telegram. Then there’s good old SMS. First question first: do I use any of these myself? Absolutely. I also indulge in Facebook, LinkedIn and Twitter. Do I trust these services? Let’s get back to this question later.

A more pressing question might be: “are any of these messaging services secure?” It turns out that this is a really simple question to answer: of course they’re not. No service is “secure”: it’s a key principle of IT security that there is no “secure”. This may sound like a glib – and frankly unhelpful – answer, but it’s not supposed to be. Once you accept that there is no perfectly secure system, you’re forced to consider what you are trying to achieve, and what risks you’re willing to take. This is a recurring theme of this blog, so regular readers shouldn’t be surprised.

Most of the popular messaging services can be thought of as consisting of at least seven components. Let’s assume that Alice is sending a message from her phone to Bob’s phone. Here’s what the various components might look like:

  1. Alice’s messenger app
  2. Alice’s phone
  3. Communications channel Alice -> server
  4. Server
  5. Communications channel server -> Bob
  6. Bob’s phone
  7. Bob’s messenger app

Each of these is a possible attack surface: combined, they make up the attack surface for what we can think of as the Alice <-> Bob messaging system.

Let’s start in the middle, with the server. For Alice and Bob to be happy with the security of the system for their purposes, they must be happy that this server is sufficiently secure to cope with whatever risks they need to address. So, it may be that they trust that the server (which will be run, ultimately, by fallible and possibly subornable humans who are also subject to legal jurisdiction(s)) is not vulnerable. Not vulnerable to whom? Hacktivists? Criminal gangs? Commercial competitors? State actors? Legal warrants from the server’s jurisdiction? Legal warrants from Alice or Bob’s jurisdiction(s)? The likelihood of successful defence against each of these varies, and the risk posed to Alice and Bob by each is also different, and needs to be assessed, even if that assessment is “we can ignore this”.
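One way to force yourself to do that assessment is to write it down, threat actor by threat actor. Here’s a toy sketch – every number in it is invented, and real risk assessment is far richer than one multiplication – but it makes the shape of the exercise clear:

```python
# Toy risk assessment for the server component. For each threat actor,
# pair an (invented) chance of successful defence with an (invented)
# impact on Alice and Bob should that defence fail.
threats = {
    "hacktivists":            {"defence": 0.9, "impact": 2},
    "criminal gangs":         {"defence": 0.8, "impact": 6},
    "commercial competitors": {"defence": 0.8, "impact": 4},
    "state actors":           {"defence": 0.3, "impact": 9},
    "legal warrants":         {"defence": 0.0, "impact": 7},  # servers comply
}

def residual_risk(threat: dict) -> float:
    return (1 - threat["defence"]) * threat["impact"]

for actor in sorted(threats, key=lambda a: residual_risk(threats[a]),
                    reverse=True):
    risk = residual_risk(threats[actor])
    verdict = "ignore" if risk < 1.0 else "assess further"
    print(f"{actor:22} residual risk ~{risk:.1f} -> {verdict}")
```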

Each of the other components is subject to similar questions. For the communication channels, we will assume that they’re encrypted, but we have to be sure that the cryptography and cryptographic protocols have been correctly implemented, and that all keys are appropriately protected by all parties. The messaging apps must be up to date, well designed and well implemented. Obviously, if they’re open source, you have a much, much better chance of being sure of the security of both pieces of software (never, ever use cryptography or protocols which have not been open sourced and peer reviewed: just don’t). The phones on which the software is running must also be uncompromised – not to mention protected by Alice and Bob from physical tampering and delivered new to them from the manufacturer with no vulnerabilities[2].
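To show what “the channels are encrypted” actually buys you, here’s a minimal end-to-end sketch using the third-party PyNaCl library (an open source, peer-reviewed binding to libsodium, in keeping with the advice above). The server in the middle relays bytes it cannot read; everything else in the component list still has to be right, of course.

```python
from nacl.public import Box, PrivateKey

# Alice and Bob each generate a key pair on their own devices
# (components 1/2 and 6/7); only public keys ever leave the phones.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key...
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"surprise party for Carol on Friday")

# ...and this is all the server (component 4) ever sees: opaque bytes.
print("server sees:", bytes(ciphertext).hex()[:32], "...")

# Only Bob's private key can open it.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))
```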

How sure are Alice and Bob of all of the answers to all of these questions? The answer, I would submit, is pretty much always going to be “not completely”. Does this mean that Alice and Bob should not use messaging services? Not necessarily. But it does mean that they should consider what messages they should exchange via any particular messaging service. They might be happy to arrange a surprise birthday party for a colleague, but not to exchange financial details of a business deal. They might be happy to schedule a trip to visit a Non-Governmental Organisation to discuss human rights, but not to talk about specific cases over the messaging service.

This is the view that I take: I consider what information I’m happy to transfer over or store on messaging services and social media platforms. There are occasions when I may be happy to pass sensitive data across messaging services, but I break the data up between different services (using “different channels” in the relevant parlance): using one service for a username and another for the associated password, for instance. I still need to be careful about shared components: the two phones in the example above might qualify, but I’ve reduced the shared attack surface, and therefore the risk. I’m actually more likely to require that the password is exchanged over a phone call, and if I’m feeling particularly paranoid, I’ll use a different phone to receive that call.
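Taken to its logical conclusion, the “different channels” trick is a two-part secret split. Here’s a toy sketch in Python: each share on its own is indistinguishable from random noise, so an attacker has to compromise both channels to learn anything.

```python
import os

def split(secret: bytes) -> tuple[bytes, bytes]:
    pad = os.urandom(len(secret))  # share 1: pure randomness
    share2 = bytes(a ^ b for a, b in zip(secret, pad))  # share 2: XOR
    return pad, share2

def join(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

share1, share2 = split(b"correct horse battery staple")
# Send share1 over one messaging service and share2 over another;
# neither service (nor an attacker on just one of them) learns anything.
print(join(share1, share2))  # b'correct horse battery staple'
```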

My advice, therefore, is this:

  1. Keep your devices and apps up to date;
  2. Evaluate the security of your various messaging service options;
  3. Consider the types of information that you’ll be transferring and/or storing;
  4. Think about the risks you’re willing to accept;
  5. Select the appropriate option on a case by case basis;
  6. Consider using separate channels where particularly sensitive data can be split for added security.

1 – I’m also partial to 1920s jazz and a bit of bluegrass, as it happens.

2 – yeah, right.