Are my messages safe? No, but…

Today brought another story about the insecurity of a messenger app, and by a brilliant coincidence, I’m listening to E.L.O.’s “Secret Messages” as I start to compose this post. This article isn’t, however, about my closet 70s musical tastes[1], but about the messages you send from your mobile phone, tablet or computer to friends, family and colleagues, and how secure they are.

There are loads of options out there for messaging services, with some of the better-known including WhatsApp, Facebook Messenger, Google Chat, Signal and Telegram. Then there’s good old SMS. First question first: do I use any of these myself? Absolutely. I also indulge in Facebook, LinkedIn and Twitter. Do I trust these services? Let’s get back to this question later.

A more pressing question might be: “are any of these messaging services secure?” It turns out that this is a really simple question to answer: of course they’re not. No service is “secure”: it’s a key principle of IT security that there is no “secure”. This may sound like a glib – and frankly unhelpful – answer, but it’s not supposed to be. Once you accept that there is no perfectly secure system, you’re forced to consider what you are trying to achieve, and what risks you’re willing to take. This is a recurring theme of this blog, so regular readers shouldn’t be surprised.

Most of the popular messaging services can be thought of as consisting of at least seven components. Let’s assume that Alice is sending a message from her phone to Bob’s phone. Here’s what the various components might look like:

  1. Alice’s messenger app
  2. Alice’s phone
  3. Communications channel Alice -> server
  4. Server
  5. Communications channel server -> Bob
  6. Bob’s phone
  7. Bob’s messenger app

Each of these is a possible attack surface: combined, they make up the attack surface of what we can think of as the Alice <-> Bob messaging system.
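
To make that concrete, here’s a minimal sketch in Python (the component labels and the toy “union of weaknesses” function are mine, purely for illustration, and don’t describe any real service): a weakness in any one component is a weakness in the whole system.

    # A minimal sketch of the seven-component model above. The labels and the
    # toy "union of weaknesses" function are illustrative only, not any real
    # messaging service's architecture.
    COMPONENTS = [
        "Alice's messenger app",
        "Alice's phone",
        "channel: Alice -> server",
        "server",
        "channel: server -> Bob",
        "Bob's phone",
        "Bob's messenger app",
    ]

    def system_attack_surface(weaknesses_by_component: dict[str, set[str]]) -> set[str]:
        """The system's attack surface is (at least) the union of its
        components' attack surfaces: a weakness anywhere in the chain is a
        weakness in the Alice <-> Bob system."""
        surface: set[str] = set()
        for component in COMPONENTS:
            surface |= weaknesses_by_component.get(component, set())
        return surface

    # Example: an out-of-date app at either end is enough to put the whole
    # system at risk.
    print(system_attack_surface({"Bob's messenger app": {"unpatched parsing bug"}}))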

Let’s start in the middle, with the server. For Alice and Bob to be happy with the security of the system for their purposes, they must be happy that this server is sufficiently secure to cope with whatever risks they need to address. So, it may be that they trust that the server (which will be run, ultimately, by fallible and possibly subornable humans who are also subject to legal jurisdiction(s)) is not vulnerable. Not vulnerable to whom? Hacktivists? Criminal gangs? Commercial competitors? State actors? Legal warrants from the server’s jurisdiction? Legal warrants from Alice or Bob’s jurisdiction(s)? The likelihood of a successful defence against each of these varies, the risk each poses to Alice and Bob is also different, and each needs to be assessed, even if that assessment is “we can ignore this”.
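
One way to picture that assessment is as a very rough scoring exercise. The Python below is entirely hypothetical (the attacker classes come from the paragraph above, but the likelihood and impact numbers, and the tolerance threshold, are invented); the only point it makes is that each attacker gets its own assessment, and that “we can ignore this” is a perfectly legitimate outcome.

    # Hypothetical, back-of-the-envelope scoring: the numbers are invented and
    # exist only to make the reasoning explicit.
    def assess(likelihood: float, impact: float, tolerance: float = 0.2) -> str:
        """Crude risk = likelihood x impact (both 0.0-1.0). Below whatever
        tolerance Alice and Bob are prepared to accept, the answer may
        legitimately be 'we can ignore this'."""
        return "we can ignore this" if likelihood * impact < tolerance else "needs mitigation"

    # Each attacker class gets its own (likelihood of success, impact on us) guess;
    # arranging a surprise party and closing a business deal would score very
    # differently on impact.
    server_threats = {
        "hacktivists": (0.3, 0.2),
        "criminal gangs": (0.4, 0.5),
        "commercial competitors": (0.2, 0.4),
        "state actors": (0.1, 0.9),
        "warrant, server's jurisdiction": (0.6, 0.7),
        "warrant, Alice's/Bob's jurisdiction(s)": (0.3, 0.7),
    }

    for attacker, (likelihood, impact) in server_threats.items():
        print(f"{attacker}: {assess(likelihood, impact)}")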

Each of the other components is subject to similar questions. For the communication channels, we will assume that they’re encrypted, but we have to be sure that the cryptography and cryptographic protocols have been correctly implemented, and that all keys are appropriately protected by all parties. The messaging apps must be up to date, well designed and well implemented. Obviously, if they’re open source, you have a much, much better chance of being sure of the security of both apps (never, ever use cryptography or protocols which have not been open sourced and peer reviewed: just don’t). The phones on which the software is running must also be uncompromised – not to mention protected by Alice and Bob from physical tampering and delivered new to them from the manufacturer with no vulnerabilities[2].
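
As a toy illustration of why the content matters as much as the channel, here’s a sketch using the widely reviewed cryptography package’s Fernet construction (my choice as a stand-in; real messaging apps use their own protocols, and the genuinely hard part, getting the key to Bob safely, is simply assumed away here). If Alice encrypts before sending, the server and the channels only ever see ciphertext.

    # A toy illustration, not a real messaging protocol: if Alice encrypts the
    # content before it leaves her app, the server and both channels only ever
    # relay ciphertext. Uses the peer-reviewed `cryptography` package
    # (pip install cryptography) rather than anything home-rolled; how Alice
    # and Bob come to share `key` safely is assumed away here.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # assume Alice and Bob already share this
    alice, bob = Fernet(key), Fernet(key)

    ciphertext = alice.encrypt(b"surprise party for Carol on Friday")
    print(ciphertext)                    # all the server (or the channel) ever sees

    print(bob.decrypt(ciphertext))       # only a key-holder recovers the message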

How sure are Alice and Bob of all of the answers to all of these questions? The answer, I would submit, is pretty much always going to be “not completely”. Does this mean that Alice and Bob should not use messaging services? Not necessarily. But it does mean that they should consider what messages they should exchange via any particular messaging service. They might be happy to arrange a surprise birthday party for a colleague, but not to exchange financial details of a business deal. They might be happy to schedule a trip to visit a Non-Governmental Organisation to discuss human rights, but not to talk about specific cases over the messaging service.

This is the view that I take: I consider what information I’m happy to transfer over or store on messaging services and social media platforms. There are occasions where I may be happy to pass sensitive data across messaging services, but break the data up between different services (using “different channels” in the relevant parlance): using one service for a username and another for the associated password, for instance. I still need to be careful about shared components: the two phones in the example above might qualify, but I’ve reduced the shared attack surface, and therefore the risk. I’m actually more likely to require that the password is exchanged over a phone call, and if I’m feeling particularly paranoid, I’ll use a different phone to receive that call.
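
For the curious, here’s one way to make the splitting idea concrete. The XOR split below is my illustration, a stronger variant of the username-on-one-service, password-on-another approach: neither share reveals anything on its own, so an attacker needs both channels, or a shared component such as the phone at either end.

    # My illustration of splitting a secret across channels: an XOR split, a
    # stronger variant of username-on-one-service, password-on-another. Neither
    # share reveals anything on its own, so an attacker needs both channels, or
    # a shared component such as the phone at either end.
    import secrets

    def split(secret: bytes) -> tuple[bytes, bytes]:
        share_a = secrets.token_bytes(len(secret))               # random pad
        share_b = bytes(x ^ y for x, y in zip(secret, share_a))  # secret XOR pad
        return share_a, share_b    # send each share over a different service

    def recombine(share_a: bytes, share_b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(share_a, share_b))

    a, b = split(b"correct horse battery staple")
    assert recombine(a, b) == b"correct horse battery staple"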

My advice, therefore, is this:

  1. Keep your devices and apps up to date;
  2. Evaluate the security of your various messaging service options;
  3. Consider the types of information that you’ll be transferring and/or storing;
  4. Think about the risks you’re willing to accept;
  5. Select the appropriate option on a case-by-case basis;
  6. Consider using separate channels where particularly sensitive data can be split for added security.

1 – I’m also partial to 1920s Jazz and a bit of Bluegrass, as it happens.

2 – yeah, right.

What are they attacking me for?

I wrote an article a few weeks ago called What’s a State Actor, and should I care?, and several readers asked if I could pull apart some of the pieces that I presented there into separate discussions[1].  One of those pieces was the question of who is actually likely to attack me.

I presented a brief list thus:

  • insiders
  • script-kiddies
  • competitors
  • trouble-makers
  • hacktivists
  • … and more.

One specific “more” that I mentioned was State Actors.  If you look around, you’ll find all manner of lists.  Other attacker types that I didn’t mention in my initial list include:

  • members of organised crime groups
  • terrorists
  • “mercenary” hackers.

I suspect that you could come up with more supersets or subsets if you tried hard enough.

This is all very well, but what’s the value in knowing who’s likely to attack you in the first place[3]?  There’s a useful dictum: “No system is secure against a sufficiently resourced and motivated attacker.”[5]  This gives us a starting point, because it causes us to ask the question

  • what motivates the attacker?

In other words: what do they want to achieve?  What, in fact, are they trying to do or get when they attack us?  This is the core theme of this article.

There are three main types of motivations:

  1. advantages to them
  2. disadvantages to us
  3. resources.

There is overlap between the three, but I think that they are sufficiently separate to warrant separate discussion.

Advantages to them

Any successful attack is arguably a disadvantage to us, the attacked, but that does not mean that the primary motivation of an attacker is necessarily to cause harm.  There are a number of other common motivations, including:

  • reputation or “bragging rights” – a successful attack may well be used to prove the skills of an attacker to other parties.
  • information to share – sometimes attackers wish to gain information about our systems to share with others, whether for gain or to enhance their reputation (see above).  Such attacks may be painted as security research, but if they occur outside an ethical framework (such as that provided by academic institutions) and without consent, it is difficult to consider them anything other than hostile.
  • information to keep – attackers may gain information and keep it for themselves for later use, either against our systems or against similarly configured systems elsewhere.
  • practice/challenge – there are attacks which are undertaken solely to practice techniques or as a personal challenge (where an external challenge is made, I would categorise them under “reputation”).  Harmless as this motivation may seem to some parts of the community, such attacks still cause damage and require mitigation, and should be considered hostile.
  • for money – some attacks are undertaken at the request of others, with the primary motivation of the attacker being money or other material recompense (though the motivation of the party commissioning the attack is likely to be one of the others listed here)[6].

Disadvantages to us

Attacks which focus on causing negative impact to the individual or organisation attacked can be listed in the following categories:

  • business impact – impact to the normal functioning of the organisation or individual attacked: causing orders to be disrupted, processes to be slowed, etc.
  • financial impact – direct impact to the financial functioning of the attacked party: fraud, for instance.
  • reputational impact – there have been many attacks where the intention has clearly been to damage the reputation of the attacked party.  Whether it is leaking information about someone’s use of a dating website, disseminating customer information or simply replacing text or images on a corporate website, the intention is the same: to damage the standing of those being attacked.  Such damage may be indirect – for instance if an attacker were to cause the failure of an oil pipeline, affecting the reputation of the owner or operator of that pipeline.
  • personal impact – subtly different from reputational or business impact, this is where the attack is intended to damage the self-esteem of an individual, or their ability to function professionally, physically, personally or emotionally.  This could cover a wide range of attacks such as “doxxing” or the use of vulnerabilities in insulin pumps.
  • ecosystem impact – this type of motivation is less about affecting the ability of the individual or organisation to function normally, and more about affecting the ecosystem that exists around it.  Impacting the quality control checks of a company that made batteries might impact the ability of a mobile phone company to function, for instance, or attacking a water supply might impact the ability of a fire service to respond to incidents.

Resources

The motivations for some attacks may be partly or solely to get access to resources.  These resources might include:

  • financial resources – by getting access to company accounts, attackers might be able to purchase items for themselves or others or otherwise defraud the company.
  • compute resources – access to compute resources can lead to further attacks or be used for purposes such as cryptocurrency mining.
  • storage resources – attackers may wish to store illegal or compromising material on others’ systems.
  • network resources – access to network resources allows attackers to launch attacks elsewhere or to stream information with little traceability.
  • human resources – access to some systems may allow human resources to be deployed in ways unintended by the party being attacked: deploying police officers to a scene a long distance away from a planned physical attack, for instance.
  • physical resources – access to some systems may also allow physical resources to be deployed in ways unintended by the party being attacked: sending ammunition to the wrong front in a war might, for example, lead to a military force becoming weakened.

Conclusion

It may seem unimportant to consider the motivations of those attacking us, but if we can understand what it is that they are looking for, we can decide what we should defend, and sometimes what types of defence we should put in place.  As always, I welcome comments on this article: I’m sure that I’ve missed out some points, or misrepresented others, so please do get in touch and let me know your thoughts.
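
If it helps to make that concrete, here’s a rough sketch of turning the taxonomy into a defensive checklist. The mappings are mine, deliberately coarse and certainly not exhaustive; the point is only that once you’ve decided which motivations are relevant to you, a list of assets to defend falls out.

    # A rough sketch, not a methodology: map each motivation category to the
    # kinds of asset it tends to target, then read off what needs defending for
    # the motivations you judge relevant. The mappings are mine, deliberately
    # coarse, and certainly not exhaustive.
    MOTIVATION_TARGETS = {
        "advantages to them": {"system and configuration information",
                               "credentials", "anything that builds the attacker's reputation"},
        "disadvantages to us": {"business processes", "finances", "reputation",
                                "individuals", "the surrounding ecosystem"},
        "resources": {"financial accounts", "compute", "storage",
                      "network", "people and physical assets"},
    }

    def assets_to_defend(relevant_motivations: list[str]) -> set[str]:
        """Union of the assets targeted by the motivations we judge relevant."""
        assets: set[str] = set()
        for motivation in relevant_motivations:
            assets |= MOTIVATION_TARGETS.get(motivation, set())
        return assets

    # e.g. an organisation mostly worried about attackers after its resources:
    print(assets_to_defend(["resources", "disadvantages to us"]))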


1 – I considered this a kind and polite way of saying “you stuffed too much into a single article: what were you thinking?”[2]

2 – and I don’t necessarily disagree.

3 – unless you’re just trying to scare senior management[4].

4 – which may be enjoyable, but is ultimately likely to backfire if you’re doing it without evidence or a good reason.

5 – I made a (brief) attempt to track the origins of this phrase: I’m happy to attribute if someone can find the original.

6 – hat-tip to Reddit user poopin for spotting that I’d missed this one out.