Encryption “backdoor”? No, it’s a gaping archway.

Backdoors are just a non-starter.

Note: this is probably one of those posts where I should point out that the views expressed in this article aren’t necessarily those of my employer, Red Hat.  Though I hope that they are.

I understand that governments don’t like encryption. Well, to be fair, they like encryption for their stuff, but they don’t want criminals, or people who might be criminals, to have it. The problem is that “people who might be criminals” means you and me[1]. I need encryption, and you need encryption. For banking, for business, for health records, for lots of things. This isn’t the first time I’ve blogged on this issue, and I actually compiled a list in a previous post, giving some examples of perfectly legal, perfectly appropriate reasons for “us” to be using encryption. I’ve even written about the importance of helping governments go about their business.

Unfortunately, it seems that the Government of Australia has been paying insufficient attention to the points that I have[2] been making. They appear hell-bent on passing a law that would require relevant organisations (the types of organisations listed are broad and ill-defined, in the coverage that I’ve seen) to provide a backdoor into individuals’ encrypted messages. Only for individuals, you’ll note, not blanket decryption.  Well, that’s a relief. And that was sarcasm.

The problem? Mathematics. Cryptography is based on mathematics. Much of it is actually quite simple, though some of it is admittedly complex. But you don’t argue with mathematics, and the mathematics say that you can’t just create a backdoor and have the rest of the scheme remain as secure as it was before.

Most existing encryption/decryption schemes[3] allow one party to send encrypted data to another with a single shared key. To decrypt, you need that key. In order to get that key, you either need to be one of the two parties (typically referred to as “Alice” and “Bob”), or hope that, as a malicious[5] third party (typically referred to as “Eve”[6]), you can do one of the following:

  1. get Alice or Bob to give you their key;
  2. get access to the key by looking at some or all of the encrypted messages;
  3. use a weakness in the encryption process to decrypt the messages.
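Before looking at those three routes in turn, it’s worth seeing how little there is to the shared-key picture in practice. Here’s a minimal sketch in Python, using the pyca/cryptography library’s Fernet construction purely as an illustration (real messaging protocols are considerably more sophisticated): with the key, everything is readable; without it, the ciphertext is just opaque bytes.

```python
from cryptography.fernet import Fernet, InvalidToken

# Alice and Bob share a single symmetric key.
shared_key = Fernet.generate_key()

# Alice encrypts a message under the shared key.
ciphertext = Fernet(shared_key).encrypt(b"Meet at the usual place, 7pm")

# Bob holds the same key, so he can decrypt it.
print(Fernet(shared_key).decrypt(ciphertext))

# Eve, holding a different key, gets nothing.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("Eve cannot read the message without the shared key")
```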

Now, number 1 isn’t great if you don’t want Alice or Bob to know that you’re snooping on their messages. Number 2 is a protocol weakness, and designers of cryptographic protocols try very, very hard to avoid them. Number 3 is an implementation weakness, and reputable application developers will try very, very hard to avoid those. What’s more, for applications which are open source, anyone can have a look at them, so a weakness put in on purpose isn’t likely to last for long.

Both 2 and 3 can lead to backdoors. But they’re not single-use backdoors, they’re gaping archways that anyone can find out about and exploit.

Would it be possible to design protocols that allowed a third party to hold a key for each encryption session, allowing individual sessions to be decrypted by a “trusted party” such as law enforcement? Yes, it would. But a) no-one with half a brain would knowingly use such a scheme[7]; b) the operational overhead of running such a scheme would be unmanageable; and c) it would only be a matter of time before untrusted parties got access to the systems behind the scheme and misused it.
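To illustrate why c) is the killer, here’s a hypothetical sketch (Python and Fernet again, with invented names – this is not a description of any real scheme) of what such a design implies: a copy of every session key is wrapped under a single key held by the “trusted party”, so whoever gets access to that key, or to the store it protects, can decrypt every session, past and future.

```python
from cryptography.fernet import Fernet

# One wrapping key, held by the "trusted party", covers every session.
escrow_key = Fernet.generate_key()
escrow_store = {}  # hypothetical database of wrapped session keys

def send_message(session_id, plaintext):
    session_key = Fernet.generate_key()
    # The backdoor: a copy of the session key, wrapped for the escrow holder.
    escrow_store[session_id] = Fernet(escrow_key).encrypt(session_key)
    return Fernet(session_key).encrypt(plaintext)

ciphertext = send_message("session-42", b"a private message")

# Anyone who compromises the escrow key (or the store) can read any session.
recovered_key = Fernet(escrow_key).decrypt(escrow_store["session-42"])
print(Fernet(recovered_key).decrypt(ciphertext))
```

That single escrow key is exactly the gaping archway of the title: it isn’t tied to one warrant, one session or one suspect.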

Backdoors are just a non-starter. Governments need to find sensible ways to perform legally approved surveillance, but encryption backdoors are not one of them.


1 – and I’m not even intentionally addressing any criminals who might be reading this article.

2 – quite eloquently, in my humble opinion.

3 – the two tend to go together as there isn’t much point in one without the other[4] .

4 – in most cases. You’d be surprised, though.

5 – at least as far as Alice and Bob are concerned.

6 – guess where the name of this blog originated?

7 – hint: not me, not you, and certainly not criminals.

Knowing me, knowing you: on Russian spies and identity

Who are you, and who tells me so?

Who are you, and who tells me so?  These are questions which are really important for almost any IT-related system in use today.  I’ve previously discussed the difference between identification and authentication (and three other oft-confused terms) in Explained: five misused security words, and what I want to look at in this post is the shady hinterland between identification and authentication.

There has been a lot in the news recently about the poisoning in the UK of two Russian nationals and two British nationals, leading to the tragic death of Dawn Sturgess.  I’m not going to talk about that, or even about the alleged perpetrators, but about the problem of identity – their identity – and how that relates to IT.  The two men who travelled to Salisbury, and were named by British police as the perpetrators, travelled under Russian passports.  These documents provided their identities, as far as the British authorities – including UK Border Control – were aware, and led to their being allowed into the country.

When we set up a new user in an IT system or allow them physical access to a building, for instance, we often ask for “Government-issued ID” as the basis for authenticating the identity that they have presented, in preparation for deciding whether to authorise them to perform whatever action they have requested.  There are two issues here – one obvious, and one less so.  The first, obvious one, is that it’s not always easy to tell whether a document has actually been issued by the authority by which it appears to have been issued – document forgers have been making a prosperous living for hundreds, if not thousands, of years.  The problem, of course, is that the more tell-tale signs of authenticity you reveal to those whose job it is to check a document, the more clues you give to would-be forgers for how to improve the quality of the false versions that they create.

The second, and less obvious, problem is that just because a document has been issued by a government authority doesn’t mean that it is real.  Well, actually, it does, and there’s the issue.  Its issuance by the approved authority makes it real – that is to say “authentic” – but it doesn’t mean that it is correct.  Correctness is a different property to authenticity. Any authority may decide to issue identification documents that may be authentic, but do not truly represent the identity of the person carrying them. When we realise that a claim of identity is backed up by an authority which is issuing documents that we do not believe to be correct, that means that we should change our trust relationship with that authority.  For most entities, IDs which have been authentically issued by a government authority are quite sufficient, but it is quite plausible, for instance, that the UK Border Force (and other equivalent entities around the world) may choose to view passports issued by certain government authorities as suspect in terms of their correctness.

What impact does this have on the wider IT security community?  Well, there are times when, before accepting government-issued ID, we might want to check with the relevant home nation authorities as to whether we should trust it[1].  More broadly than that, we must remember that every time that we authenticate a user, we are making a decision to trust the authority that represented that user’s identity to us.  The level of trust we place in that authority may safely be reduced as we grow to know that user, but it may not – either because our interactions are infrequent, or maybe because we need to consider that they are playing “the long game”, and are acting as so-called “sleepers”.

What does this continuous trust mean?  What it means is that if we are relying on an external supplier to provide contractors for us, we need to keep remembering that this is a trust relationship, and one which can change.  If one of those contractors turns out to have faked educational qualifications, then we need to doubt the authenticity of the educational qualifications of all of the other contractors – and possibly other aspects of the identity which the external supplier has presented to us.  This is because we have placed transitive trust in the external supplier, which we must now re-evaluate.  What other examples might there be?  The problem is that the particular aspects of identity that we care about are varied, and differ between the different roles that we perform.  Sometimes we might not care about educational qualifications, but about credit score, or criminal background, or blood type.  In the film Gattaca[2], identity is tied to physical and mental ability to perform a particular task.
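As a toy illustration of that transitive trust (a hypothetical sketch, not a real identity system), imagine recording each claim about a contractor alongside the authority that vouched for it. The moment one authority becomes suspect, every claim that flowed through it needs re-evaluation:

```python
# Hypothetical records: each identity claim, and the authority that vouched for it.
claims = [
    {"person": "contractor-1", "claim": "MSc in Computer Science", "vouched_by": "SupplierCo"},
    {"person": "contractor-2", "claim": "background check passed", "vouched_by": "SupplierCo"},
    {"person": "contractor-3", "claim": "passport verified", "vouched_by": "GovAuthority"},
]

# We've just discovered one faked qualification from SupplierCo...
distrusted = {"SupplierCo"}

# ...so every claim that reached us via that authority is now in doubt.
for c in claims:
    status = "RE-CHECK" if c["vouched_by"] in distrusted else "still trusted"
    print(f'{c["person"]}: {c["claim"]} ({status})')
```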

There are various techniques available to allow us to tie a particular individual to a set of pieces of information: DNA, iris scans and fingerprints are pretty good at telling us that the person in front of us now is the person who was in front of us a year ago.  But tying that person to other information relies on trust relationships with external entities, and apart from a typically small set of inferences that we can draw from our direct experiences of this person, it’s difficult to know exactly what is truly correct unless we can trust those external entities.


1 – That assumes, of course, that we trust our home nation authorities…

2 – I’m not going to put a spoiler in here, but it’s a great film, and really makes you think about identity: go and watch it!

My brush with GDPR

I ended up reporting a possible breach of data. My data

Since the first appearance of GDPR[1], I’ve strenuously avoided any direct interaction with it if at all possible.  In particular, I’ve been careful to ensure that nobody is under any illusion that my role involves any responsibility for our company’s implementation of GDPR.  In this I have been largely successful.  I say largely, because the people who send spam don’t seem to have noticed[2]: I suspect that anybody with the word “security” in their title has had a similar experience.

Of course, I have a decent idea what GDPR is supposed to be about: making sure that data that organisations hold about people is only used as it should be, is kept up to date, and that people can find out exactly what information relating to them is held[3].

This week, I got more involved with GDPR than I’d expected: I ended up reporting a possible breach of data.  My data.  As it happens, my experience with the process was pretty good: so good, in fact, that I think it’s worth giving it as an example.

Breach!

A bit of scene-setting.  I live in the UK, which is (currently[4]) in the EU, which means that pretty much all companies and organisations here are subject to the GDPR.  Last week, I had occasion to email a department within local government about an issue around services in my area.  Their website had suggested that they’d get back to me within 21 days or so, so I was slightly (and pleasantly) surprised when they replied within 5.

The email started so well: the title referred to the village in which I live.

It went downhill from there.  “Dear Mr Benedict[5]”, it ran.  I should be clear that I had used my actual name (which is not Benedict) for the purposes of this enquiry, so this was something of a surprise.  “Oh, well,” I thought to myself, “they’ve failed to mail merge the name field properly.” I read on.  “Here is the information you have requested about Ambridge…[5][6]”.  I do not live in Ambridge.  So far, this was just annoying: clearly the department had responded to the wrong query.  But it got worse.  “In particular regards to your residence, Willow Farm, Ambridge…[5]”

The department had sent me information which allowed me to identify Mr Benedict and his place of residence.  They had also failed to send me the information that I had requested.  What worried me more was that this might well not be an isolated event.  There was every chance that my name and address details had been sent to somebody else, and even that there was a cascade effect of private details being sent to email address after email address.  I mentioned this in annoyance to my wife – and she was the one to point out that it was a likely breach of GDPR.  “You should report it,” she said.

So I went to the local government office website and had a quick look around it.  Nothing obvious for reporting GDPR breaches.  I phoned the main number and got through to enquiries.  “I’d like to report a possible data breach, please,” I said.  “Could you put me through to whoever covers GDPR?”

To be honest, this was where I thought it would all go wrong.  It didn’t.  The person on the enquiries desk asked for more information.  I explained which department it was, described the email, and noted that somebody else’s details had been exposed to me, in what I strongly suspected was a breach of GDPR.

“Let me just see if I can find someone in our data team,” she said, and put me on hold.

I don’t know if you’ve ever been put on hold by someone in a local government office, but it’s rarely an event that should be greeted with rejoicing.  I prepared myself for a long wait, and was surprised when I was put through to someone fairly quickly.

The man to whom I spoke knew what he was doing.  In fact, he did an excellent job.  He took my details, he took details of the possible breach, he reassured me that this would be investigated.  He was polite, and seemed keen to get to the bottom of the affair.  He also immediately grasped what the problem was, and agreed that it needed to be investigated.  I’m not sure whether I was the first person ever to call up and so this was an adrenaline-fuelled roller coaster ride into uncharted territory[8] for him, or whether this was a routine conversation in the office, but he pitched his questions and responses at exactly the right level.  I offered to forward the relevant email to him so that he had the data himself.  He accepted.  The last point was the one that impressed me the most.  “Could you please delete the email from your system?” he asked.  This was absolutely the right request.  I agreed and did so.

And how slowly do the wheels of local government grind?  How long would it take me to get a response to my query?

I received a response the next day, from the department concerned.  They assured me that this was a one-off problem, that my personal data had not been compromised, and that there had not been a widespread breach, as I had feared.  They even sent me the information I had initially requested.

My conclusions from this?  They’re two-fold:

  1. From an individual’s point of view: yes, it is worth reporting breaches.  Action can, and should be taken.  If you don’t get a good response, you may need to escalate, but you have rights, and organisations have responsibilities: exercise those rights, and hold the organisations to account.  You may be pleasantly surprised by the outcome, as I was.
  2. From an organisation’s point of view: make sure that people within your organisation know what to do if someone contacts them about a data breach, whether it’s covered by statutory regulations (like GDPR) or not.  This should include whoever it is that answers your main enquiry line or receives messages to generic company email accounts, and not just your IT or legal departments.  Educate everybody in the basics, and make sure that those tasked with dealing with issues are as well-trained and ready to respond as the people I encountered.

1 – General Data Protection Regulation. Wouldn’t it be more fun if it were something like “Good Dogs Pee Regularly”?

2 – though GDPR does seem to have reduced the amount of spam, I think.

3 – if you’re looking for more information, I did write an article about this on Opensource.com earlier this year: Being open about data privacy.

4 – don’t start me.

5 – I’ve changed this information, for reasons which I hope are obvious.

6 – “dum-di-dum-di-dum-di-dum, dum-di-dum-di-da-da”[7]

7 – if you get this, then you either live in the UK (and probably listen to Radio 4), or you’re a serious Anglophile.

8 – hopefully not so uncharted for the designers and builders of the metaphorical roller coaster.

What’s a State Actor, and should I care?

The bad thing about State Actors is that they rarely adhere to an ethical code.

How do you know what security measures to put in place for your organisation and the systems that you run?  As we’ve seen previously, There are no absolutes in security, so there’s no way that you’re ever going to make everything perfectly safe.  But how much effort should you put in?

There are a number of parameters to take into account.  One thing to remember is that there are always trade-offs with security.  My favourite one is a three-way: security, cost and usability.  You can, the saying goes, have two of the three, but not the third: choose security, cost or usability.  If you want security and usability, it’s going to cost you.  If you want a cheaper solution with usability, security will be reduced.  And if you want security cheaply, it’s not going to be easily usable.  Of course, cost stands in for time spent as well: you might be able to make things more secure and usable by spending more time on the solution, but that time is costly.

But how do you know what’s an appropriate level of security?  Well, you need to think about what you’re protecting, the impact of it being:

  • exposed (confidentiality);
  • changed (integrity);
  • inaccessible (availability).

These are the three classic security properties, often recorded as C.I.A.[1].  Who would be impacted?  What mitigations might you put in place? What vulnerabilities might exist in the system, and how might they be exploited?

One of the big questions to ask alongside all of these is “who exactly might be wanting to attack my systems?”  There’s a classic adage within security that “no system is secure against a sufficiently motivated and resourced attacker”.  Luckily for us, there are actually very few attackers who fall into this category.  Some examples of attackers might be:

  • insiders[3]
  • script-kiddies
  • competitors
  • trouble-makers
  • hacktivists[4]
  • … and more.

Most of these will either not be that motivated, or not particularly well-resourced.  There are two types of attackers for whom that is not the case: academics and State Actors.  The good thing about academics is that they have to adhere to an ethical code, and shouldn’t be trying anything against your systems without your permission.  The bad thing about State Actors is that they rarely adhere to an ethical code.

State Actors have the resources of a nation state behind them, and are therefore well-resourced.  They are also generally considered to be well-motivated – if only because they have many people available to perform attacks, and those people are well-protected from legal process.
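As a (very) rough illustration of the attacker analysis this implies – a toy sketch with made-up scores, not a real threat-modelling methodology – you might rate each class of attacker on motivation and resources, and pay most attention to those who score highly on both:

```python
# Toy attacker analysis: (motivation, resources), each scored 1 (low) to 5 (high).
attackers = {
    "script-kiddies": (2, 1),
    "insiders":       (3, 2),
    "competitors":    (3, 3),
    "hacktivists":    (4, 2),
    "academics":      (4, 4),  # capable, but bound by an ethical code
    "state actors":   (5, 5),  # motivated, resourced, and rarely bound by one
}

# Rank by combined score: the "sufficiently motivated and resourced" end of the
# list is where your hardest decisions about mitigations will sit.
for name, (motivation, resources) in sorted(
        attackers.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    print(f"{name}: motivation={motivation}, resources={resources}")
```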

One thing that State Actors may not be, however, is government departments or parts of the military.  While some nations may choose to attack their competitors or enemies (or even, sometimes, partners) with “official” parts of the state apparatus, others may choose a “softer” approach, directing attacks in a more hands-off manner.  This support may take a variety of forms: encouragement, logistical support, tools, money or even staff.  The intent, here, is to combine direction with plausible deniability: “it wasn’t us, it was this group of people/these individuals/this criminal gang working against you.  We’ll certainly stop them from performing any further attacks[5].”

Why should this matter to you, a private organisation or public company?  Why should a nation state have any interest in you if you’re not part of the military or government?

The answer is that there are many reasons why a State Actor may consider attacking you.  These may include:

  • to provide a launch point for further attacks
  • to compromise electoral processes
  • to gain Intellectual Property information
  • to gain competitive information for local companies
  • to gain competitive information for government-level trade talks
  • to compromise national infrastructure, e.g.
    • financial
    • power
    • water
    • transport
    • telecoms
  • to compromise national supply chains
  • to gain customer information
  • as revenge for perceived slights against the nation by your company – or just by your government
  • and, I’m sure, many others.

All of these examples may be reasons to consider State Actors when you’re performing your attacker analysis, but I don’t want to alarm you.  Most organisations won’t be a target, and for those that are, there are few measures that are likely to protect you from a true State Actor beyond measures that you should be taking anyway: frequent patching, following industry practice on encryption, etc.  Equally important is monitoring for possible compromise, which, again, you should be doing anyway.  The good news is that if you might be on the list of possible State Actor targets, most countries provide good advice and support, before and after the act, for organisations which are based or operate within their jurisdiction.


1 – I’d like to think that they tried to find a set of initials for G.C.H.Q. or M.I.5., but I suspect that they didn’t[2].

2 – who’s “they” in this context?  Don’t ask.  Just don’t.

3 – always more common than we might think: malicious, bored, incompetent, bankrupt – the reasons for insider-related security issues are many and varied.

4 – one of those portmanteau words that I want to dislike, but find rather useful.

5 – yuh-huh, right.

Q: when is a backdoor not a backdoor?

An encryption backdoor isn’t the same as a house backdoor: the metaphor is faulty.

A: when you’re a politician.

I’m getting pretty bored of having to write about this, to be honest. I’ve blogged twice already on encryption backdoors.

But our politicians keep wanting us to come up with them, as the Register helpfully points out – thanks, both the UK Prime Minister and FBI Director.

I feel sorry for their advisers, because all of the technical folks I’ve ever spoken to within both the UK and US Establishments[1] absolutely understand that what’s being asked for by these senior people really isn’t plausible.

I really do understand the concern that the politicians have. They see a messaging channel which bad people may use to discuss bad things, and they want to stop those bad things. This is a good thing, and part of their job. The problem starts when they think “it’s like a phone: we have people who can tap phones”. Those who are more technologically savvy may even think, “it’s like email, and we can read email.” And in the old days[3], before end-to-end encryption, they weren’t far wrong.

The problem now is that many apps these days set up a confidential (encrypted) link between the two ends of the connection. And they do it in a way which means that nobody except the initiators of the two ends of the connection can read it. And they use strong encryption, which means that there’s no easy way for anyone[4] to break it.
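Here’s a rough sketch of that end-to-end property in Python, using the PyNaCl library’s public-key Box purely as an illustration (real apps such as Signal or WhatsApp use rather more elaborate protocols, with forward secrecy and so on). The point is that the private keys live only at the two ends, so the service relaying the message only ever sees ciphertext.

```python
from nacl.public import PrivateKey, Box

# Each end generates its own keypair; the private keys never leave the devices.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_private, bob_private.public_key).encrypt(b"hello Bob")

# The provider in the middle relays `ciphertext`, but holds neither private key,
# so it has nothing useful to hand over, backdoor demands notwithstanding.

# Bob decrypts with his private key and Alice's public key.
print(Box(bob_private, alice_private.public_key).decrypt(ciphertext))
```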

This means that it’s difficult for anyone to read the messages. So what can be done about it, then? Well, if you’re a politician, the trend is to tell the providers of these popular apps to provide a backdoor to let you, the “good people”, in.

Oh, dear.

I believe that the problem here isn’t really that politicians are stupid, because I honestly don’t think that they are[5]. The problem is with metaphor. Metaphors are dangerous, because humans need them to get a handle on an aspect of something which is unfamiliar, but once they’ve latched on to a particular metaphor, they assume that all the other aspects of the thing to which the metaphor refers are the same.

An encryption backdoor isn’t the same as a house backdoor: the metaphor is faulty[6].

The key[7] similarity is that in order to open up your house backdoor, you need a key. That key gives you entry to the house, and it also allows any other person you give that key access to it, as well. So far, so good.

Here’s where it gets bad, though. I’m going to simplify things a little here, but let’s make some points.

  1. When you give someone a key to a physical backdoor, it’s not easily copied just because somebody else happens to see it. In the electronic world, if you see the key once, you have it.
  2. The cost of copying an electronic key is basically zero once you have it. If one person decides to share the key indiscriminately, then the entire Internet has it.
  3. Access to a house backdoor lets you see what’s in the house at that particular moment. Access to an electronic backdoor lets you look at whatever the contents of the house were all the way up to the time the lock was changed, if you’ve taken copies (which is often easy).
  4. And here’s the big one. When you create a backdoor, you’re creating a backdoor for every house, and not just one. Let’s say that I’m a house builder. I’m very, very prolific, and I build thousands of houses a week. And I put the same lock in the backdoor of every house that I build. Does that make sense? No, it doesn’t. But that’s what the politicians are asking for.

So, the metaphor breaks down. Any talk about “skeleton keys” is an attempt to reestablish the metaphor. Which is broken.

What’s the lesson here? We should explain to politicians that backdoors are a metaphor, and that the metaphor only goes so far. Explain that clever people – clever, good people – don’t believe that what they (the politicians) think should be done is actually possible, and then move on to work that can be done. Because they’re right: there are bad people out there, doing bad things, and we need to address that. But not this way.


1 – the capital “E” is probably important here. In the UK, at least, “establishment” can mean pub[2].

2 – and people in pubs, though they may start out clued up, tend to get less clever as the evening goes on, though they may think, for a while, that they’re becoming more clever. This is in my (very) limited experience, obviously.

3 – 10 years ago? Not very long ago, to be honest.

4 – well, no-one who’s owning up to it, anyway.

5 – mostly.

6 – or Fawlty, for John Cleese fans.

7 – ooh, look what I did there.

That Backdoor Fallacy revisited – delving a bit deeper

…if it breaks just once, that becomes always…

A few weeks ago, I wrote a post called The Backdoor Fallacy: explaining it slowly for governments.  I wish that it hadn’t been so popular.  Not that I don’t like the page views – I do – but because it seems that it was very timely, and this issue isn’t going away.  The German government is making the same sort of noises that the British government* was making when I wrote that post**.  In other words, they’re talking about forcing backdoors in encryption.  There was also an amusing/worrying story from slashdot which alleges that “US intelligence agencies” attempted to bribe the developers of Telegram to weaken the encryption in their app.

Given some of the recent press on this, and some conversations I’ve had with colleagues, I thought it was worth delving a little deeper***.  There seem to be three sets of use cases that it’s worth addressing, and I’m going to call them TSPs, CSPs and Other.  I’d also like to make it clear here that I’m talking about “above board” access to encrypted messages: access that has been condoned by the relevant local legal system.  Not, in other words, the case of the “spooks”.  What they get up to is for another blog post entirely****.  So, let’s look at our three cases.

TSPs – telecommunications service providers

In order to get permission to run a telecommunications service (wired or wireless) in most (all?) jurisdictions, you need to get approval from the local regulator: a licence.  This licence is likely to include lots of requirements: a typical one is that you, the telco (telecoms company) must provide access at all times to emergency numbers (999, 911, 112, etc.).  And another is likely to be that, when local law enforcement come knocking with a legal warrant, you must give them access to data and call information so that they can basically do wire-taps.  There are well-established ways to do this, and fairly standard legal frameworks within which it happens: basically, if a call or data stream is happening on a telco’s network, they must provide access to it to legal authorities.  I don’t see an enormous change to this provision in what we’re talking about.

CSPs – cloud service providers

Things get a little more tricky where cloud service providers are concerned.  Now, I’m being rather broad with my definition, and I’m going to lump your Amazons, Googles, Rackspaces and such in with folks like Facebook, Microsoft and other providers who could be said to be providing “OTT” (Over-The-Top – in that they provide services over the top of infrastructure that they don’t own) services.  Here things are a little greyer*****.  As many of these companies (some of whom are telcos, who also have a business operating cloud services, just to muddy the waters further) are running messaging, email services and the like, governments are very keen to apply similar rules to them as those regulating the telcos. The CSPs aren’t keen, and the legal issues around jurisdiction, geography and what the services are complicate matters.  And companies have a duty to their shareholders, many of whom are of the opinion that keeping data private from government view is to be encouraged.  I’m not sure how this is going to pan out, to be honest, but I watch it with interest.  It’s a legal battle that these folks need to fight, and I think it’s generally more about cryptographic key management – who controls the keys to decrypt customer information – than about backdoors in protocols or applications.
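A crude sketch of that key-management distinction (Python again, with invented names – a simplification, not how any particular CSP works): if the customer encrypts data before it reaches the provider and keeps the key, the provider can only ever hand over ciphertext; if the provider holds the key on the customer’s behalf, it can be compelled to decrypt.

```python
from cryptography.fernet import Fernet

document = b"commercially sensitive customer data"

# Model 1: customer-held key. The provider stores only ciphertext.
customer_key = Fernet.generate_key()              # never leaves the customer
held_by_provider = Fernet(customer_key).encrypt(document)
# A warrant served on the provider yields only `held_by_provider` - opaque bytes.

# Model 2: provider-held key. The provider can decrypt on demand.
provider_key = Fernet.generate_key()              # sits in the provider's key store
held_by_provider_2 = Fernet(provider_key).encrypt(document)
print(Fernet(provider_key).decrypt(held_by_provider_2))  # readable on request
```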

Other

And so we come to other.  This bucket includes everything else.  And sadly, our friends the governments want their hands on all of that everything else.    Here’s a little list of some of that everything else.  Just a subset.  See if you can see anything on the list that you don’t think there should be unfettered access to (and remember my previous post about how once access is granted, it’s basically game over, as I don’t believe that backdoors end up staying secret only to “approved” parties…):

  • the messages you send via apps on your phone, or tablet, or laptop or PC;
  • what you buy on Amazon;
  • your banking records – whether on your phone or at the bank;
  • your emails via your company VPN;
  • the stored texts on your phone when you enquired about the woman’s shelter;
  • your emails to your doctor;
  • your health records – whether stored at your insurers, your hospital or your doctor’s surgery;
  • your browser records about emergency contraception services;
  • access to your video doorbell;
  • access to your home wifi network;
  • your neighbour’s child’s chat message to ChildLine (a charity for abused children in the UK – similar charities exist elsewhere);
  • the woman’s shelter’s records;
  • the rape crisis charity’s records;
  • your mortgage details.

This is a short list.  I’ve chosen emotive issues, of course I have, but they’re all legal.  They don’t even include issues like extra-marital affairs or access to legal pornography or organising dissent against oppressive regimes, all of which might well edge into any list that many people might compile.  But remember – if a backdoor is put into encryption, or applications, then these sorts of information will start leaking.  And they will leak to people you don’t want to have them.

Our lives revolve around the Internet and the services that run on top of it.  We have expectations of privacy.  Governments have an expectation that they can breach that privacy when occasion demands.  And I don’t dispute that such an expectation is valid.  The problem is that this is not the way to do it, because of that phrase “when occasion demands”.  If it breaks just once, then that becomes always, and not just to “friendly” governments.  To unfriendly governments, to criminals, to abusive partners and abusive adults and bad, bad people.  This is not a fight for us to lose.


*I’m giving the UK the benefit of the doubt here: as I write, it’s unclear whether we really have a government, and if we do, for how long it’ll last, but let’s just go with it for now.

**to be fair, we did have a government then.

***and not just because I like the word “delving”.  Del-ving.  Lovely.

****one which I probably won’t be writing if I know what’s good for me.

*****I’m a Brit, so I use British spelling: get over it.

Helping our governments – differently

… we may live in a new security and terrorism landscape

Two weeks ago, I didn’t write a full post, because the Manchester arena bombing was too raw.  We are only a few days on from the London Bridge attack, and I could make the same decision, but think it’s time to recognise that we have a new reality that we need to face in Britain: that we may live in a new security and terrorism landscape.  The sorts of attacks – atrocities – that have been perpetrated over the past few weeks (and the police and security services say that despite three succeeding, they’ve foiled another five) are likely to keep happening.

And they’re difficult to predict, which means that they’re difficult to stop.  There are already renewed calls for tech companies* to provide tools to allow the Good Guys[tm**] to read the correspondence of the people who are going to commit terrorist acts.  The problem is that the preferred approach requested/demanded by governments seems to be backdoors in encryption and/or communications software, which just doesn’t work – see my post The Backdoor Fallacy – explaining it slowly for governments.  I understand that “reasonable people” believe that this is a solution, but it really isn’t, for all sorts of reasons, most of which aren’t really that technical at all.

So what can we do?  Three things spring to mind, and before I go into them, I’d like to make something clear, and it’s that I have a huge amount of respect for the men and women who make up our security services and intelligence community.  All those who I’ve met have a strong desire to perform their job to the best of their ability, and to help protect us from people and threats which could damage us, our property, and our way of life.  Many of these people and threats we know nothing about, and neither do we need to.  The job that the people in the security services do is vital, and I really don’t see any conspiracy to harm us or take huge amounts of power because it’s there for the taking.  I’m all for helping them, but not at the expense of the rights and freedoms that we hold dear.  So back to the question of what we can do.  And by “we” I mean the nebulous Security Community****.  Please treat these people with respect, and be aware that they work very, very hard, and often in difficult and stressful jobs*****.

The first is to be more aware of our environment.  We’re encouraged to do this in our daily lives (“Report unaccompanied luggage”…), but what more could we do in our professional lives?  Or what could we do in our daily lives by applying our professional capabilities and expertise to everyday activities?  What suspicious activities – from traffic on networks from unexpected places to new malware – might be a precursor to something else?  I’m not saying that we’re likely to spot the next terrorism attack – though we might – but helping to combat other crime more effectively both reduces the attack surface for terrorists and increases the available resourcing for counter-intelligence.

Second: there are, I’m sure, many techniques that are available to the intelligence community that we don’t know about.  But there is a great deal of innovation within enterprise, health and telco (to choose three sectors that I happen to know quite well******) that could well benefit our security services.  Maybe your new network analysis tool, intrusion detector or data aggregator has some clever smarts in it, or creates information which might be of interest to the security community.  I think we need to be more open to the idea of sharing these projects, products and skills – proactively.

The third is information sharing.  I work for Red Hat, an Open Source company which also tries to foster open thinking and open management styles.  We’re used to sharing, and industry, in general, is getting better about sharing information with other organisations, government and the security services.  We need to get better at sharing both active data from systems which are running as designed and bad data from systems that are failing, under attack or compromised.  Open, I firmly believe, should be our default state*******.

If we get better at sharing information and expertise which can help the intelligence services in ways which don’t impinge negatively on our existing freedoms, maybe we can reduce the calls for laws that will do so.  And maybe we can help stop more injuries, maimings and deaths.  Stand tall, stand proud.  We will win.


*who isn’t a tech company, these days, though?  If you sell home-made birthday cards on Etsy, or send invoices via email, are you a tech company?  Who knows.

**this is an ironic tm***

***not that I don’t think that there are good guys – and gals – but just that it’s difficult to define them.  Read on: you’ll see.

****I’ve talked about this before – some day I’ll define it.

*****and most likely for less money than most of the rest of us.

******feel free to add or substitute your own.

*******OK, DROP for firewall and authorisation rules, but you get my point.