What’s a certificate?

For want of a nail, the kingdom was lost.

There was a huge story in the UK last week about how an expired certificate basically brought an entire mobile phone network (O2) to its knees. But what is a certificate, why do they expire, and why would that have such a big impact? In order to understand, let’s step back a bit and look at why you need certificates in the first place.

Let’s assume that two people, Alice and Bob, want to exchange some secret information. Let’s go further, and say that Bob is actually Bobcorp, Alice’s bank, and she wants to be able to send and receive her bank statements in encrypted form. There are well established ways to do this, and the easiest way is for them to agree on a shared key that they use to both encrypt and decrypt each others’ messages. How do they agree on this key? Luckily, there are some clever ways in which they can manage a “handshake” between the two of them, even if they’ve not communicated before, which ends in their both having a copy of the key, without the chance of anybody else getting hold of it.
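For the curious, here’s a minimal sketch of what such a handshake can look like in practice, using the third-party Python “cryptography” package and an X25519 key exchange. The choice of algorithm and all the names here are purely illustrative – this is not any particular bank’s protocol – but it shows the key point: each party only ever reveals its public key, yet both end up with the same symmetric key.

```python
# A minimal, illustrative sketch of a key-agreement handshake using the third-party
# "cryptography" package (pip install cryptography). Names are examples only.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def derive_shared_key(my_private, their_public) -> bytes:
    # Combine my private key with their public key, then derive a 256-bit symmetric key.
    shared_secret = my_private.exchange(their_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"alice-bobcorp-handshake").derive(shared_secret)

alice_private = X25519PrivateKey.generate()
bobcorp_private = X25519PrivateKey.generate()

# Only the public halves are exchanged over the (insecure) network.
alice_key = derive_shared_key(alice_private, bobcorp_private.public_key())
bobcorp_key = derive_shared_key(bobcorp_private, alice_private.public_key())
assert alice_key == bobcorp_key  # both parties now hold the same symmetric key
```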

The problem is that Alice can’t actually be sure that she’s talking to Bobcorp (or vice versa). Bobcorp probably doesn’t mind, at this point, because he can ask Alice to provide her login credentials, which will allow him to authenticate her. But Alice really does care: she certainly shouldn’t be handing her login details to somebody – let’s call her “Eve” – who’s just pretending to be Bob.

The solution to this problem comes in two parts: certificates and Certificate Authorities (CAs). A CA is a well-known and trusted party with whom Bobcorp has already established a relationship: typically by providing company details, website details and the like. Bobcorp also creates and sends the CA a special key and very specific information about itself (maybe including the business name, address and website information). The CA, having established Bobcorp’s bona fides, creates a certificate for Bobcorp, incorporating the information that was requested – in fact, some of the information that Bobcorp sends the CA is usually in the form of a “self-signed certificate”, so pretty much all that the CA needs to do is provide its own signature.

Astute readers will be asking themselves: “How did this help? Alice still needs to trust the CA, right?” The answer is that she does. But there will typically be a very small number of CAs in comparison to Bobcorp-type companies, so all Alice needs to do is ensure that she can trust a few CAs, and she’s now good to go. In a Web-browsing scenario, Alice will usually have downloaded a browser which already has appropriate trust relationships with the main CAs built in. She can now perform safe handshakes with lots of companies, and as long as she (or her browser) checks that they provide certificates signed by a CA that she trusts, she’s relatively safe.

But there’s a twist. The certificates that the CA issues to Bobcorp (and others) typically have an expiration date on them. This isn’t just to provide the CA with a recurring revenue stream – though I’m sure that’s a nice benefit – but also in case Bobcorp’s situation changes: what if it has gone bankrupt, for instance, or changed its country of business?[1] So after a period of time (typically in the time frame of a year or two, but maybe less or more), Bobcorp needs to reapply to get a new certificate.

What if Bobcorp forgets? Well, when Alice visits Bobcorp’s site and the browser notices an expired certificate, it should warn her not to proceed, and she shouldn’t give Bobcorp any information until the certificate has been renewed. This sounds like a pain, and it is: Bobcorp and its customers are going to be severely inconvenienced. Somebody within Bobcorp whose job it was to renew the certificate is going to be in trouble.

Life is even worse in the case where no actual people are involved. If, instead of Alice, we have an automated system A, and instead of Bob, we have an automated system B, then A still needs to trust that it’s talking to the real B in case an evil system E is pretending to be B, so certificates are still required. In this case, if B’s certificate expires, A should quite rightly refuse to connect to it. This seems to have been what happened to cause the mobile data outage that O2 is blaming on Ericsson, one of its suppliers. There was no easy way to fix the problem, or to tell the many, many A-type systems that may have been trying to communicate with the B system(s) to carry on regardless. And so, for want of a nail, the kingdom was lost.

The lesson? Avoid single points of failure, think about fall-back modes. And be ready to move to remedy unexpected errors. Quickly.
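One practical way to avoid being caught out is to monitor expiry dates before they arrive. Here’s a minimal sketch in Python of checking how many days a server’s certificate has left to run – the host name is an example only, and a real deployment would raise an alert long before the number reaches zero.

```python
# A minimal sketch: connect to a server over TLS and report how many days remain
# before its certificate expires. The host name below is an example only.
import ssl
import socket
import datetime

def days_until_expiry(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' holds the expiry time, e.g. 'Jun  1 12:00:00 2025 GMT'
    expiry = datetime.datetime.utcfromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]))
    return (expiry - datetime.datetime.utcnow()).days

print(days_until_expiry("example.com"))  # alert well before this reaches zero
```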


1 – there are also various mechanisms to revoke, or cancel, certificates, but they are typically complex, ill-implemented in many cases, and consequently little-used.

Playing a game

I’m at an EU Cybercrime Summit today and yesterday.  This may not sound very exciting, and, maybe with this in mind, the organisers arranged a game for us to play yesterday.  It was a simulation of a couple of different, but connected, scenarios, and there was a web interface via which we could all interact with the engine.

The first scenario revolved around managing an attack on a piece of critical national infrastructure: we were acting as the CISO, and trying to work out our best course of action in terms of managing responses and communicating with various external agencies.  The second scenario had us as the head of a national cybersecurity agency, watching and trying to manage an emerging set of issues, including a Meltdown/Spectre-type vulnerability and various piggy-backed attacks.

At each stage, there was some explanation, and then a series of options for us to choose from, along with a countdown, adding an element of tension to the process as we had to submit our answers[1].  Sometimes we had to choose a single option – “How serious is this issue? Not an issue; minor; substantial; significant; major; national crisis; international crisis” – and sometimes we could choose two or more – “Who will you inform about this?  Internal only; trusted parties; national cybersecurity bodies; national intelligence; international parties.”  We were encouraged to discuss our choices with our neighbours before we made them, and once all the answers were in, bar charts were displayed on the screen in front of us (and on our devices) showing how everybody had voted.

At the beginning of the process, we had been asked to enter some basic information about ourselves such as sector (public, industry, academic, etc.) and expertise (security, policy, justice, etc.), and for some of the bar charts, a further breakdown was given, showing how the sector and expertise voting had gone.  Experts at the front gave their opinions of the “correct” answers, and their reactions to how people had voted.

I’d not participated in a game/simulation like this before, and had no idea how it would go, nor how beneficial it might be.  To my surprise, I enjoyed it and found it both interesting and educational.  The scenarios were broad enough that there were unlikely to be many people in the room with expertise across all of the different issues, and I enjoyed discussing my thoughts with a neighbour whom I’d only met earlier that day, and then debating with him whether we agreed with the experts’ views.

In short, it was a very useful exercise, and I’m wondering how I could apply it to my work in different contexts.  It was educational, fun, provided opportunities for forming relationships – we found ourselves discussing scenarios and issues with others around us – and allowed for further analysis after the event.


1 – I’m not often one to link to external products, but this one seemed good, and has a free pricing tier: it was called Mentimeter. The game was run by AIT (Austrian Institute of Technology), Center for Digital Safety & Security.

I’m turning off your security.

“Don’t worry, I know what I’m doing.”

Today’s security story is people turning security off.  For me, the fact that it’s even a story is the story.  This particular story is covered in The Register, who explain (to nobody’s surprise) that some of the patches to fix issues identified in CPUs (think Spectre, Meltdown, etc.) can actually slow down the applications running on them.  The problem is that, in some cases, they don’t slow them down a little bit, but rather a lot.  By which I mean up to 50%.  And if you’ve bought expensive hardware – or rented it[1] – then you’d generally prefer it if it runs your applications/programs/workloads quickly, rather than just half as fast as they might run.

And so you turn off the security patches.  Your decision: fine.

No, stop: this isn’t what has happened.

The mythical “you”, the person running the workload, isn’t the person who makes the decision, in most cases, because it’s been made for you.  This is the real story.

Linus Torvalds, and a bunch of other experts in the Linux kernel[2], have decided that although the patch that could make your workloads secure is available, the functionality that does it should be “off” by default.  They reason – quite correctly, in my opinion – that the vast majority of people running workloads won’t easily be able to turn this functionality on themselves.

They also reason – again, correctly, in my opinion – that most people will care more about how quickly their workloads run than about how secure they are.  I’m not happy about this, but that’s the way it is.

What I worry about is the final step in the logic to making the decision.  I’m going to quote Linus:

“Have you seen any actual realistic attacks for normal human users?” he asked. “Things where the kernel should actually care? The JavaScript thing is for the browser to fix up, not for the kernel to say ‘now everything should run up to 50 per cent slower.'”

I get the reasoning behind this, but I don’t like it.  To give some context, somebody came up with an example attack which could compromise certain workloads, and Linus points out that there are better ways to fix this attack than fixing it in the kernel. My concerns are two-fold:

  1. although there may be better places to fix that particular attack, a kernel-level fix is likely to fix an entire class of attacks, meaning better protection for users who are using any application which might include an attack vector.
  2. pointing out that there haven’t been any attacks yet not only ignores the fact that there is a future out there[3] but also points malicious actors in the direction of a likely attack vector.

Now, I know that the more dedicated malicious actors are already looking for these things, but do we really need to advertise?

What’s my fix?

I don’t have one, or at least not an easy one.

Somebody, somewhere, needs to decide whether security is turned on or off.  What I’d honestly like to see is an easier set of controls to allow people to turn on or off security, and to understand the trade-offs when they do that.  The problems with that are:

  • the trade-offs are often much more complex than just “fast and insecure” or “slow and secure”, and are really difficult to explain.
  • in order to make a sensible decision about trade-offs, people need to understand risk.  And people are awful at understanding risk.

And there’s a “chicken and egg problem”[7] here: people won’t understand risk until they are offered the chance to make decisions, but there’s little incentive to offer them complex decisions unless they understand risk.

My plea?  Where possible, expose risk, and explain what it is.  And if you’re turning off security-related functionality, make it easy to turn back on for those who need it.
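As a small practical illustration of exposing this kind of information: recent Linux kernels report the state of CPU-vulnerability mitigations under /sys, so you can at least see what has been turned on or off on your own systems. Here’s a minimal sketch (the exact wording of the output varies by kernel version):

```python
# A minimal sketch: list the kernel's view of CPU-vulnerability mitigations.
# The sysfs path is standard on recent Linux kernels; wording varies by version.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def report_mitigations() -> None:
    if not VULN_DIR.is_dir():
        print("This kernel does not expose vulnerability status.")
        return
    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name:20} {entry.read_text().strip()}")

if __name__ == "__main__":
    report_mitigations()
```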


1 – a quick heads-up: this is what “deploying to the cloud” actually is.

2 – what sits at the bottom of many of the workloads that are running in servers.

3 – hopefully.  If the Three Minute Warning[4] sounds while you’re reading this, you may wish to duck and cover.  You can come back to it later[6].

4 – “… sounds like this …”[5].

5 – 80s reference.

6 – or not.  See [3].

7 – for non-native English readers, this means “a problem where the solution requires two pieces, both of which are dependent on each other”.

Why security policies are worthless

A policy, to have any utility at all, needs to exist in a larger context.

“We need a policy on that.” This is a phrase that seems to act as a universal panacea to too many managers. When a problem is identified, and the blame game has been exhausted, the way to sort things out, they believe, is to create a policy. If you’ve got a policy, then everything will be fine, because everything will be clear, and everyone will obey the policy, and nothing can go wrong.

Right[1].

The problem is that policy, on its own, is worthless.

A policy, to have any utility at all, needs to exist in a larger context, or, to think of it in a different way, to sit in a chain of artefacts.  It is its place in this chain that actually gives it meaning.  Let me explain.

When that manager said that they wanted a policy, what did that actually mean?  That rather depends on how wise the manager is[2].  Hopefully, the reason that the manager identified the need for a policy was because:

  1. they noticed that something had gone wrong that shouldn’t have done; and
  2. they wanted to have a way to make sure it didn’t happen again.

The problem with policy on its own is that it doesn’t actually help with either of those points.  What use does it have, then?

Governance

Let’s look at those pieces separately.  When we say that “something had gone wrong that shouldn’t have done”, what we’re saying is that there is some sort of model for what the world should look like, often giving us general advice on our preferred state.  Sometimes this is a legal framework, sometimes it’s a regulatory framework, and sometimes it’s a looser governance model.  The sort of thing I’d expect to see at this level would be statements like:

  • patient data must be secured against theft;
  • details of unannounced mergers must not be made available to employees who are not directors of the company;
  • only authorised persons may access the military base[4].

These are high level requirements, and are statements of intent.  They don’t tell you what to do in order to make these things happen (or not happen), they just tell you that you have to do them (or not do them).  I’m going to call collections of these types of requirements “governance models”.

Processes

At the other end of the spectrum, you’ve got the actual processes (in the broader sense of the term) required to make the general intent happen.  For the examples above, these might include:

  • AES-256 encryption using OpenSSL version 1.1.1 FIPS[5], with key patient-sym-current for all data on database patients-20162019;
  • documents Indigo-1, Indigo-3, Indigo-4 to be collected after meeting and locked in cabinet Socrates by CEO or CFO;
  • guards on duty at post Alpha must report any expired passes to the base commander and refuse entry to all those producing them.

These are concrete processes that must be followed, which will hopefully allow the statements of intent that we noted above to be carried out.

Policies and audit

The problem with both the governance statements and the processes identified is that they’re both insufficient.  The governance statements don’t tell you how to do what needs to be done, and the processes are context-less, which means that you can’t tell what they relate to, or how they’re helping.  It’s also difficult to know what to do if they fail: what happens, for example, if the base commander turns up with an expired pass?

Policies are what sit in the middle.  They provide guidance as to how to implement the governance model, and provide context for the processes.  They should give you enough detail to cover all eventualities, even if that’s to say “consult the Legal Department when unsure[6]”.  What’s more, they should be auditable.  If you can’t audit your security measures, then how can you be sure that your governance model is being followed?  But governance models, as we’ve already discovered, are at the level of intent – not the sort of thing that can be audited.

Policies, then, should provide enough detail that they can be auditable, but they should also preferably be separated enough from implementation that rules can be written that are applicable in different circumstances.  Here are a few policies that we might create to follow the first of the governance model examples above:

  • patient data must be secured against theft;
    • Policy 1: all data at rest must be encrypted by a symmetric key of at least 256 bits or equivalent;
    • Policy 2: all storage keys must be rotated at least once a month;
    • Policy 3: in the event of a suspected key compromise, all keys at the same level in that department must be rotated.

You can argue that these policies are too detailed, or you can argue that they’re not detailed enough: both arguments are fair, and level of detail (or granularity) should depend on the context and the use to which they are being put.  However, though I’m sure that all of the example policies I’ve given could be improved, I hope that they are all:

  • auditable;
  • able to be implemented by one or more well-defined processes;
  • understandable both by those concerned with the governance level and by those involved at the process implementation and operations level.
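To make “auditable” a little more concrete, here is a minimal sketch – not from any real system, with purely illustrative key names and dates – of what an automated check of Policy 2 above (“all storage keys must be rotated at least once a month”) might look like:

```python
# A minimal sketch of auditing Policy 2. Key names and dates are illustrative only;
# in a real audit they would come from your key-management system.
from datetime import datetime, timedelta

MAX_KEY_AGE = timedelta(days=31)  # "rotated at least once a month"

def audit_key_rotation(key_creation_dates, now):
    """Return the names of keys whose age violates the rotation policy."""
    return [name for name, created in key_creation_dates.items()
            if now - created > MAX_KEY_AGE]

keys = {
    "patient-sym-current": datetime(2018, 11, 20),
    "patient-sym-previous": datetime(2018, 6, 15),
}
print("Policy 2 violations:", audit_key_rotation(keys, now=datetime(2018, 12, 14)))
```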

The value of auditing

I’ve written about auditing before (Confessions of an auditor).  I think it’s really important.  Done well, it should allow you to discover whether:

  1. your processes are covering all of the eventualities of your policies;
  2. your policies are actually being implemented correctly.

Auditing may also address whether your policies fully meet your governance model.  Auditing well is a skill, but in order to help your auditor – whether they are good at it or bad at it – having a clearly defined set of policies is a must.  But, as I pointed out at the beginning of this article, policies for policies’ sake are worthless: put them together with governance and processes, however, and they provide technical and business value.


1 – this is sarcasm.

2 – yes, I know.  But let’s not be rude if we can avoid it.  We want to help managers, because the more clue we deliver to them, the easier our lives will be[3].

3 – and you never know: one day, even you might be a manager.

4 – which is probably not a US military base, given the spelling of “authorised”.

5 – example only…

6 – this, in my experience, is the correct answer to many questions.

The 3 things you need to know about disk encryption

Use software encryption, preferably an open-source and audited solution.

It turns out that somebody – well, lots of people, in fact – failed to implement a cryptographic standard very well.  This isn’t a surprise, I’m afraid, but it’s bad news.  I’ve written before about how important it is to be using disk encryption, but it turns out that the advice I gave wasn’t sufficient, or detailed enough.

Here’s a bit of background.  There are two ways to do disk encryption:

  1. let the disk hardware (and firmware) manage it: HDD (hard disk drive), SSD (solid state drive) and hybrid (a mix of HDD and SSD technologies) manufacturers create drives which have encryption built in.
  2. allow your Operating System (e.g. Linux[1], OSX[2], Windows[3]) to do the job: the O/S will have a little bit of itself on the disk unencrypted, which will allow it to decrypt the rest of the disk (which is encrypted) when provided with a password or key.

You’d think, wouldn’t you, that option 1 would be the safest?  It should be quick, as it’s done in hardware, and, well, the companies who manufacture these disks will know what they’re doing, right?

No.

A paper (link opens a PDF file) written by some researchers in the Netherlands reveals some work that they did on several SSD drives to try to work out how good a job had been done on the encryption security.  They are all supposed to have implemented a fairly complex standard from the TCG[4] called Opal, but it seems that none of them did it right.  It turns out that someone with physical access to your hardware can, fairly trivially, decrypt what’s on your drive.  And they can do this without the password that you use to lock it or any associated key(s).  The simple lesson from this is that you shouldn’t trust hardware disk encryption.

So, software disk encryption is OK, then?

Also no.

Well, actually yes, as long as you’re not using Microsoft’s BitLocker in its default mode.  It turns out that BitLocker will just use hardware encryption if the drive it’s using supports it – in other words, by default you get all of the problems described above unless you explicitly tell BitLocker not to rely on the drive’s built-in encryption.

What about other options?  Well, you can tell BitLocker not to use hardware encryption, but only for a new installation: it won’t change on an existing disk.  The best option[5] is to use a software encryption solution which is open source and audited by the wider community.  LUKS is the default for most Linux distributions.  One suggested by the paper’s authors for Windows is VeraCrypt.  Can we be certain that there are no holes or mistakes in the implementation of these solutions?  No, we can’t, but the chances of security issues being found and fixed are much, much higher than for proprietary software[6].
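If you want to check which kind of encryption you’re actually relying on, this is easy enough to verify on Linux. A minimal sketch, assuming the cryptsetup tool is installed and you have the necessary privileges (the device path is an example only):

```python
# A minimal sketch: ask cryptsetup whether a block device has a LUKS header,
# i.e. whether open-source software encryption is in use for that device.
# Requires cryptsetup and appropriate privileges; the device path is an example.
import subprocess

def is_luks(device: str) -> bool:
    # "cryptsetup isLuks" exits with status 0 if the device is LUKS-formatted.
    return subprocess.run(["cryptsetup", "isLuks", device]).returncode == 0

if __name__ == "__main__":
    dev = "/dev/sda2"
    print(dev, "is LUKS-encrypted" if is_luks(dev) else "is not LUKS-encrypted")
```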

What, then, are my recommendations?

  1. Don’t use hardware disk encryption.  It’s been shown to be flawed in many implementations.
  2. Don’t use proprietary software.  For anything, honestly, particularly anything security-related, but specifically not for disk encryption.
  3. If you have to use Windows, and are using BitLocker, run with VeraCrypt on top.



1 – GNU Linux.

2 – I’m not even sure if this is the OS that Macs run anymore, to be honest.

3 – not my thing either, but I’m pretty sure this is what it’s called.  Couldn’t be certain of the version, though.

4 – Trusted Computing Group.

5 – as noted by the paper’s authors, and heartily endorsed by me.

6 – I’m not aware of any problems with Macintosh-based implementations, but open source is just better – read the article linked from earlier in the sentence.

6 types of attack: learning from Supermicro, State Actors and silicon

… it could have happened, and it could be happening now.

Last week, Bloomberg published a story detailing how Chinese state actors had allegedly forced employees of Supermicro (or companies subcontracting to them) to insert a small chip – the silicon in the title – into motherboards destined for Apple and Amazon.  The article talked about how an investigation into these boards had uncovered this chip and the steps that Apple, Amazon and others had taken.  The story was vigorously denied by Supermicro, Apple and Amazon, but that didn’t stop Supermicro’s stock price from tumbling by over 50%.

I have heard strong views expressed by people with expertise in the topic on both sides of the argument: that it probably didn’t happen, and that it probably did.  One side argues that the denials by Apple and Amazon, for instance, might have been impacted by legal “gagging orders” from the US government.  An opposing argument suggests that the Bloomberg reporters might have confused this story with a similar one that occurred a few months ago.  Whether this particular story is correct in every detail, or a fabrication – intentional or unintentional – is not what I’m interested in at this point.  What I’m interested in is not whether it did happen in this instance: the clear message is that it could have happened, and it could be happening now.

I’ve written before about State Actors, and whether you should worry about them.  There’s another question which this story brings up, which is possibly even more germane: what can you do about it if you are worried about them?  This breaks down further into two questions:

  • how can I tell if my systems have been compromised?
  • what can I do if I discover that they have?

The first of these is easily enough to keep us occupied for now[1], so let’s spend some time on that.  Let’s first define six types of compromise, think about how they might be carried out, and then consider the questions above for each:

  • supply-chain hardware compromise;
  • supply-chain firmware compromise;
  • supply-chain software compromise;
  • post-provisioning hardware compromise;
  • post-provisioning firmware compromise;
  • post-provisioning software compromise.

There isn’t space in this article to go into detail on these types of attack, so I’ll provide an overview of each instead[2].

Terms

  • Supply-chain – all of the steps up to when you start actually running a system.  From manufacture through installation, including vendors of all hardware components and all software, OEMs, integrators and even shipping firms that have physical access to any pieces of the system.  For all supply-chain compromises, the key question is the extent to which you, the owner of a system, can trust every single member of the supply chain[3].
  • Post-provisioning – any point after which you have installed the hardware, put all of the software you want on it, and started running it: the time during which you might consider the system “under your control”.
  • Hardware – the physical components of a system.
  • Software – software that you have installed on the system and over which you have some control: typically the Operating System and application software.  The amount of control depends on factors such as whether you use proprietary or open source software, and how much of it is produced, compiled or checked by you.
  • Firmware – special software that controls how the hardware interacts with the standard software on the machine, the hardware that comprises the system, and external systems.  It is typically provided by hardware vendors, and its operation is opaque to owners and operators of the system.

Compromise types

See the table at the bottom of this article for a short summary of the points below.

  1. Supply-chain hardware – there are multiple opportunities in the supply chain to compromise hardware, but the harder such compromises are made to detect, the more difficult they are to perform.  The attack described in the Bloomberg story would be extremely difficult to detect, but the addition of a keyboard logger to a keyboard just before delivery (for instance) would be correspondingly simpler.
  2. Supply-chain firmware – of all the options, this has the best return on investment for an attacker.  Assuming good access to an appropriate part of the supply chain, inserting firmware that (for instance) impacts network performance or leaks data over a wifi connection is relatively simple.  The difficulty in detection comes from the fact that although it is possible for the owner of the system to check that the firmware is what they think it is (see the sketch after this list), what that measurement confirms is only that the vendor has told them what they have supplied.  So the “medium” rating relates only to firmware that was implanted by members in the supply chain who did not source the original firmware: otherwise, it’s “high”.
  3. Supply-chain software – by this, I mean software that comes installed on a system when it is delivered.  Some organisations will insist on “clean” systems being delivered to them[4], and will install everything from the Operating System upwards themselves.  This means that they basically now have to trust their Operating System vendor[5], which is maybe better than trusting other members of the supply chain to have installed the software correctly.  I’d say that it’s not too simple to mess with this in the supply chain, if only because checking isn’t too hard for the legitimate members of the chain.
  4. Post-provisioning hardware – this is where somebody with physical access to your hardware – after it’s been set up and is running – inserts or attaches hardware to it.  I nearly gave this a “high” rating for difficulty below, assuming that we’re talking about servers, rather than laptops or desktop systems, as one would hope that your servers are well-protected, but the ease with which attackers have shown that they can typically get physical access to systems using techniques like social engineering means that I’ve downgraded this to “medium”.  Detection, on the other hand, should be fairly simple given sufficient resources (hence the “medium” rating), and although I don’t believe anybody who says that a system is “tamper-proof”, tamper-evidence is a much simpler property to achieve.
  5. Post-provisioning firmware – when you patch your Operating System, it will often also patch firmware on the rest of your system.  This is generally a good thing to do, as patches may provide security, resilience or performance improvements, but you’re stuck with the same problem as with supply-chain firmware that you need to trust the vendor: in fact, you need to trust both your Operating System vendor and their relationship with the firmware vendor.
  6. Post-provisioning software – is it easy to compromise systems via their Operating System and/or application software?  Yes: this we know.  Luckily – though depending on the sophistication of the attack – there are generally good tools and mechanisms for detecting such compromises, including behavioural monitoring.
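Here is the sketch promised in point 2 above: a minimal illustration of the kind of firmware “measurement” an owner can perform. Both the file name and the vendor digest below are made-up assumptions – and note that a match only tells you that the image is what the vendor says it supplied, not that the vendor’s image is benign.

```python
# A minimal, illustrative sketch: hash a firmware image and compare it with a
# vendor-published digest. The file name and the digest below are made up.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

vendor_published = "0000000000000000000000000000000000000000000000000000000000000000"
measured = sha256_of("bmc-firmware-v1.2.bin")  # hypothetical firmware image
print("Matches what the vendor says it supplied:", measured == vendor_published)
```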

Table


Compromise type              Attacker difficulty   Detection difficulty
Supply-chain hardware        High                  High
Supply-chain firmware        Low                   Medium
Supply-chain software        Medium                Medium
Post-provisioning hardware   Medium                Medium
Post-provisioning firmware   Medium                Medium
Post-provisioning software   Low                   Low

Conclusion

What are your chances of spotting a compromise on your system?  I would argue that they are generally pretty much in line with the difficulty of performing the attack in the first place: with the glaring exception of supply-chain firmware.  We’ve seen attacks of this type, and they’re very difficult to detect.  The good news is that there is some good work going on to help detection of these types of attacks, particularly in the world of Linux[7] and open source.  In the meantime, I would argue our best forms of defence are currently:

  • for supply-chain: build close relationships, use known and trusted suppliers.  You may want to restrict as much as possible of your supply chain to “friendly” regimes if you’re worried about State Actor attacks, but this is very hard in the global economy.
  • for post-provisioning: lock down your systems as much as possible – both physically and logically – and use behavioural monitoring to try to detect anomalies in what you expect them to be doing.

1 – I’ll try to write something on this other topic in a different article.

2 – depending on interest, I’ll also consider a series of articles to go into more detail on each.

3 – how certain are you, for instance, that your delivery company won’t give your own government’s security services access to the boxes containing your equipment before they deliver them to you?

4 – though see above: what about the firmware?

5 – though you can always compile your own Operating System if you use open source software[6].

6 – oh, you didn’t compile your compiler yourself?  All bets off, then…

7 – yes, “GNU Linux”.

Knowing me, knowing you: on Russian spies and identity

Who are you, and who tells me so?

Who are you, and who tells me so?  These are questions which are really important for almost any IT-related system in use today.  I’ve previously discussed the difference between identification and authentication (and three other oft-confused terms) in Explained: five misused security words, and what I want to look at in this post is the shady hinterland between identification and authentication.

There has been a lot in the news recently about the poisoning in the UK of two Russian nationals and two British nationals, leading to the tragic death of Dawn Sturgess.  I’m not going to talk about that, or even about the alleged perpetrators, but about the problem of identity – their identity – and how that relates to IT.  The two men who travelled to Salisbury, and were named by British police as the perpetrators, travelled under Russian passports.  These documents provided their identities, as far as the British authorities – including UK Border Control – were aware, and led to their being allowed into the country.

When we set up a new user in an IT system or allow them physical access to a building, for instance, we often ask for “Government-issued ID” as the basis for authenticating the identity that they have presented, in preparation for deciding whether to authorise them to perform whatever action they have requested.  There are two issues here – one obvious, and one less so.  The first, obvious one, is that it’s not always easy to tell whether a document has actually been issued by the authority by which it appears to have been issued – document forgers have been making a prosperous living for hundreds, if not thousands of years.  The problem, of course, is that the more tell-tale signs of authenticity you reveal to those whose job it is to check a document, the more clues you give to would-be forgers for how to improve the quality of the false versions that they create.

The second, and less obvious problem, is that just because a document has been issued by a government authority doesn’t mean that it is real.  Well, actually, it does, and there’s the issue.  Its issuance by the approved authority makes it real – that is to say “authentic” – but it doesn’t mean that it is correct.  Correctness is a different property to authenticity. Any authority may decide to issue identification documents that may be authentic, but do not truly represent the identity of the person carrying them. When we realise that a claim of identity is backed up by an authority which is issuing documents that we do not believe to be correct, that means that we should change our trust relationship with that authority.  For most entities, IDs which have been authentically issued by a government authority are quite sufficient, but it is quite plausible, for instance, that the UK Border Force (and other equivalent entities around the world) may choose to view passports issued by certain government authorities as suspect in terms of their correctness.

What impact does this have on the wider IT security community?  Well, there are times when we are accepting government-issued ID when we might want to check with relevant home nation authorities as to whether we should trust them[1].  More broadly than that, we must remember that every time that we authenticate a user, we are making a decision to trust the authority that represented that user’s identity to us.  The level of trust we place in that authority may safely be reduced as we grow to know that user, but it may not – either because our interactions are infrequent, or maybe because we need to consider that they are playing “the long game”, and are acting as so-called “sleepers”.

What does this continuous trust mean?  What it means is that if we are relying on an external supplier to provide contractors for us, we need to keep remembering that this is a trust relationship, and one which can change.  If one of those contractors turns out to have faked educational qualifications, then we need to doubt the authenticity of the educational qualifications of all of the other contractors – and possibly other aspects of the identity which the external supplier has presented to us.  This is because we have placed transitive trust in the external supplier, which we must now re-evaluate.  What other examples might there be?  The problem is that the particular aspects of identity that we care about are so varied and differ between different roles that we perform.  Sometimes, we might not care about education qualifications, but credit score, or criminal background, or blood type.  In the film Gattaca[2], identity is tied to physical and mental ability to perform a particular task.

There are various techniques available to allow us to tie a particular individual to a set of pieces of information: DNA, iris scans and fingerprints are pretty good at telling us that the person in front of us now is the person who was in front of us a year ago.  But tying that person to other information relies on trust relationships with external entities, and apart from a typically small set of inferences that we can draw from our direct experiences of this person, it’s difficult to know exactly what is truly correct unless we can trust those external entities.


1 – That assumes, of course, that we trust our home nation authorities…

2 – I’m not going to put a spoiler in here, but it’s a great film, and really makes you think about identity: go and watch it!