Meltdown and Spectre: thinking about embargoes and disclosures

The technical details behind Meltdown and Spectre are complex and fascinating – and the possible impacts wide-ranging and scary.  I’m not going to go into those here, though there are some excellent articles explaining them.  I’d point readers in particular at the following URLs (which both resolve to the same page, so there’s no need to follow both):

I’d also recommend this article on the Red Hat blog, written by my colleague Jon Masters: What are Meltdown and Spectre? Here’s what you need to know.  It includes a handy cut-out-and-keep video[1] explaining the basics.   Jon has been intimately involved in the process of creating and releasing mitigations and fixes to Meltdown and Spectre, and is very well-placed to talk about it.

All that is interesting and important, but I want to talk about the mechanics and process behind how a vulnerability like this moves from discovery through to fixing.  Although similar processes exist for proprietary software, I’m most interested in open source software, so that’s what I’m going to describe.

 

Step 1 – telling the right people

Let’s say that someone has discovered a vulnerability in a piece of open source software.  There are a number of things they can do.  The first is to ignore it.  This is no good, and isn’t interesting for our story, so we’ll ignore it in turn.  The second is to keep it to themselves, or to try to make money out of it.  Let’s assume that the discoverer is a good type of person[2], so isn’t going to do this[3].  A third is to announce it on the project mailing list.  This might seem like a sensible thing to do, but it can actually be very damaging to the project and the community at large.  The problem is that the discoverer has just let all the Bad Folks[tm] know about a vulnerability which hasn’t been fixed, and which they can now exploit.  Even if the discoverer submits a patch, it needs to be considered and tested, and it may be that the fix they’ve suggested isn’t the best approach anyway.  For a serious security issue, the project maintainers are going to want to have a really good look at it in order to decide what the best fix – or mitigation – should be.

It’s for this reason that many open source projects have a security disclosure process.  Typically, there’s a closed mailing list to which you can send suspected vulnerabilities, often something like “security@projectname.org“.  Once someone has emailed that list, that’s where the process kicks in.

Step 2 – finding a fix

Now, before that email got to the list, somebody had to decide that they needed a list in the first place, so let’s assume that you, the project leader, have already put a process of some kind in place.  There are a few key decisions to make, of which the most important are probably:

  • are you going to keep it secret[5]?  It’s easy to default to “yes” here, for the reasons that I mentioned above, but the number of times you actually put in place restrictions on telling people about the vulnerability – usually referred to as an embargo, given its similarity to a standard news embargo – should be very, very small.  This process is going to be painful and time-consuming: don’t overuse it.  Oh, and your project’s meant to be about open source software, yes?  Default to that.
  • who will you tell about the vulnerability?  We’re assuming that you ended up answering “yes” to the previous bullet, and that you don’t want everybody to know, and also that this is a closed list.  So, who’s the most important set of people?  Well, most obvious is the set of project maintainers, and key programmers and testers – of which more below.  These are the people who will get the fix completed, but there are two other constituencies you might want to consider: major distributors and consumers of the project.  Why these people, and who are they?  Well, distributors might be OEMs, Operating System Vendors or ISVs who bundle your project as part of their offering.  These may be important because you’re going to want to ensure that people who will need to consume your patch can do so quickly, and via their preferred method of updating.  What about major consumers?  Well, if an organisation has a deployed estate of hundreds or thousands of instances of a project[6], then the impact of having to patch all of those at short notice – let alone the time required to test the patch – may be very significant, so they may want to know ahead of time, and they may be quite upset if they don’t.  This privileging of major over rank-and-file users of your project is politically sensitive, and causes much hand-wringing in the community.  And, of course, the more people you tell, the more likely it is that a leak will occur, and that news of the vulnerability will get out before you’ve had a chance to fix it properly.
  • who should be on the fixing team?  Just because you tell people doesn’t mean that they need to be on the team.  By preference, you want people who are both security experts and also know your project software inside-out.  Good luck with this.  Many projects aren’t security projects, and will have no security experts attached – or a few at most.  You may need to call people in to help you, and you may not even know which people to call in until you get a disclosure in the first place.
  • how long are you going to give yourself?  Many discoverers of vulnerabilities want their discovery made public as soon as possible.  This could be for a variety of reasons: they have an academic deadline; they don’t want somebody else to beat them to disclosure; they believe that it’s important to the community that the project is protected as soon as possible; they’re concerned that other people have found – and are exploiting – the vulnerability; they don’t trust the project to take the vulnerability seriously and are concerned that it will just be ignored.  This last reason is sadly justifiable, as there are projects which don’t react quickly enough, or don’t take security vulnerabilities seriously, and there’s a sorry history of proprietary software vendors burying vulnerabilities and pretending they’ll just go away[7].  A standard period of time before disclosure is 2-3 weeks, but as projects get bigger and more complex, balancing that against issues such as pressure from those large-scale users to give them more time to test, holiday plans and all the rest becomes important.  Having an agreed time and sticking to it can be vital, however, if you want to avoid deadline slip after deadline slip.  There’s another interesting issue, as well, which is relevant to the Meltdown and Spectre vulnerabilities – you can’t just patch hardware.  Where hardware is involved, the fixes may involve multiple patches to multiple projects, and not just standard software but microcode as well: this may significantly increase the time needed.
  • what incentives are you going to provide?  Some large projects are in a position to offer bug bounties, but these are few and far between.  Most disclosers want – and should expect to be granted – public credit when the fix is finally provided and announced to the public at large.  This is not to say that disclosers should necessarily be involved in the wider process that we’re describing: this can, in fact, be counter-productive, as their priorities (around, for instance, timing) may be at odds with the rest of the team.

There’s another thing you might want to consider, which is “what are we going to do if this information leaks early?” I don’t have many good answers for this one, as it will depend on numerous factors such as how close are you to a fix, how major is the problem, and whether anybody in the press picks up on it.  You should definitely consider it, though.

Step 3 – external disclosure

You’ve come up with a fix?  Well done.  Everyone’s happy?  Very, very well done[8].  Now you need to tell people.  But before you do that, you’re going to need to decide what to tell them.  There are at least three types of information that you may well want to prepare:

  • technical documentation – by this, I mean project technical documentation.  Code snippets, inline comments, test cases, wiki information, etc., so that when people who code on your project – or code against it – need to know what’s happened, they can look at this and understand.
  • documentation for techies – there will be people who aren’t part of your project, but who use it, or are interested in it, who will want to understand the problem you had, and how you fixed it.  This will help them to understand the risk to them, or to similar software or systems.  Being able to explain to these people is really important.
  • press, blogger and family documentation – this is a category that I’ve just come up with.  Certain members of the press will be quite happy with “documentation for techies”, but, particularly if your project is widely used, many non-technical members of the press – or technical members of the press writing for non-technical audiences – are going to need something that they can consume and which will explain in easy-to-digest snippets a) what went wrong; b) why it matters to their readers; and c) how you fixed it.  In fact, on the whole, they’re going to be less interested in c), and probably more interested in a hypothetical d) & e) (which are “whose fault was it and how much blame can we heap on them?” and “how much damage has this been shown to do, and can you point us at people who’ve suffered due to it?” – I’m not sure how helpful either of these is).  Bloggers may also be useful in spreading the message, so considering them is good.  And generally, I reckon that if I can explain a problem to my family, who aren’t particularly technical, then I can probably explain it to pretty much anybody I care about.

Of course, you’re now going to need to coordinate how you disseminate all of these types of information.  Depending on how big your project is, your channels may range from your project code repository through your project wiki through marketing groups, PR and even government agencies[9].

 

Conclusion

There really is no “one size fits all” approach to security vulnerability disclosures, but it’s an important consideration for any open source software project, and one of those that can be forgotten as a project grows, until suddenly you’re facing a real use case.  I hope this overview is interesting and useful, and I’d love to hear thoughts and stories about examples where security disclosure processes have worked – and when they haven’t.


2 – because only good people read this blog, right?

3 – some government security agencies have a policy of collecting – and not disclosing – security vulnerabilities for their own ends.  I’m not going to get into the rights or wrongs of this approach.[4]

4 – well, not in this post, anyway.

5 – or try, at least.

6 – or millions or more – consider vulnerabilities in smartphone software, for instance, or cloud providers’ install bases.

7 – and even, shockingly, of using legal mechanisms to silence – or try to silence – the discloser.

8 – for the record, I don’t believe you: there’s never been a project – let alone an embargo – where everyone was happy.  But we’ll pretend, shall we?

9 – I really don’t think I want to be involved in any of these types of disclosure, thank you.

Top 5 resolutions for security folks – 2018

Yesterday, I wrote some jokey resolutions for 2018 – today, as it’s a Tuesday, my regular day for posts, I decided to come up with some real ones.

1 – Embrace the open

I’m proud to have been using Linux[1] and other open source software for around twenty years now.  Since joining Red Hat in 2016, and particularly since I started writing for Opensource.com, I’ve become more aware of other areas of open-ness out there, from open data to open organisations.  There are still people out there who are convinced that open source is less secure than proprietary software.  You’ll be unsurprised to discover that I disagree.  I encourage everyone to explore how embracing the open can benefit them and their organisations.

2 – Talk about risk

I’m convinced that we talk too much about security for security’s sake, and not about risk, which is what most “normal people” think about.  There’s education needed here as well: of us, and of others.  If we don’t understand the organisations we’re part of, and how they work, we’re not going to be able to discuss risk sensibly.  In the other direction, we need to be able to talk about security a bit, in order to explain how it will mitigate risk, so we need to learn how to do this in a way that informs our colleagues, rather than alienating them.

3 – Think about systems

I don’t believe that we[2] talk enough about systems.  We spend a lot of our time thinking about functionality and features, or how “our bit” works, but not enough about how all the bits fit together. I don’t often link out to external sites or documents, but I’m going to make an exception for NIST special publication 800-160 “Systems Security Engineering: Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems”, and I particularly encourage you to read Appendix E “Roles, responsibilities and skills: the characteristics and expectations of a systems security engineer”.  I reckon this is an excellent description of the core skills and expertise required for anyone looking to make a career in IT security.

4 – Examine the point of conferences

I go to a fair number of conferences, both as an attendee and as a speaker – and also do my share of submission grading.  I’ve written before about how annoyed I get (and I think others get) by product pitches at conferences.  There are many reasons to attend the conferences, but I think it’s important for organisers, speakers and attendees to consider what’s most important to them.  For myself, I’m going to try to ensure that what I speak about is what I think other people will be interested in, and not just what I’m focussed on.  I’d also highlight the importance of the “hallway track”: having conversations with other attendees which aren’t necessarily directly related to the specific papers or talks. We should try to consider what conferences we need to attend, and which ones to allow to fall by the wayside.

5 – Read outside the IT security discipline

We all need downtime.  One way to get that is to read – on an e-reader, online, on your phone, magazines, newspapers or good old-fashioned books.  Rather than just pick up something directly related to work, choose something which is at least a bit off the beaten track.  Whether it’s an article on a topic to do with your organisation’s business,  a non-security part of IT[3], something on current affairs, or a book on a completely unrelated topic[4], taking the time to gain a different perspective on the world is always[5] worth it.

What have I missed?

I had lots of candidates for this list, and I’m sure that I’ve missed something out that you think should be in there.  That’s what comments are for, so please share your thoughts.


1 – GNU/Linux.

2 – the mythical IT community.

3 – I know, it’s not going to be as sexy as security, but go with it.  At least once.

4 – I’m currently going through a big espionage fiction phase.  Which is neither here nor there, but hey.

5 – well, maybe almost always.

Using words that normal people understand

Normal people generally just want things to work.

Most people[1] don’t realise quite how much fun security is, or exactly how sexy security expertise makes you to other people[2].  We know that it’s engrossing, engaging and cool; they don’t.  For this reason, when security people go to the other people (let’s just call them “normal people” for the purposes of this article) and tell them that they’re doing something wrong – that they can’t launch their product, or deploy their application, or that they must stop taking sales orders immediately, and probably for the next couple of days until things are fixed – then those normal people don’t always react with the levels of gratefulness that we feel are appropriate.

Sometimes, in fact, they will exhibit negative responses – even quite personal negative responses – to these suggestions.

The problem is this: security folks know how things should be, and that’s secure.  They’ve taken the training, they’ve attended the sessions, they’ve read the articles, they’ve skimmed the heavy books[3], and all of these sources are quite clear: everything must be secure.  And secure generally means “closed” – particularly if the security folks weren’t sufficiently involved in the design, implementation and operations processes.  Normal people, on the other hand, generally just want things to work.  There’s a fundamental disconnect between those two points of view, and we’re not going to get it fixed until security is the very top requirement for any project from its inception to its ending[4].

Now, normal people aren’t stupid[5].  They know that things can’t always work perfectly: but they would like them to work as well as they can.  This is the gap[7] that we need to cross.  I’ve talked about managed degradation as a concept before, in the post Service degradation: actually a good thing, and this is part of the story.  One of the things that we security people should be ready to do is explain that there are risks to be mitigated.  For security people, those risks should be mitigated by “failing closed”.  It’s easy to stop risk: you just stop system operation, and there’s no risk that it can be misused.  But for many people, there are other risks: an example being that the organisation may in fact go completely out of business because some *[8]* security person turned the ordering system off.  If they’d offered me the choice to balance the risk of stopping taking orders against the risk of losing some internal company data, would I have taken it?  Well yes, I might have.  But if I’m not offered the option, and the risk isn’t explained, then I have no choice.  These are the sorts of words that I’d like to hear if I’m running a business.

It’s not just this type of risk, though.  Coming to a project meeting two weeks before launch and announcing that the project can’t be deployed because the calls against this API aren’t being authenticated is no good at all.  To anybody.  As a developer, though, I have a different vocabulary – and different concerns – to those of the business owner.  How about, instead of saying “you need to use authentication on this API, or you can’t proceed”, the security person asks “what would happen if data that was provided on this API was incorrect, or was provided by someone who wanted to disrupt system operation?”  In my experience, most developers are interested – are invested – in the correct operation of the system that they’re running, and of the data that it processes.  Asking questions that show the possible impact of a lack of security is much more likely to garner positive reactions than an initial “discussion” that basically amounts to a “no”.
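To make that developer-facing question concrete, here’s a minimal sketch of what “authentication on this API” actually buys the developer.  This is purely illustrative – the function names, payload format and shared key are all invented, and a real system would manage keys properly rather than hard-coding one – but it shows the point: a caller who tampers with the data can no longer produce a request the system will accept.

```python
import hashlib
import hmac

# Hypothetical shared secret, for illustration only; real key
# distribution is a separate (and hard) problem.
SHARED_KEY = b"example-shared-secret"

def sign(payload: bytes) -> str:
    # What a legitimate caller attaches to its request.
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def handle_request(payload: bytes, signature: str) -> str:
    # Recompute the signature and reject any payload that wasn't
    # produced by a holder of the key.
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return "rejected"   # unauthenticated data never reaches the system
    return "processed"      # only now do we treat the payload as trustworthy

good = handle_request(b'{"order": 42}', sign(b'{"order": 42}'))
tampered = handle_request(b'{"order": 9999}', sign(b'{"order": 42}'))
```

Here `good` is processed and `tampered` is rejected: exactly the “what would happen if the data was incorrect?” conversation, answered in code rather than with a flat “no”.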

Don’t get me wrong; there are times when we, as security people, need to be firm, and stick to our guns[9], but in the end, it’s the owners – of systems, or organisations, of business units, of resources – who get to make the final decision.  It’s our job to talk to them in words they can understand and ensure that they are as well informed as we can possibly make them.  Without just saying “no”.


1. by which I mean “those poor unfortunate souls who don’t read these posts, unlike you, dear and intelligent reader”.

2. my wife, sadly, seems to fall into this category.

3. which usually have a picture of a lock on the cover.

4. and good luck with that.

5. while we’ve all met our fair share of stupid normal people, I’m betting that you’ve met your fair share of stupid security people, too, so it balances out[6].

6. probably more than balances out.  Let’s leave it there.

7. chasm.

8. insert your favourite adjectival expletive here.

9. figuratively: I don’t condone bringing any weapons, including firearms, to your place of work.

Who’s saying “hello”? – agency, intent and AI

Who is saying “hello world?”: you, or the computer?

I don’t yet have one of those Google or Amazon talking speaker thingies in my house or office.  A large part of this is that I’m just not happy about the security side: I know that the respective companies swear that they’re only “listening” when you say the device’s trigger word, but even if that’s the case, I like to pretend[1] that I have at least some semblance of privacy in my life.  Another reason, however, is that I’m not sure that I like what happens to people when they pretend that there’s a person listening to them, but it’s really just a machine.

It’s not just Alexa and the “OK Google” persona, however.  When I connect to an automated phone-answering service, I worry when I hear “I’ll direct your call” from a non-human.  Who is “I”?  “We’ll direct your call” is better – “we” could be the organisation with whom I’m interacting.  But “I”?  “I” is the pronoun that people use.  When I hear “I”, I’m expecting sentience: if it’s a machine, I’m expecting AI – preferably fully Turing-compliant.

There’s a more important point here, though.  I’m entirely aware that there’s no sentience behind that “I”[2], but there’s an important issue about agency that we should unpack.

What, then, is “agency”?  I’m talking about the ability of an entity to act on its own or another’s behalf, and I touched on this in a previous post, “Wow: autonomous agents!“.  When somebody writes some code, what they’re doing is giving the system that will run that code the ability to do something – that’s the first part.  But the agency doesn’t really occur, I’d say, until that code is run/instantiated/executed.  At this point, I would argue, the software instance has agency.

But whose agency, exactly?  For whom is this software acting?

Here are some answers.  I honestly don’t think that any of them is right.

  1. the person who owns the hardware (you own the Alexa hardware, right?  You paid Amazon for it…  Or what about running applications on the cloud?).
  2. the person who started the software (you turned on the Alexa hardware, which started the software…  And don’t forget software which is automatically executed in response to triggers or on a time schedule.)
  3. the person who gave the software the instructions (what do you mean, “gave it the instructions”?  Wrote its config file?  Spoke to it?  Set up initial settings?  Typed in commands?  And even if you gave it instructions, do you think that your OK Google hardware is implementing your wishes, or Google’s?  For whom is it actually acting?  And what side effects (like recording your search history and deciding what to suggest in your feed) are you happy to believe are “yours”?)
  4. the person who installed the software (your phone comes with all sorts of software installed, but surely you are the one who imbues it with agency?  If not, whom are you blaming: Google (for the Android apps) or Samsung (which actually put them on the phone)?)
  5. the person who wrote the software (I think we’ve already dealt with this, but even then, is it a single person, or an organisation?  What about open source software, which is typically written, compiled and documented by many different people?  Ascribing “ownership” or “authorship” is a distinctly tricky (and intentionally tricky) issue when you discuss open source)

Another way to think of this problem is to ask: when you write and execute a program, who is saying “hello world?”: you, or the computer?

There are some really interesting questions that come out of this.  Here are a couple that come to mind, which seem to me to be closely connected.

  • In the film WarGames[3], is the automatic dialling that Matthew Broderick’s character’s computer carries out an act with agency?  Or is it when it connects to another machine?  Or when it records the details of that machine?  I don’t think anyone would deny that the computer is acting with agency once David Lightman actually gets it to complete a connection and interact with it, but what about before?
  • Google used to run automated programs against messages received as part of the Gmail service looking for information and phrases which it could use to serve ads.  They were absolutely adamant that they, Google, weren’t doing the reading: it was just a computer program.  I’m not sure how clear or safe a distinction that is.

Why does this all matter?  Well, one of the more pressing reasons is because of self-driving cars.  Whose fault is it when one goes wrong and injures or kills someone?  What about autonomous defence systems?

And here’s the question that really interests – and vexes – me: is this different when the program which is executing can learn?  I don’t even mean strong AI: just that it can change what it does based on the behaviour it “sees”, “hears” or otherwise senses.  It feels to me that there’s a substantive difference between:

a) actions carried out at the explicit (asynchronous) request of a human operator, or according to sets of rules coded into a program

AND

b) actions carried out in response to rules that have been formed by the operation of the program itself.  There is what I’d call synchronous intent within the program.

You can argue that b) has pretty much always been around, in basic forms, but it seems to me to be different when programs are being created with the expectation that humans will not necessarily be able to decode the rules, and where the intent of the human designers is to allow rulesets to be created in this way.

There is some discussion at the moment as to how and/or whether rulesets generated by open source projects should be shared.  I think the general feeling is that there’s no requirement for them to be – in the same way that material I write using an open source text editor shouldn’t automatically be considered open source – but open data is valuable, and finding ways to share it is a good idea, IMHO.

In WarGames, that is the key difference between the system as originally planned, and what it ends up doing: Joshua has synchronous intent.

I really don’t think this is all bad: we need these systems, and they’re going to improve our lives significantly.  But I do feel that it’s important that you and I start thinking hard about what is acting for whom, and how.

Now, if you wouldn’t mind opening the Pod bay doors, HAL…[5]

 


1. and yes, I know it’s a pretense.

2. yet…

3. go on – re-watch it: you know you want to[4].

4. and if you’ve never watched it, then stop reading this article and go and watch it NOW.

5. I think you know the problem just as well as I do, Dave.

On passwords and accounts

This isn’t a password problem. It’s a misunderstanding-of-what-accounts-are-for problem.

Once a year or so, one of the big UK tech magazines or websites[1] does a survey where they send a group of people to one of the big London train stations and ask travellers for their password[3].  The deal is that every traveller who gives up their password gets a pencil, a chocolate bar or similar.

I’ve always been sad that I’ve never managed to be at the requisite station for one of these polls.  I would love to get a free pencil – or even better a chocolate bar[4] – for lying about a password.  Or even, frankly, for giving them one of my actual passwords, which would be completely useless to them without some identifying information about me.  Which I obviously wouldn’t give them.  Or again, would pretend to give them, but lie.

The point of this exercise[5] is supposed to be to expose the fact that people are very bad about protecting their passwords.  What it actually identifies is that a good percentage of the British travelling public are either very bad about protecting their passwords, or are entirely capable of making informed (or false) statements in order to get a free pencil or chocolate bar[4]. Good on the British travelling public, say I. 

Now, everybody agrees that passwords are on their way out, as they have been on their way out for a good 15-20 years, so that’s nice.  People misuse them, reuse them, don’t change them often enough, etc., etc.  But it turns out that it’s not the passwords that are the real problem.  This week, more than one British MP admitted – seemingly without any realisation that they were doing anything wrong – that they share their passwords with their staff, or just leave their machines unlocked so that anyone on their staff can answer their email or perform other actions on their behalf.

This isn’t a password problem.  It’s a misunderstanding-of-what-accounts-are-for problem.

People seem to think that, in a corporate or government setting, the point of passwords is to stop people looking at things they shouldn’t.

That’s wrong.  The point of passwords is to allow different accounts for different people, so that the appropriate people can exercise the appropriate actions, and be audited as having done so.  It is, basically, a matching of authority to responsibility – as I discussed in last week’s post Explained: five misused security words – with a bit of auditing thrown in.

Now, looking at things you shouldn’t is one action that a person may have responsibility for, certainly, but it’s not the main thing.  But if you misuse accounts in the way that has been exposed in the UK parliament, then worse things are going to happen.  If you willingly bypass accounts, you are removing the ability of those who have a responsibility to ensure correct responsibility-authority pairings to track and audit actions.  You are, in fact, setting yourself up with excuses before the fact, but also making it very difficult to prove wrongdoing by other people who may misuse an account.  A culture that allows such behaviour is one which doesn’t allow misuse to be tracked.  This is bad enough in a company or a school – but in our seat of government?  Unacceptable.  You just have to hope that there are free pencils.  Or chocolate bars[4].
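The audit argument above can be made concrete with a toy sketch.  This is a hypothetical example – the account names are invented, and a real audit trail would live in append-only, tamper-evident storage rather than a Python list – but it shows what is lost when accounts are shared: the log keeps recording, yet it can no longer attribute actions to the person who actually took them.

```python
from datetime import datetime, timezone

# Toy audit trail: each entry pairs a timestamp with the individual
# account that performed the action.
audit_log = []

def perform_action(account: str, action: str) -> None:
    # Every action is recorded against the account that took it.
    audit_log.append((datetime.now(timezone.utc).isoformat(), account, action))

# With separate accounts, responsibility and authority stay paired:
perform_action("mp_alice", "read constituent email")
perform_action("staff_bob", "sent reply on the MP's behalf")

# With a shared login, the entry still names one account, but auditors
# can no longer tell who actually acted:
perform_action("mp_alice", "deleted mailbox")  # Alice? Or whoever has her password?

actors = {account for _, account, _ in audit_log}
```

The log faithfully records three actions by two named accounts – but the third entry is only meaningful if “mp_alice” really means one person, which is exactly the responsibility-authority pairing that password-sharing destroys.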


1. I can’t remember which, and I’m not going to do them the service of referencing them, or even looking them up, for reasons that should be clear once you read the main text.[2]

2. I’m trialling a new form of footnote referencing. Please let me know whether you like it.

3. I guess their email password, but again, I can’t remember and I’m not going to look it up.

4. Or similar.

5. I say “point”…

Explained: five misused security words

Untangling responsibility, authority, authorisation, authentication and identification.

I took them out of the title, because otherwise it was going to be huge, with lots of polysyllabic words.  You might, therefore, expect a complicated post – but that’s not my intention*.  What I’d like to do is try to explain these five important concepts in security, as they’re often confused or bound up with one another.  They are, however, separate concepts, and it’s important to be able to disentangle what each means, and how they might be applied in a system.  Today’s words are:

  • responsibility
  • authority
  • authorisation
  • authentication
  • identification.

Let’s start with responsibility.

Responsibility

Confused with: function; authority.

If you’re responsible for something, it means that you need to do it, or answer for it if something goes wrong.  You can be responsible for a product launching on time, or for the smooth functioning of a team.  If we’re going to ensure we’re really clear about it, I’d suggest using it only for people.  It’s not usually a formal description of a role in a system, though it’s sometimes used as short-hand for describing what a role does.  This short-hand can be confusing.  “The storage module is responsible for ensuring that writes complete transactionally” or “the crypto here is responsible for encrypting this set of bytes” is just a description of the function of the component, and doesn’t truly denote responsibility.

Also, just because you’re responsible for something doesn’t mean that you can make it happen.  One of the most frequent confusions, then, is with authority.  If you can’t ensure that something happens, but it’s your responsibility to make it happen, you have responsibility without authority***.

Authority

Confused with: responsibility, authorisation.

If you have authority over something, then you can make it happen****.  This is another word which is best restricted to use about people.  As noted above, it is possible to have authority but no responsibility*****.

Once we start talking about systems, phrases like “this component has the authority to kill these processes” really mean “has sufficient privilege within the system”, and are best avoided.  What we may need to check, however, is whether a component should be given authorisation to hold a particular level of privilege, or to perform certain tasks.

Authorisation

Confused with: authority; authentication.

If a component has authorisation to perform a certain task or set of tasks, then it has been granted power within the system to do those things.  It can be useful to think of roles and personae in this case.  If you are modelling a system on personae, then you will wish to grant a particular role authorisation to perform tasks that, in real life, the person modelled by that role has the authority to do.  Authorisation is an instantiation or realisation of that authority.  A component is granted the authorisation appropriate to the person it represents.  Not all authorisations can be so easily mapped, however, and some may be more granular.  You may have a file manager which has authorisation to change a read-only permission to read-write: something you might struggle to map to a specific role or persona.
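The role-to-capability mapping described above can be sketched in a few lines.  This is a hypothetical illustration – the role and capability names are invented, and it doesn’t represent any particular framework:

```python
# Hypothetical sketch of authorisation as a mapping from roles (some
# modelling personae, some purely granular) to granted capabilities.
# All role and capability names here are invented for illustration.
ROLE_AUTHORISATIONS = {
    "accounts_clerk": {"read_ledger", "write_ledger"},  # persona-based role
    "auditor": {"read_ledger"},                         # persona-based role
    "file_manager": {"read", "set_read_write"},         # granular, no persona
}

def is_authorised(role: str, capability: str) -> bool:
    """Return True if the role has been granted the capability."""
    return capability in ROLE_AUTHORISATIONS.get(role, set())
```

Note that the “file_manager” entry is exactly the granular case above: a capability granted to a component without any obvious human authority behind it.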

If authorisation is the granting of power or capability to a component representing a person, the question that precedes it is “how do I know that I should grant that power or capability to this person or component?”.  That process is authentication – authorisation should be the result of a successful authentication.

Authentication

Confused with: authorisation; identification.

If I’ve checked that you really are who you say you are, then I’ve authenticated you: this process is authentication.  A system, then, before granting authorisation to a person or component, must check that they should be allowed the power or capability that comes with that authorisation – that it is appropriate to that role.  Successful authentication leads to authorisation.  Unsuccessful authentication leads to blocking of authorisation******.

With the exception of anonymous roles, the core of an authentication process is checking that the person or component is who he, she or it claims to be (although anonymous roles can be appropriate for some capabilities within some systems).  This checking of who or what a person or component is, is authentication, whereas identification is the claim itself and the mapping of an identity to a role.
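That ordering – authentication gating authorisation, with logging on failure – can be sketched as follows.  The user store, secrets and capability sets here are all invented for illustration:

```python
import hmac
import logging

# Hypothetical user store mapping an identity to a shared secret, and a
# separate mapping to the capabilities that identity should receive.
_SECRETS = {"alice": b"correct-horse-battery-staple"}
_CAPABILITIES = {"alice": {"read", "write"}}

def authenticate(identity: str, secret: bytes) -> bool:
    """Check that the claimant is who they claim to be."""
    expected = _SECRETS.get(identity)
    ok = expected is not None and hmac.compare_digest(expected, secret)
    if not ok:
        # Lots of logging, as per the footnote.
        logging.warning("authentication failed for %r", identity)
    return ok

def authorise(identity: str, secret: bytes) -> set:
    """Successful authentication leads to authorisation; failure blocks it."""
    if authenticate(identity, secret):
        return _CAPABILITIES.get(identity, set())
    return set()  # authorisation blocked
```

The point of the sketch is the shape, not the mechanism: authorisation is only ever reached through a successful authentication step.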

Identification

Confused with: authentication.

I can identify that a particular person exists without being sure that the specific person in front of me is that person.  They may identify themselves to me – this is identification – and the checking that they are who they profess to be is the authentication step.  In systems, we need to map a known identity to the appropriate capabilities, and the presentation of a component with identity allows us to apply the appropriate checks to instantiate that mapping.

Bringing it all together

Just because you know who I am doesn’t mean that you’re going to let me do something.  I can identify my children over the telephone*******, but that doesn’t mean that I’m going to authorise them to use my credit card********.  Let’s say, however, that I might give my wife my online video account password over the phone, but not my children.  How might the steps in this play out?

First of all, I have responsibility to ensure that my account isn’t abused.  I also have authority to use it, as granted by the Terms and Conditions of the providing company (I’ve decided not to mention a particular service here, mainly in case I misrepresent their Ts&Cs).

“Hi, darling, it’s me, your darling wife**********. I need the video account password.” Identification – she has told me who she claims to be, and I know that such a person exists.

“Is it really you, and not one of the kids?  You’ve got a cold, and sound a bit odd.”  This is my trying to do authentication.

“Don’t be an idiot, of course it’s me.  Give it to me or I’ll pour your best whisky down the drain.”  It’s her.  Definitely her.

“OK, darling, here’s the password: it’s il0v3myw1fe.”  By giving her the password, I’ve performed authorisation.
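The exchange above reduces to a short sketch: identification (the claim), authentication (the check), then authorisation (handing over the password).  The voice check stands in for a real authentication factor, and everything here is, of course, invented for illustration:

```python
from typing import Optional

KNOWN_CALLERS = {"wife", "child"}  # identities I know exist

def identify(claim: str) -> bool:
    """Identification: the caller claims to be someone I know exists."""
    return claim in KNOWN_CALLERS

def authenticate(claim: str, sounds_like: str) -> bool:
    """Authentication: check the claimant really is who they claim to be."""
    return identify(claim) and sounds_like == claim

def authorise_password(claim: str, sounds_like: str) -> Optional[str]:
    """Authorisation: only my authenticated wife gets the password."""
    if authenticate(claim, sounds_like) and claim == "wife":
        return "il0v3myw1fe"  # the password from the dialogue above
    return None  # authorisation blocked (and, ideally, logged)
```

A child claiming to be my wife fails at the authentication step; my wife asking to use the credit card would pass authentication but fail authorisation for that capability.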

It’s important to understand these different concepts, as they’re often conflated or confused, but if you can’t separate them, it’s difficult not only to design systems to function correctly, but also to log and audit the different processes as they occur.


*we’ll have to see how well I manage, however.  I know that I’m prone to long-windedness**

**ask my wife.  Or don’t.

***and a significant problem.

****in a perfect world.  Sometimes people don’t do what they ought to.

*****this is much, much nicer than responsibility without authority.

******and logging.  In both cases.  Lots of logging.  And possibly flashing lights, security guards and sirens on failure, if you’re into that sort of thing.

*******most of the time: sometimes they sound like my wife.  This is confusing.

********neither should you assume that I’m going to let my wife use it, either.*********

*********not to suggest that she can’t use a credit card: it’s just that we have separate ones, mainly for logging purposes.

**********we don’t usually talk like this on the phone.