Jargon – a force for good or ill?

Distinguish an electrical engineer from a humanities graduate: ask them how many syllables are in the word “coax”.

I was involved in an interesting discussion with colleagues recently about the joys or otherwise of jargon.  It stemmed from a section I wrote in a recent article, How to talk to security people: a guide for the rest of us.  In it, I suggested that jargon “has at least two uses:

  1. as an exclusionary mechanism for groups to keep non-members in the dark;
  2. as a short-hand to exchange information between ‘in-the-know’ people so that they don’t need to explain everything in exhaustive detail every time.”

Given the discussion that arose, I thought it was worth delving more deeply into this question.  It’s more than an idle interest, too: I think that there are important lessons around our use of jargon that affect how we interact with our colleagues and peers, and which deserve some careful thought.  These lessons apply particularly to my chosen field of security.

Before we start, we should have some definitions.  It’s always nice to have two conflicting versions, so here we go:

  1. Special words or expressions used by a profession or group that are difficult for others to understand. (Oxford dictionaries – https://en.oxforddictionaries.com/definition/jargon)
  2. Without qualifier, denotes informal ‘slangy’ language peculiar to or predominantly found among hackers. (The Jargon File – http://catb.org/jargon/html/distinctions.html)

I should start by pointing out that the Jargon File (which was published in paper form in at least two versions as The Hacker’s Dictionary (ed. Steele) and the New Hacker’s Dictionary (ed. Raymond)) has a pretty special place in my heart.  When I decided that I wanted to “take up” geekery properly[1][2], I read my copy of the New Hacker’s Dictionary from cover to cover, several times, and when a new edition came out, I bought that and did the same.  In fact, for many of the more technical readers of this blog, I suspect that a fair amount of your cultural background is expressed within the covers (paper or virtual) of the same, even if you’re not aware of it.  If you’re interested in delving deeper and like the feel of paper in your hands, then I encourage you to purchase a copy, but be careful to get the right one: there are some expensive versions which seem just to be print-outs of the Jargon File, rather than properly typeset and edited versions[4].

But let’s get onto the meat of this article: is jargon a force for good or ill?

The case against jargon – ambiguity

You would think that jargon would serve to provide agreed terms within a particular discipline, and help reduce ambiguity around contexts.  It may be a surprise, then, that the first problem we often run into with jargon is name-space clashes.  Consider the following.  There’s an old joke about how to distinguish an electrical engineer from a humanities[3] graduate: ask them how many syllables are in the word “coax”.  The point here, of course, is that they come from different disciplines.  But there are lots of words – and particularly abbreviations – which have different meanings or expansions depending on context, and where disciplines and contexts may collide.  What do these mean to you[5]?

  • scheduling (kernel level CPU allocation to processes OR placement of workloads by an orchestration component)
  • comms (I/O in a computer system OR marketing/analyst communications)
  • layer (OSI model OR IP suite layer OR other architectural abstraction layer such as host or workload)
  • SME (subject matter expert OR small/medium enterprise)
  • SMB (small/medium business OR Server Message Block)
  • TLS (Transport Layer Security OR Times Literary Supplement)
  • IP (Internet Protocol OR Intellectual Property OR Intellectual Property as expressed as a silicon component block)
  • FFS (For Further Study OR …[6])

One of the interesting things is that quite a lot of my background is betrayed by the various options that present themselves to me: I wonder how many of the readers of this will have thought of the Times Literary Supplement, for example.  I’m also more likely to think of SME as the term relating to organisations, because that’s the favoured form in Europe, whereas I believe that the US tends towards SMB.  I’m sure that your experiences will all be different – which rather makes my point for me.

That’s the first problem.  In a context where jargon is often praised as a way of short-cutting lengthy explanations, it can actually be a significant ambiguating force.

The case against jargon – exclusion

Intentionally or not – and sometimes it is intentional – groups define themselves through use of specific terminology.  Once this terminology becomes opaque to those outside the group, it becomes “jargon”, as per our first definition above.  “Good” use of jargon generally allows those within the group to converse using shared context around concepts that do not need to be explained in detail every time they are used.  An example would be a “smoke test” – a quick test to check that basic functionality is performing correctly (see the Jargon File’s definition for more).  If everyone within the group understands what this means, then why go into more detail?  But if you are joined at a stand-up meeting[7] by a member of marketing who wants to know whether a particular build is ready for release, and you say “well, no – it’s only been smoke-tested so far”, then it’s likely that you’ll need to explain.
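
For readers who don’t write code, here’s roughly what a smoke test might look like in practice – a deliberately minimal sketch in Python, using the built-in sqlite3 module to stand in for whatever component has just been built:

    import sqlite3

    def smoke_test():
        # Deliberately shallow: can we open a database, write one row and
        # read it back without an exception?  Nothing more thorough than that.
        connection = sqlite3.connect(":memory:")
        connection.execute("CREATE TABLE smoke (value INTEGER)")
        connection.execute("INSERT INTO smoke VALUES (42)")
        assert connection.execute("SELECT value FROM smoke").fetchone() == (42,)
        connection.close()

    smoke_test()
    print("Smoke test passed: nothing is obviously on fire.")

If this passes, you know almost nothing except that the basics work – which is exactly the point, and exactly what needs explaining to the colleague from marketing.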

Another enduring and complex piece of jargon is the use of “free” in relation to software.  In fact, the term is so ambiguous that different terms have evolved to describe some variants – “open source”, “FOSS” – and even phrases such as “free as in speech, not as in beer”.

The problem is that there are occasions when jargon can exclude others, whether that usage is intended or not.  There have been times for most of us, I’m sure, when we want to show that we’re part of a group and so use terms that we know another person won’t understand.  On other occasions, the term may be so ingrained in our practice that we use it without thinking, and the other person is unintentionally excluded.  I would argue that we need to be careful to avoid both of these usages.  Intentional exclusion is rarely helpful, but unintentional exclusion can be just as damaging – in some ways more so, as it typically goes unremarked and is therefore difficult to remedy.

The security world seems particularly prone to this behaviour, I feel.  It may be that many security folks have a sound enough grounding in other spheres – especially those of the colleagues they work with – that they fail to notice when the overlap doesn’t hold in the other direction.  The other issue is that there’s a certain elitism among security folks which is both laudable – you want the best people, with the best training, to be leading the field – and lamentable – elitism tends to lead to exclusion of members of other groups.

A quick paragraph in praise of jargon

It’s not all bad, however.  We need jargon, as I noted above, to enable us to discuss concepts, and the use of terms in normal language – like scheduling – as jargon leads to some interesting metaphors which guide us in our practice[8].  We absolutely need shared practice, and for that we need shared language – and some of that language is bound, over time, to become jargon.  But consider a lexicon, or a FAQ, or other ways to allow your colleagues to participate: be inclusive, not exclusive.


1 – “properly” – really?  Although I’m not sure “improperly” is any better.

2 – oh, remember that I actually studied English Literature and Theology at university, so this was a conscious decision to embrace a rather different culture.

3 – or “Liberal Arts”.

4 – the most recent “real” edition of which I’m aware is Raymond, Eric S., 1996, The New Hacker’s Dictionary, 3rd ed., MIT Press, Cambridge, Mass.

5 – I’ve added the first options that spring to mind when I come across them – I’m aware there are almost certainly others.

6 – believe me, when I saw this abbreviation in a research paper for the first time, I was most confused, and actually had to look it up.

7 – oh, look: jargon…

8 – though metaphors can themselves be constraining as they tend to push us to thinking in a particular way, even if that way isn’t entirely applicable in this context.

The gift that keeps on giving: passwords

A Father’s Day special

There’s an old saying: “if you give a man a fish, he’ll eat for a day, but if you teach a man to fish, he’ll eat for a lifetime.”  There are some cruel alternatives with endings like “he’ll buy a silly hat and sit outside in the rain”, but the general idea is that it’s better to teach someone something rather than just giving them something.

With Father’s Day coming up this Sunday in many parts of the world, I’d like to suggest the same for passwords.  Many people’s password practices are terrible.  There are three things that people really don’t get about passwords[1]:

  1. what they should look like
  2. how they should be stored
  3. how they should be communicated.

Let’s go through each of these in turn, and I’ll try to give brief tips that you can pass on to your father (or, indeed, mother, broader family, friends or colleagues) to help them with password safety.

What should passwords look like?

There’s a famous xkcd comic called Password Strength, which aims to help you choose a useful password.  This is great advice if you only have a few passwords, but about twenty years ago my count got above ten, and I started re-using passwords for certain levels of security.  This was terrible practice at the time, and it’s even worse now.  Look at the number of times a week we see news about information being lost when companies or organisations are hacked.  If you share passwords between accounts, there’s a decent chance that your login details for one will be exposed, which means that all the other accounts sharing that password are compromised.

I know some people who used to have permutations of passwords.  Let’s say the base was “p4ssw0rd”: they would then add a suffix for the website or account, such as “p4ssw0rdNetflix”.  This might be fine if we believed that all passwords are stored in hashed form, but, well, we know they’re not, so don’t do this, either.  Extrapolating from one account to another is too easy.
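
As an aside, “stored in hashed form” means that a responsible site never keeps your actual password, only a salted, slow-to-compute digest of it.  Here’s a minimal sketch of the idea using only Python’s standard library – a real service would use a dedicated password-hashing library with carefully tuned parameters:

    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # A fresh random salt means identical passwords produce different
        # digests, so a leak from one site tells you nothing about another.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        # Compare in constant time to avoid leaking timing information.
        return hmac.compare_digest(candidate, digest)

If every site worked this way, a breach would reveal only digests.  Since you can’t rely on that, assume that any password you’ve used may one day be readable – which is exactly why suffix schemes fail.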

What does a good password look like, then?  Here’s one: “W9#!=_twXhRb”.  And another?  This one is 16 characters long: “*Wdb_%|#N^X6CR_b”.  What are the chances of a human guessing these?  Pretty slim.  And a computer?  Not much better, to be honest.  They were randomly generated by software, and as long as I use a different one for each account, I’m pretty safe against password-guessing attacks.
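
In case you’re wondering how software does this, here’s a minimal sketch using Python’s secrets module – though in practice you should let your password manager do the generating for you:

    import secrets
    import string

    def generate_password(length: int = 16) -> str:
        # Each character is drawn independently from a large alphabet using
        # a cryptographically strong random source (never random.choice).
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())  # e.g. 'cT*x2hW%_q9#RbZ|'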

“But,” you say, “how am I supposed to remember them?  I’ve got dozens of accounts, and I can’t remember one of those, let alone fifty!”

How should you store passwords?

Well, you shouldn’t try to remember passwords, in the same way that you shouldn’t try to generate them.  Oh, there will be a handful that you might remember – maybe low-importance ones like the wifi key to your home AP – but most of them you should trust to a password manager.  These are nifty pieces of software that will generate and then remember hundreds of passwords for you.  Some of them will even automatically fill website fields for you if you ask them to.  The best ones are open source, which means that people have pored over their code (hopefully) to check it’s trustworthy, and that if you’re not entirely sure, then you can pore over their code, too.  And make changes and improvements and generally improve the world bit by bit.

You will need to remember one password, though, and that’s the one to unlock the password manager.  Make it really, really strong: it’s about the only one you mustn’t lose (though most websites will help you reset a password if you forget it, so it’s just a matter of going through each of the several hundred accounts until they’re done…).  Use the advice from the xkcd cartoon, or another scheme for generating passwords that are strong but easy to remember.
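
The xkcd approach is easy to automate, too.  Here’s a sketch that assumes you have a large word list in a file – the file name is made up for illustration:

    import secrets

    # Load a list of common words, one per line ("wordlist.txt" is
    # hypothetical - any decent word list will do).
    with open("wordlist.txt") as f:
        words = [line.strip() for line in f if line.strip()]

    # Four words picked at random from a few thousand candidates are easy
    # to remember but very hard to guess.
    passphrase = " ".join(secrets.choice(words) for _ in range(4))
    print(passphrase)  # e.g. 'correct horse battery staple'

The crucial point is that the words are chosen randomly by software: humans picking “random” words do a predictably bad job of it.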

To make things safer, store the (password-protected) key store somewhere that is not easily accessed by other people – not a shared drive at work, for instance, but maybe on your phone or on some cloud-based storage that you can get to if you lose your phone.  Always set the password manager to auto-lock itself after some time, in case you leave your computer logged on, or your phone gets stolen.

How to communicate passwords

Would you send a password via email?  What about by SMS?  Is post[2] better?  Is it acceptable to reveal a password over the phone in a crowded train carriage[4]?  Would you give your laptop password to a random person conducting a survey on a railway station for the prize of a chocolate bar?

In an ideal world, we would never share passwords, but there are times when we need to – and times when it’s worthwhile for material rewards[5].  There are some accounts which are shared – TV or film streaming accounts for the family – or that you’ve set up for somebody else, or which somebody urgently needs to access because you’re on holiday, for instance.  So you may need to give out passwords from time to time.  What’s the best mechanism?  What’s the worst?

This may sound surprising, but I’d generally say that the worst (marginally) is post.  What you’re trying to avoid is a Bad Person[tm] marrying two pieces of information: the username and the password.  If someone has access to your post, then there’s a good chance that they might be able to work out enough information about you that they can guess the account name.  The others?  Well, they’re OK as long as you’re not also sending the username via the same channel.  That, in fact, is the key test: you should never provide the two pieces of information in such a way that a person with access to one channel can put them together.  So, telling someone a password in a crowded train carriage may be rude in relation to all of the other people in the carriage[6], but it may be very secure in terms of account safety.

The reason I posed the question about the survey is that every few months a survey company in the UK asks people at mainline railway stations to tell them their password in exchange for a chocolate bar, and then writes a headline about how awful it is that many people will give them their password.  This is a stupid headline, for a stupid survey, for two reasons:

  1. I’d happily lie and tell them a false password in order to get a free chocolate bar AND
  2. even if I gave them the correct password, how are they going to marry that with my account details?

Conclusion

If you’re the sort of person reading this blog, there’s a fairly high chance that you’re also the sort of person who’s asked to clear up the mess when family, friends or colleagues get their accounts compromised[7].  Here are four rules for password security:

  1. don’t reuse passwords – use a different one for every single account
  2. don’t think up your own passwords – get a password manager to generate them for you
  3. use a password manager to store your passwords – if they’re strong enough in the first place, you won’t be able to remember them
  4. never send usernames and passwords over the same channel – you want to avoid the situation where an attacker has access to both and can use them.

I’ll add a fifth one for luck: feel free to use underhand tactics to get chocolate bars from people performing poorly-designed surveys on railway stations.


1 – I thought about changing the order, as they do impact on each other, but it made my head hurt, so I stopped.

2 – note for younger readers: there used to be something called “snail mail”.  It’s nearly dead[3].

3 – unless you forget to turn on “electronic statements” for your bank account.  Then you’ll get loads of it.

4 – whatever the answer to this is from a security point of view, the correct answer is “no”, because a) you’re going to annoy me by shouting it repeatedly down the phone because reception is so bad on the train that the recipient can’t hear it and b) because reception is so bad on the train that the recipient can’t hear it (see b)).

5 – I like chocolate.

6 – I’m not a big fan of phone conversations in railway carriages, to be honest.

7 – Or you’ve been sent a link to this because you are one of those family, friends or colleagues, and the person who sent you the link is sick and tired of doing all of your IT dirty work for you.

The most important link: unsubscribe me

No more (semi-)unsolicited emails from that source.

Over the past few days, the much-vaunted[1] GDPR has come into force.  In case you missed this[2], GDPR is a set of rules around managing user data that all organisations holding data about European citizens must follow for those citizens.  Which basically means that it’s usually cheaper to apply the same rules across all of your users.

Here’s my favourite GDPR joke[3].

Me: Do you know a good GDPR consultant?

Colleague: Yes.

Me: Can you give me their email address?

Colleague: No.

The fact that this is the best of the jokes out there (there’s another one around Santa checking lists which isn’t that bad either) tells you something about how fascinating the whole subject is.

So I thought that I’d talk about something different today.  I’m sure that over the past few weeks, because of the new GDPR regulations,  you’ve received a flurry[4] of emails that fall into one of two categories:

  1. please click here to let us know what uses we can make of your data (the proactive approach);
  2. we’ve changed our data usage and privacy policy: please check here to review it (the reactive approach).

I’ve come across[5] suggestions that the proactive approach is overkill, and generally not required, but I can see why people are doing it: it’s easier to prove that you’re doing the right thing.  The reactive approach means that it’s quicker just to delete the email, which is at least a kind of win.

What I’ve found interesting, however, is the number of times that I’ve got an email of type 1 from a company and thought: “You have my data?  Really?”  It turns out that more companies have information about me than I’d thought[6], and this has allowed me to click through and actually tell them that I want them to delete my data completely, and unsubscribe me from their email lists.  This then led me to thinking, “you know what, although I bought something from this company five years ago – or at least had an interest in something they were selling – I now have no interest in them at all, or in receiving marketing emails from them,” and then performing the same function: telling them to delete my data and unsubscribe me.

But it didn’t stop there.  I’ve decided to have a clean out.  Now, when an email comes in from a company, I take a moment to decide whether:

  • I care about them or their product; OR
  • I’m happy for them to have my information in the first place.

If the answer to either of these questions is “no”, then I scroll down.  There, at the bottom of each mail, should be a link which says something like “subscription details” or “unsubscribe me”.  This has, I believe, been a legal requirement in many jurisdictions for quite a few years.  The whole process is quite liberating: I click on the link, and I’m either magically unsubscribed, or sometimes I have to scroll down the page a little to choose the relevant option, and “Bang!”, I’m done.  No more (semi-)unsolicited emails from that source.

I see this as a security issue: the fewer companies that have data about me, the fewer chances of misuse, and the lower the chance of leakage.  One warning, however: phishing.  As I admitted in this blog last week, I got phished recently (I got phished this week: what did I do?), and as more people take to unsubscribing by default, I can see this link actually being used for nefarious purposes, so do be careful before you click on it that it actually goes to where you think it should.  This can be difficult, because companies often use a third-party provider to manage their email services.  Be careful, then, that you don’t get duped into entering account details: there should be no need to log into your account to be deleted from a service.  If you want to change your mailing preferences for a company, then that may require you to log into your account – but never do it from an email link: always go to the organisation’s website directly by typing the address yourself.
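
If you want to make that check systematic, the question is whether the link’s actual hostname belongs to a domain you’d expect for that sender.  A toy sketch – the domains here are entirely hypothetical:

    from urllib.parse import urlparse

    # Hypothetical: domains you'd expect a genuine unsubscribe link from
    # this sender to use, including their known third-party mail provider.
    EXPECTED_DOMAINS = {"example.com", "mail.example-provider.com"}

    def plausible_unsubscribe_link(url: str) -> bool:
        host = urlparse(url).hostname or ""
        # Match whole domains, not substrings: an attacker can happily
        # register something like "example.com.evil.net".
        return any(host == d or host.endswith("." + d) for d in EXPECTED_DOMAINS)

    print(plausible_unsubscribe_link("https://mail.example-provider.com/u?id=42"))  # True
    print(plausible_unsubscribe_link("https://example.com.evil.net/unsubscribe"))   # False

And even when the domain looks right, the golden rule above stands: no unsubscribe page should ever need your account credentials.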


1 – I’ve always wanted to write that.

2 – well done, by the way.

3 – I’d provide attribution, but I’m not sure where it originated.

4 – or maybe a slurry?

5 – again, I can’t remember where.

6 – though I’m not that surprised.

How to talk to security people: a guide for the rest of us

I wrote an article a few months back called Using words that normal people understand.  It turns out that you[1] liked it a lot.  Well, lots of you read it, so I’m going to assume that it was at least OK.

It got me thinking, however, that either there are lots of security people who read this blog who work with “normal” people and needed to be able to talk to them better, or maybe there are lots of “normal” people who read this blog and wondered what security people had been going on about all this time.  This article is for the latter group (and maybe the former, too)[2].

Security people are people, too

The first thing to remember about security people is that they’re actual living, breathing, feeling humans, too[3].  They don’t like being avoided or ostracised by their colleagues.  Given the chance, most of them would like to go out for a drink with you.  But, unluckily, they’ve become so conditioned to saying “no” to any request you make that they will never be able to accept the offer[4].

Try treating them as people.  We have a very good saying within the company for which I work: “assume positive intent”.  Rather than assuming that they’re being awkward on purpose, try assuming that they’re trying to help, even if they’re not very good at showing it.  This can be hard, but go with it.

Get them involved early

Get them really involved, not just on the official team email list, but actually invited to the meetings.  In person if they’re local, on a video-chat if they’re not.  And encourage everybody to turn on their webcam.  Face contact really helps.

And don’t wait for a couple of months, when the team is basically already formed.  Not a couple of weeks, or days.  Get your security folks involved at team formation: there’s something special about being involved in a team from the very beginning of a project.

How would you do it?

In fact, you know what?  Go further.  Ask the security person[5] on the team “how would you do it?”

This shuts down a particular response – “you can’t” – and encourages another – “I’m being asked for my professional opinion”.  It also sets up an open question with a positive answer implicit.  Tone is important, though, as is timing.  Do this at the beginning of the process, rather than at the end.  What you’re doing here is a psychological trick: you’re encouraging perception of joint ownership of the problem, and hopefully before particular approaches – which may not be security-friendly – have been designed into the project.

Make them part of the team

There are various ways to help form a team.  I prefer inclusive, rather than exclusive mechanisms: a shared set of pages that are owned by the group and visible to others, rather than a closed conference room with paper all over the windows so that nobody else can look in, for instance.  One of the most powerful techniques, I believe, is social interaction.  Not everybody will be able to make an after-work trip to a bar or pub, but doughnuts and pizza[6] for the team, or a trip out for tea or coffee once or twice a week can work wonders.  What is specific to security people in this, you ask?

Well, in many organisations, the security folks are in a separate team, sometimes in a separate area on a separate floor in a separate building[8].  They may default to socialising only with this group.  Discourage this.  To be clear: don’t discourage them from socialising with that group, but from socialising only with that group.  It’s possible to be part of two teams – you need to encourage that realisation.

Hopefully, as the team’s dynamics form and the security person feels more part of it, the answer to the earlier question (“how would you do it?”) won’t be “well, I would do it this way”, but “well, I think we should do it this way”.

Realise that jargon has its uses

Some of the interactions that you have with your tame[9] security person will involve jargon.  There will be words – possibly phrases, possibly concepts or entire domains of knowledge – that you don’t recognise.  This is intimidating, and can be a defence mechanism on the part of the utterer.  Jargon has at least two uses:

  1. as an exclusionary mechanism for groups to keep non-members in the dark;
  2. as a short-hand to exchange information between “in-the-know” people so that they don’t need to explain everything in exhaustive detail every time.

Be aware that you use jargon as well – whether you’re in testing, finance, development or operations – and that this may be confusing to others.  Do security folks have a reputation for the first of those uses?  Yes, they do.  Maybe it’s ill-founded and maybe it’s not.  Either way, ask for an explanation: if you follow it as well as you can, and then ask for an explanation of why it’s important in this case, you’re showing that you care.  And who doesn’t like showing off their knowledge to an at least apparently interested party?  Oh, and you’re probably learning something useful at the same time.

Sometimes what’s needed is not a full explanation, but the second bit: an explanation of why it’s important, and then a way to make it happen.  Do you need to know the gory details of proofs around Byzantine Fault-Tolerance?  Probably not.  Do you need to know why it’s important for your system?   That’s more likely.  Do you need to know the options for implementing it?  Definitely.

What else?

As I started to write this post, I realised that there were lots of other things I could say.  But once I got further through it, I also realised that there were definitely going to be other things that you[10] could add.  Please consider writing a comment on this post.  And once you’ve considered it, please do it.  Shared knowledge is open knowledge, and open is good.


1 – my hypothetical “readership”

2 – by using the pronoun “us” in the title, I’ve lumped myself in with the “normal people”.  I’m aware that this may not be entirely justified.  Or honest.

3 – I’m generalising, obviously, but let’s go with it.

4 – this is a joke, but I’m quite pleased with it.

5 – I suppose there might be more than one, but let’s not get ahead of ourselves.

6 – make sure it’s both, please.  Some people don’t like pizza[7].

7 – me.  I find it difficult to believe that some people don’t like doughnuts, but you never know.

8 – and, if they have their way, on separate email and telephone systems.

9 – well, partially house-trained, at least.

10 – dear reader.

Moving to DevOps, what’s most important? 

Technology, process or culture? (Clue: it’s not the first two)

You’ve been appointed the DevOps champion in your organisation: congratulations.  So, what’s the most important issue that you need to address?

It’s the technology – tools and the toolchain – right?  Everybody knows that unless you get the right tools for the job, you’re never going to make things work.  You need integration with your existing stack – though whether you go with tight or loose integration will be an interesting question – a support plan (vendor, 3rd party or internal), and a bug-tracking system to go with your source code management system.  And that’s just the start.

No!  Don’t be ridiculous: it’s clearly the process that’s most important.  If the team doesn’t agree on how stand-ups are run, who participates, the frequency and length of the meetings, and how many people are required for a quorum, then you’ll never be able to institute a consistent, repeatable working pattern.

In fact, although both the technology and the process are important, there’s a third component which is equally important, but typically even harder to get right: culture.  Yup, it’s that touchy-feely thing that we techies tend to struggle with[1].

Culture

I was visiting a medium-sized government institution a few months ago (not in the UK, as it happens), and we arrived a little early to meet the CEO and CTO.  We were ushered into the CEO’s office and waited for a while as the two of them finished participating in the daily stand-up.  They apologised for being a minute or two late, but far from being offended, I was impressed.  Here was an organisation where the culture of participation was clearly infused all the way up to the top.

Not that culture can be imposed from the top – nor can you rely on it percolating up from the bottom[3] – but these two C-level execs were not only modelling the behaviour they expected from the rest of their team, but also seemed, from the brief discussion we had about the process afterwards, to be truly invested in it.  If you can get management to buy into the process – and to be seen to buy in – you are at least less likely to have problems with other groups finding plausible excuses to keep their distance and getting away with it.

So let’s say that management believes that you should give DevOps a go.  Where do you start?

Developers, tick?[5]

Developers may well be your easiest target group.  Developers are often keen to try new things, and to find ways to move things along faster, so they are often the group that can be expected to adopt new technologies and methodologies.  DevOps has arguably been mainly driven by the development community. But you shouldn’t assume that all developers will be keen to embrace this change.  For some, the way things have always been done – your Rick Parfitts of dev, if you will[7] – is fine.  Finding ways to help them work efficiently in the new world is part of your job, not just theirs.  If you have superstar developers who aren’t happy with change, you risk alienating them and losing them if you try to force them into your brave new world.  What’s worse, if they dig their heels in, you risk the adoption of your DevSecOps vision being compromised when they explain to their managers that things aren’t going to change if it makes their lives more difficult and reduces their productivity.

Maybe you’re not going to be able to move all the systems and people to DevOps immediately.  Maybe you’re going to need to choose which apps to start with, and who will be your first DevOps champions.  Maybe it’s time to move slowly.

Not maybe: definitely

No – I lied.  You’re definitely going to need to move slowly.  Trying to change everything at once is a recipe for disaster.

This goes for all elements of the change – which people to choose, which technologies to choose, which applications to choose, which user base to choose, which use cases to choose – bar one.  For all of those elements, if you try to move everything in one go, you will fail.  You’ll fail for a number of reasons.  You’ll fail for reasons I can’t imagine, and, more importantly, for reasons you can’t imagine, but some of the reasons will include:

  • people – most people – don’t like change;
  • technologies don’t like change (you can’t just switch and expect everything to work still);
  • applications don’t like change (things worked before, or at least failed in known ways: you want to change everything in one go?  Well, they’ll all fail in new and exciting[9] ways);
  • users don’t like change;
  • use cases don’t like change.

The one exception

You noticed that, above, I wrote “bar one”, when discussing which elements you shouldn’t choose to change all in one go?  Well done.

What’s that exception?  It’s the initial team.  When you choose your initial application to change, and you’re thinking about choosing the team to make that change, select the members carefully, and select a complete set.  This is important.  If you choose just developers, just test folks, just security folks, just ops folks, or just management – if you leave out any one functional group – then you won’t actually have proved anything at all.  Well, you might have proved to a small section of your community that it kind of works, but you’ll have missed out on a trick.  And that trick is that if you choose keen people from across your functional groups, it’s much harder to fail.

Say that your first attempt goes brilliantly.  How are you going to convince other people to replicate your success and adopt DevOps?  Well, the company newsletter, of course.  And that will convince how many people, exactly?  Yes, that number[12].  If, on the other hand, you have team members from across the functional parts of the organisation, then when you succeed, they’ll tell their colleagues, and you’ll get more buy-in next time.

If, conversely, it fails, well, if you’ve chosen your team wisely, and they’re all enthusiastic, and know that “fail often, fail fast” is good, then they’ll be ready to go again.

So you need to choose enthusiasts from across your functional groups.  They can work on the technologies and the process, and once that’s working, it’s the people who will create that cultural change.  You can just sit back and enjoy.  Until the next crisis, of course.


1 – OK, you’re right.  It should be “with which we techies tend to struggle”[2]

2 – you thought I was going to qualify that bit about techies struggling with touchy-feely stuff, didn’t you?  Read it again: I put “tend to”.  That’s the best you’re getting.

3 – is percolating a bottom-up process?  I don’t drink coffee[4], so I wouldn’t know.

4 – do people even use percolators to make coffee anymore?  Feel free to let me know in the comments. I may pretend interest if you’re lucky.

5 – for US readers (and some other countries, maybe?), please substitute “check” for “tick” here[6].

6 – for US techie readers, feel free to perform “s/tick/check/;”.

7 – this is a Status Quo[8] reference for which I’m extremely sorry.

8 – for Millennial readers, please consult your favourite online reference engine or just roll your eyes and move on.

9 – for people who say, “but I love excitement”, try being on call at 2am on a Sunday morning at the end of the quarter when your Chief Financial Officer calls you up to ask why all of last month’s sales figures have been corrupted with the letters “DEADBEEF”[10].

10 – for people not in the know, this is a string often used by techies as test data because a) it’s non-numerical; b) it’s numerical (in hexadecimal); c) it’s easy to search for in debug files and d) it’s funny[11].

11 – though see [9].

12 – it’s a low number, is all I’m saying.

Meltdown and Spectre: thinking about embargoes and disclosures

The technical details behind Meltdown and Spectre are complex and fascinating – and the possible impacts wide-ranging and scary.  I’m not going to go into those here, though there are some excellent articles explaining them.  I’d point readers in particular at the following URLs (which both resolve to the same page, so there’s no need to follow both):

  • https://meltdownattack.com/
  • https://spectreattack.com/

I’d also recommend this article on the Red Hat blog, written by my colleague Jon Masters: What are Meltdown and Spectre? Here’s what you need to know.  It includes a handy cut-out-and-keep video explaining the basics.  Jon has been intimately involved in the process of creating and releasing mitigations and fixes to Meltdown and Spectre, and is very well-placed to talk about it.

All that is interesting and important, but I want to talk about the mechanics and process behind how a vulnerability like this moves from discovery through to fixing.  Although similar processes exist for proprietary software, I’m most interested in open source software, so that’s what I’m going to describe.


Step 1 – telling the right people

Let’s say that someone has discovered a vulnerability in a piece of open source software.  There are a number of things they can do.  The first is to ignore it.  This is no good, and isn’t interesting for our story, so we’ll ignore it in turn.  The second is to keep it to themselves or try to make money out of it.  Let’s assume that the discoverer is a good type of person[2], so isn’t going to do this[3].  The third is to announce it on the project mailing list.  This might seem like a sensible thing to do, but it can actually be very damaging to the project and the community at large.  The problem is that the discoverer has just let all the Bad Folks[tm] know about a vulnerability which hasn’t been fixed – and which they can now exploit.  Even if the discoverer submits a patch, it needs to be considered and tested, and it may be that the fix they’ve suggested isn’t the best approach anyway.  For a serious security issue, the project maintainers are going to want to have a really good look at it in order to decide what the best fix – or mitigation – should be.

It’s for this reason that many open source projects have a security disclosure process.  Typically, there’s a closed mailing list to which you can send suspected vulnerabilities, often something like “security@projectname.org“.  Once someone has emailed that list, that’s where the process kicks in.

Step 2 – finding a fix

Now, before that email got to the list, somebody had to decide that they needed a list in the first place, so let’s assume that you, the project leader, have already put a process of some kind in place.  There are a few key decisions to make, of which the most important are probably:

  • are you going to keep it secret[5]?  It’s easy to default to “yes” here, for the reasons that I mentioned above, but the number of times you actually put in place restrictions on telling people about the vulnerability – usually referred to as an embargo, given its similarity to a standard news embargo – should be very, very small.  This process is going to be painful and time-consuming: don’t overuse it.  Oh, and your project’s meant to be about open source software, yes?  Default to that.
  • who will you tell about the vulnerability?  We’re assuming that you ended up answering “yes” to the previous bullet, and that you don’t want everybody to know, and also that this is a closed list.  So, who’s the most important set of people?  Well, most obvious is the set of project maintainers, and key programmers and testers – of which more below.  These are the people who will get the fix completed, but there are two other constituencies you might want to consider: major distributors and consumers of the project.  Why these people, and who are they?  Well, distributors might be OEMs, Operating System Vendors or ISVs who bundle your project as part of their offering.  These may be important because you’re going to want to ensure that people who will need to consume your patch can do so quickly, and via their preferred method of updating.  What about major consumers?  Well, if an organisation has a deployed estate of hundreds or thousands of instances of a project[6], then the impact of having to patch all of those at short notice – let alone the time required to test the patch – may be very significant, so they may want to know ahead of time, and they may be quite upset if they don’t.  This privileging of major over rank-and-file users of your project is politically sensitive, and causes much hand-wringing in the community.  And, of course, the more people you tell, the more likely it is that a leak will occur, and that news of the vulnerability will get out before you’ve had a chance to fix it properly.
  • who should be on the fixing team?  Just because you tell people doesn’t mean that they need to be on the team.  By preference, you want people who are both security experts and also know your project software inside-out.  Good luck with this.  Many projects aren’t security projects, and will have no security experts attached – or a few at most.  You may need to call people in to help you, and you may not even know which people to call in until you get a disclosure in the first place.
  • how long are you going to give yourself?  Many discoverers of vulnerabilities want their discovery made public as soon as possible.  This could be for a variety of reasons: they have an academic deadline; they don’t want somebody else to beat them to disclosure; they believe that it’s important to the community that the project is protected as soon as possible; they’re concerned that other people have found – and are exploiting – the vulnerability; they don’t trust the project to take the vulnerability seriously and are concerned that it will just be ignored.  This last reason is sadly justifiable, as there are projects that don’t react quickly enough, or don’t take security vulnerabilities seriously, and there’s a sorry history of proprietary software vendors burying vulnerabilities and pretending they’ll just go away[7].  A standard period of time before disclosure is 2-3 weeks, but as projects get bigger and more complex, balancing that against issues such as pressure from those large-scale users to give them more time to test, holiday plans and all the rest becomes important.  Having an agreed time and sticking to it can be vital, however, if you want to avoid deadline slip after deadline slip.  There’s another interesting issue, as well, which is relevant to the Meltdown and Spectre vulnerabilities – you can’t just patch hardware.  Where hardware is involved, the fixes may involve multiple patches to multiple projects, and not just standard software but microcode as well: this may significantly increase the time needed.
  • what incentives are you going to provide?  Some large projects are in a position to offer bug bounties, but these are few and far between.  Most disclosers want – and should expect to be granted – public credit when the fix is finally provided and announced to the public at large.  This is not to say that disclosers should necessarily be involved in the wider process that we’re describing: this can, in fact, be counter-productive, as their priorities (around, for instance, timing) may be at odds with the rest of the team.

There’s another thing you might want to consider, which is “what are we going to do if this information leaks early?”  I don’t have many good answers for this one, as it will depend on numerous factors, such as how close you are to a fix, how major the problem is, and whether anybody in the press picks up on it.  You should definitely consider it, though.

Step 3 – external disclosure

You’ve come up with a fix?  Well done.  Everyone’s happy?  Very, very well done[8].  Now you need to tell people.  But before you do that, you’re going to need to decide what to tell them.  There are at least three types of information that you may well want to prepare:

  • technical documentation – by this, I mean project technical documentation.  Code snippets, inline comments, test cases, wiki information, etc., so that when people who code on your project – or code against it – need to know what’s happened, they can look at this and understand.
  • documentation for techies – there will be people who aren’t part of your project, but who use it, or are interested in it, who will want to understand the problem you had, and how you fixed it.  This will help them to understand the risk to them, or to similar software or systems.  Being able to explain to these people is really important.
  • press, blogger and family documentation – this is a category that I’ve just come up with.  Certain members of the press will be quite happy with “documentation for techies”, but, particularly if your project is widely used, many non-technical members of the press – or technical members of the press writing for non-technical audiences – are going to need something that they can consume and which will explain in easy-to-digest snippets a) what went wrong; b) why it matters to their readers; and c) how you fixed it.  In fact, on the whole, they’re going to be less interested in c), and probably more interested in a hypothetical d) & e) (which are “whose fault was it and how much blame can we heap on them?” and “how much damage has this been shown to do, and can you point us at people who’ve suffered due to it?” – I’m not sure how helpful either of these is).  Bloggers may also be useful in spreading the message, so considering them is good.  And generally, I reckon that if I can explain a problem to my family, who aren’t particularly technical, then I can probably explain it to pretty much anybody I care about.

Of course, you’re now going to need to coordinate how you disseminate all of these types of information.  Depending on how big your project is, your channels may range from your project code repository through your project wiki through marketing groups, PR and even government agencies[9].


Conclusion

There really is no “one size fits all” approach to security vulnerability disclosures, but it’s an important consideration for any open source software project, and one of those that can be forgotten as a project grows, until suddenly you’re facing a real use case.  I hope this overview is interesting and useful, and I’d love to hear thoughts and stories about examples where security disclosure processes have worked – and when they haven’t.


2 – because only good people read this blog, right?

3 – some government security agencies have a policy of collecting – and not disclosing – security vulnerabilities for their own ends.  I’m not going to get into the rights or wrongs of this approach.[4]

4 – well, not in this post, anyway.

5 – or try, at least.

6 – or millions or more – consider vulnerabilities in smartphone software, for instance, or cloud providers’ install bases.

7 – and even, shockingly, of using legal mechanisms to silence – or try to silence – the discloser.

8 – for the record, I don’t believe you: there’s never been a project – let alone an embargo – where everyone was happy.  But we’ll pretend, shall we?

9 – I really don’t think I want to be involved in any of these types of disclosure, thank you.

Explained: five misused security words

Untangling responsibility, authority, authorisation, authentication and identification.

I took them out of the title, because otherwise it was going to be huge, with lots of polysyllabic words.  You might, therefore, expect a complicated post – but that’s not my intention*.  What I’d like to do is try to explain these five important concepts in security, as they’re often confused or bound up with one another.  They are, however, separate concepts, and it’s important to be able to disentangle what each means, and how they might be applied in a system.  Today’s words are:

  • responsibility
  • authority
  • authorisation
  • authentication
  • identification.

Let’s start with responsibility.

Responsibility

Confused with: function; authority.

If you’re responsible for something, it means that you need to make it happen, or answer for it if something goes wrong.  You can be responsible for a product launching on time, or for the smooth functioning of a team.  If we’re going to ensure we’re really clear about it, I’d suggest using it only for people.  It’s not usually a formal description of a role in a system, though it’s sometimes used as short-hand for describing what a role does.  This short-hand can be confusing.  “The storage module is responsible for ensuring that writes complete transactionally” or “the crypto here is responsible for encrypting this set of bytes” is just a description of the function of the component, and doesn’t truly denote responsibility.

Also, just because you’re responsible for something doesn’t mean that you can make it happen.  One of the most frequent confusions, then, is with authority.  If you can’t ensure that something happens, but it’s your responsibility to make it happen, you have responsibility without authority***.

Authority

Confused with: responsibility, authorisation.

If you have authority over something, then you can make it happen****.  This is another word which is best restricted to use about people.  As noted above, it is possible to have authority but no responsibility*****.

Once we start talking about systems, phrases like “this component has the authority to kill these processes” really means “has sufficient privilege within the system”, and should best be avoided. What we may need to check, however, is whether a component should be given authorisation to hold a particular level of privilege, or to perform certain tasks.

Authorisation

Confused with: authority; authentication.

If a component has authorisation to perform a certain task or set of tasks, then it has been granted power within the system to do those things.  It can be useful to think of roles and personae in this case.  If you are modelling a system on personae, then you will wish to grant a particular role authorisation to perform tasks that, in real life, the person modelled by that role has the authority to do.  Authorisation is an instantiation or realisation of that authority.  A component is granted the authorisation appropriate to the person it represents.  Not all authorisations can be so easily mapped, however, and may be more granular.  You may have a file manager which has authorisation to change a read-only permission to read-write: something you might struggle to map to a specific role or persona.

If authorisation is the granting of power or capability to a component representing a person, the question that precedes it is “how do I know that I should grant that power or capability to this person or component?”.  That process is authentication – authorisation should be the result of a successful authentication.

Authentication

Confused with: authorisation; identification.

If I’ve checked that you are who you say you are, then I’ve authenticated you: this process is authentication.  A system, then, before granting authorisation to a person or component, must check that they should be allowed the power or capability that comes with that authorisation – that it is appropriate to that role.  Successful authentication leads to authorisation.  Unsuccessful authentication leads to blocking of authorisation******.

With the exception of anonymous roles, the core of an authentication process is checking that the person or component is who he, she or it says they are, or claims to be (although anonymous roles can be appropriate for some capabilities within some systems).  This checking of who or what a person or component actually is constitutes authentication, whereas identification is the claim itself and the mapping of an identity to a role.

Identification

Confused with: authentication.

I can identify that a particular person exists without being sure that the specific person in front of me is that person.  They may identify themselves to me – this is identification – and the checking that they are who they profess to be is the authentication step.  In systems, we need to map a known identity to the appropriate capabilities, and the presentation of a component with identity allows us to apply the appropriate checks to instantiate that mapping.
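
For the systems-minded, here’s how the three concepts separate in a toy Python sketch – the names, secrets and capabilities are all made up for illustration:

    import hmac

    SECRETS = {"alice": b"s3kr1t-t0k3n"}       # known identities
    CAPABILITIES = {"alice": {"read", "write"}}  # granted powers

    def identify(claimed_name: str) -> bool:
        # Identification: a claim is made, and such an identity exists.
        return claimed_name in SECRETS

    def authenticate(claimed_name: str, secret: bytes) -> bool:
        # Authentication: check that the claimant is who they say they are.
        return (identify(claimed_name)
                and hmac.compare_digest(secret, SECRETS[claimed_name]))

    def authorise(name: str, action: str) -> bool:
        # Authorisation: grant or refuse a capability to an identity which
        # we assume has already been authenticated.
        return action in CAPABILITIES.get(name, set())

Each step can fail independently – the identity may not exist, the claimant may fail authentication, or the authenticated identity may simply lack authorisation for the action – which is one reason it pays to log them separately.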

Bringing it all together

Just because you know who I am doesn’t mean that you’re going to let me do something.  I can identify my children over the telephone*******, but that doesn’t mean that I’m going to authorise them to use my credit card********.  Let’s say, however, that I might give my wife my online video account password over the phone, but not my children.  How might the steps in this play out?

First of all, I have responsibility to ensure that my account isn’t abused.  I also have authority to use it, as granted by the Terms and Conditions of the providing company (I’ve decided not to mention a particular service here, mainly in case I misrepresent their Ts&Cs).

“Hi, darling, it’s me, your darling wife**********. I need the video account password.” Identification – she has told me who she claims to be, and I know that such a person exists.

“Is it really you, and not one of the kids?  You’ve got a cold, and sound a bit odd.”  This is my trying to do authentication.

“Don’t be an idiot, of course it’s me.  Give it to me or I’ll pour your best whisky down the drain.”  It’s her.  Definitely her.

“OK, darling, here’s the password: it’s il0v3myw1fe.”  By giving her the password, I’ve  performed authorisation.

It’s important to understand these different concepts, as they’re often conflated or confused, but if you can’t separate them, it’s difficult not only to design systems to function correctly, but also to log and audit the different processes as they occur.


*we’ll have to see how well I manage, however.  I know that I’m prone to long-windedness**

**ask my wife.  Or don’t.

***and a significant problem.

****in a perfect world.  Sometimes people don’t do what they ought to.

*****this is much, much nicer than responsibility without authority.

******and logging.  In both cases.  Lots of logging.  And possibly flashing lights, security guards and sirens on failure, if you’re into that sort of thing.

*******most of the time: sometimes they sound like my wife.  This is confusing.

********neither should you assume that I’m going to let my wife use it, either.*********

*********not to suggest that she can’t use a credit card: it’s just that we have separate ones, mainly for logging purposes.

**********we don’t usually talk like this on the phone.