The 3 things you need to know about disk encryption

Use software encryption, preferably an open-source and audited solution.

It turns out that somebody – well, lots of people, in fact – did a poor job of implementing a cryptographic standard.  This isn’t a surprise, I’m afraid, but it is bad news.  I’ve written before about how important it is to be using disk encryption, but it turns out that the advice I gave wasn’t sufficient, or detailed enough.

Here’s a bit of background.  There are two ways to do disk encryption:

  1. let the disk hardware (and firmware) manage it: HDD (hard disk drive), SSD (solid state drive) and hybrid (a mix of HDD and SSD technologies) manufacturers create drives which have encryption built in.
  2. allow your Operating System (e.g. Linux[1], OSX[2], Windows[3]) to do the job: the O/S will have a little bit of itself on the disk unencrypted, which will allow it to decrypt the rest of the disk (which is encrypted) when provided with a password or key.

You’d think, wouldn’t you, that option 1 would be the safest?  It should be quick, as it’s done in hardware, and, well, the companies who manufacture these disks will know what they’re doing, right?

No.

A paper (link opens a PDF file) written by some researchers in the Netherlands reveals the work that they did on several SSDs to try to work out how good a job had been done on their encryption security.  The drives are all supposed to have implemented a fairly complex standard from the TCG[4] called Opal, but it seems that none of them did it right.  It turns out that someone with physical access to your hardware can, fairly trivially, decrypt what’s on your drive – and they can do this without the password that you use to lock it or any associated key(s).  The simple lesson is that you shouldn’t trust hardware disk encryption.

So, software disk encryption is OK, then?

Also no.

Well, actually yes, as long as you’re not using Microsoft’s BitLocker in its default mode.  It turns out that BitLocker will simply hand the job over to hardware encryption if the drive it’s using supports it: in other words, by default, BitLocker gives you hardware encryption unless you explicitly tell it not to.

What about other options?  Well, you can tell BitLocker not to use hardware encryption, but only for a new installation: it won’t change on an existing disk.  The best option[5] is to use a software encryption solution which is open source and audited by the wider community.  LUKS is the default for most Linux distributions.  One suggested by the paper’s authors for Windows is VeraCrypt.  Can we be certain that there are no holes or mistakes in the implementation of these solutions?  No, we can’t, but the chances of security issues being found and fixed are much, much higher than for proprietary software[6].
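
To make the software option concrete, here’s a minimal Python sketch – using the pyca/cryptography library – of the sector-level AES-XTS encryption that solutions such as LUKS typically perform in software.  The “plain64” tweak derivation (the sector number, little-endian, padded to 16 bytes) matches LUKS’s common default of aes-xts-plain64, but treat this purely as an illustration of the principle, not as something to build on:

    # Toy sector-level encryption in the style of LUKS's aes-xts-plain64.
    # Illustration only: use a real, audited implementation for actual data.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    SECTOR_SIZE = 512

    def xts_tweak(sector_number: int) -> bytes:
        # "plain64" tweak: 64-bit little-endian sector number, zero-padded
        # to the 16 bytes that XTS mode expects.
        return sector_number.to_bytes(8, "little") + bytes(8)

    def encrypt_sector(key: bytes, sector_number: int, plaintext: bytes) -> bytes:
        # XTS takes a double-length key: 64 bytes here means AES-256-XTS.
        cipher = Cipher(algorithms.AES(key), modes.XTS(xts_tweak(sector_number)))
        enc = cipher.encryptor()
        return enc.update(plaintext) + enc.finalize()

    def decrypt_sector(key: bytes, sector_number: int, ciphertext: bytes) -> bytes:
        cipher = Cipher(algorithms.AES(key), modes.XTS(xts_tweak(sector_number)))
        dec = cipher.decryptor()
        return dec.update(ciphertext) + dec.finalize()

    key = os.urandom(64)  # in real life, derived from a passphrase and protected by key slots
    sector = b"some secret data".ljust(SECTOR_SIZE, b"\x00")
    assert decrypt_sector(key, 42, encrypt_sector(key, 42, sector)) == sector

The per-sector tweak means that identical plaintext stored in two different sectors produces different ciphertext, without the disk having to store anything extra.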

What, then, are my recommendations?

  1. Don’t use hardware disk encryption.  It’s been shown to be flawed in many implementations.
  2. Don’t use proprietary software.  For anything, honestly, particularly anything security-related, but specifically not for disk encryption.
  3. If you have to use Windows, and are using BitLocker, run with VeraCrypt on top.

 


1 – GNU Linux.

2 – I’m not even sure if this is the OS that Macs run anymore, to be honest.

3 – not my thing either, but I’m pretty sure this is what it’s called.  Couldn’t be certain of the version, though.

4 – Trusted Computing Group.

5 – as noted by the paper’s authors, and heartily endorsed by me.

6 – I’m not aware of any problems with Macintosh-based implementations, but open source is just better – read the article linked from earlier in the sentence.

On being acquired – a personal view

It’s difficult to think of a better fit than IBM.

First off, today is one of those days when I need to point you at the standard disclaimer that the views expressed in this post are my own, and not necessarily those of my employers.  That said, I think that many of them probably align, but better safe than sorry[1].  Another note: I believe that all of the information in this article is public knowledge.

The news came out two days ago (last Sunday, 2018-10-28) that Red Hat, my employer, is being acquired by IBM for $34bn.  I didn’t know about the deal in advance (I’m not that exalted within the company hierarchy, which is probably a good thing, as all those involved needed to keep very tight-lipped about it, and that would have been hard), so the first intimation I got was when people started sharing stories from various news sites on internal chat discussions.  They (IBM) are quite clear about the fact that they are acquiring us for the people, which means that each of us (including me!) is worth around $2.6m, based on our current headcount.  Sadly, I don’t think it works quite like that, and certainly nobody has (yet) offered to pay me that amount[2].  IBM have also said that they intend to keep Red Hat operating as a separate entity within IBM.

How do I feel?  My initial emotion was shock.  It’s always a surprise when you get news that you weren’t expecting, and the message that we’d carried for a long time was that Red Hat would attempt to keep ploughing its own furrow[3] for as long as possible.  But I’d always known that, as a public company, we were available to be bought, if the money was good enough.  It appears that on this occasion it was.  And that emotion turned to interest as to what was going to happen next.

And do you know what?  It’s difficult to think of a better fit than IBM.  I’m not going to enumerate the reasons that I feel that other possible acquirers would have been worse, but here are some of the reasons that IBM, at least in this arrangement, is good:

  • they “get” open source, and have a long history of encouraging its use;
  • they seem to understand that Red Hat has a very distinctive culture, and want to encourage that, post-acquisition;
  • they have a hybrid cloud strategy and products, Red Hat has a hybrid cloud strategy and products: they’re fairly well-aligned;
  • we’re complementary in a number of sectors and markets;
  • they’re a much bigger player than us, and suddenly, we’ll have access to more senior people in new and exciting companies.

What about the impact on me, though?  Well, IBM takes security seriously.  IBM has some fantastic research and academic connections.  The group in which I work has some really bright and interesting people in it, and it’s difficult to imagine IBM wanting to break it up.  A number of the things I’m working on will continue to align with both Red Hat’s direction and IBM’s.  The acquisition will take up to a year to complete – assuming no awkward regulatory hurdles along the way – and not much is going to change in the day-to-day.  Except that I hope to get even better access to my soon-to-be-colleagues working in similar fields to me, but within IBM.

Will there be issues along the way?  Yes.  Will there be uncertainty?  Yes.  But do I trust that the leadership within Red Hat and IBM have an honest commitment to making things work in a way that will benefit Red Hatters?  Yes.

And am I looking to jump ship?  Oh, no.  Far too much interesting stuff to be doing.  We’ve got an interesting few months and years ahead of us.  My future looked red, until Sunday night.  Then maybe blue.  But now I’m betting on something somewhere between the two: go Team Purple.


1 – because, well, lawyers, the SEC, etc., etc.

2 – if it does, then, well, could somebody please contact me?

3 – doing its own thing independently.

6 types of attack: learning from Supermicro, State Actors and silicon

… it could have happened, and it could be happening now.

Last week, Bloomberg published a story detailing how Chinese state actors had allegedly forced employees of Supermicro (or companies subcontracting to them) to insert a small chip – the silicon in the title – into motherboards destined for Apple and Amazon.  The article talked about how an investigation into these boards had uncovered this chip and the steps that Apple, Amazon and others had taken.  The story was vigorously denied by Supermicro, Apple and Amazon, but that didn’t stop Supermicro’s stock price from tumbling by over 50%.

I have heard strong views expressed by people with expertise in the topic on both sides of the argument: that it probably didn’t happen, and that it probably did.  One side argues that the denials by Apple and Amazon, for instance, might have been impacted by legal “gagging orders” from the US government.  An opposing argument suggests that the Bloomberg reporters might have confused this story with a similar one that occurred a few months ago.  Whether this particular story is correct in every detail, or a fabrication – intentional or unintentional – is not what I’m interested in at this point.  What I’m interested in is not whether it did happen in this instance: the clear message is that it could have happened, and it could be happening now.

I’ve written before about State Actors, and whether you should worry about them.  There’s another question which this story brings up, which is possibly even more germane: what can you do about it if you are worried about them?  This breaks down further into two questions:

  • how can I tell if my systems have been compromised?
  • what can I do if I discover that they have?

The first of these is more than enough to keep us occupied for now[1], so let’s spend some time on that.  First, let’s define six types of compromise, think about how they might be carried out, and then consider the questions above for each:

  • supply-chain hardware compromise;
  • supply-chain firmware compromise;
  • supply-chain software compromise;
  • post-provisioning hardware compromise;
  • post-provisioning firmware compromise;
  • post-provisioning software compromise.

There isn’t space in this article to go into detail on these types of attack, so instead I’ll provide an overview of each[2].

Terms

  • Supply-chain – all of the steps up to when you start actually running a system.  From manufacture through installation, including vendors of all hardware components and all software, OEMs, integrators and even shipping firms that have physical access to any pieces of the system.  For all supply-chain compromises, the key question is the extent to which you, the owner of a system, can trust every single member of the supply chain[3].
  • Post-provisioning – any point after which you have installed the hardware, put all of the software you want on it, and started running it: the time during which you might consider the system “under your control”.
  • Hardware – the physical components of a system.
  • Software – software that you have installed on the system and over which you have some control: typically the Operating System and application software.  The amount of control depends on factors such as whether you use proprietary or open source software, and how much of it is produced, compiled or checked by you.
  • Firmware – special software that controls how the hardware interacts with the standard software on the machine, the hardware that comprises the system, and external systems.  It is typically provided by hardware vendors, and its operation is opaque to owners and operators of the system.

Compromise types

See the table at the bottom of this article for a short summary of the points below.

  1. Supply-chain hardware – there are multiple opportunities in the supply chain to compromise hardware, but the harder a compromise is to detect, the more difficult it is to perform.  The attack described in the Bloomberg story would be extremely difficult to detect, whereas the addition of a keyboard logger to a keyboard just before delivery (for instance) would be correspondingly simpler.
  2. Supply-chain firmware – of all the options, this has the best return on investment for an attacker.  Assuming good access to an appropriate part of the supply chain, inserting firmware that (for instance) affects network performance or leaks data over a wifi connection is relatively simple.  The difficulty in detection comes from the fact that although the owner of the system can check that the firmware is what they think it is, that measurement only confirms that the firmware matches what the vendor says was supplied (see the sketch after this list).  So the “medium” detection rating relates only to firmware implanted by members of the supply chain who did not source the original firmware: otherwise, it’s “high”.
  3. Supply-chain software – by this, I mean software that comes installed on a system when it is delivered.  Some organisations will insist on “clean” systems being delivered to them[4], and will install everything from the Operating System upwards themselves.  This means that they basically now have to trust their Operating System vendor[5], which is maybe better than trusting other members of the supply chain to have installed the software correctly.  I’d say that it’s not too simple to mess with this in the supply chain, if only because checking isn’t too hard for the legitimate members of the chain.
  4. Post-provisioning hardware – this is where somebody with physical access to your hardware – after it’s been set up and is running – inserts or attaches hardware to it.  I nearly gave this a “high” difficulty rating in the table below, assuming that we’re talking about servers rather than laptops or desktop systems, as one would hope that your servers are well-protected; but attackers have shown how easily they can typically gain physical access to systems using techniques like social engineering, so I’ve downgraded it to “medium”.  Detection, on the other hand, should be fairly simple given sufficient resources (hence the “medium” rating): although I don’t believe anybody who says that a system is “tamper-proof”, tamper-evidence is a much simpler property to achieve.
  5. Post-provisioning firmware – when you patch your Operating System, it will often also patch firmware on the rest of your system.  This is generally a good thing to do, as patches may provide security, resilience or performance improvements, but you’re stuck with the same problem as with supply-chain firmware: you need to trust the vendor – in fact, you need to trust both your Operating System vendor and their relationship with the firmware vendor.
  6. Post-provisioning software – is it easy to compromise systems via their Operating System and/or application software?  Yes: this we know.  Luckily – though depending on the sophistication of the attack – there are generally good tools and mechanisms for detecting such compromises, including behavioural monitoring.
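
As an illustration of the measurement problem in point 2, here’s a minimal Python sketch of checking a firmware image against a vendor-published digest.  The image name and digest below are hypothetical; the point is that a match proves only that you have the image the vendor says they shipped, not that what they shipped is benign:

    # Firmware "measurement" sketch: hash the image, compare with the digest
    # the vendor published.  A match confirms the image is what the vendor
    # supplied - it says nothing about whether that firmware is trustworthy.
    import hashlib

    def measure(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    VENDOR_DIGEST = "..."  # hypothetical: the value published by the vendor

    digest = measure("bmc_firmware_v2.17.bin")  # hypothetical image name
    if digest != VENDOR_DIGEST:
        print("MISMATCH: image altered somewhere in the supply chain")
    else:
        print("Match: this is what the vendor published - no more, no less")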

Table

 

Compromise type              Attacker difficulty    Detection difficulty
Supply-chain hardware        High                   High
Supply-chain firmware        Low                    Medium
Supply-chain software        Medium                 Medium
Post-provisioning hardware   Medium                 Medium
Post-provisioning firmware   Medium                 Medium
Post-provisioning software   Low                    Low

Conclusion

What are your chances of spotting a compromise on your system?  I would argue that they are generally pretty much in line with the difficulty of performing the attack in the first place: with the glaring exception of supply-chain firmware.  We’ve seen attacks of this type, and they’re very difficult to detect.  The good news is that there is some good work going on to help detection of these types of attacks, particularly in the world of Linux[7] and open source.  In the meantime, I would argue that our best forms of defence are currently:

  • for supply-chain: build close relationships, and use known and trusted suppliers.  You may want to restrict as much of your supply chain as possible to “friendly” regimes if you’re worried about State Actor attacks, but this is very hard in the global economy.
  • for post-provisioning: lock down your systems as much as possible – both physically and logically – and use behavioural monitoring to try to detect anomalies in what you expect them to be doing (a toy sketch follows below).
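
To give a flavour of the post-provisioning point, here’s a toy Python sketch of behavioural monitoring on a Linux system: take a baseline of the process names running, then flag anything new.  Real products model far richer behaviour – system calls, network flows, file access – so this is only to illustrate the “baseline, then alert on anomalies” idea:

    # Toy behavioural monitor: baseline the set of running process names
    # from /proc, then periodically report anything not seen before.
    import os
    import time

    def running_process_names() -> set:
        names = set()
        for entry in os.listdir("/proc"):
            if not entry.isdigit():
                continue  # only numeric entries are processes
            try:
                with open(f"/proc/{entry}/comm") as f:
                    names.add(f.read().strip())
            except OSError:
                pass  # process exited between listing and reading
        return names

    baseline = running_process_names()
    while True:
        time.sleep(30)
        for name in sorted(running_process_names() - baseline):
            print(f"anomaly: process {name!r} was not in the baseline")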

1 – I’ll try to write something on this other topic in a different article.

2 – depending on interest, I’ll also consider a series of articles to go into more detail on each.

3 – how certain are you, for instance, that your delivery company won’t give your own government’s security services access to the boxes containing your equipment before they deliver them to you?

4 – though see above: what about the firmware?

5 – though you can always compile your own Operating System if you use open source software[6].

6 – oh, you didn’t compile your compiler yourself?  All bets off, then…

7 – yes, “GNU Linux”.

On conversation and the benefits of boasting

On Monday and Tuesday this week I’m attending DevSecCon in Boston – a city which is much more pleasant when it’s not raining or snowing, which it often seems to be doing while I’m here.  There are a bunch of interesting talks[1] and workshops, and I was asked, at the last minute, to facilitate an “Open Space Discussion” at the end of the first day (as two people hadn’t arrived as expected).  Facilitating discussions is about not talking all the time, but encouraging other people to talk[2]: my approach to this is to tell a story, and then encourage them to share stories.

People enjoy listening to stories, and people enjoy telling stories, and there is a type of story that is particularly useful and important in the world of work: “war-stories”.  Within the IT industry, at least, this refers to stories about experiences – usually bad experiences – from our day-to-day working lives.  They are often used to illustrate a point or lend experiential weight to an opinion being put forward. But they are also great learning experiences.

What I learned yesterday – or re-learned – is the immense value of conversation with our peers in a neutral setting, with no formal bounds or difference in “rank”.  We had at least one participant who was only two years out of college, participants with 25-30 years of experience, a CISO of a major healthcare provider, a CEO, DevOps engineers, customer-facing people, security people, non-security people, people with Humanities[4] degrees, people with Computer Science degrees.  We were about twelve people, and everybody contributed, to greater or lesser degrees.  I hope that we managed to maintain a conversation where age and numbers of years in the industry were unimportant, but the experiences shared were.

And I learned about other people’s opinions, their viewpoints, their experiences, their tips for what works – and doesn’t work – and made, I hope, some new friends.  Certainly some new peers.  What we talked about isn’t vitally important to this article[5]: the important thing was the conversation, and the stories people told that brought their shared wisdom to the table.  I felt, by the end of the session, that we had added something to the commonwealth of knowledge within the industry.

I was looking for a way to close the session as we were moving to the end, and hit upon something which seemed to work: I encouraged everybody to spend 30 seconds or so to tell the group about an incident in their career that they are proud of.  We got some great stories, and not only did we learn from them, but I think it’s really important that we get the chance to express our pride in the things that we’ve done.  We rarely get the chance to boast, or to let people outside our general circle know why we think we should be valued.  There’s nothing wrong with being proud of the things we’ve done, but we’re often – usually – discouraged from doing so.  It was great to have people share their various experiences of personal expertise, and to think about how they would use them to further their career.  I didn’t force everybody to speak – and was thanked by one of the silent participants later – and it’s important to realise that not everybody will be happy doing so.  But I think that the rapport that we’d built as a group meant that more people were happy to contribute something than would have considered it at the beginning of the session.  I left with a respect for all of the participants, and a realisation of the importance of shared experience.

 


1 – I gave a talk based on my blog article Why I love technical debt.  I found it interesting…

2 – based on this definition, it may surprise regular readers – and people who know me IRL[3] – that I’d even consider participating, let alone facilitating.

3 – does anybody use this term anymore?

4 – Liberal Arts/Social Sciences.

5 – but included:

  • the impact of different open source licences
  • how legal teams engage with open source questions
  • how to encourage more conversation between technical and legal folks
  • the importance of systems engineering
  • how to talk to customers and vendors
  • how to build teams through social participation[6]
  • the NIST 800 series and other models to consider security
  • risk: how to talk about it, measure it, discuss it with other functions within the organisation.

6 – the word “beer” came up.  From somebody else, on this occasion.

 

Is homogeneity bad for security?

Can it really be good for security to have such a small number of systems out there?

For the last three years, I’ve attended the Linux Security Summit (though it’s not solely about Linux, actually), and that’s where I am for the first two days of this week – the next three days are taken up with the Open Source Summit.  This year, both are being run in North America and in Europe – and there was a version of the Open Source Summit in Asia, too.  This is all good, of course: the more people, and the more diversity we have in the community, the stronger we’ll be.

The question of diversity came up at the Linux Security Summit today, but not in the way you might necessarily expect.  As with most of the industry, women, ethnic minorities and people with disabilities are very under-represented at this very technical conference (there’s a very strong Linux kernel developer bias).  It’s a pity, and something we need to address, but when a question came up after someone’s talk, it wasn’t the diversity of people’s backgrounds that was being questioned, but that of the systems we deploy around the world.

The question was asked of a panel who were talking about open firmware and how making it open source will (hopefully) increase the security of the system.  We’d already heard how most systems – laptops, servers, desktops and beyond – come with a range of different pieces of firmware from a variety of different vendors, and this variety can easily run to over 100 different pieces of firmware per system.  How are you supposed to trust a system with so many different pieces?  And, as one of the panel members pointed out, many of the vendors are quite open about the fact that they don’t see themselves as security experts, and are actually asking the members of open source projects to design APIs, make recommendations about design, etc.

This self-knowledge is clearly a good thing, and the main focus of the panel’s efforts has been to try to define a small core of well-understood and better designed elements that can be deployed in a more trusted manner.   The question that was asked from the audience was in response to this effort, and seemed to me to be a very fair one.  It was (to paraphrase slightly): “Can it really be good for security to have such a small number of systems out there?”  The argument – and it’s a good one in general – is that if you have a small number of designs which are deployed across the vast majority of installations, then there is a real danger that a small number of vulnerabilities can impact on a large percentage of that install base.

It’s a similar problem in the natural world: a population with a restricted genetic pool is at risk from a successful attacker: a virus or fungus, for instance, which can attack many individuals due to their similar genetic make-up.

In principle, I would love to see more diversity of design within computing, and particularly within security, but there are two issues with this:

  1. management: there is a real cost to managing multiple different implementations and products, so organisations prefer to have a smaller number of designs, reducing the number of tools needed to manage them and the number of people who need to be trained.
  2. scarcity of resources: there is a scarcity of resources within IT security.  There just aren’t enough security experts around to design good security into systems, to support them and then to respond to attacks as vulnerabilities are found and exploited.

To the first issue, I don’t see many easy answers, but to the second, there are three responses:

  1. find ways to scale the impact of your resources: if you open source your code, then the number of expert resources available to work on it expands enormously.  I wrote about this a couple of years ago in Disbelieving the many eyes hypothesis.  If your code is proprietary, then the number of experts you can leverage is small: if it is open source, you have access to almost the entire worldwide pool of experts.
  2. be able to respond quickly: if attacks on systems are found, and vulnerabilities identified, then the ability to move quickly to remedy them allows you to mitigate significantly the impact on the installation base.
  3. design in defence in depth: rather than relying on one defence against an attack or type of attack, try to design your deployment in such a way that you have layers of defence.  This means that you have some time to fix a problem that arises before catastrophic failure affects your deployment (see the sketch after this list).
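
Here’s a minimal sketch of the third response, defence in depth: a request has to pass several independent layers, so a vulnerability in any single check doesn’t hand an attacker the whole system.  The layers and the credential store are hypothetical placeholders:

    # Defence in depth sketch: independent layers, each able to reject a
    # request on its own; all of them must pass.
    from dataclasses import dataclass

    TOKENS = {"alice": "s3cret-token"}   # placeholder credential store
    ALLOWED = {"alice"}                  # placeholder authorisation list

    @dataclass
    class Request:
        user: str
        token: str
        payload: str

    def authenticated(req: Request) -> bool:
        return TOKENS.get(req.user) == req.token   # layer 1: who are you?

    def authorised(req: Request) -> bool:
        return req.user in ALLOWED                 # layer 2: may you do this?

    def well_formed(req: Request) -> bool:
        # layer 3: input validation
        return len(req.payload) < 1024 and req.payload.isprintable()

    def handle(req: Request) -> str:
        for layer in (authenticated, authorised, well_formed):
            if not layer(req):
                return f"rejected by layer: {layer.__name__}"  # fail safely, early
        return "accepted"

    print(handle(Request("alice", "s3cret-token", "hello")))  # accepted
    print(handle(Request("mallory", "guess", "hello")))       # rejected by layer: authenticated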

I’m hesitant to overplay the biological analogy, but the second and third of these seem quite similar to defences we see in nature.  The equivalent of quick response is having multiple generations in a short time, giving a species the opportunity to develop immunity to a particular attack, and defence in depth is a typical defence mechanism in nature – think of humans’ ability to recognise bad meat by its smell, taste its “off-ness” and then vomit it up if swallowed.  I’m not quite sure how this particular analogy maps to the world of IT security (though some of the practices you see in the industry can turn your stomach), but while we wait for a bigger – and more diverse – pool of security experts, let’s keep being open source, let’s keep responding quickly, and let’s make sure that we design for defence in depth.

 

Jargon – a force for good or ill?

Distinguish an electrical engineer from a humanities graduate: ask them how many syllables are in the word “coax”.

I was involved in an interesting discussion with colleagues recently about the joys – or otherwise – of jargon.  It stemmed from a section I wrote in a recent article, How to talk to security people: a guide for the rest of us.  In it, I suggested that jargon “has at least two uses:

  1. as an exclusionary mechanism for groups to keep non-members in the dark;
  2. as a short-hand to exchange information between ‘in-the-know’ people so that they don’t need to explain everything in exhaustive detail every time.”

Given the discussion that arose, I thought it was worth delving more deeply into this question.  It’s more than an idle interest, as well, as I think that there are important lessons around our use of jargon that impact on how we interact with our colleagues and peers, and which deserve some careful thought.  These lessons apply particularly to my chosen field of security.

Before we start, we need some definitions.  It’s always nice to have two conflicting versions, so here we go:

  1. Special words or expressions used by a profession or group that are difficult for others to understand. (Oxford dictionaries – https://en.oxforddictionaries.com/definition/jargon)
  2. Without qualifier, denotes informal ‘slangy’ language peculiar to or predominantly found among hackers. (The Jargon File – http://catb.org/jargon/html/distinctions.html)

I should start by pointing out that the Jargon File (which was published in paper form in at least two versions, as The Hacker’s Dictionary (ed. Steele) and the New Hacker’s Dictionary (ed. Raymond)) has a pretty special place in my heart.  When I decided that I wanted to “take up” geekery properly[1][2], I read my copy of the New Hacker’s Dictionary from cover to cover, several times, and when a new edition came out, I bought that and did the same.  In fact, for many of the more technical readers of this blog, I suspect that a fair amount of your cultural background is expressed within the covers (paper or virtual) of the same, even if you’re not aware of it.  If you’re interested in delving deeper and like the feel of paper in your hands, then I encourage you to purchase a copy, but be careful to get the right one: there are some expensive versions which seem just to be print-outs of the Jargon File, rather than properly typeset and edited versions[4].

But let’s get onto the meat of this article: is jargon a force for good or ill?

The case against jargon – ambiguity

You would think that jargon would serve to provide agreed terms within a particular discipline, and to help avoid ambiguity around contexts.  It may be a surprise, then, that the first problem we often run into with jargon is name-space clashes.  Consider the following.  There’s an old joke about how to distinguish an electrical engineer from a humanities[3] graduate: ask them how many syllables are in the word “coax”.  The point here, of course, is that they come from different disciplines.  But there are lots of words – and particularly abbreviations – which have different meanings or expansions depending on context, and where disciplines and contexts may collide.  What do these mean to you[5]?

  • scheduling (kernel level CPU allocation to processes OR placement of workloads by an orchestration component)
  • comms (I/O in a computer system OR marketing/analyst communications)
  • layer (OSI model OR IP suite layer OR other architectural abstraction layer such as host or workload)
  • SME (subject matter expert OR small/medium enterprise)
  • SMB (small/medium business OR Server Message Block)
  • TLS (Transport Layer Security OR Times Literary Supplement)
  • IP (Internet Protocol OR Intellectual Property OR Intellectual Property as expressed as a silicon component block)
  • FFS (For Further Study OR …[6])

One of the interesting things is that quite a lot of my background is betrayed by the various options that present themselves to me: I wonder how many of the readers of this will have thought of the Times Literary Supplement, for example.  I’m also more likely to think of SME as the term relating to organisations, because that’s the favoured form in Europe, whereas I believe that the US tends towards SMB.  I’m sure that your experiences will all be different – which rather makes my point for me.

That’s the first problem.  In a context where jargon is often praised as a way of short-cutting lengthy explanations, it can actually be a significant ambiguating force.

The case against jargon – exclusion

Intentionally or not – and sometimes it is intentional – groups define themselves through use of specific terminology.  Once this terminology becomes opaque to those outside the group, it becomes “jargon”, as per our first definition above.  “Good” use of jargon generally allows those within the group to converse using shared context around concepts that do not need to be explained in detail every time they are used.  An example would be a “smoke test” – a quick test to check that basic functionality is performing correctly (see the Jargon File’s definition for more).  If everyone within the group understands what this means, then why go into more detail?  But if you are joined at a stand-up meeting[7] by a member of marketing who wants to know whether a particular build is ready for release, and you say “well, no – it’s only been smoke-tested so far”, then it’s likely that you’ll need to explain.

Another enduring and complex piece of jargon is the use of “free” in relation to software.  In fact, the term is so ambiguous that different terms have evolved to describe some variants – “open source”, “FOSS” – and even phrases such as “free as in speech, not as in beer”.

The problem is that there are occasions when jargon can exclude others, whether that usage is intended or not.  There have been times for most of us, I’m sure, when we want to show that we’re part of a group and so use terms that we know another person won’t understand.  On other occasions, the term may be so ingrained in our practice that we use it without thinking, and the other person is unintentionally excluded.  I would argue that we need to be careful to avoid both of these usages.  Intentional exclusion is rarely helpful, but unintentional exclusion can be just as damaging – in some ways more so, as it typically goes unremarked and is therefore difficult to remedy.

The security world seems particularly prone to this behaviour, I feel.  It may be that many security folks have a sound enough grounding in other spheres – especially those in which they’re working with others – that the overlap in the other direction is less noticeable.  The other issue is that there’s a certain elitism among security folks which is both laudable – you want the best people, with the best training, to be leading the field – and lamentable – elitism tends to lead to the exclusion of members of other groups.

A quick paragraph in praise of jargon

It’s not all bad, however.  We need jargon, as I noted above, to enable us to discuss concepts, and the use of terms in normal language – like scheduling – as jargon leads to some interesting metaphors which guide us in our practice[8].  We absolutely need shared practice, and for that we need shared language – and some of that language is bound, over time, to become jargon.  But consider a lexicon, or a FAQ, or other ways to allow your colleagues to participate: be inclusive, not exclusive.


1 – “properly” – really?  Although I’m not sure “improperly” is any better.

2 – oh, remember that I actually studied English Literature and Theology at university, so this was a conscious decision to embrace a rather different culture.

3 – or “Liberal Arts”.

4 – the most recent “real” edition of which I’m aware is Raymond, Eric S., 1996, The New Hacker’s Dictionary, 3rd ed., MIT Press, Cambridge, Mass.

5 – I’ve added the first options that spring to mind when I come across them – I’m aware there are almost certainly others.

6 – believe me, when I saw this abbreviation in a research paper for the first time, I was most confused, and actually had to look it up.

7 – oh, look: jargon…

8 – though metaphors can themselves be constraining as they tend to push us to thinking in a particular way, even if that way isn’t entirely applicable in this context.

The gift that keeps on giving: passwords

A Father’s Day special

There’s an old saying: “if you give a man a fish, he’ll eat for a day, but if you teach a man to fish, he’ll eat for a lifetime.”  There are some cruel alternatives with endings like “he’ll buy a silly hat and sit outside in the rain”, but the general idea is that it’s better to teach someone something rather than just giving them something.

With Father’s Day coming up this Sunday in many parts of the world, I’d like to suggest the same for passwords.  Many people’s password practices are terrible.  There are three things that people really don’t get about passwords[1]:

  1. what they should look like
  2. how they should be stored
  3. how they should be communicated.

Let’s go through each of these in turn, and I’ll try to give brief tips that you can pass on to your father (or, indeed, mother, broader family, friends or colleagues) to help them with password safety.

What should passwords look like?

There’s a famous xkcd comic called password strength which aims to help you find a useful password.  This is great advice if you only have a few passwords, but about twenty years ago the number of accounts I had got above ten, and I started re-using passwords for certain levels of security.  This was terrible practice at the time, and it’s even worse now.  Look at the number of times a week we see news about information being lost when companies or organisations are hacked.  If you share passwords between accounts, there’s a decent chance that your login details for one will be exposed, which means that all the other accounts sharing that password are compromised.
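
Incidentally, the xkcd approach is easy to automate: pick several words uniformly at random from a large list.  A sketch, with a deliberately tiny – and therefore insecure – wordlist for illustration; a real list should contain thousands of entries:

    import secrets

    # Toy wordlist: a real one (a diceware list, for example) needs
    # thousands of words to provide enough entropy.
    WORDS = ["correct", "horse", "battery", "staple",
             "tulip", "anchor", "puzzle", "velvet"]

    print(" ".join(secrets.choice(WORDS) for _ in range(4)))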

I know some people who used to use permutations of passwords.  Let’s say the base was “p4ssw0rd”: they would then add a suffix for the website or account, such as “p4ssw0rdNetflix”.  This might be fine if we could believe that all passwords are stored in hashed form, but, well, we know they’re not, so don’t do this either.  Extrapolating from one account to another is too easy.

What does a good password look like, then?  Here’s one: “W9#!=_twXhRb”.  And another?  This one is 16 characters long: “*Wdb_%|#N^X6CR_b”.  What are the chances of a human guessing these?  Pretty slim.  And a computer?  Not much better, to be honest.  They were randomly generated by software, and as long as I use a different one for each account, I’m pretty safe against password-guessing attacks.
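
Passwords like these are easy to generate with Python’s standard secrets module, which – unlike the general-purpose random module – is designed for security-sensitive use.  A minimal sketch:

    import secrets
    import string

    def generate_password(length: int = 16) -> str:
        # Each character is drawn independently from letters, digits
        # and punctuation.
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password(12))   # something like "W9#!=_twXhRb"
    print(generate_password(16))   # something like "*Wdb_%|#N^X6CR_b"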

“But,” you say, “how am I supposed to remember them?  I’ve got dozens of accounts, and I can’t remember one of those, let alone fifty!”

How should you store passwords?

Well, you shouldn’t try to remember passwords, in the same way that you shouldn’t try to generate them.  Oh, there will be a handful that you might remember – maybe low-importance ones like the wifi key to your home AP – but most of them you should trust to a password manager.  These are nifty pieces of software that will generate and then remember hundreds of passwords for you.  Some of them will even automatically fill website fields for you if you ask them to.  The best ones are open source, which means that people have pored over their code (hopefully) to check they’re trustworthy, and that if you’re not entirely sure, then you can pore over their code, too.  And make changes and improvements and generally improve the world bit by bit.

You will need to remember one password, though, and that’s the one to unlock the password manager.  Make it really, really strong: it’s about the only one you mustn’t lose (though most websites will help you reset a password if you forget it, so it’s just a matter of going through each of the several hundred until they’re done…).  Use the advice from the xkcd cartoon, or another strong password algorithm that’s easy to remember.
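
Why does that one password matter so much?  Because a password manager typically derives the key that encrypts its vault from the master password, using a deliberately slow key-derivation function such as PBKDF2 or Argon2, so that each guess an attacker makes is expensive.  A sketch using Python’s standard library, with an illustrative iteration count:

    import hashlib
    import os

    def derive_vault_key(master_password: str, salt: bytes) -> bytes:
        # PBKDF2-HMAC-SHA256: the high iteration count makes guessing costly.
        return hashlib.pbkdf2_hmac("sha256", master_password.encode("utf-8"),
                                   salt, 600_000)

    salt = os.urandom(16)   # stored alongside the vault; it need not be secret
    vault_key = derive_vault_key("correct horse battery staple", salt)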

To make things safer, store the (password-protected) key store somewhere that is not easily accessible to other people – not a shared drive at work, for instance, but maybe on your phone or on some cloud-based storage that you can get to if you lose your phone.  Always set the password manager to auto-lock itself after some time, in case you leave your computer logged on, or your phone gets stolen.

How to communicate passwords

Would you send a password via email?  What about by SMS?  Is post[2] better?  Is it acceptable to reveal a password over the phone in a crowded train carriage[4]?  Would you give your laptop password to a random person conducting a survey on a railway station for the prize of a chocolate bar?

In an ideal world, we would never share passwords, but there are times when we need to – and times when it’s worthwhile for material rewards[5].  There are some accounts which are shared – TV or film streaming accounts for the family – or that you’ve set up for somebody else, or which somebody urgently needs to access because you’re on holiday, for instance.  So you may need to give out passwords from time to time.  What’s the best mechanism?  What’s the worst?

This may sound surprising, but I’d generally say that the worst (marginally) is post.  What you’re trying to avoid is a Bad Person[tm] marrying two pieces of information: the username and the password.  If someone has access to your post, then there’s a good chance that they might be able to work out enough information about you that they can guess the account name.  The others?  Well, they’re OK as long as you’re not also sending the username via the same channel.  That, in fact, is the key test: you should never provide the two pieces of information in such a way that a person with access to one channel can put them together.  So, telling someone a password in a crowded train carriage may be rude to all of the other people in the carriage[6], but it may be very secure in terms of account safety.

The reason I posed the question about the survey is that every few months a survey company in the UK asks people at mainline railway stations to tell them their password in exchange for a chocolate bar, and then writes a headline about how awful it is that so many people will give them their password.  This is a stupid headline, based on a stupid survey, for two reasons:

  1. I’d happily lie and tell them a false password in order to get a free chocolate bar AND
  2. even if I gave them the correct password, how are they going to marry that with my account details?

Conclusion

If you’re the sort of person reading this, there’s a fairly high chance that you’re also the sort of person who’s asked to clear up the mess when family, friends or colleagues get their accounts compromised[7].  Here are four rules for password security:

  1. don’t reuse passwords – use a different one for every single account
  2. don’t think up your own passwords – get a password manager to generate them for you
  3. use a password manager to store your passwords – if they’re strong enough in the first place, you won’t be able to remember them
  4. never send usernames and passwords over the same channel – you want to avoid the situation where an attacker has access to both and can use them.

I’ll add a fifth one for luck: feel free to use underhand tactics to get chocolate bars from people performing poorly-designed surveys on railway stations.


1 – I thought about changing the order, as they do impact on each other, but it made my head hurt, so I stopped.

2 – note for younger readers: there used to be something called “snail mail”.  It’s nearly dead[3].

3 – unless you forget to turn on “electronic statements” for your bank account.  Then you’ll get loads of it.

4 – whatever the answer to this is from a security point of view, the correct answer is “no”, because a) you’re going to annoy me by shouting it repeatedly down the phone because reception is so bad on the train that the recipient can’t hear it and b) because reception is so bad on the train that the recipient can’t hear it (see b)).

5 – I like chocolate.

6 – I’m not a big fan of phone conversations in railway carriages, to be honest.

7 – Or you’ve been sent a link to this because you are one of those family, friends or colleagues, and the person who sent you the link is sick and tired of doing all of your IT dirty work for you.