Who do you trust: your data, or your enemy’s?

Our security logs define our organisational memory.

Imagine, just imagine, that you’re head of an organisation, and you suspect that there’s been some sort of attack on your systems[1].  You don’t know for sure, but a bunch of your cleverest folks are pretty sure that there are sufficient signs to merit an investigation.  So, you agree to allow that investigation, and it’s pretty clear from the outset – and becomes yet more clear as the investigation unfolds – that there was an attack on your organisation.  And as this investigation draws close to its completion, you happen to meet the head of another organisation, which happens to be not only your competitor, but also the party that your investigators are certain was behind the attack.  That person – the leader of your competitor – tells you that they absolutely didn’t perform an attack: no, sirree.  Who do you believe?  Your people, who have been laboring[2] away for months, or your competitor?

What a ridiculous question: of course you’d believe your own people over your competitor, right?

So, having set up such an absurd scenario[4], let’s look at one which is actually much more likely.  Your systems have been acting strangely, and there seems to be something going on.  Based on the available information, you believe that you’ve been attacked, but you’re not sure.  Your experts think it’s pretty likely, so you approve an investigation.  And then one of your investigatory team comes to you to tell you some really bad news about the data.  “What’s the problem?” you ask.  “Is there no data?”

“No,” they reply, “it’s worse than that.  We’ve got loads of data, but we don’t know which is real.”

Our logs are our memory

There is a literary trope – one of my favourite examples is Margery Allingham’s Traitor’s Purse – where a character realises that they are somebody other than who they think they are when they start to question their memories.  We are defined by our memories, and the same goes for organisational security.  Our security logs define our organisational memory.  If you cannot prove that the data you are looking at is[5] correct, then you cannot be sure what led to the state you are in now.

Let’s give a quick example.  In my organisation, I am careful to log every time I upgrade a piece of software.  Now suppose that I begin to wonder whether a particular application is behaving as expected – maybe I see some traffic to an external IP address which I’ve never seen before, for instance.  Well, the first thing I’m going to do is to check the logs to see whether somebody has updated the software with a malicious version.  Assuming that somebody has installed a malicious version, there are three likely outcomes at this point:

  1. you check the logs, and it’s clear that a non-authorised version was installed.  This isn’t good, because you now know that somebody has unauthorised access to your system, but at least you can do something about it.
  2. at least some of the logs are missing.  This is less good, because you really can’t be sure what’s gone on, but you now have a pretty strong suspicion that somebody with unauthorised access has deleted some logs to cover up their tracks, which means that you have a starting point for remediation.
  3. there’s nothing wrong.  All the logs look fine.  You’re now really concerned, as you can’t be sure of your own data – your organisation’s memories.  You may be looking at correct data, or you may be looking at incorrect data: data which has been written by your enemy.  Attackers can – and do – go into log files and change the data in them to obscure the fact that they have performed particular actions.  It’s actually one of the first steps that a competent attacker will perform.

In the scenario as defined, things probably aren’t too bad: you can check the checksum or hash of the installed software, and confirm that it’s the version you expect.  But what if the attacker has also changed the version of your checksum- or hash-checker so that, for packages associated with this particular application, it always returns what you expect to see?  This is not a theoretical attack, and nor is it the only approach that attackers have to try to muddy the waters as to what they’ve done.  And if it has happened, then you have no way of confirming that you have the correct version of the application.  You can try updating the checksum- or hash-checker, but what if the attacker has meddled with the software installer to ensure that it always installs their version…?
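To make this concrete, here’s a minimal sketch of the kind of out-of-band check I have in mind, in Python, assuming a hypothetical binary path and a known-good digest recorded at install time.  Remember the caveat above, though: the checking tool itself may have been subverted, so ideally you’d run something like this from a known-clean system.

    import hashlib

    # Hypothetical known-good digest, recorded at install time and kept
    # somewhere the attacker can't rewrite it (ideally off-system).
    KNOWN_GOOD = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

    def sha256_of(path):
        """Return the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical path to the application binary in question.
    if sha256_of("/usr/bin/myapp") != KNOWN_GOOD:
        print("WARNING: installed binary does not match the recorded digest")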

It’s a slippery slope, and bar wiping the entire system and reinstalling[6], there’s little you can do to be certain that you’ve cleared things up properly.  And in some cases, it may be too late: what if the attacker’s main aim was to exfiltrate some of your proprietary data, for example?

Lessons to learn, actions to take

Well, the key lesson is that your logs are important, and they need to be protected.  If you can’t trust your logs, then it can be very, very difficult not only to identify the extent of an attack, but also to remediate it or, in the worst case, even to know that you’ve been attacked at all.

And what can you do?  Well, there are many techniques that you can employ, and the best combination will depend on a number of factors, including your regulatory regime, your security posture, and which attackers you decide to defend against.  I’d start with a fairly simple combination:

  • move your most important logs off-system.  Where possible, host logs on different systems to the ones that are doing the reporting (see the sketch after this list).  If you’re an attacker, it’s more difficult to jump from one system to another than it is to make changes to logs on a system which you’ve already compromised;
  • make your logs write-only.  This sounds crazy – how are you supposed to check logs if they can’t be read?  What this really means is that you separate read and write privileges on your logs so that only those with a need to read them can do so.  Writing is less worrisome – though there are attacks here, including filesystem starvation – because if you can’t see what you need to change, then it’s almost impossible to do so.  If you’re an attacker, you might be able to wipe some logs – see our case 2 above – but obscuring what you’ve actually done is more difficult.
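As promised above, here’s a minimal sketch of the off-system approach, using Python’s standard syslog handler.  The log host name is hypothetical, and in a real deployment you’d want the transport protected – TLS, for instance – and the log host itself hardened.

    import logging
    import logging.handlers

    logger = logging.getLogger("myapp")
    logger.setLevel(logging.INFO)

    # Ship each record over the network to a separate log host, so that an
    # attacker who compromises this system can't quietly rewrite history.
    remote = logging.handlers.SysLogHandler(address=("loghost.example.com", 514))
    logger.addHandler(remote)

    logger.info("software upgrade: myapp 2.3.1 installed")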

Exactly which steps you take is up to you, but remember: if you can’t trust your logs, you can’t trust your data, and if you can’t trust your data, you don’t know what has happened.  That’s what your enemy wants: confusion.


1 – like that’s ever going to happen.

2 – on this, very rare, occasion, I’m going to countenance[3] a US spelling.  I think you can guess why.

3 – contenance?

4 – I know, I know.

5 – are?

6 – you checked the firmware, right?  Hmm – maybe safer just to buy completely new hardware.

 

7 steps to security policy greatness

… we’ve got a good chance of doing the right thing, and keeping the auditors happy.

Security policies are something that everybody knows they should have, but which are deceptively easy to get wrong.  A set of simple steps to check when you’re implementing them is something that I think it’s important to share.  You know when you come up with a set of steps, and you suddenly realise that you’ve got a couple of vowels in there and you think, “wow, I can make an acronym!”?[1] This was one of those times.  The problem was that when I looked at the different steps, I decided that DDEAVMM doesn’t actually have much of a ring to it.  I’ve clearly still got work to do before I can name major public sector projects, for instance[2].  However, I still think it’s worth sharing, so let’s go through them in order.  Order, as for many sets of steps, is important here.

I’m going to give an example and walk through the steps for clarity.  Let’s say that our CISO, in his or her infinite wisdom, has decided that they don’t want anybody outside our network to be able to access our corporate website via port 80.  This is the policy that we need to implement.

1. Define

The first thing I need is a useful definition.  We nearly have that from the request[3] noted above, when our CISO said “I don’t want anybody outside our network to be able to access our corporate website via port 80”.  So, let’s make that into a slightly more useful definition.

“Access to the host mycorporate.network.com on port 80 must be blocked to all hosts outside the 10.0.x.x to 10.3.x.x network range.”

I’m assuming that we already know that our main internal network is within 10.0.x.x to 10.3.x.x.  It’s not exactly a machine-readable definition, but actually, that’s the point: we’re looking for a definition which is clear and human-understandable.  Let’s assume that we’re happy with this, at least for now.

2. Design

Next, we need to design a way to implement this.  There are lots of ways of doing this – from iptables[4] to a full-blown, commercially supported firewall[6] – and I’m not fluent in any of them these days, so I’m going to assume that somebody who is better equipped than I has created a rule or set of rules to implement the policy defined in step 1.

We also need to define some tests – we’ll come back to these in step 5.

3. Evaluate

But we need to check.  What if they’ve mis-implemented it?  What if they misunderstood the design requirement?  It’s good practice – in fact, it’s vital – to do some evaluation of the design to ensure it’s correct.  For this rather simple example, it should be pretty easy to check by eye, but we might want to set up a test environment to evaluate that it meets the policy definition, or take other steps to evaluate its correctness.  And don’t forget: we’re not checking that the design does what the person/team writing it thinks it should do: we’re checking that it meets the definition.  It’s quite possible that at this point we’ll realise that the definition was incorrect.  Maybe there’s another subnet – 10.5.x.x, for instance – that the security policy designer didn’t know about.  Or maybe the initial request wasn’t sufficiently robust, and our CISO actually wanted to block all access on any port other than 443 (for HTTPS), which should be allowed.  Now is a very good time to find that out.

4. Apply

We’ve ascertained that the design does what it should do – although we may have iterated a couple of times on exactly what “what it should do” means – so now we can implement it.  Whether that’s ssh-ing into a system, uploading a script, using some sort of GUI or physically installing a new box, it’s done.

Excellent: we’re finished, right?

No, we’re not.  The problem is that often, that’s exactly what people think.  Let’s move to our next step: arguably the most important, and the most often forgotten or ignored.

5. Validate

I really care about this one: it’s arguably the point of this post.  Once you’ve implemented a security policy, you need to validate that it actually does what it’s supposed to do.  I’ve written before about this, in my post If it isn’t tested, it doesn’t work, and it’s absolutely true of security policy implementations.  You need to check all of the different parts of the design.  This, you might hope, would be really easy, but even in the simple case that we’re working with, there are lots of tests you should be doing.  Let’s say that we took the two changes mentioned in step 3.  Here are some tests that I would want to be doing, with the expected result:

  • FAIL: access port 80 from an external IP address
  • FAIL: access port 8080 from an external IP address
  • FAIL: access port 80 from a 10.4.x.x address
  • PASS: access port 443 from an external IP address
  • UNDEFINED: access port 80 from a 10.5.x.x address
  • UNDEFINED: access port 80 from a 10.0.x.x address
  • UNDEFINED: access port 443 from a 10.0.x.x address
  • UNDEFINED: access port 80 from an external IP address but with a VPN connection into 10.0.x.x

Of course, we’d want to be performing these tests on a number of other ports, and from a variety of IP addresses, too.

What’s really interesting about the list is the number of expected results that are “UNDEFINED”.  Unless we have a specific requirement, we just can’t be sure what’s expected.  We can guess, but we can’t be sure.  Maybe we don’t care?  I particularly like the last one, because the result we get may lead us much deeper into our IT deployment than we might expect.

The point, however, is that we need to check that the actual results meet our expectations, and maybe even define some new requirements if we want to remove some of the “UNDEFINED”s.  We may be fine to leave some of the expected results as “UNDEFINED”, particularly if they’re out of scope for our work or our role.  Obviously, if the results don’t meet our expectations, then we also need to make some changes, apply them and then re-validate.  We also need to record the final results.
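Tests like these cry out for automation.  Here’s a minimal sketch of a harness for the externally-run cases in the list above, using nothing but Python’s socket module.  The target host comes from our definition, and arranging the different vantage points – external, 10.4.x.x, VPN and so on – is assumed to happen outside the script.

    import socket

    TARGET = "mycorporate.network.com"

    # (port, expected) pairs for tests run from an external address;
    # "FAIL" means the connection should be refused or time out.
    TESTS = [(80, "FAIL"), (8080, "FAIL"), (443, "PASS")]

    def can_connect(host, port, timeout=5.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for port, expected in TESTS:
        actual = "PASS" if can_connect(TARGET, port) else "FAIL"
        marker = "OK" if actual == expected else "MISMATCH"
        print(f"[{marker}] port {port}: expected {expected}, got {actual}")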

When we’ve got more complex security policy – multiple authentication checks, or complex SDN[7] routing rules – then our tests are likely to be much, much more complex.

6. Monitor

We’re still not done.  Remember those results we got in our previous tests?  Well, we need to monitor our system and see if there’s any change.  We should do this on a fairly frequent basis.  I’m not going to say “regular”, because regular practices can lead to sloppiness, and also leave windows of opportunity open to attackers.  We also need to perform checks whenever we make a change to a connected system.  Oh, if I had a dollar for every time I’ve heard “oh, this won’t affect system X at all.”…[8]

One interesting point is that we should also note when results whose expected value remains “UNDEFINED” change.  This may be a sign that something in our system has changed, it may be a sign that a legitimate change has been made in a connected system, or it may be a sign of some sort of attack.  It may not be quite as important as a change in one of our expected “PASS” or “FAIL” results, but it certainly merits further investigation.

7. Mitigate

Things will go wrong.  Some of them will be our fault, some of them will be our colleagues’ fault[10], some of them will be accidental, or due to hardware failure, and some will be due to attacks.  In all of these cases, we need to act to mitigate the failure.  We are in charge of this policy, so even if the failure is out of our control, we want to make sure that mitigating mechanisms are within our control.  And once we’ve completed the mitigation, we’re going to have to go back at least to step 2 and redesign our implementation.  We might even need to go back to step 1 and redefine what our definition should be.

Final steps

There are many other points to consider – one of the most important is the question of responsibility, touched on in step 7 (and which is particularly important during holiday seasons), along with special circumstances and decommissioning – but if we can keep these steps in mind when we’re implementing – and running – security policies, we’ve got a good chance of doing the right thing, and keeping the auditors happy, which is always worthwhile.


1 – I’m really sure it’s not just me.  Ask your colleagues.

2 – although, back when Father Ted was on TV, a colleague of mine and I came up with a database result which we named a “Fully Evaluated Query”.  It made us happy, at least.

3 – if it’s from the CISO, and he/she is my boss, then it’s not a request, it’s an order, but you get the point.

4 – which would be my first port[5] of call, but might not be the appropriate approach in this context.

5 – sorry, that was unintentional.

6 – just because it’s commercially supported doesn’t mean it has to be proprietary: open source is your friend, boys and girls, ladies and gentlemen.

7 – Software-Defined Networking.

8 – I’m going to leave you hanging, other than to say that, at current exchange rates, and assuming it was US dollars I was collecting, then I’d have almost exactly 3/4 of that amount in British pounds[9].

9 – other currencies are available, but please note that I’m not currently accepting bitcoin.

10 – one of the best types.

Happy 4th of July – from Europe

If I were going to launch a cyberattack on the US, I would do it on the 4th of July.

There’s a piece of received wisdom from the years of the Cold War that if the Russians[1] had ever really wanted to start a land war in Europe, they would have done it on Christmas Day, when all of the US soldiers in Germany were partying and therefore unprepared for an attack.  I’m not sure if this is actually fair – I’m sure that US commanders had considered this eventuality – but it makes for a good story.

If I were going to launch a cyberattack on the US, I would do it on the 4th of July.  Now, to be entirely clear, I have no intention of performing any type of attack – cyber or not – on our great ally across the Pond[2]: not today (which is actually the 3rd of July) or tomorrow.  Quite apart from anything else, I’m employed by[5] a US company, and I also need to travel to the US on business quite frequently.  I’d prefer to be able to continue both these activities without undue attention from the relevant security services.

The point, however, is that the 4th of July would be a good time to do it.  How do I know this?  I know it because it’s one of my favourite holidays.  This may sound strange to those of you who follow or regularly read this blog, who will know – from my spelling, grammar and occasional snide humour[6] – that I’m a Brit, live in the UK, and am proud of my Britishness.  The 4th of July is widely held, by residents and citizens of the USA, to be a US holiday, and, specifically, one where they get to cock a snook at the British[7].  But I know, and my European colleagues know – in fact, I suspect that the rest of the world outside the US knows – that if you are employed by, are partners of, or otherwise do business with the US, then the 4th of July is a holiday for you as well.

It’s the day when you don’t get emails.  Or phone calls.  There are no meetings arranged.

It’s the day when you can get some work done.  Sounds a bit odd for a holiday, but that’s what most of us do.

Now, I’m sure that, like the US military in the Cold War, some planning has taken place, and there is a phalanx of poor, benighted sysadmins ready to ssh into servers around the US in order to deal with any attacks that come in and battle with the unseen invaders.  But I wonder if there are enough of them, and I wonder whether the senior sysadmins, the really experienced ones who are most likely to be able to repulse the enemy, haven’t ensured that it’s their junior colleagues who are the ones on duty so that they – the senior ones – can get down to some serious barbecuing and craft beer consumption[8].  And I wonder what the chances are of getting hold of the CISO or CTO when urgent action is required.

I may be being harsh here: maybe everything’s completely under control across all organisations throughout the USA, and nobody will take an extra day or two of holiday this week.  In fact, I suspect that many sensible global organisations – even those based in the US – have ensured that they’ve readied Canadian, Latin American, Asian or European colleagues to deal with any urgent issues that come up.  I really, really hope so.  For now, though, I’m going to keep my head down and hope that the servers I need to get all that work done on my favourite holiday stay up and responsive.

Oh, and roll on Thanksgiving.


1 – I suppose it should really be “the Soviet Union”, but it was also “the Russians”: go figure.

2 – the Atlantic Ocean – this is British litotes[3].

3 – which is, like, a million times better than hyperbole[4].

4 – look them up.

5 – saying “I work for” sets such a dangerous precedent, don’t you think?

6 – litotes again.

7 – they probably don’t cock a snook, actually, as that’s quite a British phrase.

8 – I’m assuming UNIX or Linux sysadmins: therefore most likely bearded, and most likely craft beer drinkers.  Your stereotypes may vary.

 

Jargon – a force for good or ill?

Distinguish an electrical engineer from a humanities graduate: ask them how many syllables are in the word “coax”.

I was involved in an interesting discussion with colleagues recently about the joys or otherwise of jargon.  It stemmed from a section I wrote in a recent article, How to talk to security people: a guide for the rest of us.  In it, I suggested that jargon “has at least two uses:

  1. as an exclusionary mechanism for groups to keep non-members in the dark;
  2. as a short-hand to exchange information between ‘in-the-know’ people so that they don’t need to explain everything in exhaustive detail every time.”

Given the discussion that arose, I thought it was worth delving more deeply into this question.  It’s more than an idle interest, as well, as I think that there are important lessons around our use of jargon that impact how we interact with our colleagues and peers, and which deserve some careful thought.  These lessons apply particularly to my chosen field of security.

Before we start, there should be some definition.  It’s always nice to have two conflicting versions, so here we go:

  1. Special words or expressions used by a profession or group that are difficult for others to understand. (Oxford dictionaries – https://en.oxforddictionaries.com/definition/jargon)
  2. Without qualifier, denotes informal ‘slangy’ language peculiar to or predominantly found among hackers. (The Jargon File – http://catb.org/jargon/html/distinctions.html)

I should start by pointing out that the Jargon File (which was published in paper form in at least two versions as The Hacker’s Dictionary (ed. Steele) and the New Hacker’s Dictionary (ed. Raymond)) has a pretty special place in my heart.  When I decided that I wanted to “take up” geekery properly[1][2], I read my copy of the New Hacker’s Dictionary from cover to cover, several times, and when a new edition came out, I bought that and did the same.  In fact, for many of the more technical readers of this blog, I suspect that a fair amount of your cultural background is expressed within the covers (paper or virtual) of the same, even if you’re not aware of it.  If you’re interested in delving deeper and like the feel of paper in your hands, then I encourage you to purchase a copy, but be careful to get the right one: there are some expensive versions which seem just to be print-outs of the Jargon File, rather than properly typeset and edited versions[4].

But let’s get onto the meat of this article: is jargon a force for good or ill?

The case against jargon – ambiguity

You would think that jargon would serve to provide agreed terms within a particular discipline, and help reduce ambiguity across contexts.  It may be a surprise, then, that the first problem that we often run into with jargon is name-space clashes.  Consider the following.  There’s an old joke about how to distinguish an electrical engineer from a humanities[3] graduate: ask them how many syllables are in the word “coax”.  The point here, of course, is that they come from different disciplines.  But there are lots of words – and particularly abbreviations – which have different meanings or expansions depending on context, and where disciplines and contexts may collide.  What do these mean to you[5]?

  • scheduling (kernel level CPU allocation to processes OR placement of workloads by an orchestration component)
  • comms (I/O in a computer system OR marketing/analyst communications)
  • layer (OSI model OR IP suite layer OR other architectural abstraction layer such as host or workload)
  • SME (subject matter expert OR small/medium enterprise)
  • SMB (small/medium business OR Server Message Block)
  • TLS (Transport Layer Security OR Times Literary Supplement)
  • IP (Internet Protocol OR Intellectual Property OR Intellectual Property as expressed as a silicon component block)
  • FFS (For Further Study OR …[6])

One of the interesting things is that quite a lot of my background is betrayed by the various options that present themselves to me: I wonder how many of the readers of this will have thought of the Times Literary Supplement, for example. I’m also more likely to think of SME as the term relating to organisations, because that’s the favoured form in Europe, whereas I believe that the US tends to SMB.  I’m sure that your experiences will all be different – which rather makes my point for me.

That’s the first problem.  In a context where jargon is often praised as a way of short-cutting lengthy explanations, it can actually be a significant source of ambiguity.

The case against jargon – exclusion

Intentionally or not – and sometimes it is intentional – groups define themselves through use of specific terminology.  Once this terminology becomes opaque to those outside the group, it becomes “jargon”, as per our first definition above.  “Good” use of jargon generally allows those within the group to converse using shared context around concepts that do not need to be explained in detail every time they are used.  An example would be a “smoke test” – a quick test to check that basic functionality is performing correctly (see the Jargon File’s definition for more).  If everyone within the group understands what this means, then why go into more detail?  But if you are joined at a stand-up meeting[7] by a member of marketing who wants to know whether a particular build is ready for release, and you say “well, no – it’s only been smoke-tested so far”, then it’s likely that you’ll need to explain.

Another enduring and complex piece of jargon is the use of “free” in relation to software.  In fact, the term is so ambiguous that different terms have evolved to describe some variants – “open source”, “FOSS” – and even phrases such as “free as in speech, not as in beer”.

The problem is that there are occasions when jargon can exclude others, whether that usage is intended or not.  There have been times for most of us, I’m sure, when we want to show that we’re part of a group and so use terms that we know another person won’t understand.  On other occasions, the term may be so ingrained in our practice that we use it without thinking, and the other person is unintentionally excluded.  I would argue that we need to be careful to avoid both of these usages.  Intentional exclusion is rarely helpful, but unintentional exclusion can be just as damaging – in some ways more so, as it is typically unremarked and is therefore difficult to remedy.

The security world seems particularly prone to this behaviour, I feel.  It may be that many security folks have a sound enough grounding in other spheres – especially those in which they’re working with others – that the overlap in the other direction is less noticeable.  The other issue is that there’s a certain elitism among security folks which is both laudable – you want the best people, with the best training, to be leading the field – and lamentable – elitism tends to lead to exclusion of members of other groups.

A quick paragraph in praise of jargon

It’s not all bad, however.  We need jargon, as I noted above, to enable us to discuss concepts, and the use of terms in normal language – like scheduling – as jargon leads to some interesting metaphors which guide us in our practice[8].  We absolutely need shared practice, and for that we need shared language – and some of that language is bound, over time, to become jargon.  But consider a lexicon, or a FAQ, or other ways to allow your colleagues to participate: be inclusive, not exclusive.


1 – “properly” – really?  Although I’m not sure “improperly” is any better.

2 – oh, remember that I actually studied English Literature and Theology at university, so this was a conscious decision to embrace a rather different culture.

3 – or “Liberal Arts”.

4 – the most recent “real” edition of which I’m aware is Raymond, Eric S., 1996, The New Hacker’s Dictionary, 3rd ed., MIT Press, Cambridge, Mass.

5 – I’ve added the first options that spring to mind when I come across them – I’m aware there are almost certainly others.

6 – believe me, when I saw this abbreviation in a research paper for the first time, I was most confused, and actually had to look it up.

7 – oh, look: jargon…

8 – though metaphors can themselves be constraining as they tend to push us to thinking in a particular way, even if that way isn’t entirely applicable in this context.

Fallback is not your friend – learning from TLS

The problem with standards is that people standardise on them.

People still talk about “SSL encryption”, which is so last century.  SSL – Secure Sockets Layer – was superseded by TLS – Transport Layer Security – in 1999 (which is when the TLS 1.0 specification was published).  TLS 1.1 came out in 2006.  That’s 12 years ago, which is, like, well, ages and ages in Internet time.  The problem with standards, however, is that people standardise on them.  It turns out that TLS v1.0 and v1.1 have some security holes in them, which mean that you shouldn’t use them, but people not only did, but they wrote their specifications and implementations to do so – and to continue doing so.  In fact, as reported by The Register, the IETF is leading a call for TLS v1.0 and v1.1 to be fully deprecated: in other words, to tell people that they definitely, really, absolutely shouldn’t be using them.

Given that TLS v1.2 came out in August 2008, nearly ten years ago, and that TLS v1.3 is edging towards publication now, surely nobody is still using TLS v1.0 or v1.1 anyway, right?

Answer – it doesn’t matter whether they are or not

This sounds a bit odd as an answer, but bear with me.  Let’s say that you have implemented a new website, a new browser, or a client for some TLS-supporting server.  You’re a good and knowledgeable developer[1], or you’re following the designs from your good and knowledgeable architect or the tests from your good and knowledgeable test designer[2].  You are, of course, therefore using the latest version of TLS, which is v1.2.  You’ll be upgrading to v1.3 just as soon as it’s available[3].  You’re safe, right?  None of those nasty vulnerabilities associated with v1.0 or v1.1 can come and bite you, because all of the other clients or servers to which you’ll be connecting will be using v1.2 as well.

But what if they aren’t?  What about legacy clients or old server versions?  Wouldn’t it just be better to allow them to say “You know what?  I don’t speak v1.2, so let’s speak a version that I do support, say v1.0 instead?”

STOP.

This is fallback, and although it seems like the sensible, helpful thing to do, once you support it – and it’s easy, as many TLS implementations allow it – then you’ve got problems.  You’ve just made a decision to sacrifice security for usability – and worse, what you’re actually doing is opening the door for attackers not just to do bad things if they come across legitimate connections, but actually to pretend to be legitimate, force you to downgrade the protocol version to one they can break, and then do bad things.  Of course, those bad things will depend on a variety of factors, including the protocol version, the data being passed, cipher suites in use, etc., but this is a classic example of extending your attack surface when you don’t need to.

The two problems are:

  1. it feels like the right thing to do;
  2. it’s really easy to implement.

The right thing to do?

Ask yourself the question: do I really need to accept connections from old clients or make connections to old servers?  If I do, should I not at least throw an error[4]?  You should consider whether this is really required, or just feels like the right thing to do, and it’s a question that should be considered by the security folks on your project team[5].  Would it not be safer to upgrade the old clients and servers?  In lots of cases, the answer is yes – or at least ensure that you deprecate in one release, and then remove support in the next.

The easy implementation?

According to a survey performed by Qualys SSL Labs in May 2018 of 150,000 popular websites in the world, here are the versions of SSL and TLS that are supported:

  • TLS v1.2 – 92.3%
  • TLS v1.1 – 83.5%
  • TLS v1.0 – 83.4%
  • SSL v3.0 – 10.7%
  • SSL v2.0 – 2.8%

Yes, you’re reading that correctly: over ten percent of the most popular 150,000 websites in the world support SSL v3.0.  You don’t even want to think about that, believe me.  Now, there may be legitimate reasons for some of those websites to support legacy browsers, but you can bet your hat that a good number of them have left support in for some of these versions just because it’s more effort to turn off than it is to leave on.  This is lazy programming, and it’s insecure practice.  Don’t do it.
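Turning fallback off is rarely harder than leaving it on.  As an illustration – a minimal sketch, not a hardened server – here’s how you might set a floor of TLS v1.2 using Python’s standard ssl module (Python 3.7 or later; the certificate paths are hypothetical).

    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)  # negotiate the best shared version...
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # ...but never drop below TLS v1.2
    ctx.load_cert_chain("server.crt", "server.key")  # hypothetical paths

    srv = socket.socket()
    srv.bind(("", 8443))
    srv.listen()
    with ctx.wrap_socket(srv, server_side=True) as tls:
        # Clients offering only SSL v3.0, TLS v1.0 or v1.1 now fail the
        # handshake instead of being silently accommodated.
        conn, addr = tls.accept()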

Conclusion

TLS isn’t the only protocol that supports fallback[8], and there can be good reasons for allowing it.  But when the protocol you’re using is a security protocol, and the new version fixes security problems in the old version(s), then you should really avoid it unless you absolutely have to allow it.


1 – the only type of developer to read this blog.

2 – I think you get the idea.

3 – of course you will.

4 – note – throwing security-based errors that users understand and will act on is notoriously difficult.

5 – if you don’t have any security folks on your design team, find some[6].

6 – or at least one, for pity’s sake[7].

7 – or nominate yourself and do some serious reading…

8 – I remember having to hand-code SOCKS v2.0 to SOCKS v1.0 auto-fallback into a client implementation way back in 1996 or so, because the SOCKS v2.0 protocol didn’t support it.

The gift that keeps on giving: passwords

A Father’s Day special

There’s an old saying: “if you give a man a fish, he’ll eat for a day, but if you teach a man to fish, he’ll eat for a lifetime.”  There are some cruel alternatives with endings like “he’ll buy a silly hat and sit outside in the rain”, but the general idea is that it’s better to teach someone something rather than just giving them something.

With Father’s Day coming up this Sunday in many parts of the world, I’d like to suggest the same for passwords.  Many people’s password practices are terrible.  There are three things that people really don’t get about passwords:

  1. what they should look like
  2. how they should be stored
  3. how they should be communicated.

Let’s go through each of these in turn, and I’ll try to give brief tips that you can pass onto your father (or, indeed, mother, broader family, friends or colleagues) to help them with password safety.

What should passwords look like?

There’s a famous xkcd comic called password strength which aims to help you find a useful password.  This is great advice if you only have a few passwords, but about twenty years ago I got above ten, and then started re-using passwords for certain levels of security.  This was terrible at the time, and even worse now.  Look at the number of times a week we see news about information being lost when companies or organisations are hacked.  If you share passwords between accounts, there’s a decent chance that your login details for one will be exposed, which means that all your other accounts that share that set are compromised.

I know some people who used to have permutations of passwords.  Let’s say the base was “p4ssw0rd”: they would then add a suffix for the website or account, such as “p4ssw0rdNetflix”.  This might be fine if we believed that all passwords are stored in hashed form, but, well, we know they’re not, so don’t do this, either.  Extrapolating from one account to another is too easy.

What does a good password look like, then?  Here’s one: “W9#!=_twXhRb”.  And another?  This one is 16 characters long: “*Wdb_%|#N^X6CR_b”.  What are the chances of a human guessing these?  Pretty slim.  And a computer?  Not much better, to be honest.  They are randomly generated by software, and as long as I use a different one for each account, I’m pretty safe against password-guessing attacks.
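If you’re wondering where passwords like those come from, here’s a minimal sketch using Python’s secrets module, which is designed for precisely this sort of cryptographic randomness.  In practice, of course, your password manager – see below – will do this for you.

    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length=16):
        """Return a password of the given length, chosen uniformly at random."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(generate_password(12))  # twelve characters, like the first example above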

“But,” you say, “how am I supposed to remember them?  I’ve got dozens of accounts, and I can’t remember one of those, let alone fifty!”

How should you store passwords?

Well, you shouldn’t try to remember passwords, in the same way that you shouldn’t try to generate them.  Oh, there will be a handful that you might remember – maybe low-importance ones like the wifi key to your home AP – but most of them you should trust to a password manager.  These are nifty pieces of software that will generate and then remember hundreds of passwords for you.  Some of them will even automatically fill website fields for you if you ask them to.  The best ones are open source, which means that people have pored over their code (hopefully) to check they’re trustworthy, and that if you’re not entirely sure, then you can pore over their code, too.  And make changes and improvements and generally improve the world bit by bit.

You will need to remember one password, though, and that’s the one to unlock the password manager.  Make it really, really strong: it’s about the only one you mustn’t lose (though most websites will help you reset a password if you forget it, so it’s just a matter of going through each of the several hundred until they’re done…).  Use the advice from the xkcd cartoon, or another strong password algorithm that’s easy to remember.
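For that one master password, here’s a minimal sketch of an xkcd-style passphrase generator, assuming a word list at a hypothetical path – any large list will do, and the more words you pick, the stronger the result.

    import secrets

    # Hypothetical word list location; common on Unix-like systems.
    with open("/usr/share/dict/words") as f:
        words = [w.strip() for w in f if w.strip().isalpha()]

    # Four words chosen at random - "correct horse battery staple" style.
    print(" ".join(secrets.choice(words) for _ in range(4)))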

To make things safer, store the (password-protected) key store somewhere that is not easily accessed by other people – not a shared drive at work, for instance, but maybe on your phone or on some cloud-based storage that you can get to if you lose your phone.  Always set the password manager to auto-lock itself after some time, in case you leave your computer logged on, or your phone gets stolen.

How to communicate passwords

Would you send a password via email?  What about by SMS?  Is post[2] better?  Is it acceptable to reveal a password over the phone in a crowded train carriage[4]?  Would you give your laptop password to a random person conducting a survey on a railway station for the prize of a chocolate bar?

In an ideal world, we would never share passwords, but there are times when we need to – and times when it’s worthwhile for material rewards[5].  There are some accounts which are shared – TV or film streaming accounts for the family – or that you’ve set up for somebody else, or which somebody urgently needs to access because you’re on holiday, for instance.  So you may need to give out passwords from time to time.  What’s the best mechanism?  What’s the worst?

This may sound surprising, but I’d generally say that the worst (marginally) is post.  What you’re trying to avoid is a Bad Person[tm] marrying two pieces of information: the username and the password.  If someone has access to your post, then there’s a good chance that they might be able to work out enough information about you that they can guess the account name.  The others?  Well, they’re OK as long as you’re not also sending the username via the same channel.  That, in fact, is the key test: you should never provide the two pieces of information in such a way that a person with access to one channel can put them together.  So, telling someone a password in a crowded train carriage may be rude in relation to all of the other people in the carriage[6], but it may be very secure in terms of account safety.

The reason I posed the question about the survey is that every few months a survey company in the UK asks people at mainline railway stations to tell them their password in exchange for a chocolate bar, and then write a headline about how awful it is that many people will give them their password.  This is a stupid headline, for a stupid survey, for two reasons:

  1. I’d happily lie and tell them a false password in order to get a free chocolate bar AND
  2. even if I gave them the correct password, how are they going to marry that with my account details?

Conclusion

If you’re the sort of person reading this blog, there’s a fairly high chance that you’re the sort of person who’s asked to clear up the mess when family, friends or colleagues get their accounts compromised[7].  Here are four rules for password security:

  1. don’t reuse passwords – use a different one for every single account
  2. don’t think up your own passwords – get a password manager to generate them for you
  3. use a password manager to store your passwords – if they’re strong enough in the first place, you won’t be able to remember them
  4. never send usernames and passwords over the same channel – you want to avoid the situation where an attacker has access to both and can use them.

I’ll add a fifth one for luck: feel free to use underhand tactics to get chocolate bars from people performing poorly-designed surveys on railway stations.


1 – I thought about changing the order, as they do impact on each other, but it made my head hurt, so I stopped.

2 – note for younger readers: there used to be something called “snail mail”.  It’s nearly dead[3].

3 – unless you forget to turn on “electronic statements” for your bank account.  Then you’ll get loads of it.

4 – whatever the answer to this is from a security point of view, the correct answer is “no”, because a) you’re going to annoy me by shouting it repeatedly down the phone because reception is so bad on the train that the recipient can’t hear it and b) because reception is so bad on the train that the recipient can’t hear it (see b)).

5 – I like chocolate.

6 – I’m not a big fan of phone conversations in railway carriages, to be honest.

7 – Or you’ve been sent a link to this because you are one of those family, friends or colleagues, and the person who sent you the link is sick and tired of doing all of your IT dirty work for you.

In praise of the CIA

CIA is not sufficient to ensure security within a system.

In the wake of the widespread failure of the Visa processing network on Friday last week [1] (see The Register for more details), I thought it might be time to revisit that useful aide memoire, C.I.A.:

  • Confidentiality
  • Integrity
  • Availability.

This isn’t the first time I’ve written about this trio, and I doubt that it’ll be the last.  However, this particular incident seems like a perfect example to examine the least-regarded of the three – availability – and also to cogitate somewhat on how the CIA is necessary, but not sufficient[3].

Availability

As far as we can tell, the problem with the Visa payment system came down to a hardware failure.  As someone who used to work as a software engineer, I can tell you that this is by far the best type of failure, because there’s very little you can do about it once you’ve diagnosed it[6], which means that it quickly becomes SEP[7].  Be that as it may, the result of this hardware problem was that a large percentage of the network was unable to access Visa processing capabilities correctly.  Though ATMs[8], it seems, generally worked, payments using card readers generally didn’t.

How is this a security problem?  Well, one way to answer that question is to say that if security is about reducing risk to your business, then as this caused significant damage to Visa’s revenue stream – not to mention its reputation – the risk materialised, and there was a security failure.  I would be interested to know, however, how many organisations have their security teams in charge of ensuring up-time and availability of their systems in terms of guarding against vulnerabilities such as hardware failures.  My suspicion is that the scope of availability-safeguarding by security teams generally extends only to managing denial of service or other malicious attacks.

I would argue that more organisations should consider this part of the security team’s mandate, to be honest, because the impacts are very similar, and many of the mitigations will be the same.  Of course, if you’re already an integrated Ops team – or even moving to a DevOps or DevSecOps model – then well done you: I’m sure you’re 100% safe from anything similar befalling you[10].

Consistency and correctness

As I mentioned above, there’s a criticism often levelled at the CIA triad: that confidentiality, integrity and availability are not, on their own, sufficient to design and run a system.

The Visa incident is a perfect example of why this is the case.  It appears that the outage was not complete: even at card readers, some amount of information was going through when a transaction was attempted.  This meant that for some (attempted) transactions, at least, debits were appearing on accounts even when they were not being recorded as credits on the vendor’s side.  What does this mean?  In simple terms, money was coming out of people’s accounts, but not going to the people they were trying to pay.  I’m not an expert on retail banking, but I believe that this is pretty much the opposite of how a financial transaction is supposed to work.

You can’t really blame this on a lack of confidentiality or a lack of integrity.  Nor is it really to do with a lack of availability – it may have been a side effect of the same cause as the availability failures, but that doesn’t mean that they caused it[11].

These problems can be characterised in two ways: as a lack of consistency and/or a lack of correctness.  In a system, data should be consistent across the system, so when a debit shows up with no corresponding credit, there is a failure of consistency.  This lack of consistency highlights a lack of correctness: in fact, the very point of double-entry book-keeping is to allow these sorts of errors to be spotted.
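The book-keeping check itself is trivially simple, which is rather the point.  Here’s a minimal sketch, with illustrative data: every transaction’s entries must sum to zero, so a debit whose credit never arrived – as in the Visa incident – is spotted immediately.

    # Each transaction is a set of (account, amount) entries which must
    # balance: debits are negative, credits positive.
    transactions = [
        {"id": "tx1", "entries": [("alice", -10), ("coffee_shop", +10)]},
        {"id": "tx2", "entries": [("bob", -25)]},  # debit with no matching credit
    ]

    for tx in transactions:
        if sum(amount for _, amount in tx["entries"]) != 0:
            print(f"consistency failure in {tx['id']}: entries do not balance")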

What this tells us is not only that CIA is not sufficient to ensure security within a system but also that there exist other mechanisms – some very ancient – that allow us to manage our systems and to mitigate failures.


1 – at time of writing.  If you’re reading this after, say, the 10th or 11th of June 2018, then it was longer ago than that[2].

2 – unless there’s been another outage, in which case it may be time to start taking out cash and stuffing it into your mattress.

3 – in no sense is this a comment on the Central Intelligence Agency.  I am unqualified to discuss that particular, august[4] body.  Nor would I consider it in my best interests to do so[5].

4 – it was actually founded in the month of September, according to the Interwebs.

5 – or the best interests of my readers.

6 – which can, admittedly, take quite a long time, as you’re probably looking in the wrong places if you, like me, generally assume that the bug is in your own code.

7 – Somebody Else’s Problem (hat tip to the late, great Douglas Adams).

8 – “cash points” or “holes in the wall” (UK)[9].

9 – yes, we really do call them this: “I need to get some money from the hole in the wall”.  It’s descriptive and accurate: what more do you want?

10 – no, I know you’re not, and you know you’re not, but this will make everybody else feel that little bit more nervous, and you can feel a little bit more smug, which is always nice, isn’t it?

11 – cue a link to one of my most favourite comics of all time: https://xkcd.com/552/.