Fallback is not your friend – learning from TLS

The problem with standards is that people standardise on them.

People still talk about “SSL encryption”, which is so last century.  SSL – Secure Sockets Layer – was superseded by TLS – Transport Layer Security – in 1999 (which is when the TLS 1.0 specification was published).  TLS 1.1 came out in 2006.  That’s 12 years ago, which is, like, well, ages and ages in Internet time.  The problem with standards, however, is that people standardise on them.  It turns out that TLS v1.0 and v1.1 have security holes in them, which means that you shouldn’t use them – but people not only did, they wrote their specifications and implementations to do so, and to continue doing so.  In fact, as reported by The Register, the IETF is leading a call for TLS v1.0 and v1.1 to be fully deprecated: in other words, to tell people that they definitely, really, absolutely shouldn’t be using them.

Given that TLS v1.2 came out in August 2008, nearly ten years ago, and that TLS v1.3 is edging towards publication now, surely nobody is still using TLS v1.0 or v1.1 anyway, right?

Answer – it doesn’t matter whether they are or not

This sounds a bit odd as an answer, but bear with me.  Let’s say that you have implemented a new website, a new browser, or a client for some TLS-supporting server.  You’re a good and knowledgeable developer[1], or you’re following the designs from your good and knowledgeable architect or the tests from your good and knowledgeable test designer[2].  You are, of course, therefore using the latest version of TLS, which is v1.2.  You’ll be upgrading to v1.3 just as soon as it’s available[3].  You’re safe, right?  None of those nasty vulnerabilities associated with v1.0 or v1.1 can come and bite you, because all of the other clients or servers to which you’ll be connecting will be using v1.2 as well.

But what if they aren’t?  What about legacy clients or old server versions?  Wouldn’t it just be better to allow them to say “You know what?  I don’t speak v1.2, so let’s speak a version that I do support, say v1.0 instead?”

STOP.

This is fallback, and although it seems like the sensible, helpful thing to do, once you support it – and it’s easy, as many TLS implementations allow it – then you’ve got problems.  You’ve just made a decision to sacrifice security for usability – and worse, what you’re actually doing is opening the door for attackers not just to do bad things if they come across legitimate connections, but actually to pretend to be legitimate, force you to downgrade the protocol version to one they can break, and then do bad things.  Of course, those bad things will depend on a variety of factors, including the protocol version, the data being passed, cipher suites in use, etc., but this is a classic example of extending your attack surface when you don’t need to.

The two problems are:

  1. it feels like the right thing to do;
  2. it’s really easy to implement.

The right thing to do?

Ask yourself the question: do I really need to accept connections from old clients or make connections to old servers?  If I do, should I not at least throw an error[4]?  You should consider whether this is really required, or just feels like the right thing to do, and it’s a question that should be considered by the security folks on your project team[5].  Would it not be safer to upgrade the old clients and servers?  In lots of cases, the answer is yes – or at least to ensure that you deprecate support in one release, and then remove it in the next.  If you do decide to support only the newer versions, refusing fallback can be as simple as setting a minimum protocol version, as sketched below.
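As a concrete illustration – a minimal sketch only, assuming a recent Python (3.8+) and its standard ssl module, with hypothetical certificate paths and an arbitrary port – here is a server that simply refuses to negotiate anything below TLS v1.2 rather than falling back:

    import socket
    import ssl

    # Build a server-side TLS context that refuses to fall back below TLS v1.2,
    # rather than silently downgrading for legacy clients.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # no v1.0/v1.1 fallback
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")  # hypothetical paths

    with socket.create_server(("0.0.0.0", 8443)) as sock:
        with context.wrap_socket(sock, server_side=True) as tls_sock:
            conn, addr = tls_sock.accept()  # handshake fails here for v1.0/v1.1-only clients
            print("Negotiated:", conn.version())

Clients offering only the older versions fail the handshake with a clear error, rather than being quietly accommodated.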

The easy implementation?

According to a survey performed by Qualys SSL Labs in May 2018, covering 150,000 of the most popular websites in the world, here is the breakdown of protocol versions supported:

  • TLS v1.2 – 92.3%
  • TLS v1.1 – 83.5%
  • TLS v1.0 – 83.4%
  • SSL v3.0 – 10.7%
  • SSL v2.0 – 2.8%

Yes, you’re reading that correctly: over ten percent of the 150,000 most popular websites in the world support SSL v3.0.  You don’t even want to think about that, believe me.  Now, there may be legitimate reasons for some of those websites to support legacy browsers, but you can bet your hat that a good number of them have left support in for some of these versions simply because it’s more effort to turn it off than it is to leave it on.  This is lazy programming, and it’s insecure practice.  Don’t do it.

Conclusion

TLS isn’t the only protocol that supports fallback[8], and there can be good reasons for allowing it.  But when the protocol you’re using is a security protocol, and the new version fixes security problems in the old version(s), then you should really avoid it unless you absolutely have to allow it.


1 – the only type of developer to read this blog.

2 – I think you get the idea.

3 – of course you will.

4 – note – throwing security-based errors that users understand and will act on is notoriously difficult.

5 – if you don’t have any security folks on your design team, find some[6].

6 – or at least one, for pity’s sake[7].

7 – or nominate yourself and do some serious reading…

8 – I remember having to hand-code SOCKS v2.0 to SOCKS v1.0 auto-fallback into a client implementation way back in 1996 or so, because the SOCKS v2.0 protocol didn’t support it.

The gift that keeps on giving: passwords

A Father’s Day special

There’s an old saying: “if you give a man a fish, he’ll eat for a day, but if you teach a man to fish, he’ll eat for a lifetime.”  There are some cruel alternatives with endings like “he’ll buy a silly hat and sit outside in the rain”, but the general idea is that it’s better to teach someone something rather than just giving them something.

With Father’s Day coming up this Sunday in many parts of the world, I’d like to suggest the same for passwords.  Many people’s password practices are terrible.  There are three things that people really don’t get about passwords:

  1. what they should look like
  2. how they should be stored
  3. how they should be communicated[1].

Let’s go through each of these in turn, and I’ll try to give brief tips that you can pass on to your father (or, indeed, mother, broader family, friends or colleagues) to help them with password safety.

What should passwords look like?

There’s a famous xkcd comic called password strength which aims to help you choose a useful password.  This is great advice if you only have a few passwords, but about twenty years ago the number of accounts I held rose above ten, and I started re-using passwords for certain levels of security.  This was terrible practice at the time, and is even worse now.  Look at the number of times a week we see news about information being lost when companies or organisations are hacked.  If you share passwords between accounts, there’s a decent chance that your login details for one will be exposed, which means that all your other accounts sharing that password are compromised.

I know some people who used to have permutations of passwords.  Let’s say the base was “p4ssw0rd”: they would then add a suffix for the website or account, such as “p4ssw0rdNetflix”.  This might be fine if we believed that all passwords are stored in hashed form, but, well, we know they’re not, so don’t do this, either.  Extrapolating from one account to another is too easy.
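For what it’s worth, “stored in hashed form” means something like the following – a minimal sketch, assuming Python’s standard hashlib and secrets modules, with function names of my own invention – where a leaked database reveals neither the password nor any handy suffix pattern:

    import hashlib
    import secrets

    # How a site *should* store a password: salted and hashed with a
    # deliberately slow function (scrypt), so a database leak doesn't
    # reveal the plaintext or any guessable pattern.
    def store_password(password):
        salt = secrets.token_bytes(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def check_password(password, salt, digest):
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return secrets.compare_digest(candidate, digest)

The trouble is that you have no way of knowing whether any given site does this, so the safe assumption is that at least some of them don’t.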

What does a good password look like, then?  Here’s one: “W9#!=_twXhRb”.  And another?  This one is 16 characters long: “*Wdb_%|#N^X6CR_b”.  What are the chances of a human guessing these?  Pretty slim.  And a computer?  Not much better, to be honest.  They are randomly generated by software, and as long as I use a different one for each account, I’m pretty safe against password-guessing attacks.
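Generating them is straightforward – here’s a minimal sketch using Python’s standard secrets module (the function name is mine):

    import secrets
    import string

    # Pick characters uniformly at random from a large alphabet, using a
    # cryptographically secure random source.
    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length=16):
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(generate_password())  # something along the lines of the examples above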

“But,” you say, “how am I supposed to remember them?  I’ve got dozens of accounts, and I can’t remember one of those, let alone fifty!”

How should you store passwords?

Well, you shouldn’t try to remember passwords, in the same way that you shouldn’t try to generate them.  Oh, there will be a handful that you might remember – maybe low-importance ones like the wifi key to your home AP – but most of them you should trust to a password manager.  These are nifty pieces of software that will generate and then remember hundreds of passwords for you.  Some of them will even automatically fill website fields for you if you ask them to.  The best ones are open source, which means that people have pored over their code (hopefully) to check they’re trustworthy, and that if you’re not entirely sure, then you can pore over their code, too.  And make changes and improvements and generally improve the world bit by bit.

You will need to remember one password, though, and that’s the one to unlock the password manager.  Make it really, really strong: it’s about the only one you mustn’t lose (though most websites will help you reset a password if you forget it, so it’s just a matter of going through each of the several hundred until they’re done…).  Use the advice from the xkcd cartoon, or another strong password algorithm that’s easy to remember.
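If you like the xkcd approach for that master password, the same secrets module will do the job – a sketch only, with a tiny stand-in word list that you’d replace with a real one of several thousand words (a diceware list, say):

    import secrets

    # xkcd-style passphrase: several words chosen at random from a large list.
    # WORDS is a stand-in here; strength depends on using a long list.
    WORDS = ["correct", "horse", "battery", "staple", "orange", "kettle", "maple", "quartz"]

    def passphrase(count=4):
        return " ".join(secrets.choice(WORDS) for _ in range(count))

    print(passphrase())  # e.g. "maple orange staple kettle"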

To make things safer, store the (password-protected) key store somewhere that is not easily accessed by other people – not a shared drive at work, for instance, but maybe on your phone or on some cloud-based storage that you can get to if you lose your phone.  Always set the password manager to auto-lock itself after some time, in case you leave your computer logged on, or your phone gets stolen.

How to communicate passwords

Would you send a password via email?  What about by SMS?  Is post[2] better?  Is it acceptable to reveal a password over the phone in a crowded train carriage[4]?  Would you give your laptop password to a random person conducting a survey on a railway station for the prize of a chocolate bar?

In an ideal world, we would never share passwords, but there are times when we need to – and times when it’s worthwhile for material rewards[5].  There are some accounts which are shared – TV or film streaming accounts for the family – or that you’ve set up for somebody else, or which somebody urgently needs to access because you’re on holiday, for instance.  So you may need to give out passwords from time to time.  What’s the best mechanism?  What’s the worst?

This may sound surprising, but I’d generally say that the worst (marginally) is post.  What you’re trying to avoid is a Bad Person[tm] marrying two pieces of information: the username and the password.  If someone has access to your post, then there’s a good chance that they might be able to work out enough information about you that they can guess the account name.  The others?  Well, they’re OK as long as you’re not also sending the username via the same channel.  That, in fact, is the key test: you should never provide the two pieces of information in such a way that a person with access to one channel can put them together.  So, telling someone a password in a crowded train carriage may be rude in relation to all of the other people in the carriage[6], but it may be very secure in terms of account safety.

The reason I posed the question about the survey is that every few months a survey company in the UK asks people at mainline railway stations to tell them their password in exchange for a chocolate bar, and then writes a headline about how awful it is that many people will give them their password.  This is a stupid headline, for a stupid survey, for two reasons:

  1. I’d happily lie and tell them a false password in order to get a free chocolate bar AND
  2. even if I gave them the correct password, how are they going to marry that with my account details?

Conclusion

If you’re the sort of person reading this blog, there’s a fairly high chance that you’re the sort of person who’s asked to clear up the mess when family, friends or colleagues get their accounts compromised[7].  Here are four rules for password security:

  1. don’t reuse passwords – use a different one for every single account
  2. don’t think up your own passwords – get a password manager to generate them for you
  3. use a password manager to store your passwords – if they’re strong enough in the first place, you won’t be able to remember them
  4. never send usernames and passwords over the same channel – you want to avoid the situation where an attacker has access to both and can use them.

I’ll add a fifth one for luck: feel free to use underhand tactics to get chocolate bars from people performing poorly-designed surveys on railway stations.


1 – I thought about changing the order, as they do impact on each other, but it made my head hurt, so I stopped.

2 – note for younger readers: there used to be something called “snail mail”.  It’s nearly dead[3].

3 – unless you forget to turn on “electronic statements” for your bank account.  Then you’ll get loads of it.

4 – whatever the answer to this is from a security point of view, the correct answer is “no”, because a) you’re going to annoy me by shouting it repeatedly down the phone because reception is so bad on the train that the recipient can’t hear it and b) because reception is so bad on the train that the recipient can’t hear it (see b)).

5 – I like chocolate.

6 – I’m not a big fan of phone conversations in railway carriages, to be honest.

7 – Or you’ve been sent a link to this because you are one of those family, friends or colleagues, and the person who sent you the link is sick and tired of doing all of your IT dirty work for you.

In praise of the CIA

CIA is not sufficient to ensure security within a system.

In the wake of the widespread failure of the Visa processing network on Friday last week [1] (see The Register for more details), I thought it might be time to revisit that useful aide memoire, C.I.A.:

  • Confidentiality
  • Integrity
  • Availability.

This isn’t the first time I’ve written about this trio, and I doubt that it’ll be the last.  However, this particular incident seems like a perfect example to examine the least-regarded of the three – availability – and also to cogitate somewhat on how the CIA is necessary, but not sufficient[3].

Availability

As far as we can tell, the problem with the Visa payment system came down to a hardware failure.  As someone who used to work as a software engineer, I can tell you that this is by far the best type of failure, because there’s very little you can do about it once you’ve diagnosed it[6], which means that it quickly becomes SEP[7].  Be that as it may, the result of this hardware problem was that a large percentage of the network was unable to access Visa processing capabilities correctly.  Though ATMs[8] generally worked, it seems, payment using card readers generally didn’t.

How is this a security problem?  Well, one way to answer that question is to say that if security is about reducing risk to your business, then, as this incident caused significant damage to Visa’s revenue stream – not to mention its reputation – the risk materialised, and there was a security failure.  I would be interested to know, however, how many organisations have their security teams in charge of ensuring up-time and availability of their systems in terms of guarding against vulnerabilities such as hardware failures.  My suspicion is that the scope of availability-safeguarding by security teams generally extends only to managing denial of service and other malicious attacks.

I would argue that more organisations should consider this part of the security team’s mandate, to be honest, because the impacts are very similar, and many of the mitigations will be the same.  Of course, if you’re already an integrated Ops team – or even moving to a DevOps or DevSecOps model – then well done you: I’m sure you’re 100% safe from anything similar befalling you[10].

Consistency and correctness

As I mentioned above, there’s a criticism which is often levelled at the CIA triad, which is that confidentiality, integrity and availability are not, on their own, sufficient to design and run a system.

The Visa incident is a perfect example of why this is the case.  It appears that the outage was not complete: even at card readers, some amount of information was going through when a transaction was attempted.  This meant that for some (attempted) transactions, at least, debits were appearing on accounts even when they were not being recorded as credits at the vendor’s side.  What does this mean?  In simple terms, money was coming out of people’s accounts, but not going to the people they were trying to pay.  I’m not an expert on retail banking, but I believe that this is pretty much the opposite of how a financial transaction is supposed to work.

You can’t really blame this on a lack of confidentiality or a lack of integrity.  Nor is it really to do with a lack of availability – it may have been a side effect of the same cause as the availability failures, but that doesn’t mean that one caused the other[11].

These problems can be characterised in two ways: as a lack of consistency and/or a lack of correctness.  In a system, data should be consistent across the system, so when a debit shows up with no corresponding credit, there is a failure of consistency.  That lack of consistency points, in turn, to a lack of correctness: in fact, the very point of double-entry book-keeping is to allow these sorts of errors to be spotted.
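To make the double-entry point concrete, here’s a toy sketch (in Python, with invented figures) of the kind of consistency check that book-keeping has provided for centuries:

    # Every transaction should debit one account and credit another by the same
    # amount (in pence, to avoid floating-point trouble), so the whole ledger
    # must always sum to zero.
    ledger = [
        ("customer", -2500),  # debit: money leaves the customer's account
        ("vendor",   +2500),  # matching credit: money reaches the vendor
        ("customer", -1000),  # a debit with no matching credit -- the Visa symptom
    ]

    balance = sum(amount for _account, amount in ledger)
    if balance != 0:
        print(f"Consistency failure: ledger out of balance by {balance} pence")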

What this tells us is not only that CIA is not sufficient to ensure security within a system but also that there exist other mechanisms – some very ancient – that allow us to manage our systems and to mitigate failures.


1 – at time of writing.  If you’re reading this after, say, the 10th or 11th of June 2018, then it was longer ago than that[2].

2 – unless there’s been another outage, in which case it may be time to start taking out cash and stuffing it into your mattress.

3 – in no sense is this a comment on the Central Intelligence Agency.  I am unqualified to discuss that particular, august[4] body.  Nor would I consider it in my best interests to do so[5].

4 – it was actually founded in the month of September, according to the Interwebs.

5 – or the best interests of my readers.

6 – which can, admittedly, take quite a long time, as you’re probably looking in the wrong places if you, like me, generally assume that the bug is in your own code.

7 – Somebody Else’s Problem (hat tip to the late, great Douglas Adams).

8 – “cash points” (US), “holes in the wall” (UK)[9].

9 – yes, we really do call them this: “I need to get some money from the hole in the wall”.  It’s descriptive and accurate: what more do you want?

10 – no, I know you’re not, and you know you’re not, but this will make everybody else feel that little bit more nervous, and you can feel a little bit more smug, which is always nice, isn’t it?

11 – cue a link to one of my most favourite comics of all time: https://xkcd.com/552/.

The most important link: unsubscribe me

No more (semi-)unsolicited emails from that source.

Over the past few days, the much-vaunted[1] GDPR has come into force.  In case you missed this[2], GDPR is a set of rules around managing user data that all organisations holding data about European citizens must follow for those citizens – which basically means that it’s cheaper to apply the same rules across all of your users.

Here’s my favourite GDPR joke[3].

Me: Do you know a good GDPR consultant?

Colleague: Yes.

Me: Can you give me their email address?

Colleague: No.

The fact that this is the best of the jokes out there (there’s another one around Santa checking lists which isn’t that bad either) tells you something about how fascinating the whole subject is.

So I thought that I’d talk about something different today.  I’m sure that over the past few weeks, because of the new GDPR regulations,  you’ve received a flurry[4] of emails that fall into one of two categories:

  1. please click here to let us know what uses we can make of your data (the proactive approach);
  2. we’ve changed our data usage and privacy policy: please check here to review it (the reactive approach).

I’ve come across[5] suggestions that the proactive approach is overkill, and generally not required, but I can see why people are doing it: it’s easier to prove that you’re doing the right thing.  The reactive approach means that it’s quicker just to delete the email, which is at least a kind of win.

What I’ve found interesting, however, is the number of times that I’ve got an email of type 1 from a company, and I’ve thought: “You have my data?  Really?”  It turns out that more companies have information about me than I’d thought[6], and this has allowed me to click through and actually tell them that I want them to delete my data completely, and unsubscribe me from their email lists.  This then led me to thinking, “you know what, although I bought something from this company five years ago, or had an interest in something they were selling, at least, I now have no interest in them at all, or in receiving marketing emails from them,” and then performing the same function: telling them to delete and unsubscribe me.

But it didn’t stop there.  I’ve decided to have a clean out.  Now, when an email comes in from a company, I take a moment to decide whether:

  • I care about them or their product; OR
  • I’m happy for them to have my information in the first place.

If the answer to either of these questions is “no”, then I scroll down.  There, at the bottom of each mail, should be a link which says something like “subscription details” or “unsubscribe me”.  This has, I believe, been a legal requirement in many jurisdictions for quite a few years.  The whole process is quite liberating: I click on the link, and I’m either magically unsubscribed, or sometimes I have to scroll down the page a little to choose the relevant option, and “Bang!”, I’m done.  No more (semi-)unsolicited emails from that source.

I see this as a security issue: the fewer companies that have data about me, the fewer chances of misuse, and the lower the chance of leakage.  One warning, however: phishing.  As I admitted in this blog last week, I got phished recently (I got phished this week: what did I do?), and as more people take to unsubscribing by default, I can see this link actually being used for nefarious purposes, so do be careful before you click on it that it actually goes to where you think it should.  This can be difficult, because companies often use a third-party provider to manage their email services.  Be careful, then, that you don’t get duped into entering account details: there should be no need to log into your account to be deleted from a service.  If you want to change your mailing preferences for a company, then that may require you to log into your account: never do this from an email – always go to the organisation’s website directly by typing the URL yourself.
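If you want a feel for what “goes to where you think it should” means in practice, here’s a rough sketch (in Python; the naive suffix check is mine, and real-world matching should use a public-suffix list):

    from urllib.parse import urlparse

    # Compare the domain a link *actually* points at with the domain you expect.
    def looks_legitimate(link_url, expected_domain):
        host = urlparse(link_url).hostname or ""
        return host == expected_domain or host.endswith("." + expected_domain)

    print(looks_legitimate("https://mail.example.com/unsub?id=42", "example.com"))   # True
    print(looks_legitimate("https://example.com.evil.net/unsub", "example.com"))     # False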


1 – I’ve always wanted to write that.

2 – well done, by the way.

3 – I’d provide attribution, but I’m not sure where it originated.

4 – or maybe a slurry?

5 – again, I can’t remember where.

6 – though I’m not that surprised.

I got phished this week: what did I do?

I was foolish – but I was saved by my forward planning.

The first thing I did was not panic.  The second was to move quickly.

But what happened to get to this stage, you may ask, and how could I have been so stupid?  I’ll tell you the story.

Every day, like most people, I suspect, I get lots of emails[1].  I have a variety of email accounts, and although I’m sure that I should be more disciplined, I tend to just manage them as they come in.  First thing in the morning, though, I tend to sit down with a cup of tea and go through what’s come in and manage what I can then.  Most work emails that require more than a glance and a deletion[2] will wait until later in the day, but I like to deal with any home-related ones before breakfast.

The particular email I’m talking about came in overnight, and I was sitting down with my cup of tea[3] when I noticed an email from a company with whom I have a subscription.  The formatting was what I’d expect, and it looked fine.  It was asking me to change my payment details.

“Danger!” is what you’ll be thinking, and quite rightly.  However, I had some reasons for thinking that I might need to do this.  I’ve recently changed credit cards, and I was aware that there was quite a high likelihood that I’d used the old credit card to subscribe.  What’s more, I had a hazy recollection that I’d first subscribed to this service about this time of year, so it might well be due for renewal.

Here’s where I got even more unlucky: I told myself I’d come back to it because I didn’t have my wallet with me (not having got dressed yet).  This meant that I’d given myself a mental task to deal with the issue later in the day, and I think that this gave it a legitimacy in my head which it wouldn’t have had if I’d examined it properly in the first place.  I also mentioned to my wife that I needed to do this: another step which in my head gave the task more legitimacy.

So I filed the mail as “Unread”, and went off to have a proper breakfast.  When I was dressed, I sat down and went back to the email.  I clicked on the link to update, and here’s where I did the really stupid thing: I didn’t check the URL.  What I really should have done was actually enter the URL I would have expected directly into the browser, but I didn’t.  I was in a rush, and I wanted to get it done.

I tried my account details, and nothing much happened.  I tried them again.  And then I looked at the URL in the browser bar.  That’s not right…

This was the point when I didn’t panic, but moved quickly.  I closed the page in my browser with the phishing site, and I opened a new one, into which I typed the correct URL.  I logged in with my credentials, and went straight to the account page, where I changed my password to a new, strong, machine-generated password.  I checked to see that the rest of the account details – including payment details – hadn’t been tampered with.  And I was done.

There’s something else that I did right, and this is important: I used a different set of account details (username and password) for this site to any other site to which I’m subscribed.  I use a password keeper (there are some good ones on the market, but I’d strongly advise going with an open source one: that way you or others can be pretty sure that your passwords aren’t leaking back to whoever wrote or compiled it), and I’m really disciplined about using strong passwords, and never reusing them at all.

So, I think I’m safe.  Let’s go over what I did right:

  • I didn’t panic.  I realised almost immediately what had happened, and took sensible steps.
  • I moved quickly.  The bad folks only had my credentials for a minute or so, as I immediately logged into the real site and changed my password.
  • I checked my account.   No details had been changed.
  • I used a strong, machine-generated password.
  • I hadn’t reused the same password over several sites.

A few other things worked well, though they weren’t down to me:

  1. the real site sent me an email immediately to note that I’d changed my login details.  This confirmed that it was done (and I checked the provenance of this email!).
  2. the account details on the real site didn’t list my full credit card details, so although the bad folks could have misused my subscription, they wouldn’t have had access to my credit card.

Could things have gone worse?  Absolutely.  Do I feel a little foolish?  Yes.  But hopefully my lesson is learned, and being honest will allow others to know what to do in the same situation.  And I’m really, really glad that I used a password keeper.


1 – some of them, particularly the work ones, are from people expecting me to do things.  These are the worst type.

2 – quite a few, actually – I stay subscribed to quite a few lists just to see what’s going on.

3 – I think it was a Ceylon Orange Pekoe, but I can’t remember now.

What are they attacking me for?

There are three main types of motivations: advantages to them; disadvantages to us; resources.

I wrote an article a few weeks ago called What’s a State Actor, and should I care?, and a number of readers asked if I could pull apart a number of the pieces that I presented there into separate discussions[1].  One of those pieces was the question of who is actually likely to attack me.

I presented a brief list thus:

  • insiders
  • script-kiddies
  • competitors
  • trouble-makers
  • hacktivists
  • … and more.

One specific “more” that I mentioned was State Actors.  If you look around, you’ll find all manner of lists.  Other attacker types that I didn’t mention in my initial list include:

  • members of organised crime groups
  • terrorists
  • “mercenary” hackers.

I suspect that you could come up with more supersets or subsets if you tried hard enough.

This is all very well, but what’s the value in knowing who’s likely to attack you in the first place[3]?  There’s a useful dictum: “No system is secure against a sufficiently resourced and motivated attacker.”[5]  This gives us a starting point, because it causes us to ask the question

  • what motivates the attacker?

In other words: what do they want to achieve?  What, in fact, are they trying to do or get when they attack us?  This is the core theme of this article.

There are three main types of motivations:

  1. advantages to them
  2. disadvantages to us
  3. resources.

There is overlap between the three, but I think that they are sufficiently separate to warrant separate discussion.

Advantages to them

Any successful attack is arguably a disadvantage to us, the attacked, but that does not mean that the primary motivation of an attacker is necessarily to cause harm.  There are a number of other common motivations, including:

  • reputation or “bragging rights” – a successful attack may well be used to prove the skills of an attacker to other parties.
  • information to share – sometimes attackers wish to gain information about our systems to share with others, whether for gain or to enhance their reputation (see above).  Such attacks may be painted as security research, but if they occur outside an ethical framework (such as that provided by academic institutions) and without consent, it is difficult to consider them anything other than hostile.
  • information to keep – attackers may gain information and keep it for themselves for later use, either against our systems or against similarly configured systems elsewhere.
  • practice/challenge – there are attacks which are undertaken solely to practice techniques or as a personal challenge (where an external challenge is made, I would categorise them under “reputation”).  Harmless as this motivation may seem to some parts of the community, such attacks still cause damage and require mitigation, and should be considered hostile.
  • for money – some attacks are undertaken at the request of others, with the primary motivation of the attacker being money or other material recompense (though the motivation of the party commissioning the attack is likely to be one of the other ones listed)[6].

Disadvantages to us

Attacks which focus on causing negative impact to the individual or organisation attacked can be listed in the following categories:

  • business impact – impact to the normal functioning of the organisation or individual attacked: causing orders to be disrupted, processes to be slowed, etc..
  • financial impact – direct impact to the financial functioning of the attacked party: fraud, for instance.
  • reputational impact – there have been many attacks where the intention has clearly been to damage the reputation of the attacked party.  Whether it is leaking information about someone’s use of a dating website, disseminating customer information or simply replacing text or images on a corporate website, the intention is the same: to damage the standing of those being attacked.  Such damage may be indirect – for instance if an attacker were to cause the failure of an oil pipeline, affecting the reputation of the owner or operator of that pipeline.
  • personal impact – subtly different from reputational or business impact, this is where the attack intends to damage the self-esteem of an individual, or their ability to function professionally, physically, personally or emotionally.  This could cover a wide range of attacks such as “doxxing” or use of vulnerabilities in insulin pumps.
  • ecosystem impact – this type of motivation is less about affecting the ability of the individual or organisation to function normally, and more about affecting the ecosystem that exists around it.  Impacting the quality control checks of a company that made batteries might impact the ability of a mobile phone company to function, for instance, or attacking a water supply might impact the ability of a fire service to respond to incidents.

Resources

The motivations for some attacks may be partly or solely to get access to resources.  These resources might include:

  • financial resources – by getting access to company accounts, attackers might be able to purchase items for themselves or others or otherwise defraud the company.
  • compute resources – access to compute resources can lead to further attacks or be used for purposes such as cryptocurrency mining.
  • storage resources – attackers may wish to store illegal or compromising material on others’ systems.
  • network resources – access to network resources allows attackers to launch attacks elsewhere or to stream information with little traceability.
  • human resources – access to some systems may allow human resources to be deployed in ways unintended by the party being attacked: deploying police officers to a scene a long distance away from a planned physical attack, for instance.
  • physical resources – access to some systems may also allow physical resources to be deployed in ways unintended by the party being attacked: sending ammunition to the wrong front in a war, might, for example, lead to military force becoming weakened.

Conclusion

It may seem unimportant to consider the motivations of those attacking us, but if we can understand what it is that they are looking for, we can decide what we should defend, and sometimes what types of defence we should put in place.  As always, I welcome comments on this article: I’m sure that I’ve missed out some points, or misrepresented others, so please do get in touch and let me know your thoughts.


1 – I considered this a kind and polite way of saying “you stuffed too much into a single article: what were you thinking?”[2]

2 – and I don’t necessarily disagree.

3 – unless you’re just trying to scare senior management[4].

4 – which may be enjoyable, but is ultimately likely to backfire if you’re doing it without evidence and for a good reason.

5 – I made a (brief) attempt to track the origins of this phrase: I’m happy to attribute if someone can find the original.

6 – hat-tip to Reddit user poopin for spotting that I’d missed this one out.

How to talk to security people: a guide for the rest of us

I wrote an article a few months back called Using words that normal people understand.  It turns out that you[1] liked it a lot.  Well, lots of you read it, so I’m going to assume that it was at least OK.

It got me thinking, however, that either there are lots of security people who read this blog who work with “normal” people and needed to be able to talk to them better, or maybe there are lots of “normal” people who read this blog and wondered what security people had been going on about all this time.  This article is for the latter group (and maybe the former, too)[2].

Security people are people, too

The first thing to remember about security people is that they’re actual living, breathing, feeling humans, too[3].  They don’t like being avoided or ostracised by their colleagues.  Given the chance, most of them would like to go out for a drink with you.  But, unluckily, they’ve become so conditioned to saying “no” to any request you make that they will never be able to accept the offer[4].

Try treating them as people.  We have a very good saying within the company for which I work: “assume positive intent”.  Rather than assuming that they’re being awkward on purpose, try assuming that they’re trying to help, even if they’re not very good at showing it.  This can be hard, but go with it.

Get them involved early

Get them really involved, not just on the official team email list, but actually invited to the meetings.  In person if they’re local, on a video-chat if they’re not.  And encourage everybody to turn on their webcam.  Face contact really helps.

And don’t wait for a couple of months, when the team is basically already formed.  Not a couple of weeks, or days.  Get your security folks involved at team formation: there’s something special about being involved in a team from the very beginning of a project.

How would you do it?

In fact, you know what?  Go further.  Ask the security person[5] on the team “how would you do it?”

This shuts down a particular response – “you can’t” – and encourages another – “I’m being asked for my professional opinion”.  It also sets up an open question with a positive answer implicit.  Tone is important, though, as is timing.  Do this at the beginning of the process, rather than at the end.  What you’re doing here is a psychological trick: you’re encouraging perception of joint ownership of the problem, and hopefully before particular approaches – which may not be security-friendly – have been designed into the project.

Make them part of the team

There are various ways to help form a team.  I prefer inclusive, rather than exclusive mechanisms: a shared set of pages that are owned by the group and visible to others, rather than a closed conference room with paper all over the windows so that nobody else can look in, for instance.  One of the most powerful techniques, I believe, is social interaction.  Not everybody will be able to make an after-work trip to a bar or pub, but doughnuts and pizza[6] for the team, or a trip out for tea or coffee once or twice a week can work wonders.  What is specific to security people in this, you ask?

Well, in many organisations, the security folks are in a separate team, sometimes in a separate area in a separate floor in a separate building[8].  They may default to socialising only with this group.  Discourage this.  To be clear: don’t discourage them from socialising with that group, but from socialising only with that group.  It’s possible to be part of two teams – you need to encourage that realisation.

Hopefully, as the team’s dynamics form and the security person feels more part of it, the answer to the earlier question (“how would you do it?”) won’t be “well, I would do it this way”, but “well, I think we should do it this way”.

Realise that jargon has its uses

Some of the interactions that you have with your tame[9] security person will involve jargon.  There will be words – possibly phrases, possibly concepts or entire domains of knowledge – that you don’t recognise.  This is intimidating, and can be a defence mechanism on the part of the utterer.  Jargon has at least two uses:

  1. as an exclusionary mechanism for groups to keep non-members in the dark;
  2. as a short-hand to exchange information between “in-the-know” people so that they don’t need to explain everything in exhaustive detail every time.

Be aware that you use jargon as well – whether you’re in testing, finance, development or operations – and that this may be confusing to others.  Do security folks have a reputation for the first of those uses?  Yes, they do.  Maybe it’s ill-founded and maybe it’s not.  Either way, ask for an explanation: if you follow it as well as you can, and then ask why it’s important in this case, you’re showing that you care.  And who doesn’t like showing off their knowledge to an at least apparently interested party?  Oh, and you’re probably learning something useful at the same time.

Sometimes what’s needed is not a full explanation, but the second bit: an explanation of why it’s important, and then a way to make it happen.  Do you need to know the gory details of proofs around Byzantine Fault-Tolerance?  Probably not.  Do you need to know why it’s important for your system?   That’s more likely.  Do you need to know the options for implementing it?  Definitely.

What else?

As I started to write this post, I realised that there were lots of other things I could say.  But once I got further through it, I also realised that there were definitely going to be other things that you[10] could add.  Please consider writing a comment on this post.  And once you’ve considered it, please do it.  Shared knowledge is open knowledge, and open is good.


1 – my hypothetical “readership”

2 – by using the pronoun “us” in the title, I’ve lumped myself in with the “normal people”.  I’m aware that this may not be entirely justified.  Or honest.

3 – I’m generalising, obviously, but let’s go with it.

4 – this is a joke, but I’m quite pleased with it.

5 – I suppose there might be more than one, but let’s not get ahead of ourselves.

6 – make sure it’s both, please.  Some people don’t like pizza[7].

7 – me.  I find it difficult to believe that some people don’t like doughnuts, but you never know.

8 – and, if they have their way, on separate email and telephone systems.

9 – well, partially house-trained, at least.

10 – dear reader.