16 ways in which security folks are(n’t) like puppies

Following the phenomenal[1] success of my ground-breaking[2] article 16 ways in which users are(n’t) like kittens, I’ve decided to follow up with a yet more inciteful[3] article on security folks. I’m using the word “folks” to encompass all of the different types of security people whom “normal”[4] people think of as “those annoying people who are always going on about that security stuff, and who always say ‘no’ when we want to do anything interesting, important, urgent or business-critical”. I think you’ll agree that “folks” is a more accessible shorthand term, and now that I’ve made it clear who we’re talking about, we can move away from that awkwardness to the important[5] issue at hand.

As with my previous article on cats, I’d like my readers to pretend that this is a carefully researched article, and not one that I hastily threw together at short notice because I got up a bit late today.

Note 1: in an attempt to make security folks seem a little bit useful and positive, I’ve sorted the answers so that the ones where security folks turn out actually to share some properties with puppies appear at the end. But I know that I’m not really fooling anyone.

Note 2: the picture (credit: Miriam Bursell) at the top of this article is of my lovely basset hound Sherlock, who’s well past being a puppy. But any excuse to post a picture of him is fair game in my book. Or on my blog.

Research findings

Hastily compiled table

| Property | Security folks | Puppies |
|---|---|---|
| Completely understand and share your priorities | No | No |
| Everybody likes them | No | Yes |
| Generally fun to be around | No | Yes |
| Generally lovable | No | Yes |
| Feel just like a member of the family | No | Yes |
| Always seem very happy to see you | No | Yes |
| Are exactly who you want to see at the end of a long day | No | Yes |
| Get in the way a lot when you’re in a hurry | Yes | Yes |
| Make a lot of noise about things you don’t care about | Yes | Yes |
| Don’t seem to do much most of the time | Yes | Yes |
| Constantly need cleaning up after | Yes | Yes |
| Forget what you told them just 10 minutes ago | Yes | Yes |
| Seem to spend much of their waking hours eating or drinking | Yes | Yes |
| Wake you up at night to deal with imaginary attackers | Yes | Yes |
| Can turn bitey and aggressive for no obvious reason | Yes | Yes |
| Have tickly tummies | Yes[6] | Yes |

1 – relatively.

2 – no, you’re right: this is just hype.

3 – this is almost impossible to prove, given quite how uninciteful the previous one was.

4 – i.e., non-security.

5 – well, let’s pretend.

6 – well, I know I do.

Why Chatham House Rulez for security

Security sometimes requires sharing – but not attribution

In June 1927, someone had a brilliant idea.  Or, at least, that’s when the idea was first codified, at a meeting of the Royal Institute of International Affairs at Chatham House in London.  The idea was this: all attendees of the meeting could quote comments made at the meeting, but they weren’t allowed to say who had made the comment.

This became known as the Chatham House Rule, and the most recent incarnation is defined thus:

When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.

This is brilliantly clever.  It allows at least two things:

  1. for the sharing of information which might be sensitive to a particular entity when associated with that entity, but which is still useful when applied without that attribution;
  2. for the sharing of views or opinions which, when associated with a particular person or organisation, might cause wider issues or problems.

The upshot of this is that if somebody (say, Person A) values the expertise, opinion and experience of another person (say, Person B), then they can share that other person’s views with people who may not know Person B, or whose views on Person B may be biased by their background or associations.  This is a form of transitive trust, and situations where transitive trust is made explicit are, in my opinion, to be lauded (such trust relationships are too often implicit, rather than explicit).

The Chatham House Rule and open source

What has this got to do with open source, though?  My answer is: a lot.

Security is one of those areas which can have an interesting relationship with open source.  I’m passionately devoted to the principle that open-ness is vital to security, but there are times when this is difficult.  The first is to do with data, and the second is to do with perceived expertise.

Why data is difficult

While we all (hopefully) want to ensure that all our security-related code is open source, the same cannot be said for data.  There is absolutely a place for open data – citizen-related data is the most obvious, e.g. bus timetables, town planning information – and there’s data that we’d like to be more open, but not if it can be traced to particular entities – aggregated health information is great, but people aren’t happy about their personal health records being exposed.  The same goes for financial data – aggregated information about people’s spending and saving habits is extremely useful, but I, for one, don’t want my bank records revealed to all and sundry.

Moving specifically to security, what about data such as the number of cyber-attacks – successful and unsuccessful – against companies?  The types that were most successful?  The techniques that were used to mitigate them?  All of these are vastly useful to the wider community, and there’s a need to share them more widely.  We’re seeing some initiatives to allow this already, and aggregation of this data is really important.

There comes a time, however, when particular examples are needed.  And as soon as you have somebody stand up and say “This is what happened to us”, then they’re likely to be in trouble from a number of directions, which may include: their own organisation, their lawyers, their board, their customers and future attackers, who can use that information to their advantage.  This is where the Chatham House Rule can help: it allows experts to give their view and be listened to without so much danger from the parties listed above.

It also allows for other people to say “we hadn’t thought of that”, or “we’re not ready for that” or similar without putting their organisations – or their reputations – on the line.  Open source needs this, and there are times when those involved in open source security, in particular, need to be able to share the information they know in a way which doesn’t put their organisations in danger.

Why expertise is difficult

Another area of difficulty is expertise, or more specifically, trust in expertise.  Most organisations aim for a meritocratic approach – or say they do – at least within that organisation.  But the world is full of bias, particularly between organisations.  I may be biased against views held or expressed by a particular organisation, just because of its past history and my interactions with that company, but it is quite possible that there are views held and expressed by individuals from that company which, if separated from their attribution, I might take seriously.  I may be biased against a particular person, based on my previous interactions with him or her, or just on my underlying prejudices.  It only needs one person who does not hold my biases to represent those views – someone who personally trusts the organisation, or even just the person, expressing them – for me to be able to process and value those views myself, gaining valuable insight from them.  The Chatham House Rule can allow that to happen.

In fact, the same goes for intra-organisation biases: maybe product management isn’t interested in the views of marketing, but what if there are important things to learn from within that department, that product management can’t hear because of that bias?  The Chatham House Rule allows an opportunity to get past that.

To return to open source, many contributors are employed by a particular organisation, and it can be very difficult for them to express opinions around open source when that organisation may not hold the same views, however carefully they try to separate themselves from the official line.  Even more important, in terms of security, it may very well be that they can bring insights which are relevant to a particular security issue which their company is not happy about being publicly known, but which could benefit one or more open source projects.  To be clear: I’m not talking, here, about exposing information which is specifically confidential, but about sharing information with the permission of the organisation, albeit within specific constraints.

More on open source

There are all sorts of biases within society, and open source is, alas, not without its own.  When a group of people gets to know each other well, however, it is often the case that members of that group can forge a respect for each other which goes beyond gender, age, academic expertise, sexuality, race or the like.  This is a perfect opportunity for meetings under the Chatham House Rule: it gives this group the chance to discuss and form opinions which can be represented to their peers – or the rest of the world – without having to worry so much about any prejudices or biases that might be aimed at particular members.

Finally – a note of caution

The Chatham House Rule provides a great opportunity to share expertise and knowledge, but there is also a danger that it can allow undue weight to be given to anecdotes.  Stories are a great way of imparting information, but without data to back them up, they are not as trustworthy as they might be.  The fact that the Chatham House Rule inhibits external attribution does not mean that due diligence should not be applied within such a meeting to ensure that information is backed up by data.

Are my messages safe? No, but…

“Are any of these messaging services secure?”

Today brought another story about insecurity of a messenger app, and by a brilliant coincidence, I’m listening to E.L.O.’s “Secret Messages” as I start to compose this post. This article isn’t, however, about my closet 70s musical tastes[1], but about the messages you send from your mobile phone, tablet or computer to friends, families and colleagues, and how secure they are.

There are loads of options out there for messaging services, with some of the better-known including WhatsApp, Facebook Messenger, Google Chat, Signal and Telegram. Then there’s good old SMS. First question first: do I use any of these myself? Absolutely. I also indulge in Facebook, LinkedIn and Twitter. Do I trust these services? Let’s get back to this question later.

A more pressing question might be: “are any of these messaging services secure?” It turns out that this is a really simple question to answer: of course they’re not. No service is “secure”: it’s a key principle of IT security that there is no “secure”. This may sound like a glib – and frankly unhelpful – answer, but it’s not supposed to be. Once you accept that there is no perfectly secure system, you’re forced to consider what you are trying to achieve, and what risks you’re willing to take. This is a recurring theme of this blog, so regular readers shouldn’t be surprised.

Most of the popular messaging services can be thought of as consisting of at least seven components. Let’s assume that Alice is sending a message from her phone to Bob’s phone. Here’s what the various components might look like:

  1. Alice’s messenger app
  2. Alice’s phone
  3. Communications channel Alice -> server
  4. Server
  5. Communications channel server -> Bob
  6. Bob’s phone
  7. Bob’s messenger app

Each of these is a possible attack surface: combined, they make up the attack surface for what we can think of as the Alice <-> Bob messaging system.
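
Because the end-to-end security depends on every one of those components, it can help to think of the overall attack surface as the union of whatever risks remain at each hop. Here’s a minimal, illustrative Python sketch of that way of thinking; the owners and example risks attached to each component are placeholders, not an assessment of any real service.

```python
# An illustrative model (not of any real service) of the seven components
# above, treating the end-to-end attack surface as the union of whatever
# risks remain unaddressed at each hop. Owners and risks are placeholders.

from dataclasses import dataclass

@dataclass
class Component:
    name: str
    owner: str           # who ultimately controls this component
    residual_risks: set  # example threats Alice and Bob would still need to assess

path = [
    Component("Alice's messenger app", "app vendor", {"malicious update", "implementation bug"}),
    Component("Alice's phone", "Alice (and the manufacturer)", {"physical tampering"}),
    Component("channel Alice -> server", "network operators", {"weak or misconfigured encryption"}),
    Component("server", "service provider", {"legal warrant", "subverted insider"}),
    Component("channel server -> Bob", "network operators", {"weak or misconfigured encryption"}),
    Component("Bob's phone", "Bob (and the manufacturer)", {"out-of-date OS"}),
    Component("Bob's messenger app", "app vendor", {"malicious update", "implementation bug"}),
]

# The chain is only as strong as its weakest link: the combined attack
# surface is the union of every component's residual risks.
combined = set().union(*(component.residual_risks for component in path))
print(sorted(combined))
```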

Let’s start in the middle, with the server. For Alice and Bob to be happy with the security of the system for their purposes, they must be happy that this server is sufficiently secure to cope with whatever risks they need to address. So, it may be that they trust that the server (which will be run, ultimately, by fallible and possibly subornable humans who are also subject to legal jurisdiction(s)) is not vulnerable. Not vulnerable to whom? Hacktivists? Criminal gangs? Commercial competitors? State actors? Legal warrants from the server’s jurisdiction? Legal warrants from Alice or Bob’s jurisdiction(s)? The likelihood of successful defence against each of these varies, and the risk posed to Alice and Bob by each is also different, and needs to be assessed, even if that assessment is “we can ignore this”.

Each of the other components is subject to similar questions. For the communication channels, we will assume that they’re encrypted, but we have to be sure that the cryptography and cryptographic protocols have been correctly implemented, and that all keys are appropriately protected by all parties. The messaging apps must be up to date, well designed and well implemented. Obviously, if they’re open source, you have a much, much better chance of being sure of the security of both pieces of software (never, ever use cryptography or protocols which have not been open sourced and peer reviewed: just don’t). The phones on which the software is running must also be uncompromised – not to mention protected by Alice and Bob from physical tampering and delivered new to them from the manufacturer with no vulnerabilities[2].

How sure are Alice and Bob of all of the answers to all of these questions? The answer, I would submit, is pretty much always going to be “not completely”. Does this mean that Alice and Bob should not use messaging services? Not necessarily. But it does mean that they should consider what messages they should exchange via any particular messaging service. They might be happy to arrange a surprise birthday party for a colleague, but not to exchange financial details of a business deal. They might be happy to schedule a trip to visit a Non-Governmental Organisation to discuss human rights, but not to talk about specific cases over the messaging service.

This is the view that I take: I consider what information I’m happy to transfer over or store on messaging services and social media platforms. There are occasions where I may be happy to pass sensitive data across messaging services, but break the data up between different services (using “different channels” in the relevant parlance): using one service for a username and another for the associated password, for instance. I still need to be careful about shared components: the two phones in the example above might qualify, but I’ve reduced the shared attack surface, and therefore the risk. I’m actually more likely to require that the password is exchanged over a phone call, and if I’m feeling particularly paranoid, I’ll use a different phone to receive that call.
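
To make the “different channels” idea a little more concrete, here’s a toy sketch that takes the username-here, password-there approach one small step further: a two-way XOR split of a secret, where neither share on its own reveals anything. This is purely illustrative – it is not something the messaging services themselves offer, and it is not a substitute for a proper secret-sharing scheme.

```python
# Toy illustration only: a two-way XOR split of a secret, so that each
# share on its own reveals nothing, and the shares can travel over
# different channels (one messaging service, one phone call, say).

import secrets

def split_in_two(secret: bytes) -> tuple[bytes, bytes]:
    share_one = secrets.token_bytes(len(secret))                 # uniformly random pad
    share_two = bytes(a ^ b for a, b in zip(secret, share_one))  # secret XOR pad
    return share_one, share_two

def recombine(share_one: bytes, share_two: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_one, share_two))

password = b"correct horse battery staple"
s1, s2 = split_in_two(password)
# s1 goes over channel A, s2 over channel B; an attacker needs both.
assert recombine(s1, s2) == password
```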

My advice, therefore, is this:

  1. Keep your devices and apps up to date;
  2. Evaluate the security of your various messaging service options;
  3. Consider the types of information that you’ll be transferring and/or storing;
  4. Think about the risks you’re willing to accept;
  5. Select the appropriate option on a case-by-case basis;
  6. Consider using separate channels where particularly sensitive data can be split for added security.

1 – I’m also partial to 1920’s Jazz and a bit of Bluegrass, as it happens.

2 – yeah, right.

Announcing Enarx


If I’ve managed the process properly, this article should be posting at almost exactly the time that we show a demo at Red Hat Summit 2019 in Boston.  That demo, to be delivered by my colleague Nathaniel McCallum, will be of an early incarnation of Enarx, a project that a few of us at Red Hat have been working on for a few months now, and which we’re ready to start announcing to the world.  We have code, we have a demo, we have a github repository, we have a logo: what more could a project want?  Well, people – but we’ll get to that.

What’s the problem?

When you run software (a “workload”) on a system (a “host”) on the cloud or on your own premises, there are lots and lots of layers.  You often don’t see those layers, but they’re there.  Here’s an example of the layers that you might see in a standard cloud virtualisation architecture.  The different colours represent different entities that “own”  different layers or sets of layers.

[Figure: classic cloud virtualisation architecture]

Here’s a similar diagram depicting a standard cloud container architecture.  As before, each different colour represents a different “owner” of a layer or set of layers.

[Figure: cloud container architecture]

These owners may be of very different types, from hardware vendors to OEMs to Cloud Service Providers (CSPs) to middleware vendors to Operating System vendors to application vendors to you, the workload owner.  And for each workload that you run, on each host, the exact list of layers is likely to be different.  And even when they’re the same, the versions of the layer instances may be different, whether it’s a different BIOS version, a different bootloader, a different kernel version or whatever else.

Now, in many contexts, you might not worry about this and your Cloud Service Provider goes out of its way to abstract these layers and their version details away from you.  But this is a security blog, for security people, and that means that anybody who’s reading this probably does care.

The reason we care is not just the different versions and the different layers, but the number of different things – and different entities – that we need to trust if we’re going to be happy running any sort of sensitive workload on these types of stacks.  I need to trust every single layer, and the owner of every single layer, not only to do what they say they will do, but also not to be compromised.  This is a big stretch when it comes to running my sensitive workloads.

What’s Enarx?

Enarx is a project which is trying to address this problem of having to trust all of those layers.  We made the decision that we wanted to allow people running workloads to be able to reduce the number of layers – and owners – that they need to trust to the absolute minimum.  We plan to use Trusted Execution Environments (“TEEs” – see Oh, how I love my TEE (or do I?)), and to provide an architecture that looks a little more like this:

[Figure: reduced-layer architecture]

In a world like this, you have to trust the CPU and firmware, and you need to trust some middleware – of which Enarx is part – but you don’t need to trust all of the other layers, because we will leverage the capabilities of the TEE to ensure the integrity and confidentiality of your application.  The Enarx project will provide attestation of the TEE, so that you know you’re running on a true and trusted TEE, and will provide open source, auditable code to help you trust the layer directly beneath your application.
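
For a feel of why attestation lets you drop the host layers from your trust calculation, here is a purely conceptual sketch of the general “measure, sign, verify, then provision” pattern that TEE remote attestation tends to follow.  To be clear: this is not the Enarx API, and real schemes use asymmetric signatures rooted in the CPU vendor’s keys; the HMAC below is just a shortcut to keep the simulation self-contained and runnable.

```python
# Conceptual simulation only: NOT the Enarx API or any real TEE SDK.
# Real attestation uses asymmetric signatures chained to the CPU vendor;
# an HMAC with a "fused" key stands in for that here to keep things short.

import hashlib, hmac, secrets

HARDWARE_KEY = secrets.token_bytes(32)   # stands in for a key fused into the CPU at manufacture
EXPECTED_RUNTIME = b"the runtime we intend to load into the TEE"

def tee_generate_quote(runtime_code: bytes) -> dict:
    """The TEE measures what it has loaded; the hardware signs that measurement."""
    measurement = hashlib.sha256(runtime_code).digest()
    signature = hmac.new(HARDWARE_KEY, measurement, hashlib.sha256).digest()
    return {"measurement": measurement, "signature": signature}

def workload_owner_verifies(quote: dict) -> bool:
    """The workload owner checks that the quote is genuine and that the TEE is
    running exactly the code they expect, before sending anything sensitive."""
    expected_sig = hmac.new(HARDWARE_KEY, quote["measurement"], hashlib.sha256).digest()
    genuine = hmac.compare_digest(expected_sig, quote["signature"])
    expected_code = quote["measurement"] == hashlib.sha256(EXPECTED_RUNTIME).digest()
    return genuine and expected_code

quote = tee_generate_quote(EXPECTED_RUNTIME)
if workload_owner_verifies(quote):
    print("attested: provision workload and secrets")  # only the CPU/firmware and this runtime are trusted
else:
    print("refuse to deploy")
```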

The initial code is out there – working on AMD’s SEV TEE at the moment – and enough of it works now that we’re ready to tell you about it.

Making sure that your application meets your own security requirements is down to you.  🙂

How do I find out more?

Easiest is to visit the Enarx github: https://github.com/enarx.

We’ll be adding more information there – it’s currently just code – but bear with us: there are only a few of us on the project at the moment. A blog is on the list of things we’d like to have, but I thought I’d start here for now.

We’d love to have people in the community getting involved in the project.  It’s currently quite low-level, and requires quite a lot of knowledge to get running, but we’ll work on that.  You will need some specific hardware to make it work, of course.  Oh, and if you’re an early boot or a low-level kvm hacker, we’re particularly interested in hearing from you.

I will, of course, respond to comments on this article.

 

Security at Red Hat Summit

And a little teaser on my session…

I don’t often talk about my job specifically, but I’m very proud to be employed by Red Hat, working as Chief Security Architect, a role based in the Office of the CTO[1], and sometimes it’s the right time to talk about job-related stuff.  Next week is our annual Summit, and this year it’s in Boston[2], starting on Tuesday, 2019-05-07.  If you’re coming – great!  If you’re thinking about coming – please do!  And if you’re not able to come, then rest assured that many of the sessions will be recorded so that you can watch them in the future[3].

There is going to be a lot going on at Summit this year: including, I suspect, some big announcements[4].  There will also be lots of hands-on sessions, which are always extremely popular, and a number of excellent sessions and other activities around Diversity and Inclusion, a topic about which I’m extremely passionate.  As always, though, security is a big topic at Summit, and there are 50 security topic sessions listed in the agenda[5] (here’s the session catalog[ue]):

  • 26 breakout sessions
  • 11 instructor-led labs
  • 7 mini-sessions
  • 4 birds-of-a-feather sessions (“BOFs”)
  • 2 theatre sessions

These include sessions by partners and customers, as well as by Red Hatters themselves.

Many of my colleagues in OCTO will be presenting sessions in the “Emerging Technology” track, as will I.  My session is entitled “Security: Emerging technologies and open source”, and on Tuesday, at 1545 (3.45pm) I’ll be co-presenting it with my (non-OCTO) colleague Nathaniel McCallum.  The abstract is this:

What are some of the key emerging security technologies, and what impact will they have on the open source world? And what impact could open source have on them?

In this session, we’ll look at a handful of up-and-coming hardware and software technologies—from trusted execution environments to multi-party computation—and discuss the strategic impact we can expect them to have on our world. While individual technologies will be discussed (and you can expect a sneak peek demo of one of them), the focus of this session is not a deep-dive on any of them, but rather an architectural, strategic, and business view.

I’m trying to ensure that when I talk about all of these cool technologies, I talk about why open source is important to them, and/or why they are important to open source.

Here’s the particularly exciting bit, though: what’s not clear from the abstract – as it’s a late addition – is that Nathaniel plans to present a demo.  I can’t go into details at the moment, partly because we’re keeping it as a surprise, and partly because exactly what is demoed will depend on what Nathaniel’s frantic coding manages to achieve before Tuesday afternoon.  It’s one of the early results from a project we’re running, and I can tell you: a) that it involves TEEs (trusted execution environments); and b) that it’s really exciting.  I’m hoping that we can soon make more of a noise about it, and our Summit session is the start of that.

I’m hoping that the description above will be enough to convince you to attend Summit, but in case it isn’t, bear in mind the following:

  1. there will be keynotes from Jim Whitehurst (Red Hat CEO), Satya Nadella (Microsoft CEO) and Ginni Rometty (IBM CEO)
  2. the Summit party will feature Neon Trees[6].

There are lots of other great reasons to come as well, and if you do, please track me down and say hello: it’s always great to meet readers of this blog.  See you in Boston next week!


1 – “OCTO” – which, I guess, makes me one of the Octonauts.

2 – the picture at the top of this article is of Fenway Park, a place in Boston where they play baseball, which is like cricket, only quicker.  And you’re allowed to chuck the ball.

3 – in case, for any crazy reason, you’d like to see me speaking at last year’s Summit, here’s a link to the session: Getting strategic about security

4 – this should not be interpreted as a “forward-looking statement”, as I’m not privy to any particular definite decisions as to any such announcements.  Sorry – legal stuff…

5 – I’m indebted to my colleague Lucy Kerner, who’s organised and documented much of the security pieces, and from whom I have stolen copied gratefully reused much of the information in this article.

6 – I’ve only just clocked this, and my elder daughter is going to be very, very jealous when she gets back from school to discover this information.

Trust you? I can’t trust myself.

Cognitive biases are everywhere.

William Gibson’s book Virtual Light includes a bar which goes by the name of “Cognitive Dissidents”.  I noticed this last night when I was reading in bed, and it seemed apposite, because I wanted to write about cognitive bias, and the fact that I’d noticed it so strikingly was, I realised, an example of exactly that: in this case, The Frequency Illusion, or The Baader-Meinhof Effect.  Cognitive biases are everywhere, and there are far, far more of them than you might expect.

The problem is that we think of ourselves as rational beings, and it’s quite clear from decades – in some cases, centuries – of research that we’re anything but.  We’re very likely to tell ourselves that we’re rational, and it’s such a common fallacy that The Illusion of Validity (another cognitive bias) will help us believe it.  Cognitive biases are, according to Wikipedia, “systematic patterns of deviation from norm or rationality in judgment” or put maybe more simply, “our brains managing to think things which seem sensible, but aren’t.”[1]

The Wikipedia entry above gives lots of examples of cognitive bias – lots and lots of examples – and I’m far from being an expert in the field.  The more I think about risk and how we consider risk, however, the more I’m convinced that we – security professionals and those with whom we work – need to have a better understanding of our own cognitive biases and those of the people around us.  We like to believe that we make decisions and recommendations rationally, but it’s clear from the study of cognitive bias that:

  1. we generally don’t; and
  2. that even if we do, we shouldn’t expect those to whom we present them to consider them entirely rationally.

I should be clear, before we continue, that there are opportunities for abuse here.  There are techniques beloved of advertisers and the media to manipulate our thinking to their ends, which we could use to our advantage and to try to manipulate others.  One example is the Framing Effect.  If you want your management not to fund a new anti-virus product because you have other ideas for the earmarked funding, you might say:

  • “Our current product is 80% effective!”

Whereas if you do want them to fund it, you might say:

  • “Our current product is 20% ineffective!”

People react in different ways, depending on how the same information is presented, and the way the two statements above are framed aims to manipulate your listeners towards the outcome you want.  So, don’t do this, and, more importantly, look for vendors[2] who are doing this, and call them out on it.  Here, then, are three of the more obvious cognitive biases that you may come across:

  • Irrational escalation or Sunk cost fallacy – this is the tendency for people to keep throwing money or resources at a project, vendor or product when it’s clear that it’s no longer worth it, with the rationale that to stop spending money (or resources) now would waste what has already been spent – when it’s actually already gone.  This often comes over as misplaced pride, or people just not wanting to let go of a pet project because they’ve become attached to it, but it’s really dangerous for security, because if something clearly isn’t effective, we should be throwing it out, not sending good money after bad.
  • Normalcy bias – this is the refusal to address a risk because it’s never happened before, and is an interesting one in security, for the simple reason that so many products and vendors are trying to make us do exactly that.  What needs to happen here is that a good risk analysis needs to be performed, and then measures put in place to deal with those which are actually high priority, not those which may not happen, or which don’t seem likely at first glance.
  • Observer-expectancy effect – this is when people who are looking for particular results find them, because they have (consciously or unconsciously) misused the data.  This is common in situations such as those where there is a belief that a particular attack or threat is likely, and the data available (log files, for instance) are used in a way which confirms this expectation, rather than analysed and presented in ways which are more neutral.

I intend to address more specific cognitive biases in future articles, tying them even more closely to security concerns – if you have any particular examples or war stories, I’d love to hear them.


1 – my words

2 – or, I suppose, underhand colleagues…

AI, blockchain and security – huh? Ahh!

There’s a place for the security community to be more involved in AI and blockchain.

Next week, I’m flying to the US on Tuesday, heading back on Thursday evening, arriving back in Heathrow on Friday morning, and heading (hopefully after a shower) to Cyber Security & Cloud Expo Global to present a session and then, later in the afternoon, to participate in a panel.  One of the reasons that I’m bothering to post this here is that I have a feeling that I’m going to be quite tired by the end of Friday, and it’s probably a good idea to get at least a few thoughts in order for the panel session up front[1].  This is particularly the case as there’s some holiday happening between now and then, and if I’m lucky, I’ll be able to forget about everything work-related in the meantime.

The panel has an interesting title.  It’s “How artificial intelligence and blockchain are the battlegrounds for the next security wars“.  You could say that this is a buzzwordy attempt to shoehorn security into a session about two unrelated topics, but the more I think about it, the more I’m glad that the organisers have put this together.  It seems to me that although AI and blockchain are huge topics, have their own conference circuits[2] and garner huge amounts of interest from the press, there’s a danger that people whose main job and professional focus is security won’t be the most engaged in this debate.  To turn that around a bit, what I mean is that although there are people within the AI and blockchain communities considering security, I think that there’s a place for people within the security community to be more involved in considering AI and blockchain.

AI and security

“What?” you may say.  “Have you not noticed all of the AI-enabled products being sold by vendors in the security community?”

To which I reply, “pfft.”

It would be rude[3] to suggest that none of those security products have any “Artificial Intelligence” actually anywhere near them.  However, it seems to me (and I’m not alone) that most of those products are actually employing much more basic algorithms which are more accurately portrayed as “Machine Learning”.  And that’s if we’re being generous.

More importantly, however, what I think really needs to be discussed here is security in a world where AI (or, OK, ML) is the key element of a product or service, or in a world where AI/ML is a defining feature of how we live at least part of our lives.  In other words, what’s the impact of security when we have self-driving cars, AI-led hiring practices and fewer medical professionals performing examinations and diagnoses?  What does “the next battleground” mean in this context?  Are we fighting to keep these systems from overstepping their mark, or fighting to stop malicious actors from compromising or suborning them to their ends?  I don’t know, and that’s part of why I’m looking forward to my involvement on the panel.

Blockchain and security

Blockchain is the other piece, of course, and there are lots of areas for us to consider here, too.  Three that spring to my mind are:

Whether you believe that blockchain(s) is(are) going to take over the world or not, it’s a vastly compelling topic, and I think it’s important for it to be discussed outside its “hype bubble” in the context of a security conference.

Conclusion

While I’m disappointed that I’m not going to be able to see some of the other interesting folks speaking at different times of the conference, I’m rather looking forward to the day that I will be there.  I’m also somewhat relieved to have been able to use this article to consider some of the points that I suspect – and hope! – will be brought up in the panel.  As always, I welcome comments and thoughts around areas that I might not have considered (or just missed out).  And last, I’d be very happy to meet you at the conference if you’re attending.


1 – I’ve already created the slides for the talk, I’m pleased to say.

2 – my, do they have their own conference circuits…

3 – not to mention possibly actionable.  And possibly even incorrect.

3 tips to avoid security distracti… Look: squirrel!

Executive fashions change – and not just whether shoulder-pads are “in”.

There are many security issues to worry about as an organisation or business[1].  Let’s list some of them:

  • insider threats
  • employee incompetence
  • hacktivists
  • unpatched systems
  • patched systems that you didn’t test properly
  • zero-day attacks
  • state actor attacks
  • code quality
  • test quality
  • operations quality
  • underlogging
  • overlogging
  • employee-owned devices
  • malware
  • advanced persistent threats
  • data leakage
  • official wifi points
  • unofficial wifi points
  • approved external access to internal systems via VPN
  • unapproved external access to internal systems via VPN
  • unapproved external access to internal systems via backdoors
  • junior employees not following IT-mandated rules
  • executives not following IT-mandated rules

I could go on: it’s very, very easy to find lots of things that should concern us.  And it’s particularly confusing if you just go around finding lots of unconnected things which are entirely unrelated to each other and aren’t even of the same type[2]. I mean: why list “code quality” in the same list as “executives not following IT-mandated rules”?  How are you supposed to address issues which are so diverse?

And here, of course, is the problem: this is what organisations and businesses do have to address.  All of these issues may present real risks to the functioning (or at least continued profitability) of the organisations[3].  What are you supposed to do?  How are you supposed to keep track of all these things?

The first answer that I want to give is “don’t get distracted”, but that’s actually the final piece of advice, because it doesn’t really work unless you’ve already done some work up front.  So what are my actual answers?

1 – Perform risk analysis

You’re never going to be able to give your entire attention to everything, all the time: that’s not how life works.  Nor are you likely to have sufficient resources to be happy that everything has been made as secure as you would like[4].  So where do you focus your attention and apply those precious, scarce resources?  The answer is that you need to consider what poses the most risk to your organisation.  The classic way to do this is to use the following formula:

Risk = Likelihood x Impact

This looks really simple, but sadly it’s not, and there are entire books and companies dedicated to the topic.  Impact may be to reputation, to physical infrastructure, to system up-time, to employee morale, or to one of hundreds of other items.  The difficulty of assessing the likelihood may range from simple (“the failure rate on this component is once every 20 years, and we have 500 of them”[5]) to extremely difficult (“what’s the likelihood of our CFO clicking on a phishing attack?”[6]).  Once it’s complete for all the various parts of the business you can think of – and get other people from different departments in to help, as they’ll think of different risks, I can 100% guarantee – you have an idea of what needs the most attention.  (For now: because you need to repeat this exercise on a regular basis, considering changes to risk, to your business and to the threats themselves.)
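
To make the exercise concrete, here is a minimal sketch of a risk register scored with that formula.  The entries, the 1–5 scales and the resulting scores are invented purely for illustration: real analyses use whatever scales (and far more rigour) your organisation agrees on.

```python
# A minimal, made-up risk register scored with Risk = Likelihood x Impact.
# The entries and the 1-5 scales are invented for illustration only.

risks = [
    # (description,                                likelihood 1-5, impact 1-5)
    ("unpatched internet-facing systems",          4,              4),
    ("executives not following IT-mandated rules", 3,              3),
    ("state actor attack",                         1,              5),
    ("unofficial wifi access points",              3,              2),
]

scored = sorted(
    ((description, likelihood * impact) for description, likelihood, impact in risks),
    key=lambda entry: entry[1],
    reverse=True,
)

for description, score in scored:
    print(f"{score:>2}  {description}")

# Re-run this regularly: likelihoods, impacts, the business and the threats
# themselves all change over time.
```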

2 – Identify and apply measures

You have a list of risks.  What to do?  Well, a group of people – and this is important, as one person won’t have a good enough view of everything – needs to sit[7] down and work out what measures to put in place to try to reduce or at least mitigate the various risks.  The amount of resources that the organisation should be willing to apply to this will vary from risk to risk, and should generally be proportional to the risk being addressed, but won’t always be of the same kind.  This is another reason why having different people involved is important.  For example, one risk that you might be able to mitigate by spending £50,000 (that’s about the same amount of US dollars) on a software solution might be equally well addressed by a physical barrier and a sign for a few hundred pounds.  On the other hand, the business may decide that some risks should not be mitigated against directly, but rather insured against.  Others may require training regimes and investment in t-shirts.

Once you’ve identified what measures are appropriate, and how much they are going to cost, somebody’s going to need to find money to apply them.  Again, it may be that they are not all mitigated: it may just be too expensive.  But the person who makes that decision should be someone senior – someone senior enough to take the flak should the risk come home to roost[8].

Then you apply your measures, and, wherever possible, you automate them and their reporting.  If something is triggered, or logged, you then know:

  1. that you need to pay attention, and maybe apply some more measures;
  2. that the measure was at least partially effective;
  3. that you should report to the business how good a job you – and all those involved – have done.
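
As a very rough illustration of what automating a measure and its reporting might look like, here is a hedged sketch: every trigger is logged so somebody pays attention, counted so you can later show the business that the measure is earning its keep, and a crude threshold prompts a review of the original risk assessment.  The measure names and the threshold are invented, not a recommendation of any particular tooling.

```python
# A rough sketch of automating a measure's reporting: log each trigger,
# count it, and prompt a review once a (made-up) threshold is reached.

import logging
from collections import Counter

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
trigger_counts = Counter()
REVIEW_THRESHOLD = 5   # arbitrary: tune to your own risk appetite

def measure_triggered(measure: str, detail: str) -> None:
    trigger_counts[measure] += 1
    logging.warning("measure triggered: %s (%s)", measure, detail)
    if trigger_counts[measure] >= REVIEW_THRESHOLD:
        logging.info("%s has fired %d times this period - revisit the risk assessment",
                     measure, trigger_counts[measure])

measure_triggered("car-park barrier alarm", "barrier forced open")
measure_triggered("failed-login lockout", "10 failures for account 'finance01'")
```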

3 – Don’t get distracted

My final point is where I’ve been trying to go with this article all along: don’t get distracted.  Distractions come in many flavours, but here are three of the most dangerous.

  1. A measure was triggered, and you start paying all of your attention to that measure, or the system(s) that it’s defending.  If you do this, you will miss all of the other attacks that are going on.  In fact, here’s your opportunity to look more broadly and work out whether there are risks that you’d not considered, and attacks that are coming in right now, masked by the one you have noticed.
  2. You assume that the most expensive measures are the ones that require the most attention, and ignore the others.  Remember: the amount of resources you should be ready to apply to each risk should be proportional to the risk, but the amount actually applied may not be.  Check that the barrier you installed still works and that the sign is still legible – and if not, then consider whether you need to spend that £50,000 on software after all.  Also remember that just because a risk is small, that doesn’t mean that it’s zero, or that the impact won’t be high if it does happen.
  3. Executive fashions change – and not just whether shoulder-pads are “in”, or whether the key to the boardroom bathroom is now electronic.  Executives (like everybody else) are bombarded with information.  The latest concern that your C-levels read about in the business section, or hear about from their buddies on the golf course[9], may require consideration, but you need to ensure that it’s considered in exactly the same way as all of the other risks that you addressed in the first step.  You need to be firm about this – both with the executive(s) and with yourself, because although I identified this as an executive risk, the same goes for the rest of us: humans are generally better at keeping their focus on the new, shiny thing in front of them than on the familiar and the mundane.

Conclusion

You can’t know everything, and you probably won’t be able to cover everything, either, but having a good understanding of risk – and maintaining your focus in the event of distractions – means that at least you’ll be covering and managing what you can know, and can be ready to address new ones as they arrive.


1 – let’s be honest: there are lots if you’re a private individual, too, but that’s for another day.

2 – I did this on purpose, annoying as it may be to some readers.  Stick with it.

3 – not to mention the continued employment of those tasked with stopping these issues.

4 – note that I didn’t write “everything has been made secure”: there is no “secure”.

5 – and, to be picky, this isn’t as simple as it looks either: does that likelihood increase or decrease over time, for instance?

6 – did I say “extremely difficult”?  I meant to say…

7 – you can try standing, but you’re going to get tired: this is not a short process.

8 – now that, ladles and gentlespoons, is a nicely mixed metaphor, though I did stick with an aerial theme.

9 – this is a gross generalisation, I know: not all executives play golf.  Some of them play squash/racketball instead.

5 (Professional) development tips for security folks

… write a review of “Sneakers” or “Hackers”…

To my wife’s surprise[1], I’m a manager these days.  I only have one report, true, but he hasn’t quit[2], so I assume that I’ve not messed this management thing up completely[2].  One of the “joys” of management is that you get to perform performance and development (“P&D”) reviews, and it’s that time of year at the wonderful Red Hat (my employer).  In my department, we’re being encouraged (Red Hat generally isn’t in favour of actually forcing people to do things) to move to “OKRs”, which are “Objectives and Key Results”.  Like any management tool, they’re imperfect, but they’re better than some.  You’re supposed to choose a small number of objectives (“learn a (specific) new language”), and then have some key results for each objective that can be measured somehow (“be able to check into a hotel”, “be able to order a round of drinks”) after a period of time (“by the end of the quarter”).  I’m simplifying slightly, but that’s the general idea.

Anyway, I sometimes get asked by people looking to move into security for pointers to how to get into the field.  My background and route to where I am is fairly atypical, so I’m very sensitive to the fact that some people won’t have taken Computer Science at university or college, and may be pursuing alternative tracks into the profession[3].  As a service to those, here are a few suggestions as to what they can do which take a more “OKR” approach than I provided in my previous article Getting started in IT security – an in/outsider’s view.

1. Learn a new language

And do it with security in mind.  I’m not going to be horribly prescriptive about this: although there’s a lot to be said for languages which are aimed at security use cases (Rust is an obvious example), learning any new programming language, and thinking about how it handles (or fails to handle) security, is going to benefit you.  You’re going to want to choose key results that:

  • show that you understand what’s going on with key language constructs to do with security;
  • show that you understand some of the advantages and disadvantages of the language;
  • (advanced) show how to misuse the language (so that you can spot similar mistakes in future).

2. Learn a new language (2)

This isn’t a typo.  This time, I mean learn about how other functions within your organisation talk.  All of these are useful:

  • risk and compliance
  • legal (contracts)
  • legal (Intellectual Property Rights)
  • marketing
  • strategy
  • human resources
  • sales
  • development
  • testing
  • UX (User Experience)
  • IT
  • workplace services

Who am I kidding?  They’re all useful.  You’re learning somebody else’s mode of thinking, what matters to them, and what makes them tick.  Next time you design something, make a decision which touches on their world, or consider installing a new app, you’ll have another point of view to consider, and that’s got to be good.  Key results might include:

  • giving a 15 minute presentation to the other group about your work;
  • arranging a 15 minute presentation to your group about the other group’s work;
  • (advanced) giving a 15 minute presentation yourself to your group about the other group’s work.

3. Learn more about cryptography

So much of what we do as security people comes down to or includes some cryptography.  Understanding how it should be used is important, but equally, we should all be able to recognise how it shouldn’t be used.  Most important, from my point of view, however, is to know the limits of your knowledge, and to be wise enough to call in a real cryptographic expert when you’re approaching those limits.  Different people’s interests and abilities (in mathematics, apart from anything else) vary widely, so here is a broad list of different possible key results to consider (there’s a small illustrative sketch after the list):

  • learn when to use asymmetric cryptography, and when to use symmetric cryptography;
  • understand the basics of public key infrastructure (PKI);
  • understand what one-way functions are, and why they’re important;
  • understand the mathematics behind public key cryptography;
  • understand the various expiry and revocation options for certificates, their advantages and disadvantages.
  • (advanced) design a protocol using cryptographic primitives AND GET IT TORN APART BY AN EXPERT[4].
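
As a tiny, illustrative example of the “one-way functions” bullet (and nothing more ambitious than that), here is a sketch using a cryptographic hash from Python’s standard library; the “document” is, of course, made up.  It’s intuition-building only, not protocol design – which, as the final bullet says, you should get an expert to tear apart.

```python
# A toy demonstration of a one-way function: easy to compute forwards,
# no practical way to run backwards. The "document" is made up.

import hashlib

document = b"The contract as agreed between the two parties"
print(hashlib.sha256(document).hexdigest())   # anyone can recompute this to confirm the document is unchanged

tampered = b"The contract as agreed between the two Parties"
print(hashlib.sha256(tampered).hexdigest())   # a one-character change gives an unrelated digest,
                                              # and nobody can work backwards to forge a matching document
```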

4. Learn to think about systems

Nothing that we manage, write, design or test exists on its own: it’s all part of a larger system.  That system involves nasty awkwardnesses like managers, users, attackers, backhoes and tornadoes.  Think about the larger context of what you’re doing, and you’ll be a better security person for it.  Here are some suggestions for key results:

  • read a book about systems, e.g.:
    • Security Engineering: A Guide to Building Dependable Distributed Systems, by Ross Anderson;
    • Beautiful Architecture: Leading Thinkers Reveal the Hidden Beauty in Software Design, ed. Diomidis Spinellis and Georgios Gousios;
    • Building Evolutionary Architectures: Support Constant Change by Neal Ford, Rebecca Parsons & Patrick Kua[5].
  • arrange for the operations folks in your organisation to give a 15 minute presentation to your group (I can pretty much guarantee that they think about security differently to you – unless you’re in the operations group already, of course);
  • map out a system you think you know well, and then consider all the different “external” factors that could negatively impact its security;
  • write a review of “Sneakers” or “Hackers”, highlighting how unrealistic the film[6] is and, equally, how right on the money it is.

5. Read a blog regularly

THIS blog, of course, would be my preference (I try to post every Tuesday), but getting into the habit of reading something security-related[7] on a regular basis means that you’re going to keep thinking about security from a point of view other than your own (which is a bit of a theme for this article).  Alternatively, you can listen to a podcast, but as I don’t have a podcast myself, I clearly can’t endorse that[8].  Key results might include:

  • read a security blog once a week;
  • listen to a security podcast once a month;
  • write an article for a site such as (the brilliant) OpenSource.com[9].

Conclusion

I’m aware that I’ve abused the OKR approach somewhat by making a number of the key results non-measurable: sorry.  Exactly what you choose will depend on you, your situation, how long the objectives last for, and a multitude of other factors, so adjust accordingly.  Remember – you’re trying to develop yourself and your knowledge.


1 – and mine.

2 – yet.

3 – yes, I called it a profession.  Feel free to chortle.

4 – the bit in CAPS is vitally, vitally important.  If you ignore that, you’re missing the point.

5 – I’m currently reading this after hearing Dr Parsons speak at a conference.  It’s good.

6 – movie.

7 – this blog is supposed to meet that criterion, and quite often does…

8 – smiley face.  Ish.

9 – if you’re interested, please contact me – I’m a community moderator there.

What is a password for, anyway?

Which of my children should I use as my password?

This may look like it’s going to be one of those really short articles, because we all know what a password is for, right?  Well, I’m not sure we do.  Or, more accurately, I’m not sure that the answer is always the same, or has always been the same, so I think it’s worth spending some time looking at what passwords are used for, particularly as I’ve just seen (another) set of articles espousing the view that either a) passwords are dead; or b) multi-factor authentication is dead, and passwords are here to stay.

History

Passwords (or, as Wikipedia points out, “watchwords”) have been used in military contexts for centuries.  If you wish to pass the guard, you need to give them a word or phrase that matches what they’re expecting (“Who goes there?” “Friend” doesn’t really cut it).  Sometimes there’s a challenge and response, which allows both parties to have some level of assurance that they’re on the same side.  Whether one party is involved, or two, this is an authentication process – one side is verifying the identity of another.

Actually, it’s not quite that simple.  One side is verifying that the other party is a member of a group of people who have a particular set of knowledge (the password) in order to authorise them access to a particular area (that is being guarded).  Anyone without the password is assumed not to be in that group, and will be denied access (and may also be subject to other measures).

Let’s step forward to the first computer recorded as having had a password.  This was the Compatible Time-Sharing System (CTSS) at MIT around 1961, and its name gives you a clue as to the reason it needed a password: different people could use the computer at the same time, so it was necessary to provide a way to identify them and the jobs they were running.

Here, the reason for having a password seems a little different to our first use case.  Authentication is there not to deny or allow access to a physical area – or even a virtual area – but to allow one party to discriminate[1] between different parties.
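
As an aside, here is a minimal sketch of how a modern multi-user system might meet that same requirement – telling different parties apart by something they know – without ever storing the passwords themselves.  It uses salted, deliberately slow hashing from Python’s standard library; the account and password are invented (well, borrowed from a film that comes up later in this article).

```python
# A sketch of password verification without storing the password itself:
# salted, deliberately slow hashing (PBKDF2) plus constant-time comparison.

import hashlib, hmac, secrets

def new_user_record(password: str) -> dict:
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return {"salt": salt, "digest": digest}    # store this, never the password itself

def verify(record: dict, attempt: str) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), record["salt"], 200_000)
    return hmac.compare_digest(candidate, record["digest"])

users = {"alice": new_user_record("Joshua")}   # a nod to a film discussed below
print(verify(users["alice"], "Joshua"))        # True: treated as the account holder
print(verify(users["alice"], "guess"))         # False: access denied
```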

Getting more modern

I have no knowledge of how military uses of passwords developed, other than to note that by 1983, use of passwords on military systems was well-known enough to make it into the film[2] Wargames.  Here, the use of a password is much closer to our earlier example: though the area is virtual, the idea is to restrict access to it based on a verification of the party logging on.  There are two differences, however:

  1. it is not so much access to the area that is important, and more access to the processes available within the area;
  2. each user has a different password, it seems: the ability to guess the correct password gives instant access to a particular account.

Now, it’s not clear whether the particular account is hardwired to the telephone number that’s called in the film, but there are clearly different accounts for different users.  This is what you’d expect for a system where you have different users with different types of access.

It’s worth noting that there’s no sign that the school computer accessed in Wargames has multiple users: it seems that logging in at all gives you access to a single account – which is why auditing the system to spot unauthorised usage is, well, problematic.  The school system is also more about access to data and the ability to change it, rather than specific processes[3 – SPOILER ALERT].

Things start getting interesting

In the first few decades of computing, most systems were arguably mainly occupied with creating or manipulating data associated with the organisations that owned it[4].  That could be sales data, stock data, logistics data, design data, or personnel data, for instance.  It then also started to be intellectual property data such as legal documents, patent applications and the texts of books.  Passwords allowed the owners of the systems to decide who should have access to that data, and the processes to make changes.  And then something new happened.

People started getting their own computers.  You could do your own accounts on them, write your own books.  As long as they weren’t connected to any sort of network, the only passwords you really needed were to stop your family from accessing and changing data that wasn’t theirs.  What got really interesting, though, was when those computers started getting connected to networks, which meant that they could talk to other computers, and other computers could talk to them.  People started getting involved in chatrooms and shared spaces, and putting their views and opinions on them.

It turns out (and this should be of little surprise to regular readers of this blog) that not all people are good people.  Some of them are bad.  Some of them, given the chance, would pretend to be other people, and misrepresent their views.  Passwords were needed to allow you to protect your identity in a particular area, as well as to decide who was allowed into that area in the first place.  This is new: this is about protection of the party associated with the password, rather than the party whose resources are being used.

Our data now

What does the phrase above, “protect your identity”, really mean, though?  What is your identity?  It’s data that you’ve created and, increasingly, data that’s been created about you and associated with you.  That may be tax accounts data that you’ve generated for your own use, but it may equally well be your bank balance – and the ability to pay and receive money from and to an account.  It may be your exercise data, your general health data, your fertility cycle, the assignments you’ve written for your university course, your novel or pictures of your family.  Whereas passwords used to be there to protect data associated with an organisation, they’re now increasingly there to protect data associated with us, and that’s a big change.  We don’t always have control over that data – GDPR and similar legal instruments are attempts to help with that problem – but each password that is leaked gives away a bit of our identity.  Sometimes being able to change that data is what is valuable – think of a bank account – sometimes just having access to it – think of your criminal record[5] – is enough, but control over that access is important to us, and not just to the organisations that control us with which we interact.

This is part of the reason that ideas such as self-sovereign identity (where you get to decide who sees what of the data associated with you) are of interest to many people, of course, but they are likely to use passwords, too (at least as one method of authentication).  Nor am I arguing that passwords are a bad thing – they’re easy to understand, and people know how to use them – but I think it’s important for us to realise that they’re not performing the task they were originally intended to fulfil – or even the task they were first used for in a computing context.  There’s a responsibility on the security community to educate people about why they need to be in control of their passwords (or other authentication mechanisms), rather than relying on those who provide services to us to care about them.  In the end, it’s our data, and we’re the ones who need to care.

Now, which of my children should I use as my password: Joshua or Rache…?[6]


1 – that is “tell the difference”, rather than make prejudice-based choices.

2 – “movie”.

3 – such as, say, the ability to start a global thermonuclear war.

4 – or “them”, if you prefer your data plural.

5 – sorry – obviously nobody who reads this blog has ever run a red light.

6 – spot the popular culture references!