First aid – are you ready?

Your using the defibrillator is the best chance that the patient has of surviving.

Disclaimer: I am not a doctor, nor a medical professional. I will attempt not to give specific medical or legal advice in this article: please check your local medical and legal professionals before embarking on any course of action about which you are unsure.

This is, generally, a blog about security – that is, information security or cybersecurity – but I sometimes blog about other things. This is one of those articles. It’s still about security, if you will – the security and safety of those around you. Here’s how it came about: I recently saw a video on LinkedIn about a restaurant manager performing Abdominal Thrusts (it’s not called the Heimlich Manoeuvre any more due to trademarking) on a choking customer, quite possibly saving his life.

And I thought: I’ve done that.

And then I thought: I’ve performed CPR, and used a defibrillator, and looked after people who were drunk or concussed, and helped people having a diabetic episode, and encouraged a father to apply an epipen[1] to a confused child suffering from anaphylactic shock, and comforted a schoolchild who had just had an epileptic fit, and attended people in more than one car crash (typically referred to as an “RTC”, or “Road Traffic Collision” in the UK these days[2]).

And then I thought: I should tell people about these stories. Not to boast[3], but because if you travel a lot, or you commute to work, or you have a family, or you work in an office, or you ever go out to a party, or you play sports, or engage in hobby activities, or get on a plane or train or boat or drive anywhere, then there’s a decent chance that you may come across someone who needs your help, and it’s good – very good – if you can offer them some aid. It’s called “First Aid” for a reason: you’re not expected to know everything, or fix everything, but you’re the first person there who can provide aid, and that’s the best the patient can expect until professionals arrive.

Types of training

There are a variety of levels of first aid training that might be appropriate for you. These include:

  • family and children focussed;
  • workplace first aid;
  • hobby, sports and event first aid;
  • ambulance and local health service support and volunteering.

There’s an overlap between all of these, of course, and what you’re interested in, and what’s available to you, will vary based on your circumstances and location. There may be other constraints such as age and physical ability or criminal background checks: these will definitely be dependent on your location and individual context.

I’m what’s called, in the UK, a Community First Responder (CFR). We’re given some specific training to help provide emergency first aid in our communities. What exactly you do depends on your local ambulance trust – I’m with the East of England Ambulance Service Trust, and I have a kit with items to allow basic diagnosis and treatment which includes:

  • a defibrillator (AED) and associated pads, razors[4], shears[5], etc.
  • a tank of oxygen and various masks
  • some airway management equipment whose name I can never remember
  • glucogel for diabetic treatment
  • a pulse oximeter for heart rate and blood oxygen saturation measurement
  • gloves
  • bandages, plasters[6]
  • lots of forms to fill in
  • some other bits and pieces.

I also have a phone and a radio (not all CFRs get a radio, but our area is rural and has particularly bad mobile phone reception).

I’m on duty as I type this – I work from home, and my employer (the lovely Red Hat) is cool with my attending emergency calls in certain circumstances – and could be called out at any moment to an emergency in about a 10 mile/15km radius. Among the call-outs I’ve attended are cardiac arrests (“heart attacks”), fits, anaphylaxis (extreme allergic reactions), strokes, falls, diabetics with problems, drunks with problems, major bleeding, patients with difficulty breathing or chest pains, sepsis, and lots of stuff which is less serious (and which has maybe been misreported). The plan is that if it’s considered a serious condition and it looks like I can get there before an ambulance, or if the crew is likely to need more hands to help (for treating a full cardiac arrest, a good number of people can really help), then I get dispatched. I drive my own car, I’m not allowed sirens or lights, I’m not allowed to break the speed limit or go through red lights, and I don’t attend road traffic collisions. I volunteer whatever hours fit around my job and broader life, I don’t get paid, and I provide my own fuel and vehicle insurance. I get anywhere from zero to four calls a day (but most often zero or one).

There are volunteers in other fields who attend events, provide sports or hobby first aid (I did some scuba diving training a while ago), and there are all sorts of types of training for workplace first aid. Most workplaces will have designated first aiders who can be called on if there’s a problem.

The minimum to know

The people I’ve just noted above – the trained ones – won’t always be available. Sometimes, you – with no training – will be the first on scene. In most jurisdictions, if you attempt first aid, the law will look kindly on you, even if you don’t get it all perfect[7]. In some jurisdictions, there’s actually an expectation that you’ll step in. What should you know? What should you do?

Here’s my view. It’s not the view of a professional, and it doesn’t take into account everybody’s circumstances. Again, it’s my view, and it’s that you should consider getting enough training to be able to cope with two of the most common – and serious – medical emergencies.

  1. Everybody should know how to deal with a choking patient.
  2. Everybody should know how to do CPR (Cardiopulmonary resuscitation) – chest compressions, at minimum, but with artificial respiration if you feel confident.

In the first of these cases, if someone is choking, and they continue to fail to breathe, they will die.

In the second of these cases, if someone’s heart has stopped beating, they are dead. Doing nothing means that they stay that way. Doing something gives them a chance.

There are videos and training available on the Internet, or provided by many organisations.

The minimum to try

If you come across somebody who is in cardiac arrest, call the emergency services. Dispatch someone (if you’re not alone) to try to find a defibrillator (AED) – the emergency services call centre will often help with this, or there’s an app called “GoodSam” which will locate one for you.

Use the defibrillator.

They are designed for untrained people. You open it up, and it will talk to you. Do what it says.

Even if you don’t feel confident giving CPR, use a defibrillator.

I have used a defibrillator. They are easy to use.

Use that defibrillator.

The defibrillator is not the best chance that the patient has of surviving: your using the defibrillator is the best chance that the patient has of surviving.

Conclusion

Providing first aid for someone in a serious situation doesn’t always work. Sometimes people die. In fact, in the case of a cardiac arrest (heart attack), the percentage of times that CPR is successful is low – even in a hospital setting, with professionals on hand. If you have tried, you’ve given them a chance. It is not your fault if the outcome isn’t perfect. But if you hadn’t tried, there was no chance.

Please respect and support professionals, as well. They are often busy and concerned, and may not have the time to thank you, but your help is appreciated. We are lucky, in our area, that the huge majority of EEAST ambulance personnel are very supportive of CFRs and others who help out in an emergency.

If this article has been interesting to you, and you are considering taking some training, then get to the end of the post, share it via social media(!), and then search online for something appropriate to you. There are many organisations who will provide training – some for free – and many opportunities for volunteering. You know that if a member of your family needed help, you would hope that somebody was capable and willing to provide it.

Final note – if you have been affected by anything in this article, please find some help, whether professional or just with friends. Many of the medical issues I’ve discussed are distressing, and self care is important (it’s one of the things that EEAST takes seriously for all its members, including its CFRs).


1 – a special adrenaline-administering device (don’t use somebody else’s – they’re calibrated pretty carefully to an individual).

2 – calling it an “accident” suggests it was no-one’s fault, when often, it really was.

3 – well, maybe a little bit.

4 – to shave hairy chests – no, really.

5 – to cut through clothing. And nipple chains, if required. Again, no, really.

6 – “Bandaids” for our US cousins.

7 – please check your local jurisdiction’s rules on this.

16 ways in which security folks are(n’t) like puppies

Following the phenomenal[1] success of my ground-breaking[2] article 16 ways in which users are(n’t) like kittens, I’ve decided to follow up with a yet more inciteful[3] article on security folks. I’m using the word “folks” to encompass all of the different types of security people whom “normal”[4] people think of as “those annoying people who are always going on about that security stuff, and always say ‘no’ when we want to do anything interesting, important, urgent or business-critical”. I think you’ll agree that “folks” is a more accessible shorthand term, and now that I’ve made it clear who we’re talking about, we can move away from that awkwardness to the important[5] issue at hand.

As with my previous article on cats, I’d like my readers to pretend that this is a carefully researched article, and not one that I hastily threw together at short notice because I got up a bit late today.

Note 1: in an attempt to make security folks seem a little bit useful and positive, I’ve sorted the answers so that the ones where security folks turn out actually to share some properties with puppies appear at the end. But I know that I’m not really fooling anyone.

Note 2: the picture (credit: Miriam Bursell) at the top of this article is of my lovely basset hound Sherlock, who’s well past being a puppy. But any excuse to post a picture of him is fair game in my book. Or on my blog.

Research findings

Hastily compiled table

Property | Security folks | Puppies
Completely understand and share your priorities | No | No
Everybody likes them | No | Yes
Generally fun to be around | No | Yes
Generally lovable | No | Yes
Feel just like a member of the family | No | Yes
Always seem very happy to see you | No | Yes
Are exactly who you want to see at the end of a long day | No | Yes
Get in the way a lot when you’re in a hurry | Yes | Yes
Make a lot of noise about things you don’t care about | Yes | Yes
Don’t seem to do much most of the time | Yes | Yes
Constantly need cleaning up after | Yes | Yes
Forget what you told them just 10 minutes ago | Yes | Yes
Seem to spend much of their waking hours eating or drinking | Yes | Yes
Wake you up at night to deal with imaginary attackers | Yes | Yes
Can turn bitey and aggressive for no obvious reason | Yes | Yes
Have tickly tummies | Yes[6] | Yes

1 – relatively.

2 – no, you’re right: this is just hype.

3 – this is almost impossible to prove, given quite how uninciteful the previous one was.

4 – i.e., non-security.

5 – well, let’s pretend.

6 – well, I know I do.

Why Chatham House Rulez for security

Security sometimes requires sharing – but not attribution

In June 1927, someone had a brilliant idea.  Or, at least, that’s when the idea was first codified, at a meeting of Royal Institute of International Affairs at Chatham House in London.  The idea was this: all attendees of the meeting could quote comments made at the meeting, but they weren’t allowed to say who had made the comment.

This became known as the Chatham House Rule, and the most recent incarnation is defined thus:

When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.

This is brilliantly clever.  It allows at least two things:

  1. for the sharing of information which might be sensitive to a particular entity when associated with that entity, but which is still useful when applied without that attribution;
  2. for the sharing of views or opinions which, when associated with a particular person or organisation, might cause wider issues or problems.

The upshot of this is that if somebody (say, Person A) values the expertise, opinion and experience of another person (say, Person B), then they can share that other person’s views with people who may not know Person B, or whose views on Person B may be biased by their background or associations.  This is a form of transitive trust, and situations where transitive trust is made explicit are, in my opinion, to be lauded (such trust relationships are too often implicit, rather than explicit).

The Chatham House Rule and open source

What has this got to do with open source, though?  My answer is: a lot.

Security is one of those areas which can have an interesting relationship with open source.  I’m passionately devoted to the principle that open-ness is vital to security, but there are times when this is difficult.  The first is to do with data, and the second is to do with perceived expertise.

Why data is difficult

While we all (hopefully) want to ensure that all our security-related code is open source, the same cannot be said for data.  There is absolutely a place for open data – citizen-related data is the most obvious, e.g. bus timetables, town planning information – and there’s data that we’d like to be more open, but not if it can be traced to particular entities – aggregated health information is great, but people aren’t happy about their personal health records being exposed.  The same goes for financial data – aggregated information about people’s spending and saving habits is extremely useful, but I, for one, don’t want my bank records revealed to all and sundry.

Moving specifically to security, what about data such as the number of cyber-attacks – successful and unsuccessful – against companies?  The types that were most successful?  The techniques that were used to mitigate them?  All of these are vastly useful to the wider community, and there’s a need to share them more widely.  We’re seeing some initiatives to allow this already, and aggregation of this data is really important.

There comes a time, however, when particular examples are needed.  And as soon as you have somebody stand up and say “This is what happened to us”, then they’re likely to be in trouble from a number of directions, which may include: their own organisation, their lawyers, their board, their customers and future attackers, who can use that information to their advantage.  This is where the Chatham House Rule can help: it allows experts to give their view and be listened to without so much danger from the parties listed above.

It also allows for other people to say “we hadn’t thought of that”, or “we’re not ready for that” or similar without putting their organisations – or their reputations – on the line.  Open source needs this, and there are times when those involved in open source security, in particular, need to be able to share the information they know in a way which doesn’t put their organisations in danger.

Why expertise is difficult

Another area of difficulty is expertise, or more specifically, trust in expertise.  Most organisations aim for a meritocratic approach – or say they do – at least within that organisation.  But the world is full of bias, particularly between organisations.  I may be biased against views held or expressed by a particular organisation, just because of their past history and my interactions with that company, but it is quite possible that there are views held and expressed by individuals from that company which, if separated from their attribution, I might take seriously.  I may be biased against a particular person, based on my previous interactions with him/her, or just on my underlying prejudices.  It only needs one person who does not hold my biases to represent those views: as long as that person trusts the organisation, or even just the individual, expressing them, I can process and value those views myself, gaining valuable insight from them.  The Chatham House Rule can allow that to happen.

In fact, the same goes for intra-organisation biases: maybe product management isn’t interested in the views of marketing, but what if there are important things to learn from within that department, that product management can’t hear because of that bias?  The Chatham House Rule allows an opportunity to get past that.

To return to open source, many contributors are employed by a particular organisation, and it can be very difficult for them to express opinions around open source when that organisation may not hold the same views, however carefully they try to separate themselves from the official line.  Even more important, in terms of security, it may very well be that they can bring insights which are relevant to a particular security issue which their company is not happy about being publicly known, but which could benefit one or more open source projects.  To be clear: I’m not talking, here, about exposing information which is specifically confidential, but about sharing information, with the permission of the organisation, within specific constraints.

More on open source

There are all sorts of biases within society, and open source is, alas, not without its own.  When a group of people gets to know each other well, however, it is often the case that members of that group can forge a respect for each other which goes beyond gender, age, academic expertise, sexuality, race or the like.  This is a perfect opportunity for meetings under the Chatham House Rule: it gives this group the chance to discuss and form opinions which can be represented to their peers – or the rest of the world – without having to worry so much about any prejudices or biases that might be aimed at particular members.

Finally – a note of caution

The Chatham House Rule provides a great opportunity to share expertise and knowledge, but there is also a danger that it can allow undue weight to be given to anecdotes.  Stories are a great way of imparting information, but without data to back them up, they are not as trustworthy as they might be.  The fact that the Chatham House Rule inhibits external attribution does not mean that due diligence should not be applied within such a meeting to ensure that information is backed up by data.

Are my messages safe? No, but…

“Are any of these messaging services secure?”

Today brought another story about the insecurity of a messenger app, and by a brilliant coincidence, I’m listening to E.L.O.’s “Secret Messages” as I start to compose this post. This article isn’t, however, about my closet 70s musical tastes[1], but about the messages you send from your mobile phone, tablet or computer to friends, families and colleagues, and how secure they are.

There are loads of options out there for messaging services, with some of the better-known including WhatsApp, Facebook Messenger, Google Chat, Signal and Telegram. Then there’s good old SMS. First question first: do I use any of these myself? Absolutely. I also indulge in Facebook, LinkedIn and Twitter. Do I trust these services? Let’s get back to this question later.

A more pressing question might be: “are any of these messaging services secure?” It turns out that this is a really simple question to answer: of course they’re not. No service is “secure”: it’s a key principle of IT security that there is no “secure”. This may sound like a glib – and frankly unhelpful – answer, but it’s not supposed to be. Once you accept that there is no perfectly secure system, you’re forced to consider what you are trying to achieve, and what risks you’re willing to take. This is a recurring theme of this blog, so regular readers shouldn’t be surprised.

Most of the popular messaging services can be thought of as consisting of at least seven components. Let’s assume that Alice is sending a message from her phone to Bob’s phone. Here’s what the various components might look like:

  1. Alice’s messenger app
  2. Alice’s phone
  3. Communications channel Alice -> server
  4. Server
  5. Communications channel server -> Bob
  6. Bob’s phone
  7. Bob’s messenger app

Each of these is a possible attack surface: combined, they make up the attack surface for what we can think of as the Alice <-> Bob messaging system.

Let’s start in the middle, with the server. For Alice and Bob to be happy with the security of the system for their purposes, they must be happy that this server is sufficiently secure to cope with whatever risks they need to address. So, it may be that they trust that the server (which will be run, ultimately, by fallible and possibly subornable humans who also are subject to legal jurisdiction(s)) is not vulnerable. Not vulnerable to whom? Hacktivists? Criminal gangs? Commercial competitors? State actors? Legal warrants from the server’s jurisdiction? Legal warrants from Alice or Bob’s jurisdiction(s)? The likelihood of successful defence against each of these varies, and the risk posed to Alice and Bob by each is also different, and needs to be assessed, even if that assessment is “we can ignore this”.

Each of the other components is subject to similar questions. For the communication channels, we will assume that they’re encrypted, but we have to be sure that the cryptography and cryptographic protocols have been correctly implemented, and that all keys are appropriately protected by all parties. The messaging apps must be up to date, well designed and well implemented. Obviously, if they’re open source, you have a much, much better chance of being sure of the security of both the software and the protocols (never, ever use cryptography or protocols which have not been open sourced and peer reviewed: just don’t). The phones on which the software is running must also be uncompromised – not to mention protected by Alice and Bob from physical tampering and delivered new to them from the manufacturer with no vulnerabilities[2].
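
To make that assessment a little more concrete, here is a minimal, purely illustrative sketch (in Python; the component labels, the list of threat actors and the assess() placeholder are my own assumptions, not any real service’s architecture or API) of the exercise that Alice and Bob would be walking through:

    # Illustrative only: a toy walk-through of the seven components and some
    # of the possible threat actors. The assessment values are placeholders
    # for the judgement Alice and Bob would have to make for themselves.

    components = [
        "Alice's messenger app",
        "Alice's phone",
        "channel Alice -> server",
        "server",
        "channel server -> Bob",
        "Bob's phone",
        "Bob's messenger app",
    ]

    threat_actors = [
        "hacktivists",
        "criminal gangs",
        "commercial competitors",
        "state actors",
        "warrants (server's jurisdiction)",
        "warrants (Alice's or Bob's jurisdiction)",
    ]

    def assess(component, actor):
        """Placeholder for the real judgement: how likely is a successful
        attack by this actor on this component, and what risk does it pose?"""
        return "we can ignore this"  # or "needs mitigation", "deal-breaker", ...

    # The overall attack surface is every component combined with every actor
    # Alice and Bob care about: each pairing deserves an explicit decision.
    for component in components:
        for actor in threat_actors:
            print(f"{component} vs {actor}: {assess(component, actor)}")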

How sure are Alice and Bob of all of the answers to all of these questions? The answer, I would submit, is pretty much always going to be “not completely”. Does this mean that Alice and Bob should not use messaging services? Not necessarily. But it does mean that they should consider what messages they should exchange via any particular messaging service. They might be happy to arrange a surprise birthday party for a colleague, but not to exchange financial details of a business deal. They might be happy to schedule a trip to visit a Non-Governmental Organisation to discuss human rights, but not to talk about specific cases over the messaging service.

This is the view that I take: I consider what information I’m happy to transfer over or store on messaging services and social media platforms. There are occasions where I may be happy to pass sensitive data across messaging services, but break the data up between different services (using “different channels” in the relevant parlance): using one service for a username and another for the associated password, for instance. I still need to be careful about shared components: the two phones in the example above might qualify, but I’ve reduced the shared attack surface, and therefore the risk. I’m actually more likely to require that the password is exchanged over a phone call, and if I’m feeling particularly paranoid, I’ll use a different phone to receive that call.

My advice, therefore, is this:

  1. Keep your devices and apps up to date;
  2. Evaluate the security of your various messaging service options;
  3. Consider the types of information that you’ll be transferring and/or storing;
  4. Think about the risks you’re willing to accept;
  5. Select the appropriate option on a case-by-case basis;
  6. Consider using separate channels where particularly sensitive data can be split for added security.

1 – I’m also partial to 1920’s Jazz and a bit of Bluegrass, as it happens.

2 – yeah, right.

Trust you? I can’t trust myself.

Cognitive biases are everywhere.

William Gibson’s book Virtual Light includes a bar which goes by the name of “Cognitive Dissidents”.  I noticed this last night when I was reading in bed, and it seemed apposite, because I wanted to write about cognitive bias, and the fact that I’d noticed it so strikingly was, I realised, an example of exactly that: in this case, The Frequency Illusion, or The Baader-Meinhof Effect.  Cognitive biases are everywhere, and there are far, far more of them than you might expect.

The problem is that we think of ourselves as rational beings, and it’s quite clear from decades – in some cases, centuries – of research that we’re anything but.  We’re very likely to tell ourselves that we’re rational, and it’s such a common fallacy that The Illusion of Validity (another cognitive bias) will help us believe it.  Cognitive biases are, according to Wikipedia, “systematic patterns of deviation from norm or rationality in judgment” or put maybe more simply, “our brains managing to think things which seem sensible, but aren’t.”[1]

The Wikipedia entry above gives lots of examples of cognitive bias – lots and lots of examples – and I’m far from being an expert in the field.  The more I think about risk and how we consider risk, however, the more I’m convinced that we – security professionals and those with whom we work – need to have a better understanding of our own cognitive biases and those of the people around us.  We like to believe that we make decisions and recommendations rationally, but it’s clear from the study of cognitive bias that:

  1. we generally don’t; and
  2. even if we do, we shouldn’t expect those to whom we present them to consider them entirely rationally.

I should be clear, before we continue, that there are opportunities for abuse here.  There are techniques beloved of advertisers and the media to manipulate our thinking to their ends, which we could use to our advantage and to try to manipulate others.  One example is the Framing Effect.  If you want your management not to fund a new anti-virus product because you have other ideas for the earmarked funding, you might say:

  • “Our current product is 80% effective!”

Whereas if you do want them to fund it, you might say:

  • “Our current product is 20% ineffective!”

People react in different ways, depending on how the same information is presented, and the way the two statements above are framed aims to manipulate your listeners to the outcome you want.  So, don’t do this, and more important, look for vendors[2] who are doing this, and call them out on it.  Here, then, are three of the more obvious cognitive biases that you may come across:

  • Irrational escalation or Sunk cost fallacy – this is the tendency for people to keep throwing money or resources at a project, vendor or product when it’s clear that it’s no longer worth it, with the rationale that to stop spending money (or resources) now would waste what has already been spent – when it’s actually already gone.  This often comes over as misplaced pride, or people just not wanting to let go of a pet project because they’ve become attached to it, but it’s really dangerous for security, because if something clearly isn’t effective, we should be throwing it out, not sending good money after bad.
  • Normalcy bias – this is the refusal to address a risk because it’s never happened before, and is an interesting one in security, for the simple reason that so many products and vendors are trying to make us do exactly that.  What needs to happen here is that a good risk analysis needs to be performed, and then measures put in place to deal with those which are actually high priority, not those which may not happen, or which don’t seem likely at first glance.
  • Observer-expectancy effect – this is when people who are looking for particular results find them, because they have (consciously or unconsciously) misused the data.  This is common in situations such as those where there is a belief that a particular attack or threat is likely, and the data available (log files, for instance) are used in a way which confirms this expectation, rather than analysed and presented in ways which are more neutral.

I intend to address more specific cognitive biases in future articles, tying them even more closely to security concerns – if you have any particular examples or war stories, I’d love to hear them.


1 – my words

2 – or, I suppose, underhand colleagues…

3 tips to avoid security distracti… Look: squirrel!

Executive fashions change – and not just whether shoulder-pads are “in”.

There are many security issues to worry about as an organisation or business[1].  Let’s list some of them:

  • insider threats
  • employee incompetence
  • hacktivists
  • unpatched systems
  • patched systems that you didn’t test properly
  • zero-day attacks
  • state actor attacks
  • code quality
  • test quality
  • operations quality
  • underlogging
  • overlogging
  • employee-owned devices
  • malware
  • advanced persistent threats
  • data leakage
  • official wifi points
  • unofficial wifi points
  • approved external access to internal systems via VPN
  • unapproved external access to internal systems via VPN
  • unapproved external access to internal systems via backdoors
  • junior employees not following IT-mandated rules
  • executives not following IT-mandated rules

I could go on: it’s very, very easy to find lots of things that should concern us.  And it’s particularly confusing if you just go around finding lots of unconnected things which are entirely unrelated to each other and aren’t even of the same type[2]. I mean: why list “code quality” in the same list as “executives not following IT-mandated rules”?  How are you supposed to address issues which are so diverse?

And here, of course, is the problem: this is what organisations and businesses do have to address.  All of these issues may present real risks to the functioning (or at least continued profitability) of the organisations[3].  What are you supposed to do?  How are you supposed to keep track of all these things?

The first answer that I want to give is “don’t get distracted”, but that’s actually the final piece of advice, because it doesn’t really work unless you’ve already done some work up front.  So what are my actual answers?

1 – Perform risk analysis

You’re never going to be able to give your entire attention to everything, all the time: that’s not how life works.  Nor are you likely to have sufficient resources to be happy that everything has been made as secure as you would like[4].  So where do you focus your attention and apply those precious, scarce resources?  The answer is that you need to consider what poses the most risk to your organisation.  The classic way to do this is to use the following formula:

Risk = Likelihood x Impact

This looks really simple, but sadly it’s not, and there are entire books and companies dedicated to the topic.  Impact may be to reputation, to physical infrastructure, system up-time, employee morale, or one of hundreds of other items.  The difficulty of assessing the likelihood may range from simple (“the failure rate on this component is once every 20 years, and we have 500 of them”[5]) to extremely difficult (“what’s the likelihood of our CFO clicking on a phishing attack?”[6]).  Once it’s complete, however, for all the various parts of the business you can think of – and get other people from different departments in to help, as they’ll think of different risks, I can 100% guarantee – then you have an idea of what needs the most attention.  (For now: because you need to repeat this exercise on a regular basis, considering changes to risk, your business and the threats themselves.)
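
As a rough illustration of what falls out of that formula, here is a minimal sketch (in Python; the risks, the 1-5 scales and all the numbers are entirely made up) of scoring a handful of hypothetical risks and ranking them by likelihood times impact:

    # Toy risk register: likelihood and impact are on arbitrary 1-5 scales,
    # and the entries are invented purely to show the arithmetic.
    risk_register = [
        # (risk, likelihood 1-5, impact 1-5)
        ("unpatched internet-facing system exploited", 4, 5),
        ("laptop left on a train", 3, 3),
        ("CFO clicks on a phishing link", 2, 5),
        ("office wifi outage", 4, 2),
    ]

    scored = [
        {"risk": name, "likelihood": l, "impact": i, "score": l * i}
        for name, l, i in risk_register
    ]

    # Highest score first: this ordering suggests where to focus attention
    # and scarce resources, and it needs revisiting on a regular basis.
    for entry in sorted(scored, key=lambda e: e["score"], reverse=True):
        print(f"{entry['risk']}: {entry['score']} "
              f"(likelihood {entry['likelihood']} x impact {entry['impact']})")

The arithmetic is trivial: the value is in the ordering, and in being forced to write the likelihoods and impacts down at all.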

2 – Identify and apply measures

You have a list of risks.  What to do?  Well, a group of people – and this is important, as one person won’t have a good enough view of everything – needs to sit[7] down and work out what measures to put in place to try to reduce or at least mitigate the various risks.  The amount of resources that the organisation should be willing to apply to this will vary from risk to risk, and should generally be proportional to the risk being addressed, but won’t always be of the same kind.  This is another reason why having different people involved is important.  For example, one risk that you might be able to mitigate by spending £50,000 (about the same amount in US dollars) on a software solution might be equally well addressed by a physical barrier and a sign for a few hundred pounds.  On the other hand, the business may decide that some risks should not be mitigated against directly, but rather insured against.  Others may require training regimes and investment in t-shirts.
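
Continuing the same toy register, here is an equally hypothetical sketch (in Python; the measures, costs, “risk reduction” estimates and the budget rule are all invented for illustration) of weighing candidate measures against a single risk:

    # Hypothetical continuation of the toy register: candidate measures for
    # one risk, each with an invented cost and a rough estimate of how much
    # of the risk it removes.
    risk = {"name": "tailgating into the server room", "score": 12}

    candidate_measures = [
        {"measure": "software access-analytics solution", "cost_gbp": 50_000, "risk_reduction": 0.8},
        {"measure": "physical barrier plus a sign", "cost_gbp": 400, "risk_reduction": 0.75},
        {"measure": "training regime and t-shirts", "cost_gbp": 2_000, "risk_reduction": 0.5},
    ]

    # A crude proportionality rule: budget per point of risk score. The
    # number is made up; a real organisation would set this per impact type.
    budget_gbp = risk["score"] * 1_000

    acceptable = [m for m in candidate_measures
                  if m["cost_gbp"] <= budget_gbp and m["risk_reduction"] >= 0.7]

    # Of the measures that fit the budget and reduce the risk enough, pick
    # the cheapest; if none do, escalate the decision (accept, insure or
    # find more budget) to someone senior enough to own it.
    if acceptable:
        choice = min(acceptable, key=lambda m: m["cost_gbp"])
        print(f"Chosen measure: {choice['measure']} (£{choice['cost_gbp']})")
    else:
        print("No acceptable measure within budget: escalate the decision")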

Once you’ve identified what measures are appropriate, and how much they are going to cost, somebody’s going to need to find money to apply them.  Again, it may be that they are not all mitigated: it may just be too expensive.  But the person who makes that decision should be someone senior – someone senior enough to take the flak should the risk come home to roost[8].

Then you apply your measures, and, wherever possible, you automate them and their reporting.  If something is triggered, or logged, you then know:

  1. that you need to pay attention, and maybe apply some more measures;
  2. that the measure was at least partially effective;
  3. that you should report to the business how good a job you – and all those involved – have done.

3 – Don’t get distracted

My final point is where I’ve been trying to go with this article all along: don’t get distracted.  Distractions come in many flavours, but here are three of the most dangerous.

  1. A measure was triggered, and you start paying all of your attention to that measure, or the system(s) that it’s defending.  If you do this, you will miss all of the other attacks that are going on.  In fact, here’s your opportunity to look more broadly and work out whether there are risks that you’d not considered, and attacks that are coming in right now, masked by the one you have noticed.
  2. You assume that the most expensive measures are the ones that require the most attention, and ignore the others.  Remember: the amount of resources you should be ready to apply to each risk should be proportional to the risk, but the amount actually applied may not be.  Check that the barrier you installed still works and that the sign is still legible – and if not, then consider whether you need to spend that £50,000 on software after all.  Also remember that just because a risk is small, that doesn’t mean that it’s zero, or that the impact won’t be high if it does happen.
  3. Executive fashions change – and not just whether shoulder-pads are “in”, or whether the key to the boardroom bathroom is now electronic: the point is that executives (like everybody else) are bombarded with information.  The latest concern that your C-levels read about in the business section, or hear about from their buddies on the golf course[9], may require consideration, but you need to ensure that it’s considered in exactly the same way as all of the other risks that you addressed in the first step.  You need to be firm about this – both with the executive(s) and with yourself, because although I identified this as an executive risk, the same goes for the rest of us.  Humans are generally much better at keeping their focus on the new, shiny thing in front of them than on the familiar and the mundane.

Conclusion

You can’t know everything, and you probably won’t be able to cover everything, either, but having a good understanding of risk – and maintaining your focus in the event of distractions – means that at least you’ll be covering and managing what you can know, and can be ready to address new ones as they arrive.


1 – let’s be honest: there are lots if you’re a private individual, too, but that’s for another day.

2 – I did this on purpose, annoying as it may be to some readers. Stick with it.

3 – not to mention the continued employment of those tasked with stopping these issues.

4 – note that I didn’t write “everything has been made secure”: there is no “secure”.

5 – and, to be picky, this isn’t as simple as it looks either: does that likelihood increase or decrease over time, for instance?

6 – did I say “extremely difficult”?  I meant to say…

7 – you can try standing, but you’re going to get tired: this is not a short process.

8 – now that, ladles and gentlespoons, is a nicely mixed metaphor, though I did stick with an aerial theme.

9 – this is a gross generalisation, I know: not all executives play golf.  Some of them play squash/racketball instead.

5 (Professional) development tips for security folks

… write a review of “Sneakers” or “Hackers”…

To my wife’s surprise[1], I’m a manager these days.  I only have one report, true, but he hasn’t quit[2], so I assume that I’ve not messed this management thing up completely[2].  One of the “joys” of management is that you get to perform performance and development (“P&D”) reviews, and it’s that time of year at the wonderful Red Hat (my employer).  In my department, we’re being encouraged (Red Hat generally isn’t in favour of actually forcing people to do things) to move to “OKRs”, which are “Objectives and Key Results”.  Like any management tool, they’re imperfect, but they’re better than some.  You’re supposed to choose a small number of objectives (“learn a (specific) new language”), and then have some key results for each objective that can be measured somehow (“be able to check into a hotel”, “be able to order a round of drinks”) after a period of time (“by the end of the quarter”).  I’m simplifying slightly, but that’s the general idea.

Anyway, I sometimes get asked by people looking to move into security for pointers on how to get into the field.  My background and route to where I am are fairly atypical, so I’m very sensitive to the fact that some people won’t have taken Computer Science at university or college, and may be pursuing alternative tracks into the profession[3].  As a service to those, here are a few suggestions as to what they can do which take a more “OKR” approach than I provided in my previous article Getting started in IT security – an in/outsider’s view.

1. Learn a new language

And do it with security in mind.  I’m not going to be horribly prescriptive about this: although there’s a lot to be said for languages which are aimed at security use cases (Rust is an obvious example), learning any new programming language, and thinking about how it handles (or fails to handle) security, is going to benefit you.  You’re going to want to choose key results that:

  • show that you understand what’s going on with key language constructs to do with security;
  • show that you understand some of the advantages and disadvantages of the language;
  • (advanced) show how to misuse the language (so that you can spot similar mistakes in future) – see the sketch just after this list.
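
To give a flavour of that last point, here is a minimal sketch (in Python with its built-in sqlite3 module, chosen purely as an example language and library, and with made-up table names and data) of a classic misuse – building a query by string formatting – alongside the safer, parameterised form:

    # Illustrative only: a classic way to misuse a language/library (building
    # SQL by string formatting) versus the safer, parameterised alternative.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    user_input = "nobody' OR '1'='1"  # attacker-controlled value

    # Misuse: the input is spliced straight into the query, so the attacker's
    # quoting changes the meaning of the statement (SQL injection).
    bad = conn.execute(
        f"SELECT secret FROM users WHERE name = '{user_input}'"
    ).fetchall()

    # Safer: the value is passed as a parameter, so it is treated as data,
    # not as part of the SQL.
    good = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (user_input,)
    ).fetchall()

    print("string formatting leaks:", bad)   # [('hunter2',)]
    print("parameterised query:", good)      # []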

2. Learn a new language (2)

This isn’t a typo.  This time, I mean learn about how other functions within your organisation talk.  All of these are useful:

  • risk and compliance
  • legal (contracts)
  • legal (Intellectual Property Rights)
  • marketing
  • strategy
  • human resources
  • sales
  • development
  • testing
  • UX (User Experience)
  • IT
  • workplace services

Who am I kidding?  They’re all useful.  You’re learning somebody else’s mode of thinking, what matters to them, and what makes them tick.  Next time you design something, make a decision which touches on their world, or consider installing a new app, you’ll have another point of view to consider, and that’s got to be good.  Key results might include:

  • giving a 15 minute presentation to the other group about your work;
  • arranging a 15 minute presentation to your group about the other group’s work;
  • (advanced) giving a 15 minute presentation yourself to your group about the other group’s work.

3. Learning more about cryptography

So much of what we do as security people comes down to or includes some cryptography.  Understanding how it should be used is important, but equally, understanding how it shouldn’t be used is something we should all strive for.  Most important, from my point of view, however, is to know the limits of your knowledge, and to be wise enough to call in a real cryptographic expert when you’re approaching those limits.  Different people’s interests and abilities (in mathematics, apart from anything else) vary widely, so here is a broad list of different possible key results to consider:

  • learn when to use asymmetric cryptography, and when to use symmetric cryptography;
  • understand the basics of public key infrastructure (PKI);
  • understand what one-way functions are, and why they’re important (see the sketch just after this list);
  • understand the mathematics behind public key cryptography;
  • understand the various expiry and revocation options for certificates, their advantages and disadvantages;
  • (advanced) design a protocol using cryptographic primitives AND GET IT TORN APART BY AN EXPERT[4].
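
On the one-way function point above, here is a minimal sketch using Python’s standard hashlib module – nothing exotic, and only intended to illustrate the property that matters: going forwards is trivial, going backwards is not:

    # A sketch of the one-way property, using a cryptographic hash from
    # Python's standard library: computing the digest is quick and easy;
    # recovering the input from the digest is not.
    import hashlib

    message = b"meet me at 19:00"
    digest = hashlib.sha256(message).hexdigest()
    print(digest)  # easy, fast and deterministic

    # A tiny change to the input gives a completely different digest, with
    # no useful relationship for an attacker to exploit.
    print(hashlib.sha256(b"meet me at 19:01").hexdigest())

    # There's no function that maps the digest back to the message: the best
    # a naive attacker can do is guess candidate inputs and compare digests.
    def naive_preimage_search(target, guesses):
        for guess in guesses:
            if hashlib.sha256(guess).hexdigest() == target:
                return guess
        return None

    print(naive_preimage_search(digest, [b"meet me at 18:00", b"meet me at 19:00"]))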

4. Learn to think about systems

Nothing that we manage, write, design or test exists on its own: it’s all part of a larger system.  That system involves nasty awkwardnesses like managers, users, attackers, backhoes and tornadoes.  Think about the larger context of what you’re doing, and you’ll be a better security person for it.  Here are some suggestions for key results:

  • read a book about systems, e.g.:
    • Security Engineering: A Guide to Building Dependable Distributed Systems, by Ross Anderson;
    • Beautiful Architecture: Leading Thinkers Reveal the Hidden Beauty in Software Design, ed. Diomidis Spinellis and Georgios Gousios;
    • Building Evolutionary Architectures: Support Constant Change by Neal Ford, Rebecca Parsons & Patrick Kua[5].
  • arrange for the operations folks in your organisation to give a 15 minute presentation to your group (I can pretty much guarantee that they think about security differently to you – unless you’re in the operations group already, of course);
  • map out a system you think you know well, and then consider all the different “external” factors that could negatively impact its security;
  • write a review of “Sneakers” or “Hackers”, highlighting how unrealistic the film[6] is, and, equally, how right on the money it is.

5. Read a blog regularly

THIS blog, of course, would be my preference (I try to post every Tuesday), but getting into the habit of reading something security-related[7] on a regular basis means that you’re going to keep thinking about security from a point of view other than your own (which is a bit of a theme for this article).  Alternatively, you can listen to a podcast, but as I don’t have a podcast myself, I clearly can’t endorse that[8].  Key results might include:

  • read a security blog once a week;
  • listen to a security podcast once a month;
  • write an article for a site such as (the brilliant) OpenSource.com[9].

Conclusion

I’m aware that I’ve abused the OKR approach somewhat by making a number of the key results non-measurable: sorry.  Exactly what you choose will depend on you, your circumstances, how long the objectives last for, and a multitude of other factors, so adjust for your situation.  Remember – you’re trying to develop yourself and your knowledge.


1 – and mine.

2 – yet.

3 – yes, I called it a profession.  Feel free to chortle.

4 – the bit in CAPS is vitally, vitally important.  If you ignore that, you’re missing the point.

5 – I’m currently reading this after hearing Dr Parsons speak at a conference.  It’s good.

6 – movie.

7 – this blog is supposed to meet that criterion, and quite often does…

8 – smiley face.  Ish.

9 – if you’re interested, please contact me – I’m a community moderator there.