Of projects, products and (security) community

Not all open source is created (and maintained) equal.

Open source is a good thing.  Open source is a particularly good thing for security.  I’ve written about this before (notably in Disbelieving the many eyes hypothesis and The commonwealth of Open Source), and I’m going to keep writing about it.  In this article, however, I want to talk a little more about a feature of open source which is arguably both a possible disadvantage and a benefit: the difference between a project and a product.  I’ll come down firmly on one side (spoiler alert: for organisations, it’s “product”), but I’d like to start with a little disclaimer.  I am employed by Red Hat, and we are a company which makes money from supporting open source.  I believe this is a good thing, and I approve of the model that we use, but I wanted to flag any potential bias early in the article.

The main reason that open source is good for security is that you can see what’s going on when there’s a problem, and you have a chance to fix it.  Or, more realistically, unless you’re a security professional with particular expertise in the open source project in which the problem arises, somebody else has a chance to fix it. We hope that there are sufficient security folks with the required expertise to fix security problems and vulnerabilities in software projects about which we care.

It’s a little more complex than that, however.  As an organisation, there are two main ways to consume open source:

  • as a project: you take the code, choose which version to use, compile it yourself, test it and then manage it.
  • as a product: a vendor takes the project, chooses which version to package, compiles it, tests it, and then sells support for the package, typically including docs, patching and updates.

Now, there’s no denying that consuming a project “raw” gives you more options.  You can track the latest version, compiling and testing as you go, and you can take security patches more quickly than the product version may supply them, selecting those which seem most appropriate for your business and use cases.  On the whole, this seems like a good thing.  There are, however, downsides which are specific to security.  These include:

  1. some security fixes come with an embargo, to which only a small number of organisations (typically the vendors) have access.  Although you may get access to fixes at the same time as the wider ecosystem, you will need to check and test them yourself (unless you blindly apply them – don’t do that), work which the vendors will already have performed.
  2. the huge temptation to make changes to the code that don’t necessarily – or immediately – make it into the upstream project means that you are likely to be running a fork of the code.  Even if you do manage to get these upstream in time, during the period that you’re running the changes but they’re not upstream, you run a major risk that any security patches will not be immediately applicable to your version (this is, of course, true for non-security patches, but security patches are typically more urgent).  One option, of course, if you believe that your version is likely to be consumed by others, is to make an official fork of the project, and try to encourage a community to grow around that, but in the end, you will still have to decide whether to support the new version internally or externally.
  3. unless you ensure that all instances of the software are running the same version in your deployment, any back-porting of security fixes to older versions will require you to invest in security expertise equal or close to equal to that of the people who created the fix in the first place.  In this case, you are giving up the “commonwealth” benefit of open source, as you need to pay experts who duplicate the skills of the community.

What you are basically doing, by choosing to deploy a project rather than a product, is taking the decision to do internal productisation of the project.  You lose not only the commonwealth benefit of security fixes, but also the significant economies of scale that are intrinsic to the vendor-supported product model.  There may also be economies of scope that you miss: many vendors will have multiple products that they support, and will be able to apply security expertise across those products in ways which may not be possible for an organisation whose core focus is not on product support.

These economies are reflected in another possible benefit to the commonwealth of using a vendor: the very fact that multiple customers are consuming their products means that they have an incentive and a revenue stream to spend on security fixes and general features.  There are other types of fixes and improvements on which they may apply resources, but the relative scarcity of skilled security experts means that the principle of comparative advantage suggests that they should be in the best position to apply them for the benefit of the wider community[1].

What if a vendor you use to provide a productised version of an open source project goes bust, or decides to drop support for that product?  Well, this is a problem in the world of proprietary software as well, of course.  But in the case of proprietary software, there are three likely outcomes:

  • you now have no access to the software source, and therefore no way to make improvements;
  • you are provided access to the software source, but it is not available to the wider world, and therefore you are on your own;
  • everyone is provided with the software source, but no existing community exists to improve it, and it either dies or takes significant time for a community to build around it.

In the case of open source, however, if the vendor you have chosen goes out of business, there is always the option to use another vendor, encourage a new vendor to take it on, productise it yourself (and supply it to other organisations) or, if the worst comes to the worst, take the internal productisation route while you search for a scalable long-term solution.

In the modern open source world, we (the community) have got quite good at managing these options, as the growth of open source consortia[2] shows.  In a consortium, groups of organisations and individuals cluster around a software project or set of related projects to encourage community growth, alignment around feature and functionality additions, general security work and productisation for use cases which may as yet be ill-defined, all the while trying to exploit the economies of scale and scope outlined above.  An example of this would be the Linux Foundation’s Confidential Computing Consortium, to which the Enarx project aims to be contributed.

Choosing to consume open source software as a product instead of as a project involves some trade-offs, but from a security point of view at least, the economics for organisations are fairly clear: unless you are in a position to employ ample security experts yourself, products are most likely to suit your needs.


1 – note: I’m not an economist, but I believe that this holds in this case.  Happy to have comments explaining why I’m wrong (if I am…).

2 – “consortiums” if you really must.

Humans and (being bad at) trust

Why “signing parties” were never a good idea.

I went to a party recently, and it reminded me of quite how bad humans are at trust. It was a work “mixer”, and an attempt to get people who didn’t know each other well to chat and exchange some information. We were each given two cards to hang around our necks: one on which to write our own name, and the other on which we were supposed to collect the initials of those to whom we spoke (in their own hand). At the end of the event, the plan was to hand out rewards whose value was related to the number of initials collected. Pens/markers were provided.

I gamed the system by standing by the entrance, giving out the cards, controlling the markers and ensuring that everybody signed my card, hence ending up with easily the largest number of initials of anyone at the party. But that’s not the point. Somebody – a number of people, in fact – pointed out the similarities between this and “key signing parties”, and that got me thinking. For those of you not old enough – or not security-geeky enough – to have come across these, they were events which were popular in the late nineties and early parts of the first decade of the twenty-first century[1] where people would get together, typically at a tech show, and sign each other’s PGP keys. PGP keys are an interesting idea whereby you maintain a public-private key pair which you use to sign emails, assert your identity, etc., in the online world. In order for this to work, however, you need to establish that you are who you say you are, and to do that, you need to convince someone of this fact.

There are two easy ways to do this:

  1. meet someone IRL[2], get them to validate your public key, and sign it with theirs;
  2. have someone who knows the person you met in step 1 agree that they can probably trust you, because the person in step 1 does, and they trust that person.
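
To make the first of these mechanisms a little more concrete, here is a minimal sketch – using Python’s cryptography library and Ed25519 keys purely as a stand-in, since real PGP key signing involves rather more metadata and ceremony – of one person’s key endorsing another’s:

```python
# A toy sketch of key endorsement (illustration only, not PGP):
# Bob signs Alice's public key to assert "I believe this key belongs to Alice".
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

alice_key = Ed25519PrivateKey.generate()
bob_key = Ed25519PrivateKey.generate()

# The raw bytes of Alice's public key are what Bob endorses
alice_public_bytes = alice_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
endorsement = bob_key.sign(alice_public_bytes)

# Anyone who already trusts Bob's public key can check his endorsement of Alice's;
# verify() raises InvalidSignature if the endorsement has been forged or tampered with
bob_key.public_key().verify(endorsement, alice_public_bytes)
```

The second mechanism simply chains endorsements like this together: someone who already trusts Bob’s key chooses to extend some trust to Alice’s because Bob has vouched for it.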

This is a form of trust based on reputation, and it turns out that it is a terrible model for trust. Let’s talk about some of the reasons for it not working. There are four main ones:

  • context
  • decay
  • transitive trust
  • peer pressure.

Let’s evaluate these briefly.

Context

I can’t emphasise this enough: trust is always, always contextual (see “What is trust?” for a quick primer). When people signed other people’s key-pairs, all they should really have been saying was “I believe that the identity of this person is as stated”, but signatures and encryption based on these keys were (and are) frequently misused to make statements about, or claim access to, capabilities that were not necessarily related to identity.

I lay some of the fault for this at the door of US alcohol consumption policy. Many (US) Americans use their driving licence/license as a form of authorisation: I am over this age, and am therefore entitled to purchase alcohol. It was designed to prove that they were authorised to drive, and nothing more than that, but you can now get a US driving licence to prove your age even if you can’t drive, and it can be used, for instance, as security identification for getting on aircraft at airports. This is crazy, but partly explains why there is such a confusion between identification, authentication and authorisation.

Decay

Trust, as I’ve noted before in many articles, decays. Just because I trust you now (within a particular context) doesn’t mean that I should trust you in the future (in that or any other context). Mechanisms exist within the PGP framework to expire keys, but it was (I believe) typical for someone to re-sign a new set of keys just because they’d signed the previous set. If they were only being used for identity, then that’s probably OK – most people rarely change their identity, after all – but, as explained above, these key pairs were often used more widely.

Transitive trust

This is the whole “trusting someone because I trust you” problem. Again, if this were only about identity, then I’d be less worried, but given people’s lack of ability to specify context, and their equal inability to communicate that to others, the “fuzziness” of the trust relationships being expressed was only going to increase with the level of transitiveness, reducing the efficacy of the system as a whole.
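
To illustrate why that fuzziness compounds, here is a toy model – entirely my own invention, with made-up numbers, and no part of PGP or any real system – in which each additional hop in a chain of endorsements only carries over some fraction of the trust from the hop before:

```python
# Toy model: trust attenuates with each transitive hop.
# The per-hop factor is an arbitrary illustration, not a real-world metric.
def transitive_trust(direct_trust: float, hops: int, per_hop_factor: float = 0.8) -> float:
    """Estimate trust in someone reached via `hops` intermediaries."""
    return direct_trust * (per_hop_factor ** hops)

for hops in range(5):
    print(f"via {hops} intermediaries: trust = {transitive_trust(0.9, hops):.2f}")
# via 0 intermediaries: trust = 0.90
# via 3 intermediaries: trust = 0.46
```

Even this over-simplifies matters, because it assumes that everybody along the chain agrees on the context in which the original trust was granted.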

Peer pressure

Honestly, this occurred to me due to my gaming of the system, as described in the second paragraph at the top of this article. I remember meeting people at events and agreeing to endorse their key-pairs basically because everybody else was doing it. I didn’t really know them, though (I hope) I had at least heard of them (“oh, you’re Denny’s friend, I think he mentioned you”), and I certainly shouldn’t have been signing their key-pairs. I am certain that I was not the only person to fall into this trap, and it’s a trap because humans are generally social animals[3], and they like to please others. There was ample opportunity for people to game the system much more cynically than I did at the party, and I’d be surprised if this didn’t happen from time to time.

Stepping back a bit

To be fair, it is possible to run a model like this properly. It’s possible to avoid all of these by insisting on proper contextual trust (with multiple keys for different contexts), by re-evaluating trust relationships on a regular basis, by being very careful about trusting people just due to their trusting someone else (or refusing to do so at all), and by refusing just to agree to trust someone because you’ve met them and they “seem nice”. But I’m not aware of anyone – anyone – who kept to these rules, and it’s why I gave up on this trust model over a decade ago. I suspect that I’m going to get some angry comments from people who assert that they used (and use) the system properly, and I’m sure that there are people out there who did and do: but as a widespread system, it was only going to work if the large majority of all users treated it correctly, and given human nature and failings, that never really happened.

I’m also not suggesting that we have many better models – but we really, really need to start looking for some, as this is important, and difficult stuff.


1 – I refuse to refer to these years as the “aughts”.

2 – In Real Life – this used to be an actual distinction from being online.

3 – even a large enough percentage of IT folks to make this a problem.

Breaking the security chain(s)

Your environment is n-dimensional – your trust must be, too.

One of the security principles by which we[1] live[2] is that security is only as strong as the weakest link in a chain.  That link is variously identified as:

  • your employees
  • external threat actors
  • all humans
  • lack of training
  • cryptography
  • logging
  • anti-virus
  • auditing capabilities
  • the development lifecycle
  • waterfall methodology
  • passwords
  • any other authentication mechanisms
  • electrical wiring
  • hurricanes
  • earthquakes
  • and pestilence.

Actually, I don’t think I’ve ever seen the last one mentioned, but it’s only a matter of time.  However, very rarely does anybody bother to identify exactly what the chain is that is being broken when the weakest link splinters into a thousand pieces.

There are a number of candidates that spring to mind:

  1. your application flow.  This is rather an old-fashioned way of thinking of applications: that a program is started, goes through a set of actions, and then terminates, but to think more broadly about it, any action which causes an application to behave in unexpected or unintended ways is a possible security flaw, whether that is a monolithic application, a set of microservices or an app on a mobile device.
  2. your software stack.  Depending on how you think about your stack, there are likely to be at least 5, maybe a dozen or maybe even scores of layers in your software stack (for an example with just a few simple layers, see Announcing Enarx).  However you think about it, you need to trust all of those layers to do what you expect them to do.  If one of them is compromised, malicious, or just poorly implemented or maintained, then you have a security issue.
  3. your hardware stack.  There was a time, barely five years ago, when most people (excepting us[1], of course) assumed that hardware did what we thought it was supposed to do, all of the time.  In fact, we should all have known better, given the Clipper Chip and the Pentium bug (to name just two famous examples), but with Spectre, Meltdown and a growing realisation that hardware isn’t as trustworthy as was previously thought, everybody needs to decide exactly what security they can trust in which components.
  4. your operational processes.  You can have the best software and hardware in the world, but if you don’t maintain it and operate it properly, it’s going to be full of holes.  Failing to invest in operations, monitoring, logging, auditing and the rest leaves you wide open.
  5. your supply chain. There’s a growing understanding in the industry that our software and hardware supply chains are possible points of failure[3].  Whether your vendor is entirely proprietary (in which case their security is largely opaque) or open source (in which case you’ve got a chance to be able to see what’s going on), errors or maliciousness in the supply chain can scupper any hopes you had of security for your deployment.
  6. your software and hardware lifecycle.  Developing software?  Patching it?  Upgrading hardware (or software)?  Unit testing?  We all know that a failure to manage the lifecycle of our environment can lead to security problems.

The point I’m trying to make above is that there’s no single chain.  Your environment is n-dimensional – your trust must be, too.  If you don’t think about all of these contexts – and there will be more beyond the half-dozen that I’ve just noted – then you can’t have a good chance of managing security in your environment.  I honestly don’t think that there’s any single weakest link in the chain, because there are always multiple chains in play: our job is to think about as many of them as possible, and then manage and mitigate the risks associated with each.
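
If it helps to see the point another way, here is a trivial sketch – with invented component names and entirely made-up “strength” numbers – of what evaluating several chains, rather than one, might look like:

```python
# Toy model: each context ("chain") has its own weakest link.
# Component names and strengths are invented purely for illustration.
chains = {
    "application flow": {"input validation": 0.6, "session handling": 0.8},
    "software stack": {"kernel": 0.9, "TLS library": 0.7, "web framework": 0.5},
    "operational processes": {"patching": 0.4, "monitoring": 0.7, "auditing": 0.6},
}

for name, links in chains.items():
    weakest, strength = min(links.items(), key=lambda item: item[1])
    print(f"{name}: weakest link is {weakest} ({strength})")
# Fixing the weakest link in one chain does nothing for the others.
```

The sketch makes the obvious point that there is no single number to optimise: each chain needs its own assessment.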


1 – the mythical “IT security community”.

2 – you’re right: “which we live by” would sound much more natural.

3 – and a growing industry to try to provide fixes.

“Unlawful, void, and of no effect”

The news from the UK is amazing today: the Supreme Court has ruled that the Prime Minister has failed to “prorogue” Parliament – in other words, that the members of the House of Commons and the House of Lords are still in session. The words in the title come from the judgment that they have just handed down.

I’m travelling this week, and wasn’t expecting to write a post today, but this triggered a thought in me: what provisions are in place in your organisation to cope with abuses of power and possible illegal actions by managers and executives?

Having a whistle-blowing policy and an independent appeals process is vital. This is true for all employees, but having specific rules in place for employees who are involved in such areas as compliance and implementations involving regulatory requirements is particularly important. Robust procedures protect not only an employee who finds themself in a difficult position, but, in the long view, the organisation itself. They can also act as a deterrent to managers and executives considering actions which might, in the absence of such procedures, go unreported.

Such procedures are not enough on their own – they fall into the category of “necessary, but not sufficient” – and a culture of ethical probity also needs to be encouraged. But without such a set of procedures, your organisation is at real risk.

How to be a no-shame generalist

There is no shame in being a generalist, and knowing when you need to consult a specialist.

There comes a time in any person’s life[1] when they realise that they’re not going to be able to do all the things they might like to do to a high level of expertise.  I used to kid myself that I could do anything if I tried hard enough and practised enough, but then I tried juggling.  It turns out that I’m never going to be able to juggle.  Not just juggle expertly.  I mean juggle at all.  My trying to juggle – with only one ball, let alone more than one – is so amusing that my family realised years ago that it was a great party trick.  “Daddy,” they’ll say, “show everyone your juggling.  It’s really funny.”  “But I can’t juggle,” I retort.  “Yes,” they respond, “that’s what’s funny[2].”

I’m also never going to be able to draw or do any art with any competence.

Or play any racquet sport with any level of skill.

Or do any gardening, painting or DIY-based household jobs with any degree of expertise[3].

Some people will retort that any old fool can be taught to do x activity (usually, it’s juggling, actually), but not only do I not believe this, but also, to be honest, there just isn’t enough time in the day to learn all the things I’d kind of like to try.

What has all this to do with security?

Specialism and education

Well, I’ve posted before that I’m a systems person, and the core of thinking about systems is that you need to look at the big picture.  In order to do that, you need to be a generalist.  There’s a phrase[5] in English: “Jack of all trades, master of none”, which is often used to condemn those who know a little about many things and are seen to dabble in them without a full understanding of any of them.  Interestingly, this version may be an abbreviation of the original, more positive:

Jack of all trades, master of none,
though oftentimes better than master of one.

The core inference, though, is that generalists aren’t as useful as specialists.  I don’t believe this.

In many educational systems, there’s a tendency to push students towards narrower and narrower fields of study.  For some, this is just what is needed, but for others – “systems people”, “synthesists” and “generalists” – this isn’t the best way to harness their talents, at least in the long term.  We need people who can see the big picture, who can take a wider view, and look beyond a single blocking issue to realise that the answer to a problem may not be a better implementation of an authentication library, but a change in the authorisation mechanism being used at the component level, for instance.

There are dangers to following this approach too far, however:

  1. it can lead to disparagement of specialists and their skills, even to a distrust of experts;
  2. it can lead to arrogance on the part of generalists.

We see the first in desperately concerning trends such as politicians thinking they know more than economists or climate scientists, anti-vaxxers ignoring the benefits of vaccination, and idiocy around chem-trails, flat-earth beliefs and moon landing conspiracies.  It happens in the world of work, as well, I’m sad to say.  There is a particular type of MBA recipient, for instance, who believes that the completion of the course and award of the degree confers on them some sort of superhuman ability to know what is best for all organisations in all circumstances[6].

Specialise first

To come back to the world of security, my recommendation is that even if you know that your skills and interests are leading you to a career as a generalist, then you need to become a specialist first, in at least one area.  You may not become an expert in that field, but you need to know it well.  Better still, strive for at least a level of competence in several fields – an ability to converse knowledgeably with true experts and to understand at least why they are making the choices and recommendations that they are.

And that leads us to the key point here: if you become a generalist, you need to acknowledge your lack of expertise: it must become your modus operandi, your métier, your way of working.  You need to recognise that your strength is not in your knowing many things, but in knowing what you don’t know, and when it is time to call in the specialists.

I’m not a cryptographer, but I know enough about cryptography to realise when it’s time to call in an expert.  I’m not an expert on legal issues around cryptography, either, but know when to call on a lawyer.  Nor am I an expert on block storage, blockchain consensus, quantum key exchange protocols, CPU scheduling or compression algorithms.  The same will go for many areas which I may be called on to touch as part of my job.  I hope to have enough training and expertise within related fields – or the ability to gain it – to be able to ask sensible questions, but sometimes even that won’t be true, and the best (and most productive) interaction will be to say “I don’t know about this: please explain it to me, or at least tell me what the options are.”  This seems to me to be particularly important for security folks: there are so many overlapping disciplines, and getting one piece wrong means that your defence in depth strategy just got a whole lot shallower.

Being too lazy to look things up, too arrogant to listen to others or too short-sighted to realise that there are areas in which we are not expert are things of which we should be ashamed.

But there is no shame in being a generalist, and knowing when you need to consult a specialist.


1 – I’m extrapolating horribly here, but it’s true for me so I’m assuming it’s a universal truth.

2 – apparently the look on my face, and the things I do with my tongue, are a sight to behold.

3 – I’m constantly trying to convince my wife of these, and although she’s sceptical about some, we’re now agreed that I shouldn’t be allowed access to any power tools again if we want to avoid further trips to the Accident and Emergency department at the hospital[4].

4 – it’s not only power tools.  I once nearly removed my foot with a wallpaper stripper.  I still have the scar nearly 25 years later.

5 – somewhat gendered, for which I apologise.

6 – disclaimer – I have an MBA, and met many talented and humble people on my course (and have met many since) who don’t suffer from this predicament.

Open source and – well, bad people

For most people writing open source, it – open source software – seems like an unalloyed good.  You write code, and other nice people, like you, get to add to it, test it, document it and use it.  Look what good it can do to the world!  Even the Organisation-Formerly-Known-As-The-Evil-Empire has embraced open source software, and is becoming a happy and loving place, supporting the community and both espousing and proselytising the Good Thing[tm] that is open source.  Many open source licences are written in such a way that it’s well-nigh impossible for an organisation to make changes to open source and profit from it without releasing the code they’ve changed.  The very soul of open source – the licence – is doing our work for us: improving the world.

And on the whole, I’d agree.  But when I see uncritical assumptions being peddled – about anything, frankly – I start to wonder.  Because I know, and you know, when you think about it, that not everybody who uses open source is a good person.  Crackers (that’s “bad hackers”) use open source.  Drug dealers use open source.  People traffickers use open source.  Terrorists use open source.  Maybe some of them contribute patches and testing and documentation – I suppose it’s even quite likely that a few actually do – but they are, by pretty much anyone’s yardstick, not good people.  These are the sorts of people you probably shrug your shoulders about and say, “well, there’s only a few of them compared to all the others, and I can live with that”.  You’re happy to continue contributing to open source because many more people are helped by it than harmed by it.  The alternative – not contributing to open source – would fail to help as many people, and so the first option is the lesser of two evils and should be embraced. This is, basically, a utilitarian argument – the one popularised by John Stuart Mill: “classical utilitarianism”[1].  This is sometimes described as:

“Actions are right in proportion as they tend to promote overall human happiness.”

I certainly hope that open source does tend to promote overall human happiness.  The problem is that criminals are not the only people who will be using open source – your open source – code.  There will be businesses whose practices are shady, governments that oppress their detractors, police forces that spy on the citizens they watch.  This is your code, being used to do bad things.

But what even are bad things?  This is one of the standard complaints about utilitarian philosophies – it’s difficult to define objectively what is good, and, by extension, what is bad.  We (by which I mean law-abiding citizens in most countries) may be able to agree that people trafficking is bad, but there are many areas that we could call grey[2]:

  • tobacco manufacturers;
  • petrochemical and fracking companies;
  • plastics manufacturers;
  • organisations who don’t support LGBTQ+ people;
  • gun manufacturers.

There’s quite a range here, and that’s intentional.  Also, the last example is carefully chosen. One of the early movers in what would become the open source movement is Eric Raymond (known to one and all by his initials “ESR”), who is a long-standing supporter of gun rights[3].  He has, as he has put it, “taken some public flak in the hacker community for vocally supporting firearms rights”.  For ESR, “it’s all about freedom”.  I disagree, although I don’t feel the need to attack him for it.  But it’s clear that his view about what constitutes good is different to mine.  I take a very liberal view of LGBTQ+ rights, but I know people in the open source community who wouldn’t take the same view.  Although we tend to characterise the open source community as liberal, this has never been a good generalisation.  According to the Jargon File (later published as “The Hacker’s Dictionary”), the politics of the average hacker are:

Vaguely liberal-moderate, except for the strong libertarian contingent which rejects conventional left-right politics entirely. The only safe generalization is that hackers tend to be rather anti-authoritarian; thus, both conventional conservatism and ‘hard’ leftism are rare. Hackers are far more likely than most non-hackers to either (a) be aggressively apolitical or (b) entertain peculiar or idiosyncratic political ideas and actually try to live by them day-to-day.

This may be somewhat out of date, but it still feels as though this description would resonate with many of those who self-consciously consider themselves part of the open source community.  Still, it’s clear that we, as a community, are never going to be able to agree on what counts as a “good use” of open source code by a “good” organisation.  Even if we could, the chances of anybody being able to create a set of licences that would stop the people that might be considered bad are fairly slim.

I still think, though, that I’m not too worried.  I think that we can extend the utilitarian argument to say that the majority of use of open source software would be considered good by most open source contributors, or at least that the balance of “good” over “bad” would be generally considered to lean towards the good side. So – please keep contributing: we’re doing good things (whatever they might be).


1 – I am really not an ethicist or a philosopher, so apologies if I’m being a little rough round the edges here.

2 – you should be used to this by now: UK spelling throughout.

3 – “Yes, I cheerfully refer to myself as a gun nut.” – Eric’s Gun Nut Page

First aid – are you ready?

Your using the defibrillator is the best chance that the patient has of surviving.

Disclaimer: I am not a doctor, nor a medical professional. I will attempt not to give specific medical or legal advice in this article: please check your local medical and legal professionals before embarking on any course of action about which you are unsure.

This is, generally, a blog about security – that is, information security or cybersecurity – but I sometimes blog about other things. This is one of those articles. It’s still about security, if you will – the security and safety of those around you. Here’s how it came about: I recently saw a video on LinkedIn about a restaurant manager performing Abdominal Thrusts (it’s not called the Heimlich Manoeuvre any more due to trademarking) on a choking customer, quite possibly saving his life.

And I thought: I’ve done that.

And then I thought: I’ve performed CPR, and used a defibrillator, and looked after people who were drunk or concussed, and helped people having a diabetic episode, and encouraged a father to apply an epipen[1] to a confused child suffering from anaphylactic shock, and comforted a schoolchild who had just had an epileptic fit, and attended people in more than one car crash (typically referred to as an “RTC”, or “Road Traffic Collision” in the UK these days[2]).

And then I thought: I should tell people about these stories. Not to boast[3], but because if you travel a lot, or you commute to work, or you have a family, or you work in an office, or you ever go out to a party, or you play sports, or engage in hobby activities, or get on a plane or train or boat or drive anywhere, then there’s a decent chance that you may come across someone who needs your help, and it’s good – very good – if you can offer them some aid. It’s called “First Aid” for a reason: you’re not expected to know everything, or fix everything, but you’re the first person there who can provide aid, and that’s the best the patient can expect until professionals arrive.

Types of training

There are a variety of levels of first aid training that might be appropriate for you. These include:

  • family and children focussed;
  • workplace first aid;
  • hobby, sports and event first aid;
  • ambulance and local health service support and volunteering.

There’s an overlap between all of these, of course, and what you’re interested in, and what’s available to you, will vary based on your circumstances and location. There may be other constraints such as age and physical ability or criminal background checks: these will definitely be dependent on your location and individual context.

I’m what’s called, in the UK, a Community First Responder (CFR). We’re given some specific training to help provide emergency first aid in our communities. What exactly you do depends on your local ambulance trust – I’m with the East of England Ambulance Service Trust, and I have a kit with items to allow basic diagnosis and treatment which includes:

  • a defibrillator (AED) and associated pads, razors[4], shears[5], etc.
  • a tank of oxygen and various masks
  • some airway management equipment whose name I can never remember
  • glucogel for diabetic treatment
  • a pulse oximeter for heart rate and blood oxygen saturation measurement
  • gloves
  • bandages, plasters[6]
  • lots of forms to fill in
  • some other bits and pieces.

I also have a phone and a radio (not all CFRs get a radio, but our area is rural and has particularly bad mobile phone reception).

I’m on duty as I type this – I work from home, and my employer (the lovely Red Hat) is cool with my attending emergency calls in certain circumstances – and could be called out at any moment to an emergency in about a 10 mile/15km radius. Among the call-outs I’ve attended are cardiac arrests (“heart attacks”), fits, anaphylaxis (extreme allergic reactions), strokes, falls, diabetics with problems, drunks with problems, major bleeding, patients with difficulty breathing or chest pains, sepsis, and lots of stuff which is less serious (and which has maybe been misreported). The plan is that if it’s considered a serious condition and it looks like I can get there before an ambulance, or if the crew is likely to need more hands to help (for treating a full cardiac arrest, a good number of people can really help), then I get dispatched. I drive my own car, I’m not allowed sirens or lights, I’m not allowed to break the speed limit or go through red lights and I don’t attend road traffic collisions. I volunteer whatever hours fit around my job and broader life, I don’t get paid, and I provide my own fuel and vehicle insurance. I get anywhere from zero to four calls a day (but most often zero or one).

There are volunteers in other fields who attend events, provide sports or hobby first aid (I did some scuba diving training a while ago), and there are all sorts of types of training for workplace first aid. Most workplaces will have designated first aiders who can be called on if there’s a problem.

The minimum to know

The people I’ve just noted above – the trained ones – won’t always be available. Sometimes, you – with no training – will be the first on scene. In most jurisdictions, if you attempt first aid, the law will look kindly on you, even if you don’t get it all perfect[7]. In some jurisdictions, there’s actually an expectation that you’ll step in. What should you know? What should you do?

Here’s my view. It’s not the view of a professional, and it doesn’t take into account everybody’s circumstances. Again, it’s my view, and it’s that you should consider enough training to be able to cope with two of the most common – and serious – medical emergencies.

  1. Everybody should know how to deal with a choking patient.
  2. Everybody should know how to do CPR (Cardiopulmonary resuscitation) – chest compressions, at minimum, but with artificial respiration if you feel confident.

In the first of these cases, if someone is choking, and they continue to fail to breathe, they will die.

In the second of these cases, if someone’s heart has stopped beating, they are dead. Doing nothing means that they stay that way. Doing something gives them a chance.

There are videos and training available on the Internet, or provided by many organisations.

The minimum to try

If you come across somebody who is in cardiac arrest, call the emergency services. Dispatch someone (if you’re not alone) to try to find a defibrillator (AED) – the emergency services call centre will often help with this, or there’s an app called “GoodSam” which will locate one for you.

Use the defibrillator.

They are designed for untrained people. You open it up, and it will talk to you. Do what it says.

Even if you don’t feel confident giving CPR, use a defibrillator.

I have used a defibrillator. They are easy to use.

Use that defibrillator.

The defibrillator is not the best chance that the patient has of surviving: your using the defibrillator is the best chance that the patient has of surviving.

Conclusion

Providing first aid for someone in a serious situation doesn’t always work. Sometimes people die. In fact, in the case of a cardiac arrest (heart attack), the percentage of times that CPR is successful is low – even in a hospital setting, with professionals on hand. If you have tried, you’ve given them a chance. It is not your fault if the outcome isn’t perfect. But if you hadn’t tried, there was no chance.

Please respect and support professionals, as well. They are often busy and concerned, and may not have the time to thank you, but your help is appreciated. We are lucky, in our area, that the huge majority of EEAST ambulance personnel are very supportive of CFRs and others who help out in an emergency.

If this article has been interesting to you, and you are considering taking some training, then get to the end of the post, share it via social media(!), and then search online for something appropriate to you. There are many organisations who will provide training – some for free – and many opportunities for volunteering. You know that if a member of your family needed help, you would hope that somebody was capable and willing to provide it.

Final note – if you have been affected by anything in this article, please find some help, whether professional or just with friends. Many of the medical issues I’ve discussed are distressing, and self care is important (it’s one of the things that EEAST takes seriously for all its members, including its CFRs).


1 – a special adrenaline-administering device (don’t use somebody else’s – they’re calibrated pretty carefully to an individual).

2 – calling it an “accident” suggests it was no-one’s fault, when often, it really was.

3 – well, maybe a little bit.

4 – to shave hairy chests – no, really.

5 – to cut through clothing. And nipple chains, if required. Again, no, really.

6 – “Bandaids” for our US cousins.

7 – please check your local jurisdiction’s rules on this.