On passwords and accounts

This isn’t a password problem. It’s a misunderstanding-of-what-accounts-are-for problem.

Once a year or so, one of the big UK tech magazines or websites[1] does a survey where they send a group of people to one of the big London train stations and ask travellers for their password[3].  The deal is that every traveller who gives up their password gets a pencil, a chocolate bar or similar.

I’ve always been sad that I’ve never managed to be at the requisite station for one of these polls.  I would love to get a free pencil – or even better a chocolate bar[4] – for lying about a password.  Or even, frankly, for giving them one of my actual passwords, which would be completely useless to them without some identifying information about me.  Which I obviously wouldn’t give them.  Or again, would pretend to give them, but lie.

The point of this exercise[5] is supposed to be to expose the fact that people are very bad about protecting their passwords.  What it actually identifies is that a good percentage of the British travelling public are either very bad about protecting their passwords, or are entirely capable of making informed (or false) statements in order to get a free pencil or chocolate bar[4]. Good on the British travelling public, say I. 

Now, everybody agrees that passwords are on their way out, as they have been on their way out for a good 15-20 years, so that’s nice.  People misuse them, reuse them, don’t change them often enough, etc., etc.  But it turns out that it’s not the passwords that are the real problem.  This week, more than one British MP admitted – seemingly without any realisation that they were doing anything wrong – that they share their passwords with their staff, or just leave their machines unlocked so that anyone on their staff can answer their email or perform other actions on their behalf.

This isn’t a password problem.  It’s a misunderstanding-of-what-accounts-are-for problem.

People seem to think that, in a corporate or government setting, the point of passwords is to stop people looking at things they shouldn’t.

That’s wrong.  The point of passwords is to allow different accounts for different people, so that the appropriate people can take the appropriate actions, and be audited as having done so.  It is, basically, a matching of authority to responsibility – as I discussed in last week’s post Explained: five misused security words – with a bit of auditing thrown in.
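Here’s a toy sketch of what I mean, in code.  All the names in it are hypothetical, and it’s no substitute for a real identity and access management system, but it shows the shape of the thing:

    # Toy sketch: accounts match authority to responsibility, with
    # a bit of auditing thrown in.  All names are hypothetical.
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit = logging.getLogger("audit")

    # Each account maps a *person* to the actions they have authority for.
    AUTHORITY = {
        "mp_jane": {"read_email", "send_email", "sign_motion"},
        "staffer_sam": {"read_email"},
    }

    def perform(account: str, action: str) -> bool:
        """Allow the action only if this account holds the authority,
        and record who did what, and when, for later audit."""
        allowed = action in AUTHORITY.get(account, set())
        audit.info("%s account=%s action=%s allowed=%s",
                   datetime.now(timezone.utc).isoformat(),
                   account, action, allowed)
        return allowed

    # Share a password, and the audit line names the wrong person: if
    # staffer_sam logs in as mp_jane, the log says "mp_jane", and the
    # record is worthless.
    perform("staffer_sam", "send_email")   # denied – and audited as Sam

The access check is the boring part: the point is that the audit line names the actual human.  Hand over your password, and that line lies.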

Now, stopping people looking at things they shouldn’t is certainly one of the things accounts are for, but it’s not the main thing.  And if you misuse accounts in the way that has been exposed in the UK parliament, then worse things are going to happen.  If you willingly bypass accounts, you are removing the ability of those whose responsibility it is to ensure correct responsibility-authority pairings to track and audit actions.  You are, in fact, setting yourself up with excuses before the fact, but you are also making it very difficult to prove wrongdoing by other people who may misuse an account.  A culture that allows such behaviour is one in which misuse cannot be tracked.  This is bad enough in a company or a school – but in our seat of government?  Unacceptable.  You just have to hope that there are free pencils.  Or chocolate bars[4].


1. I can’t remember which, and I’m not going to do them the service of referencing them, or even looking them up, for reasons that should be clear once you read the main text.[2]

2. I’m trialling a new form of footnote referencing. Please let me know whether you like it.

3. I guess their email password, but again, I can’t remember and I’m not going to look it up.

4. Or similar.

5. I say “point”…

What’s the point of security? 

Like it or not, your users are part of your system.

No, really: what’s the point of it? I’m currently at the OpenStack Summit in Sydney, and was just thrown out of the exhibition area as it’s closed for preparations for an event. A fairly large gentleman in a dark-coloured uniform made it clear that if I didn’t have a particular sticker on my badge, then I wasn’t allowed to stay.  Now, as Red Hat* is an exhibitor, and our booth manager’s a buddy**, she immediately offered me the relevant sticker, but as I needed a sit down and a cup of tea***, I made my way to the exit, feeling only slightly disgruntled.

“What’s the point of this story?” you’re probably asking. Well, the point is this: I’m 100% certain that if I’d asked the security guard why he was throwing me out, he wouldn’t have had a good reason. Oh, he’d probably have given me a reason, and it might have seemed good to him, but I’m really pretty sure that it wouldn’t have satisfied me. And I don’t mean the “slightly annoyed, jet-lagged me who doesn’t want to move”, but the rational, security-aware me who can hopefully reason sensibly about such things. There may even have been such a reason, but I doubt that he was privy to it.

My point, then, really comes down to this. We need to be able to explain the reasons for security policies in ways which:

  1. Can be expressed by those enforcing them;
  2. Can be understood by uninformed users;
  3. Can be understood and queried by informed users.

In the first point, I don’t even necessarily mean people: sometimes – often – it’s systems that are doing the enforcing. This makes things even more difficult in some ways: you need to think about UX***** in ways that are appropriate for two very, very different constituencies.

I’ve written before about how frustrating it can be when you can’t engage with the people who have come up with a security policy, or implemented a particular control. If there’s no explanation of why a control is in place – the policy behind it – and it’s an impediment to the user******, then it’s highly likely that people will either work around it, or stop using the system.

Unless. Unless you can prove that the security control is relevant, proportionate and beneficial to the user. This is going to be difficult in many cases.  But I have seen some good examples, which means that I’m hopeful that we can generally do it if we try hard enough.  The problem is that most designers of systems don’t bother trying. They don’t bother to have any description, usually, but as for making that description show the relevance, proportionality and benefits to the user? Rare, very rare.
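By way of illustration, here’s a toy sketch of what a control that explains itself might look like, as opposed to a bare “access denied”.  The names, policy and API are entirely hypothetical, and the three fields map onto the three requirements in the list above:

    # Toy sketch: a denial that carries its own explanation, pitched at
    # two very different constituencies.  Hypothetical throughout.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Denial:
        summary: str      # for the uninformed user
        rationale: str    # for the informed user who wants to query it
        policy_ref: str   # where the policy lives, for follow-up

    def check_exhibition_access(stickers: set) -> Optional[Denial]:
        if "exhibitor" in stickers:
            return None  # allowed in
        return Denial(
            summary="The hall is closed while the event is set up.",
            rationale="Rigging is in progress, and insurance cover "
                      "during set-up extends to exhibitor staff only.",
            policy_ref="venue-policy section 4.2 (hypothetical)",
        )

    denial = check_exhibition_access({"attendee"})
    if denial:
        print(denial.summary)     # what the guard can express
        print(denial.rationale)   # what might have satisfied me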

This is another argument for security folks being more embedded in systems design because, like it or not, your users are part of your system, and they need to be included in the design.  Which means that you need to educate your UX people, too, because if you can’t convince them of the need to do security, you’re sure as heck not going to be able to convince your users.

And when you work with them on the personae and use case modelling, make sure that you include users who are like you: knowledgeable security experts who want to know why they should submit themselves to the controls that you’re including in the system.

Now, they’ve opened up the exhibitor hall again: I’ve got to go and grab myself a drink before they start kicking people out again.


*my employer.

**she’s American.

***I’m British****.

****and the conference is in Australia, which meant that I was actually able to get a decent cup of the same.

*****this, I believe, stands for “User eXperience”. I find it almost unbearably ironic that the people tasked with communicating clearly to us can’t fully grasp the idea of initial letters standing for something.

******and most security controls are.

The commonwealth of Open Source

This commonwealth does not apply to proprietary software: what stays hidden does not enlighten or enrich the world.

“But surely Open Source software is less secure, because everybody can see it, and they can just recompile it and replace it with bad stuff they’ve written?”

Hands up who’s heard this?*  I’ve been talking to customers – yes, they let me talk to customers sometimes – and to folks in the Field**, and this is one that comes up, it turns out, quite frequently.  I talked in a previous post (“Disbelieving the many eyes hypothesis“) about how Open Source software – particularly security software – doesn’t get to be magically more secure than proprietary software, and talked a little bit there about how I’d still go with Open Source over proprietary every time, but the way that I’ve heard the particular question – about OSS being less secure – suggests to me that there are times when we don’t just need to be able to explain how Open Source works, but also to be able to engage actively in Apologetics***.  So here goes.  I don’t expect it to be up to Newton’s or Wittgenstein’s levels of logic, but I’ll do what I can, and I’ll summarise at the bottom so that you’ve got a quick list of the points if you want it.

The arguments

First of all, we should accept that no software is perfect******.  Not proprietary software, not Open Source software.  Second, we should accept that there absolutely is good proprietary software out there.  Third, on the other hand, there is some very bad Open Source software.  Fourth, there are some extremely intelligent, gifted and dedicated architects, designers and software engineers who create proprietary software.

But here’s the rub.  Fifth – the pool of people who will work on or otherwise look at that proprietary software is limited.  And you can never hire all the best people.  Even in government and public sector organisations – who often have a larger talent pool available to them, particularly for *cough* security-related *cough* applications – the pool is limited.

Sixth – the pool of people available to look at, test, improve, break, re-improve, and roll out Open Source software is almost unlimited, and does include the best people.  Seventh – and I love this one: the pool also includes many of the people writing the proprietary software.  Eighth – many of the applications being written by public sector and government organisations are open sourced anyway these days.

Ninth – if you’re worried about running Open Source software which is unsupported, or comes from dodgy, un-provenanced sources, then good news: there are a bunch of organisations******* who will check the provenance of that code, support, maintain and patch it.  They’ll do it along the same type of business lines that you’d expect from a proprietary software provider.  You can also ensure that the software you get from them is the right software: the standard technique is for them to sign bundles of software so that you can check that what you’re installing isn’t just from some random bad person who’s taken that code and done Bad Things[tm] with it.
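To make the signing point concrete, here’s a minimal sketch of the technique, using an Ed25519 key via the pyca/cryptography library.  Real distributions typically use GPG-signed packages and repository metadata, and safely distributing the public key is the hard part, but the principle is the same:

    # A minimal sketch of the bundle-signing technique, using an
    # Ed25519 key via the pyca/cryptography library.  Real-world
    # package managers typically use GPG, but the principle holds.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # The provider signs the bundle with a private key they keep secret...
    provider_key = Ed25519PrivateKey.generate()
    bundle = b"contents of the software bundle"
    signature = provider_key.sign(bundle)

    # ...and publishes the public key, so that you can check, before
    # you install, that the bundle is what they shipped.
    public_key = provider_key.public_key()
    try:
        public_key.verify(signature, bundle)
        print("Signature good: this is what the provider shipped.")
    except InvalidSignature:
        print("Tampered with, or signed by someone else: don't install.")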

Tenth – and here’s the point of this post – when you run Open Source software, when you test it, when you provide feedback on issues, when you discover errors and report them, you are tapping into, and adding to, the commonwealth of knowledge and expertise and experience that is Open Source.  And which is only made greater by your doing so.  If you do this yourself, or through one of the businesses who will support that Open Source software********, you are part of this commonwealth.  Things get better with Open Source software, and you can see them getting better.  Nothing is hidden – it’s, well, “open”.  Can things get worse?  Yes, they can, but we can see when that happens, and fix it.

This commonwealth does not apply to proprietary software: what stays hidden does not enlighten or enrich the world.

I know that I need to be careful about using “commonwealth” as a Briton: it has connotations of (faded…) empire which I don’t intend it to hold in this case.  It’s probably not what Cromwell********* had in mind when he talked about the “Commonwealth”, either, and anyway, he’s a somewhat … controversial historical figure.  What I’m talking about is a concept in which I think the words deserve concatenation – “common” and “wealth” – to show that we’re talking about something more than just money: shared wealth available to all of humanity.

I really believe in this.  If you want to take away a religious message from this blog, it should be this**********: the commonwealth is our heritage, our experience, our knowledge, our responsibility.  The commonwealth is available to all of humanity.  We have it in common, and it is an almost inestimable wealth.

 

A handy crib sheet

  1. (Almost) no software is perfect.
  2. There is good proprietary software.
  3. There is bad Open Source software.
  4. There are some very clever, talented and devoted people who create proprietary software.
  5. The pool of people available to write and improve proprietary software is limited, even within the public sector and government realm.
  6. The corresponding pool of people for Open Source is virtually unlimited…
  7. …and includes a goodly number of the talent pool of people writing proprietary software.
  8. Public sector and government organisations often open source their software anyway.
  9. There are businesses who will support Open Source software for you.
  10. Contribution – even usage – adds to the commonwealth.

*OK – you can put your hands down now.

**should this be capitalised?  Is there a particular field, or how does it work?  I’m not sure.

***I have a degree in English Literature and Theology – this probably won’t surprise some of the regular readers of this blog****.

****not, I hope, because I spout too much theology*****, but because it’s often full of long-winded, irrelevant Humanities (US Eng: “liberal arts”) references.

*****Emacs.  Every time.

******not even Emacs.  And yes, I know that there are techniques to prove the correctness of some software.  (I suspect that Emacs doesn’t pass many of them…)

*******hand up here: I’m employed by one of them, Red Hat, Inc.  Go have a look – fun place to work, and we’re usually hiring.

********assuming that they fully abide by the rules of the Open Source licence(s) they’re using, that is.

*********erstwhile “Lord Protector of England, Scotland and Ireland” – that Cromwell.

**********oh, and choose Emacs over vi variants, obviously.

Wow: autonomous agents!

The problem is not the autonomy. The problem isn’t even particularly with the intelligence…

Autonomous, intelligent agents offer some great opportunities for our digital lives*.  There, look, I said it.  They will book meetings for us, negotiate cheap holidays, order our children’s complete school outfit for the beginning of term, and let us know when it’s time to go to the nurse for our check-up.  Our business lives, our personal lives, our family relationships – they’ll all be revolutionised by autonomous agents.  Autonomous agents will learn our preferences, have access to our diaries, pay for items, be able to send messages to our friends.

This is all fantastic, and I’m very excited about it.  The problem is that I’ve been excited about it for nearly 20 years, ever since I was involved in a project around autonomous agents in Java.  It was very neat then, and it’s still very neat now***.

Of course, technology has moved on.  Some of the underlying capabilities are much more advanced now than then.  General availability of APIs, consistency of data formats, better Machine Learning (or Artificial Intelligence, if you must), less computationally expensive cryptography, and the rise of blockchains and distributed ledgers: they all bring the ability for us to build autonomous agents closer than ever before.  We talked about disintermediation back in the day, and that looked plausible.  We really can build scalable marketplaces now in ways which just weren’t as feasible two decades ago.

The problem, though, isn’t the technology.  It was never the technology.  We could have made the technology work 20 years ago, even if it wasn’t as fast, secure or wide-ranging as it could be today.  It isn’t even vested interests from the large platform players, who arguably own much of this space at the moment – though these interests are much more consolidated than they were when I was first looking at this issue.

The problem is not the autonomy.  The problem isn’t even particularly with the intelligence: you can program as much or as little in as you want, or as the technology allows.  The problem is with the agency.

How much of my life do I want to hand over to what’s basically a ‘bot?  Ignore***** the fact that these things will get hacked******, and assume we’re talking about normal, intended usage.  What does “agency” mean?  It means acting for someone: being their agent – think of what actors’ agents do, for example.  When I engage a lawyer or a builder or an accountant to do something for me, or when an actor employs an agent for that matter, we’re very clear about what they’ll be doing.  This is to protect both me and them from unintended consequences.  There’s a huge legal corpus around defining, in different fields, exactly the scope of work to be carried out by a person or a company who is acting as an agent.  There are contracts, and agreed restitutions – basically punishments – for when things go wrong.  Say that an accountant buys 500 shares in a bank, and then I turn round and say that she never had the authority to do so: if we’ve set up the relationship correctly, it should be entirely clear whether or not she did, and whose responsibility it is to deal with any fall-out from that purchase.

Now think about that in terms of autonomous, intelligent agents.  Write me that contract, and make it equivalent in software and the legal system.  Tell me what happens when things go wrong with the software.  Show me how to prove that I didn’t tell the agent to buy those shares.  Explain to me where the restitution lies.
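I can’t write you that contract, but here’s a toy sketch of the shape its software half might take: an explicit mandate, granted by the principal, which scopes what the agent may do, so that any dispute can at least be checked against something.  All names here are hypothetical:

    # A toy sketch of explicit agency: the principal grants the agent
    # a narrow, auditable mandate.  All names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Mandate:
        principal: str
        agent: str
        allowed_actions: set[str]        # what the agent may do...
        spend_limit: float               # ...and how much it may spend
        audit_trail: list[str] = field(default_factory=list)

        def attempt(self, action: str, cost: float = 0.0) -> bool:
            ok = action in self.allowed_actions and cost <= self.spend_limit
            self.audit_trail.append(f"{action} cost={cost} allowed={ok}")
            return ok

    m = Mandate(principal="me", agent="assistant-bot",
                allowed_actions={"book_meeting", "order_stationery"},
                spend_limit=50.0)

    m.attempt("order_stationery", cost=12.0)   # True: within the mandate
    m.attempt("buy_shares", cost=5000.0)       # False: never granted
    print(m.audit_trail)   # the evidence when we argue about restitution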

And these are arguably the simple problems.  How do I rebuild the business reputation that I’ve built up over the past 15 years when my agent posts a tweet about how I use a competitor’s products, when I’m just trialling them for interest?  How does an agent know not to let my wife see the diary entry for my meeting with that divorce lawyer********?  What aspects of my browsing profile are appropriate for suggesting – or even buying – online products or services with my personal or business credit card*********?  And there’s the classic “buying flowers for the mistress and having them sent to the wife” problem**********.

I don’t think we have an answer to these questions: not even close.  You know that virtual admin assistant we’ve been promised in sci-fi movies for decades now: the one with the futuristic haircut who appears as a hologram outside our office?  Holograms – nearly.  Technology behind it – pretty much.  Trust, reputation and agency?  Nowhere near.

 


*I hate this word: “digital”.  Well, not really, but it’s used far too much as a shorthand for “newest technology”**.

**”Digital businesses”.  You mean, unlike all the analogue ones?  Come on.

***this is one of those words that my kids hate me using.  There are two types of word that come into this category: old words and new words.  Either I’m showing how old I am, or I’m trying to be hip****, which is arguably worse.  I can’t win.

****yeah, they don’t say hip.  That’s one of the “old person words”.

*****for now, at least.  Let’s not forget it.

******_everything_ gets hacked*******.

*******I could say “cracked”, but some of it won’t be malicious, and hacking might be positive.

********I’m not.  This is an example.

*********this isn’t even about “dodgy” things I might have been browsing on home time.  I may have been browsing for analyst services, with the intent to buy a subscription: how sure am I that the agent won’t decide to charge these to my personal credit card when it knows that I perform other “business-like” actions like pay for business-related books myself sometimes?

**********how many times do I have to tell you, darling…?

Next generation … people

… security as a topic is one which is interesting, fast-moving and undeniably sexy…

DISCLAIMER/STATEMENT OF IGNORANCE: a number of regular readers have asked why I insist on using asterisks for footnotes, and whether I could move to actual links, instead.  The official reason I give for sticking with asterisks is that I think it’s a bit quirky and I like that, but the real reason is that I don’t know how to add internal links in WordPress, and can’t be bothered to find out.  Apologies.

I don’t know if you’ve noticed, but pretty much everything out there is “next generation”.  Or, if you’re really lucky “Next gen”.  What I’d like to talk about this week, however, is the actual next generation – that’s people.  IT people.  IT security people.  I was enormously chuffed* to be referred to on an IRC channel a couple of months ago as a “greybeard”***, suggesting, I suppose, that I’m an established expert in the field.  Or maybe just that I’m an old fuddy-duddy***** who ought to be put out to pasture.  Either way, it was nice to come across young(er) folks with an interest in IT security******.

So, you, dear reader, and I, your beloved protagonist, both know that security as a topic is one which is interesting, fast-moving and undeniably******** sexy – as are all its proponents.  However, it seems that this news has not yet spread as widely as we would like – there is a worldwide shortage of IT security professionals, as a quick check on your search engine of choice for “shortage of it security professionals” will tell you.

Last week, I attended the Open Source Summit and Linux Security Summit in LA, and one of the keynotes, as it always seems to be, was Jim Zemlin (head of the Linux Foundation) chatting to Linus Torvalds (inventor of, oh, I don’t know).  Linus doesn’t have an entirely positive track record in talking about security, so it was interesting that Jim specifically asked him about it.  Part of Linus’ reply was “We need to try to get as many of those smart people before they go to the dark side [sic: I took this from an article by the Register, and they didn’t bother to capitalise.  I mean: really?] and improve security that way by having a lot of developers.”  Apart from the fact that anyone who references Star Wars in front of a bunch of geeks is onto a winner, Linus had a pretty much captive audience just by nature of who he is, but even given that, this got a positive reaction.  And he’s right: we do need to make sure that we catch these smart people early, and get them working on our side.

Later that week, at the Linux Security Summit, one of the speakers asked for a show of hands to find out the number of first-time attendees.  I was astonished to note that maybe half of the people there had not come before.  And heartened.  I was also pleased to note that a good number of them appeared fairly young*********.  On the other hand, the number of women and other under-represented demographics seemed worse than in the main Open Source Summit, which was a pity – as I’ve argued in previous posts, I think that diversity is vital for our industry.

This post is wobbling to an end without any great insights, so let me try to come up with a couple which are, if not great, then at least slightly insightful:

  1. we’ve got a job to do.  The industry needs more young (and diverse) talent: if you’re in the biz, then go out, be enthusiastic, show what fun it can be.
  2. if showing people how much fun security can be doesn’t do the trick, encourage them to do a search for “IT security median salaries comparison”.  It’s amazing how a pay cheque********** can motivate.

*note to non-British readers: this means “flattered”**.

**but with an extra helping of smugness.

***they may have written “graybeard”, but I translate****.

****or even “gr4yb34rd”: it was one of those sorts of IRC channels.

*****if I translate each of these, we’ll be here for ever.  Look it up.

******I managed to convince myself******* that their interest was entirely benign, though, as I mentioned above, it was one of those sorts of IRC channels.

*******the glass of whisky may have helped.

********well, maybe a bit deniably.

*********to me, at least.  Which, if you listen to my kids, isn’t that hard.

**********who actually gets paid by cheque (or check) any more?

Diversity – redux

One of the recurring arguments against affirmative action from majority-represented groups is that it’s unfair that the under-represented group has comparatively special treatment.

Fair warning: this is not really a blog post about IT security, but about issues which pertain to our industry.  You’ll find social sciences and humanities – “soft sciences” – referenced.  I make no excuses (and I should declare previous form*).

Warning two: many of the examples I’m going to be citing are to do with gender discrimination and imbalances.  These are areas that I know the most about, but I’m very aware of other areas of privilege and discrimination, and I’d specifically call out LGBTQ, ethnic minority, age, disability and non-neurotypical discrimination.  I’m very happy to hear (privately or in comments) from people with expertise in other areas.

You’ve probably read the leaked internal document (a “manifesto”) from a Google staffer challenging the use of affirmative action to try to address diversity, and complaining about a liberal/left-leaning monoculture at the company.  If you haven’t, you should: take the time now.  It’s well-written, with some interesting points, but I have some major problems with it that I think are worth addressing.  (There’s a very good rebuttal of certain aspects available from an ex-Google staffer.)  If you’re interested in where I’m coming from on this issue, please feel free to read my earlier post: Diversity in IT security: not just a canine issue**.

There are two issues that concern me specifically:

  1. no obvious attempt to acknowledge the existence of privilege and power imbalances;
  2. the attempt to advance the gender essentialism argument by alleging an overly leftist bias in the social sciences.

I’m not sure whether these approaches are intentional or unconscious, but they’re both insidious and, if ignored, allow more weight to be given to the broader arguments put forward than I believe they merit.  I’m not planning to address those broader issues: there are other people doing a good job of that (see the rebuttal I referenced above, for instance).

Before I go any further, I’d like to record that I know very little about Google, its employment practices or its corporate culture: pretty much everything I know has been gleaned from what I’ve read online***.  I’m not, therefore, going to try to condone or condemn any particular practices.  It may well be that some of the criticisms levelled in the article/letter are entirely fair: I just don’t know.  What I’m interested in doing here is addressing those areas which seem to me not to be entirely open or fair.

Privilege and power imbalances

One of the recurring arguments against affirmative action from majority-represented groups is that it’s unfair that the under-represented group has comparatively special treatment.  “Why is there no march for heterosexual pride?”  “Why are there no men-only colleges in the UK?”  The generally accepted argument is that until there is equality in the particular sphere in which a group is campaigning, the power imbalance and privilege afforded to the majority-represented group mean that there may be a need for action to help members of the under-represented group achieve parity.  That doesn’t mean that members of that group are necessarily unable to reach positions of power and influence within that sphere, just that, on average, the effort required will be greater than for those in the majority-privileged group.

What does all of the above mean for women in tech, for example?  That it’s generally harder for women to succeed than it is for men.  Not always.  But on average.  So if we want to make it easier for women (in this example) to succeed in tech, we need to find ways to help.

The author of the Google piece doesn’t really address this issue.  He (and I’m just assuming it’s a man who wrote it) suggests that women (who seem to be the key demographic with whom he’s concerned) don’t need to be better represented in all parts of Google, and that affirmative action is therefore inappropriate.  I’d say that even if the first part of that thesis is true (and I’m not sure it is: see below), affirmative action may still be required for those who do.

The impact of “leftist bias”

Many of the arguments presented in the manifesto are predicated on the following thesis:

  • the corporate culture at Google**** is generally leftist-leaning
  • many social sciences are heavily populated by leftist-leaning theorists
  • these social scientists don’t accept the theory of gender essentialism (that women and men are suited to different roles)
  • THEREFORE corporate culture is overly inclined to reject gender essentialism
  • HENCE if a truly diverse culture is to be encouraged within corporate culture, leftist theories such as gender essentialism should be rejected.

There are several flaws here, one of which is the assumption that diversity means accepting views which are anti-diverse.  It’s a reflection of a similar right-leaning fallacy: that in order to show true tolerance, the views of intolerant people should be afforded the same privilege as those of people who are aiming for greater tolerance.*****

Another flaw is the argument that, just because a set of theories is espoused by a political movement to which one doesn’t subscribe, it is therefore suspect.

Conclusion

As I’ve noted above, I’m far from happy with much of the so-called manifesto from what I’m assuming is a male Google staffer.  This post hasn’t been an attempt to address all of the arguments, but to attack a couple of the underlying arguments, without which I believe the general thread of the document is extremely weak.  As always, I welcome responses either in comments or privately.

 


*my degree is in English Literature and Theology.  Yeah, I know.

**it’s the only post on which I’ve had some pretty negative comments, which appeared on the reddit board from which I linked it.

***and is probably therefore just as far off the mark as anything else that you or I read online.

****and many other tech firms, I’d suggest.

*****an appeal is sometimes made to the left’s perceived poster child of postmodernism: “but you say that all views are equally valid”.  That’s not what postmodern (deconstructionist, post-structuralist) theory actually says.  I’d characterise it more as:

  • all views are worthy of consideration;
  • BUT we should treat with suspicion those views held by people with privilege, or which privilege those with power.

Embracing fallibility

History repeats itself because no one was listening the first time. (Anonymous)

We’re all fallible.  You’re fallible, he’s fallible, she’s fallible, I’m fallible*.  We all get things wrong from time to time, and the generally accepted “modern” management approach is that it’s OK to fail – “fail early, fail often” – as long as you learn from your mistakes.  In fact, there’s a growing view that if you don’t fail, you can’t learn – or that your learning will be slower, and restricted.

The problem with some fields – and IT security is one of them – is that failing can be a very bad thing, with lots of very unexpected consequences.  This is particularly true for operational security, but the same can be the case for application, infrastructure or feature security.  In fact, one of the few expected consequences is that call to visit your boss once things are over, so that you can find out how many days*** you still have left with your organisation.  But if we are to be able to make mistakes**** and learn from them, we need to find ways to allow failure to happen without catastrophic consequences to our organisations (and our careers).

The first thing to be aware of is that we can learn from other people’s mistakes.  There’s a famous aphorism, supposedly first said by George Santayana and usually quoted as “Those who cannot learn from history are doomed to repeat it.”  I quite like the alternative: “History repeats itself because no one was listening the first time.”  So, let’s listen, and let’s consider how to learn from other people’s mistakes (and our own).  The classic way of thinking about this is by following “best practices”, but I have a couple of problems with that phrase.  The first is that very rarely can you be certain that the context in which you’re operating is exactly the same as that of those who framed these practices.  The other – possibly more important – is that “best” suggests the summit of possibilities: you can’t do better than best.  But we all know that many practices can indeed be improved upon.  For that reason, I rather like the alternative, much used at Intel Corporation, which is “BKMs”: Best Known Methods.  This suggests that there may well be better approaches waiting to be discovered.  It also talks about methods, which suggests to me more conscious activities than practices, which may become unconscious or uncritical followings of others.

What other opportunities are open to us to fail?  Well, to return to a theme which is dear to my heart, we can – and must – discuss with those within our organisations who run the business what levels of risk are appropriate, and explain that we know that mistakes can occur, so that we can plan how to mitigate them and work around them.  And there’s the word “mitigate” – another approach is to consider managed degradation as one way to protect our organisations***** from the full impact of failure.

Another is to embrace methodologies which have failure as a key part of their philosophy.  The most obvious is Agile Programming, which can be extended to other disciplines, and, when combined with DevOps, allows not only for fast failure but fast correction of failures.  I plan to discuss DevOps – and DevSecOps, the practice of rolling security into DevOps – in more detail in a future post.

One last approach that springs to mind, and which should always be part of our arsenal, is defence in depth.  We should be assured that if one element of a system fails, that’s not the end of the whole kit and caboodle******.  That only works if we’ve thought about single points of failure, of course.
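As a sketch of what defence in depth looks like in code – the checks below are hypothetical stand-ins for real controls – the useful question to ask of each layer is: if this one alone fails, what still protects us?

    # Defence in depth, sketched: independent layers that must all
    # pass, so the failure of any single control doesn't open the
    # whole system.  The checks are hypothetical stand-ins.

    def network_allows(req: dict) -> bool:
        return req.get("src_ip", "").startswith("10.")   # internal only

    def authenticated(req: dict) -> bool:
        return req.get("token") == "valid"               # stand-in for real authn

    def authorised(req: dict) -> bool:
        return req.get("role") in {"admin", "operator"}  # stand-in for real authz

    LAYERS = [network_allows, authenticated, authorised]

    def admit(req: dict) -> bool:
        # Every layer must independently agree: a single point of
        # failure would be any layer whose failure alone admits the
        # request.
        return all(layer(req) for layer in LAYERS)

    print(admit({"src_ip": "10.0.0.5", "token": "valid", "role": "admin"}))   # True
    print(admit({"src_ip": "10.0.0.5", "token": "stolen", "role": "admin"}))  # False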

The approaches above are all well and good, but I’m not entirely convinced that any one of them – or a combination of them – gives us a complete enough picture that we can fully embrace “fail fast, fail often”.  There are other pieces, too, including testing, monitoring, and organisational cultural change – an important and often overlooked element – that need to be considered, but it feels to me that we have some way to go, still.  I’d be very interested to hear your thoughts and comments.

 


*my family is very clear on this point**.

**I’m trying to keep it from my manager.

***or if you’re very unlucky, minutes.

****amusingly, I first typed this word as “misteaks”.  You’ve got to love those Freudian slips.

*****and hence ourselves.

******no excuse – I just love the phrase.