Wow: autonomous agents!

The problem is not the autonomy. The problem isn’t even particularly with the intelligence…

Autonomous, intelligent agents offer some great opportunities for our digital lives*.  There, look, I said it.  They will book meetings for us, negotiate cheap holidays, order our children’s complete school outfit for the beginning of term, and let us know when it’s time to go to the nurse for our check-up.  Our business lives, our personal lives, our family relationships – they’ll all be revolutionised by autonomous agents.  Autonomous agents will learn our preferences, have access to our diaries, pay for items, be able to send messages to our friends.

This is all fantastic, and I’m very excited about it.  The problem is that I’ve been excited about it for nearly 20 years, ever since I was involved in a project around autonomous agents in Java.  It was very neat then, and it’s still very neat now***.

Of course, technology has moved on.  Some of the underlying capabilities are much more advanced now than they were then.  General availability of APIs, consistency of data formats, better Machine Learning (or Artificial Intelligence, if you must), less computationally expensive cryptography, and the rise of blockchains and distributed ledgers: they all bring us closer than ever before to being able to build autonomous agents.  We talked about disintermediation back in the day, and that looked plausible.  We really can build scalable marketplaces now in ways which just weren’t feasible two decades ago.

The problem, though, isn’t the technology.  It was never the technology.  We could have made the technology work 20 years ago, even if it wasn’t as fast, secure or wide-ranging as it could be today.  It isn’t even vested interests from the large platform players, who arguably own much of this space at the moment – though these interests are much more consolidated than they were when I was first looking at this issue.

The problem is not the autonomy.  The problem isn’t even particularly with the intelligence: you can program as much or as little in as you want, or as the technology allows.  The problem is with the agency.

How much of my life do I want to hand over to what’s basically a ‘bot?  Ignore***** the fact that these things will get hacked******, and assume we’re talking about normal, intended usage.  What does “agency” mean?  It means acting for someone: being their agent – think of what actors’ agents do, for example.  When I engage a lawyer or a builder or an accountant to do something for me, or when an actor employs an agent for that matter, we’re very clear about what they’ll be doing.  This is to protect both me and them from unintended consequences.  There’s a huge legal corpus around defining, in different fields, exactly the scope of work to be carried out by a person or a company who is acting as an agent.  There are contracts, and agreed restitutions – basically punishments – for when things go wrong.  Say that an accountant buys 500 shares in a bank, and then I turn round and say that she never had the authority to do so: if we’ve set up the relationship correctly, it should be entirely clear whether or not she did, and whose responsibility it is to deal with any fall-out from that purchase.

Now think about that in terms of autonomous, intelligent agents.  Write me that contract, and make it equivalent in software and the legal system.  Tell me what happens when things go wrong with the software.  Show me how to prove that I didn’t tell the agent to buy those shares.  Explain to me where the restitution lies.
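To make that concrete, here’s a minimal sketch (in Python) of what a software “mandate” – the equivalent of the contract I’d sign with a human agent – might look like.  Everything in it – the structure, the limits, the categories – is hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass

# A toy "mandate": the software equivalent of the contract between me
# and my agent.  The fields and rules are invented for illustration.
@dataclass
class Mandate:
    may_buy: bool             # is the agent allowed to purchase at all?
    max_spend: float          # per-transaction limit
    allowed_categories: set   # e.g. {"travel", "books"}

def authorised(m: Mandate, action: str, category: str, amount: float) -> bool:
    """Return True only if the action falls squarely within the mandate."""
    if action == "buy" and not m.may_buy:
        return False
    if amount > m.max_spend:
        return False
    return category in m.allowed_categories

# The shares example from above is clearly outside this mandate...
m = Mandate(may_buy=True, max_spend=1000.0, allowed_categories={"travel"})
print(authorised(m, "buy", "shares", 5000.0))  # False

# ...but the hard questions remain: who signs the mandate, who logs the
# decision, and where does restitution lie when the check itself is wrong?
```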

And these are arguably the simple problems.  How do I rebuild the business reputation that I’ve built up over the past 15 years when my agent posts a tweet about how I use a competitor’s products, when I’m just trialling them for interest?  How does an agent know not to let my wife see the diary entry for my meeting with that divorce lawyer********?  What aspects of my browsing profile are appropriate for suggesting – or even buying – online products or services with my personal or business credit card*********?  And there’s the classic “buying flowers for the mistress and having them sent to the wife” problem**********.

I don’t think we have an answer to these questions: not even close.  You know that virtual admin assistant we’ve been promised in sci-fi movies for decades now: the one with the futuristic haircut who appears as a hologram outside our office?  Holograms – nearly.  Technology behind it – pretty much.  Trust, reputation and agency?  Nowhere near.

 


*I hate this word: “digital”.  Well, not really, but it’s used far too much as a shorthand for “newest technology”**.

**”Digital businesses”.  You mean, unlike all the analogue ones?  Come on.

***this is one of those words that my kids hate me using.  There are two types of word that come into this category: old words and new words.  Either I’m showing how old I am, or I’m trying to be hip****, which is arguably worse.  I can’t win.

****yeah, they don’t say hip.  That’s one of the “old person words”.

*****for now, at least.  Let’s not forget it.

******_everything_ gets hacked*******.

*******I could say “cracked”, but some of it won’t be malicious, and hacking might be positive.

********I’m not.  This is an example.

*********this isn’t even about “dodgy” things I might have been browsing on home time.  I may have been browsing for analyst services, with the intent to buy a subscription: how sure am I that the agent won’t decide to charge these to my personal credit card when it knows that I perform other “business-like” actions, like sometimes paying for business-related books myself?

**********how many times do I have to tell you, darling…?

Next generation … people

… security as a topic is one which is interesting, fast-moving and undeniably sexy…

DISCLAIMER/STATEMENT OF IGNORANCE: a number of regular readers have asked why I insist on using asterisks for footnotes, and whether I could move to actual links, instead.  The official reason I give for sticking with asterisks is that I think it’s a bit quirky and I like that, but the real reason is that I don’t know how to add internal links in WordPress, and can’t be bothered to find out.  Apologies.

I don’t know if you’ve noticed, but pretty much everything out there is “next generation”.  Or, if you’re really lucky “Next gen”.  What I’d like to talk about this week, however, is the actual next generation – that’s people.  IT people.  IT security people.  I was enormously chuffed* to be referred to on an IRC channel a couple of months ago as a “greybeard”***, suggesting, I suppose, that I’m an established expert in the field.  Or maybe just that I’m an old fuddy-duddy***** who ought to be put out to pasture.  Either way, it was nice to come across young(er) folks with an interest in IT security******.

So, you, dear reader, and I, your beloved protagonist, both know that security as a topic is one which is interesting, fast-moving and undeniably******** sexy – as are all its proponents.  However, it seems that this news has not yet spread as widely as we would like – there is a worldwide shortage of IT security professionals, as a quick check on your search engine of choice for “shortage of it security professionals” will tell you.

Last week, I attended the Open Source Summit and Linux Security Summit in LA, and one of the keynotes, as it always seems to be, was Jim Zemlin (head of the Linux Foundation) chatting to Linus Torvalds (inventor of, oh, I don’t know).  Linus doesn’t have an entirely positive track record in talking about security, so it was interesting that Jim specifically asked him about it.  Part of Linus’ reply was “We need to try to get as many of those smart people before they go to the dark side [sic: I took this from an article by the Register, and they didn’t bother to capitalise.  I mean: really?] and improve security that way by having a lot of developers.”  Apart from the fact that anyone who references Star Wars in front of a bunch of geeks is onto a winner, Linus had a pretty much captive audience just by nature of who he is, but even given that, this got a positive reaction.  And he’s right: we do need to make sure that we catch these smart people early, and get them working on our side.

Later that week, at the Linux Security Summit, one of the speakers asked for a show of hands to find out the number of first-time attendees.  I was astonished to note that maybe half of the people there had not come before.  And heartened.  I was also pleased to note that a good number of them appeared fairly young*********.  On the other hand, the number of women and other under-represented demographics seemed worse than in the main Open Source Summit, which was a pity – as I’ve argued in previous posts, I think that diversity is vital for our industry.

This post is wobbling to an end without any great insights, so let me try to come up with a couple which are, if not great, then at least slightly insightful:

  1. we’ve got a job to do.  The industry needs more young (and diverse) talent: if you’re in the biz, then go out, be enthusiastic, show what fun it can be.
  2. if showing people how much fun security can be isn’t enough, encourage them to do a search for “IT security median salaries comparison”.  It’s amazing how a pay cheque********** can motivate.

*note to non-British readers: this means “flattered”**.

**but with an extra helping of smugness.

***they may have written “graybeard”, but I translate****.

****or even “gr4yb34rd”: it was one of those sorts of IRC channels.

*****if I translate each of these, we’ll be here for ever.  Look it up.

******I managed to convince myself******* that their interest was entirely benign though, as I mentioned above, it was one of those sorts of IRC channels.

*******the glass of whisky may have helped.

********well, maybe a bit deniably.

*********to me, at least.  Which, if you listen to my kids, isn’t that hard.

**********who actually gets paid by cheque (or check) any more?

Diversity – redux

One of the recurring arguments against affirmative action from majority-represented groups is that it’s unfair that the under-represented group has comparatively special treatment.

Fair warning: this is not really a blog post about IT security, but about issues which pertain to our industry.  You’ll find social sciences and humanities – “soft sciences” – referenced.  I make no excuses (and I should declare previous form*).

Warning two: many of the examples I’m going to be citing are to do with gender discrimination and imbalances.  These are areas that I know the most about, but I’m very aware of other areas of privilege and discrimination, and I’d specifically call out LGBTQ, ethnic minority, age, disability and non-neurotypical discrimination.  I’m very happy to hear (privately or in comments) from people with expertise in other areas.

You’ve probably read the leaked internal document (a “manifesto”) from a Google staffer challenging the use of affirmative action to try to address diversity, and complaining about a liberal/left-leaning monoculture at the company.  If you haven’t, you should: take the time now.  It’s well-written, with some interesting points, but I have some major problems with it that I think are worth addressing.  (There’s a very good rebuttal of certain aspects available from an ex-Google staffer.)  If you’re interested in where I’m coming from on this issue, please feel free to read my earlier post: Diversity in IT security: not just a canine issue**.

There are two issues that concern me specifically:

  1. no obvious attempt to acknowledge the existence of privilege and power imbalances;
  2. the attempt to advance the gender essentialism argument by alleging an overly leftist bias in the social sciences.

I’m not sure whether these approaches are intentional or unconscious, but they’re both insidious, and if ignored, they allow more weight to be given to the broader arguments put forward than I believe they merit.  I’m not planning to address those broader issues: there are other people doing a good job of that (see the rebuttal I referenced above, for instance).

Before I go any further, I’d like to record that I know very little about Google, its employment practices or its corporate culture: pretty much everything I know has been gleaned from what I’ve read online***.  I’m not, therefore, going to try to condone or condemn any particular practices.  It may well be that some of the criticisms levelled in the article/letter are entirely fair: I just don’t know.  What I’m interested in doing here is addressing those areas which seem to me not to be entirely open or fair.

Privilege and power imbalances

One of the recurring arguments against affirmative action from majority-represented groups is that it’s unfair that the under-represented group has comparatively special treatment.  “Why is there no march for heterosexual pride?”  “Why are there no men-only colleges in the UK?”  The generally accepted argument is that until there is equality in the particular sphere in which a group is campaigning, the power imbalance and privilege afforded to the majority-represented group mean that there may be a need for action to help members of the under-represented group achieve parity.  That doesn’t mean that members of that group are necessarily unable to reach positions of power and influence within that sphere, just that, on average, the effort required will be greater than for those in the majority-privileged group.

What does all of the above mean for women in tech, for example?  That it’s generally harder for women to succeed than it is for men.  Not always.  But on average.  So if we want to make it easier for women (in this example) to succeed in tech, we need to find ways to help.

The author of the Google piece doesn’t really address this issue.  He (and I’m just assuming it’s a man who wrote it) suggests that women (who seem to be the key demographic with whom he’s concerned) don’t need to be better represented in all parts of Google, and that affirmative action is therefore inappropriate.  I’d say that even if the first part of that thesis is true (and I’m not sure it is: see below), affirmative action may still be required for those who do.

The impact of “leftist bias”

Many of the arguments presented in the manifesto are predicated on the following thesis:

  • the corporate culture at Google**** is generally leftist-leaning
  • many social sciences are heavily populated by leftist-leaning theorists
  • these social scientists don’t accept the theory of gender essentialism (that women and men are suited to different roles)
  • THEREFORE corporate culture is overly inclined to reject gender essentialism
  • HENCE if a truly diverse culture is to be encouraged within corporate culture, leftist positions – such as the rejection of gender essentialism – should themselves be rejected.

There are several flaws here, one of which is the suggestion that diversity means accepting views which are anti-diverse.  It’s a reflection of a similar right-leaning fallacy: that in order to show true tolerance, the views of intolerant people should be afforded the same privilege as those of people aiming for greater tolerance.*****

Another flaw is the argument that just because a set of theories is espoused by a political movement to which one doesn’t subscribe, it is therefore suspect.

Conclusion

As I’ve noted above, I’m far from happy with much of the so-called manifesto from what I’m assuming is a male Google staffer.  This post hasn’t been an attempt to address all of the arguments, but to attack a couple of the underlying arguments, without which I believe the general thread of the document is extremely weak.  As always, I welcome responses either in comments or privately.

 


*my degree is in English Literature and Theology.  Yeah, I know.

**it’s the only post on which I’ve had some pretty negative comments, which appeared on the reddit board from which I linked it.

***and is probably therefore just as far off the mark as anything else that you or I read online.

****and many other tech firms, I’d suggest.

*****an appeal is sometimes made to the left’s perceived poster child of postmodernism: “but you say that all views are equally valid”.  That’s not what postmodern (deconstructionist, post-structuralist) theory actually says.  I’d characterise it more as:

  • all views are worthy of consideration;
  • BUT we should treat with suspicion those views held by people with privilege, or which privilege those with power.

Embracing fallibility

History repeats itself because no one was listening the first time. (Anonymous)

We’re all fallible.  You’re fallible, he’s fallible, she’s fallible, I’m fallible*.  We all get things wrong from time to time, and the generally accepted “modern” management approach is that it’s OK to fail – “fail early, fail often” – as long as you learn from your mistakes.  In fact, there’s a growing view that if you don’t fail, you can’t learn – or that your learning will be slower, and restricted.

The problem with some fields – and IT security is one of them – is that failing can be a very bad thing, with lots of very unexpected consequences.  This is particularly true for operational security, but the same can be the case for application, infrastructure or feature security.  In fact, one of the few expected consequences is that call to visit your boss once things are over, so that you can find out how many days*** you still have left with your organisation.  But if we are to be able to make mistakes**** and learn from them, we need to find ways to allow failure to happen without catastrophic consequences to our organisations (and our careers).

The first thing to be aware of is that we can learn from other people’s mistakes.  There’s a famous aphorism, supposedly originating with George Santayana and often rendered as “Those who cannot learn from history are doomed to repeat it.”  I quite like the alternative: “History repeats itself because no one was listening the first time.”  So, let’s listen, and let’s consider how to learn from other people’s mistakes (and our own).  The classic way of thinking about this is by following “best practices”, but I have a couple of problems with this phrase.  The first is that very rarely can you be certain that the context in which you’re operating is exactly the same as that of those who framed these practices.  The other – possibly more important – is that “best” suggests the summit of possibilities: you can’t do better than best.  But we all know that many practices can indeed be improved on.  For that reason, I rather like the alternative, much used at Intel Corporation, which is “BKMs”: Best Known Methods.  This suggests that there may well be better approaches waiting to be discovered.  It also talks about methods, which suggests to me more conscious activity than practices, which can become an unconscious or uncritical following of others.

What other opportunities are open to us to fail?  Well, to return to a theme which is dear to my heart, we can – and must – discuss with those within our organisations who run the business what levels of risk are appropriate, and explain that we know that mistakes can occur, so how can we mitigate them and work around them?  And there’s the word “mitigate” – another approach is to consider managed degradation as one way to protect our organisations***** from the full impact of failure.

Another is to embrace methodologies which have failure as a key part of their philosophy.  The most obvious is Agile Programming, which can be extended to other disciplines, and, when combined with DevOps, allows not only for fast failure but fast correction of failures.  I plan to discuss DevOps – and DevSecOps, the practice of rolling security into DevOps – in more detail in a future post.

One last approach that springs to mind, and which should always be part of our arsenal, is defence in depth.  We should ensure that if one element of a system fails, that’s not the end of the whole kit and caboodle******.  That only works if we’ve thought about single points of failure, of course.
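Here’s a minimal sketch of the idea for a request pipeline.  The three checks are hypothetical stand-ins for real controls (network ACLs, authentication, authorisation), and the values they test against are invented:

```python
# Defence in depth, sketched: several independent layers, so that the
# failure of any single one is not the end of the whole kit and caboodle.

def network_acl_permits(request: dict) -> bool:
    # Layer 1: only traffic from internal (10.x.x.x) addresses gets this far.
    return request.get("source_ip", "").startswith("10.")

def token_is_valid(request: dict) -> bool:
    # Layer 2: stand-in for real token or credential validation.
    return request.get("token") == "expected-token"

def user_is_authorised(request: dict) -> bool:
    # Layer 3: stand-in for a proper authorisation lookup.
    return request.get("user") in {"alice", "bob"}

def request_allowed(request: dict) -> bool:
    # Every layer must pass: defeating one control alone gains an attacker nothing.
    layers = [network_acl_permits, token_is_valid, user_is_authorised]
    return all(layer(request) for layer in layers)
```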

The approaches above are all well and good, but I’m not entirely convinced that any one of them – or a combination of them – gives us a complete enough picture that we can fully embrace “fail fast, fail often”.  There are other pieces, too, including testing, monitoring, and organisational cultural change – an important and often overlooked element – that need to be considered, but it feels to me that we have some way to go, still.  I’d be very interested to hear your thoughts and comments.

 


*my family is very clear on this point**.

**I’m trying to keep it from my manager.

***or if you’re very unlucky, minutes.

****amusingly, I first typed this word as “misteaks”.  You’ve got to love those Freudian slips.

*****and hence ourselves.

******no excuse – I just love the phrase.

 

 

Helping our governments – differently

… we may live in a new security and terrorism landscape

Two weeks ago, I didn’t write a full post, because the Manchester Arena bombing was too raw.  We are only a few days on from the London Bridge attack, and I could make the same decision, but I think it’s time to recognise the new reality we need to face in Britain: that we may live in a new security and terrorism landscape.  The sorts of attacks – atrocities – that have been perpetrated over the past few weeks (and the police and security services say that despite three succeeding, they’ve foiled another five) are likely to keep happening.

And they’re difficult to predict, which means that they’re difficult to stop.  There are already renewed calls for tech companies* to provide tools to allow the Good Guys[tm**] to read the correspondence of the people who are going to commit terrorist acts.  The problem is that the preferred approach requested/demanded by governments seems to be backdoors in encryption and/or communications software, which just doesn’t work – see my post The Backdoor Fallacy – explaining it slowly for governments.  I understand that “reasonable people” believe that this is a solution, but it really isn’t, for all sorts of reasons, most of which aren’t really that technical at all.

So what can we do?  Three things spring to mind, and before I go into them, I’d like to make something clear, and it’s that I have a huge amount of respect for the men and women who make up our security services and intelligence community.  All those I’ve met have a strong desire to perform their job to the best of their ability, and to help protect us from people and threats which could damage us, our property, and our way of life.  Many of these people and threats we know nothing about, and neither do we need to.  The job that the people in the security services do is vital, and I really don’t see any conspiracy to harm us or to take huge amounts of power because it’s there for the taking.  I’m all for helping them, but not at the expense of the rights and freedoms that we hold dear.  So back to the question of what we can do.  And by “we” I mean the nebulous Security Community****.  Please treat these people with respect, and be aware that they work very, very hard, and often in difficult and stressful jobs*****.

The first is to be more aware of our environment.  We’re encouraged to do this in our daily lives (“Report unaccompanied luggage”…), but what more could we do in our professional lives?  Or what could we do in our daily lives by applying our professional capabilities and expertise to everyday activities?  What suspicious activities – from traffic on networks from unexpected places to new malware – might be a precursor to something else?  I’m not saying that we’re likely to spot the next terrorist attack – though we might – but helping to combat other crime more effectively both reduces the attack surface for terrorists and increases the available resourcing for counter-intelligence.

Second: there are, I’m sure, many techniques available to the intelligence community that we don’t know about.  But there is a great deal of innovation within enterprise, health and telco (to choose three sectors that I happen to know quite well******) that could well benefit our security services.  Maybe your new network analysis tool, intrusion detector or data aggregator has some clever smarts in it, or creates information which might be of interest to the security community.  I think we need to be more open to the idea of sharing these projects, products and skills – proactively.

The third is information sharing.  I work for Red Hat, an Open Source company which also tries to foster open thinking and open management styles.  We’re used to sharing, and industry, in general, is getting better about sharing information with other organisations, government and the security services.  We need to get better at sharing both active data from systems which are running as designed and bad data from systems that are failing, under attack or compromised.  Open, I firmly believe, should be our default state*******.

If we get better at sharing information and expertise which can help the intelligence services in ways which don’t impinge negatively on our existing freedoms, maybe we can reduce the calls for laws that will do so.  And maybe we can help stop more injuries, maimings and deaths.  Stand tall, stand proud.  We will win.


*who isn’t a tech company, these days, though?  If you sell home-made birthday cards on Etsy, or send invoices via email, are you a tech company?  Who knows.

**this is an ironic tm***

***not that I don’t think that there are good guys – and gals – but just that it’s difficult to define them.  Read on: you’ll see.

****I’ve talked about this before – some day I’ll define it.

*****and most likely for less money than most of the rest of us.

******feel free to add or substitute your own.

*******OK, DROP for firewall and authorisation rules, but you get my point.

Talking to (actual) people – a guide for security folks

…”am I safe from this ransomware thing?”

As you may have noticed*, there was something of a commotion over the past week when the WannaCrypt ransomware infection spread across the world, infecting all manner of systems**, most notably, from my point of view, many NHS systems.  This is relevant to me because I’m UK-based, and also because I volunteer for the local ambulance service as a CFR (Community First Responder).  And because I’m a security professional.

I’m not going to go into the whys and wherefores of the attack, of the importance of keeping systems up to date, the morality of those who spread ransomware***,  how to fund IT security, or the politics of patch release.  All of these issues have been dealt with very well elsewhere. Instead, I’m going to discuss talking to people.

I’m slightly hopeful that this most recent attack is going to have some positive side effects.  Now, in computing, we’re generally against side effects, as they usually have negative unintended consequences, but on Monday, I got a call from my Dad.  I’m aware that this is the second post in a row to mention my family, but it turns out that my Dad trusts me to help him with his computing needs.  This is somewhat laughable, since he uses a Mac, which employs an OS of which I have almost no knowledge****, but I was pleased that he even called to ask a question about it.  The question was “am I safe from this ransomware thing?”  The answer, as he’d already pretty much worked out, was “yes”, and he was also able to explain that he was unsurprised, because he knew that Macs weren’t affected, because he keeps his up to date, and because he keeps backups.

Somebody, somewhere (and it wasn’t me on this occasion) had done something right: they had explained, in terms that my father could understand, not only the impact of an attack, but also what to do to keep yourself safe (patching), what systems were most likely to be affected (not my Dad’s Mac), and what to do in mitigation (store backups).  The message had come through the media, but the media, for a change, seemed to have got it correct.

I’ve talked before about the importance of informing our users, and allowing them to make choices.  I think we need to be honest, as well, about when things aren’t going well, when we (singularly, or communally) have made a mistake.  We need to help them to take steps to protect themselves, and when that fails, to help them clear things up.

And who was it that made the mistake?  The NSA, for researching vulnerabilities, or for letting them leak?  Whoever it was who leaked them?  Microsoft, for not providing patches?  The sysadmins, for not patching?  The suits, for not providing money for upgrades?  The security group, for not putting sufficient controls in place to catch and contain the problem?  The training organisation, for not training the users enough?  The users, for ignoring training and performing actions which allowed the attack to happen?

Probably all of the above.  But, in most of those cases, talking about the problem, explaining what to do, and admitting when we make a mistake, is going to help improve things, not bring the whole world crashing down around us.  Talking, in other words, to “real” people (not just ourselves and each other*****): getting out there and having discussions.

Sometimes a lubricant can help: tea, beer, biscuits******.  Sometimes you’ll even find that “real” people are quite friendly.  Talk to them.  In words they understand.  But remember that even the best of them will nod off after 45 minutes or so of our explaining our passion to them.  They’re only human, after all.

 


*unless you live under a rock.

**well, Windows systems, anyway.

***(scum).

****this is entirely intentional: the less I know about their computing usage, the easier it is for me to avoid providing lengthy and painful (not to mention unpaid) support services to my close family.

*****and our machines.  Let’s not pretend we don’t do that.

******probably not coffee: as a community, we almost certainly drink enough of that as it is.

Service degradation: actually a good thing

…here’s the interesting distinction between the classic IT security mindset and that of “the business”: the business generally want things to keep running.

Well, not all the time, obviously*.  But bear with me: we spend most of our time ensuring that all of our systems are up and secure and working as expected, because that’s what we hope for, but there’s a real argument for not only finding out what happens when they don’t, and not just planning for when they don’t, but also planning for how they shouldn’t.  Let’s start by examining some techniques for how we might do that.

Part 1 – planning

There’s a story** that the oil company Shell, in the 1970s, did some scenario planning that examined what were considered, at the time, very unlikely events, and which allowed it to react when OPEC’s strategy surprised most of the rest of the industry a few years later.  Sensitivity modelling is another technique that organisations use at the financial level to understand what impact various changes – in order fulfilment, currency exchange or interest rates, for instance – make to the various parts of their business.  Yet another is war gaming, which the military use to try to understand what will happen when failures occur: putting real people and their associated systems into situations and watching them react.  And Netflix are famous for taking this a step further in the context of the IT world, with a virtual Chaos Monkey (a set of processes and scripts) which they use to bring down parts of their systems in real time, to allow them to understand how resilient the wider system is.

So that gives us four approaches that are applicable, with various options for automation:

  1. scenario planning – trying to understand what impact large scale events might have on your systems;
  2. sensitivity planning – modelling the impact on your systems of specific changes to the operating environment;
  3. wargaming – putting your people and systems through simulated events to see what happens;
  4. real outages – testing your people and systems with actual events and failures.

Going out of your way to sabotage your own systems might seem like insane behaviour, but it’s actually a stroke of genius.  If you don’t plan for failure, what are you going to do when it happens?
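For a flavour of the fourth approach, here’s a minimal, Chaos Monkey-style sketch.  The service names and the use of systemctl are my own assumptions for illustration – this is emphatically not Netflix’s actual tooling, and it should only ever run somewhere that failure is survivable:

```python
import random
import subprocess
import time

# Services we're prepared to sacrifice in testing.  These names are
# invented - substitute your own non-critical services.
CANDIDATE_SERVICES = ["order-cache", "report-generator", "thumbnailer"]

def kill_random_service() -> None:
    """Stop one randomly chosen service, then leave the monitoring and
    the on-call team to discover how resilient the wider system really is."""
    victim = random.choice(CANDIDATE_SERVICES)
    print(f"Chaos: stopping '{victim}' - over to your failover plans...")
    # systemctl is assumed here; use whatever manages your services.
    subprocess.run(["systemctl", "stop", victim], check=False)

if __name__ == "__main__":
    while True:
        # Strike at an unpredictable moment, as real failures do.
        time.sleep(random.randint(600, 3600))  # wait 10-60 minutes
        kill_random_service()
```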

So let’s say that you’ve adopted all of these practices****: what are you going to do with the information?  Well, there are some obvious things you can do, such as:

  • removing discovered weaknesses;
  • improving resilience;
  • getting rid of single points of failure;
  • ensuring that you have adequately trained staff;
  • making sure that your backups are protected, but available to authorised entities.

I won’t try to compile an exhaustive list, because there are loads of books and articles and training courses about this sort of thing, but there’s another, maybe less obvious, course of action which I believe we must take, and that’s to plan for managed degradation.

Part 2 – managed degradation

What do I mean by that?  Well, it’s simple.  We***** are trained and indoctrinated to take the view that if something fails, it must always “fail to safe” or “fail to secure”.  If something stops working right, it should stop working altogether.

There’s value in this approach, of course there is, and we’re paid****** to ensure everything is secure, right?  Wrong.  We’re actually paid to help keep the business running, and here’s the interesting distinction between the classic IT security mindset and that of “the business”: the business generally want things to keep running.  Crazy, right?  “The business” want to keep making money and servicing customers even if things aren’t perfectly secure!  Don’t they know the risks?

And the answer to that question is “no”.  They don’t know the risks.  And that’s our real job: we need to explain the risks and the mitigations, and allow a balancing act to take place.  In fact, we’re always making those trade-offs and managing that balance – after all, the only truly secure computer is one with no network connection, no keyboard, no mouse and no power connection*******.  But most of the time, we don’t need to explain the decisions we make around risk: we just take them, following best industry practice, regulatory requirements and the rest.  Nor are the trade-offs usually so stark – but when failure strikes, whether through an attack, accident or misfortune, it often comes down to a pretty simple choice between maintaining a particular security posture and keeping the lights on.  So we need to think about and plan for some degradation, and realise that on occasion, we may need to adopt a different security posture to the perfect (or at least preferred) one in which we normally operate.

How would we do that?  Well, the approach I’m advocating is best described as “managed degradation”.  We allow our systems – including, where necessary, our security systems – to degrade to a managed (and preferably planned) state, where we know that they’re not operating at peak efficiency, but where they are operating.  Key, however, is that we know the conditions under which they’re working, so we understand their operational parameters, and can explain and manage the risks associated with this new posture.  That posture may change in response to ongoing events, and to how our systems and our responses cope with those events, so we need to plan ahead (using the techniques I discussed above) so that we can be flexible enough to provide real resiliency.
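As a very rough sketch of what “planned” might look like, postures can be written down ahead of time, so that an incident means selecting one rather than inventing one.  Everything below – the posture names, the rules, the trade-offs – is invented for illustration, and should really come out of discussions with the business:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Posture:
    name: str
    accept_new_orders: bool
    take_payments: bool
    notes: str

# Pre-planned postures, agreed with the business in advance.
NORMAL = Posture("normal", True, True, "all controls at full strength")
DEGRADED = Posture("degraded-payments", True, False,
                   "payment path isolated after suspected compromise; "
                   "orders still accepted, to be settled later")
LOCKDOWN = Posture("lockdown", False, False,
                   "core systems only; everything else fails to safe")

def select_posture(payments_trusted: bool, core_ok: bool) -> Posture:
    """Pick a known, documented posture, so that we can explain exactly
    what parameters we're operating under and what the risks are."""
    if not core_ok:
        return LOCKDOWN
    if not payments_trusted:
        return DEGRADED
    return NORMAL
```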

We need to find modes of operation which don’t expose the crown jewels******** of the business, but do allow key business operations to take place.  And those key business operations may not be the ones we expect – maybe it’s more important to be able to create new orders than to collect payments for them, for instance, at least in the short term.  So we need to discuss the options with the business, and respond to their needs.  This planning is not just security resiliency planning: it’s business resiliency planning.  We won’t be able to consider all the possible failures – though the techniques I outlined above will help us to identify many of them – but the more we plan for, the better we will be at reacting to the surprises.  And, possibly best of all, we’ll be talking to the business, informing them, learning from them, and even, maybe just a bit, helping them understand that the job we do does have some value after all.


*I’m assuming that we’re the Good Guys/Gals**.

**Maybe less story than MBA*** case study.

***There’s no shame in it.

****Well done, by the way.

*****The mythical security community again – see past posts.

******Hopefully…

*******Preferably at the bottom of a well, encased in concrete, with all storage already removed and destroyed.

********Probably not the actual Crown Jewels, unless you work at the Tower of London.