Using words that normal people understand

Normal people generally just want things to work.

Most people[1] don’t realise quite how much fun security is, or exactly how sexy security expertise makes you to other people[2].  We know that it’s engrossing, engaging and cool; they don’t.  For this reason, when security people go to the other people (let’s just call them “normal people” for the purposes of this article) and tell them that they’re doing something wrong, that they can’t launch their product, or deploy their application, or that they must stop taking sales orders immediately – and probably for the next couple of days until this is fixed – then those normal people don’t always react with the levels of gratitude that we feel are appropriate.

Sometimes, in fact, they will exhibit negative responses – even quite personal negative responses – to these suggestions.

The problem is this: security folks know how things should be, and that’s secure.  They’ve taken the training, they’ve attended the sessions, they’ve read the articles, they’ve skimmed the heavy books[3], and all of these sources are quite clear: everything must be secure.  And secure generally means “closed” – particularly if the security folks weren’t sufficiently involved in the design, implementation and operations processes.  Normal people, on the other hand, generally just want things to work.  There’s a fundamental disconnect between those two points of view, and it’s not going to get fixed until security is the very top requirement for any project from its inception to its ending[4].

Now, normal people aren’t stupid[5].  They know that things can’t always work perfectly: but they would like them to work as well as they can.  This is the gap[7] that we need to cross.  I’ve talked about managed degradation as a concept before, in the post Service degradation: actually a good thing, and this is part of the story.  One of the things that we security people should be ready to do is explain that there are risks to be mitigated.  For security people, those risks should be mitigated by “failing closed”.  It’s easy to eliminate risk: you just stop system operation, and there’s no chance that it can be misused.  But for many people, there are other risks: an example being that the organisation may in fact go completely out of business because some [8] security person turned the ordering system off.  If I’d been offered the choice to balance the risk of stopping taking orders against the risk of losing some internal company data, would I have taken it?  Well yes, I might have.  But if I’m not offered the option, and the risk isn’t explained, then I have no choice.  These are the sorts of words that I’d like to hear if I’m running a business.

It’s not just this type of risk, though.  Coming to a project meeting two weeks before launch and announcing that the project can’t be deployed because the calls against this API aren’t being authenticated is no good at all.  To anybody.  As a developer, though, I have a different vocabulary – and different concerns – to those of the business owner.  How about, instead of saying “you need to use authentication on this API, or you can’t proceed”, the security person asks “what would happen if data that was provided on this API was incorrect, or provided by someone who wanted to disrupt system operation?”  In my experience, most developers are interested – are invested – in the correct operation of the system that they’re running, and of the data that it processes.  Asking questions that show the possible impact of a lack of security is much more likely to garner positive reactions than an initial “discussion” that basically amounts to a “no”.
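To make that question concrete, here’s a minimal sketch in Python (using Flask; the endpoint, header name and token are invented for illustration, and a real system would use something stronger than a shared secret) of the difference between an API that will take data from anyone and one that at least checks who’s talking to it:

    # A hypothetical ordering endpoint.  Without the token check, anyone who
    # can reach the API can inject incorrect data or disrupt the system --
    # which is exactly what the security person's question is driving at.
    import hmac
    from flask import Flask, request, abort

    app = Flask(__name__)
    EXPECTED_TOKEN = "change-me"  # illustrative only: fetch from a secrets store in real life

    @app.route("/orders", methods=["POST"])
    def create_order():
        token = request.headers.get("X-Auth-Token", "")
        if not hmac.compare_digest(token, EXPECTED_TOKEN):
            abort(401)  # fail closed: unauthenticated callers get nothing
        return "order accepted\n"

The point isn’t this particular mechanism: it’s that the check exists at all, and that the developer can see exactly what it protects against.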

Don’t get me wrong; there are times when we, as security people, need to be firm, and stick to our guns[9], but in the end, it’s the owners – of systems, of organisations, of business units, of resources – who get to make the final decision.  It’s our job to talk to them in words they can understand and to ensure that they are as well informed as we can possibly make them.  Without just saying “no”.


1. by which I mean “those poor unfortunate souls who don’t read these posts, unlike you, dear and intelligent reader”.

2. my wife, sadly, seems to fall into this category.

3. which usually have a picture of a lock on the cover.

4. and good luck with that.

5. while we’ve all met our fair share of stupid normal people, I’m betting that you’ve met your fair share of stupid security people, too, so it balances out[6].

6. probably more than balances out.  Let’s leave it there.

7. chasm.

8. insert your favourite adjectival expletive here.

9. figuratively: I don’t condone bringing any weapons, including firearms, to your place of work.

Who’s saying “hello”? – agency, intent and AI

Who is saying “hello world”: you, or the computer?

I don’t yet have one of those Google or Amazon talking speaker thingies in my house or office.  A large part of this is that I’m just not happy about the security side: I know that the respective companies swear that they’re only “listening” when you say the device’s trigger word, but even if that’s the case, I like to pretend[1] that I have at least some semblance of privacy in my life.  Another reason, however, is that I’m not sure that I like what happens to people when they pretend that there’s a person listening to them, but it’s really just a machine.

It’s not just Alexa and the “OK Google” persona, however.  When I connect to an automated phone-answering service, I worry when I hear “I’ll direct your call” from a non-human.  Who is “I”?  “We’ll direct your call” is better – “we” could be the organisation with which I’m interacting.  But “I”?  “I” is the pronoun that people use.  When I hear “I”, I’m expecting sentience: if it’s a machine, I’m expecting AI – preferably fully Turing-compliant.

There’s a more important point here, though.  I’m entirely aware that there’s no sentience behind that “I”[2], but there’s a question of agency here that we should unpack.

What, then, is “agency”?  I’m talking about the ability of an entity to act on its own or another’s behalf, and I touched on this in a previous post, “Wow: autonomous agents!”.  When somebody writes some code, what they’re doing is giving the system that will run that code the ability to do something – that’s the first part.  But the agency doesn’t really occur, I’d say, until that code is run/instantiated/executed.  At this point, I would argue, the software instance has agency.

But whose agency, exactly?  For whom is this software acting?

Here are some answers.  I honestly don’t think that any of them is right.

  1. the person who owns the hardware (you own the Alexa hardware, right?  You paid Amazon for it…  Or what about running applications on the cloud?).
  2. the person who started the software (you turned on the Alexa hardware, which started the software…  And don’t forget software which is automatically executed in response to triggers or on a time schedule.)
  3. the person who gave the software the instructions (what do you mean, “gave it the instructions”?  Wrote its config file?  Spoke to it?  Set up initial settings?  Typed in commands?  And even if you gave it instructions, do you think that your OK Google hardware is implementing your wishes, or Google’s?  For whom is it actually acting?  And what side effects (like recording your search history and deciding what to suggest in your feed) are you happy to believe are “yours”?)
  4. the person who installed the software (your phone comes with all sorts of software installed, but surely you are the one who imbues it with agency?  If not, whom are you blaming: Google (for the Android apps) or Samsung (which actually put them on the phone)?)
  5. the person who wrote the software (I think we’ve already dealt with this, but even then, is it a single person, or an organisation?  What about open source software, which is typically written, compiled and documented by many different people?  Ascribing “ownership” or “authorship” is a distinctly – and intentionally – tricky issue when you discuss open source)

Another way to think of this problem is to ask: when you write and execute a program, who is saying “hello world”: you, or the computer?
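Just so we’re clear how small the program in question is, here it is in Python (the language is immaterial to the argument):

    # One line of code, and already a question of agency:
    # who, exactly, is doing the greeting?
    print("hello world")

Every question in the list above applies even to something this trivial.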

There are some really interesting questions that come out of this.  Here are a couple that come to mind, which seem to me to be closely connected.

  • In the film WarGames[3], is the automatic dialling carried out by Matthew Broderick’s character’s computer an act with agency?  Or is it when it connects to another machine?  Or when it records the details of that machine?  I don’t think anyone would argue that the computer is acting with agency once David Lightman actually gets it to complete a connection and interact with it, but what about before?
  • Google used to run automated programs against messages received as part of the Gmail service looking for information and phrases which it could use to serve ads.  They were absolutely adamant that they, Google, weren’t doing the reading: it was just a computer program.  I’m not sure how clear or safe a distinction that is.

Why does this all matter?  Well, one of the more pressing reasons is because of self-driving cars.  Whose fault is it when one goes wrong and injures or kills someone?  What about autonomous defence systems?

And here’s the question that really interests – and vexes – me: is this different when the program which is executing can learn?  I don’t even mean strong AI: just that it can change what it does based on the behaviour it “sees”, “hears” or otherwise senses.  It feels to me that there’s a substantive difference between:

a) actions carried out at the explicit (asynchronous) request of a human operator, or according to sets of rules coded into a program

AND

b) actions carried out in response to rules that have been formed by the operation of the program itself.  There is what I’d call synchronous intent within the program.

You can argue that b) has pretty much always been around, in basic forms, but it seems to me to be different when programs are being created with the expectation that humans will not necessarily be able to decode the rules, and where the intent of the human designers is to allow rulesets to be created in this way.
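As a toy illustration of the difference – everything here is invented, and real learning systems are far more sophisticated – compare a rule a human coded with a rule the program forms for itself:

    # (a) asynchronous intent: the rule was fixed by a human, in advance
    def flag_transaction_coded(amount):
        return amount > 1000  # a human chose this threshold

    # (b) synchronous intent: the "rule" emerges from what the program has seen
    class LearnedFlagger:
        def __init__(self):
            self.amounts = []

        def observe(self, amount):
            self.amounts.append(amount)

        def flag(self, amount):
            if not self.amounts:
                return False
            mean = sum(self.amounts) / len(self.amounts)
            # no human ever wrote "flag anything over X": the threshold
            # depends on the program's own history of observations
            return amount > 2 * mean

In case (b), nobody – not the author, not the operator – can point to a line of code that contains the rule that was actually applied.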

There is some discussion at the moment as to how and/or whether rulesets generated by open source projects should be shared.  I think the general feeling is that there’s no requirement for them to be – in the same way that material I write using an open source text editor shouldn’t automatically be considered open source – but open data is valuable, and finding ways to share it is a good idea, IMHO.

In WarGames, that is the key difference between the system as originally planned and what it ends up doing: Joshua has synchronous intent.

I really don’t think this is all bad: we need these systems, and they’re going to improve our lives significantly.  But I do feel that it’s important that you and I start thinking hard about what is acting for whom, and how.

Now, if you wouldn’t mind opening the pod bay doors, HAL…[5]


1. and yes, I know it’s a pretense.

2. yet…

3. go on – re-watch it: you know you want to[4].

4. and if you’ve never watched it, then stop reading this article and go and watch it NOW.

5. I think you know the problem just as well as I do, Dave.

On passwords and accounts

This isn’t a password problem. It’s a misunderstanding-of-what-accounts-are-for problem.

Once a year or so, one of the big UK tech magazines or websites[1] does a survey where they send a group of people to one of the big London train stations and ask travellers for their password[3].  The deal is that every traveller who gives up their password gets a pencil, a chocolate bar or similar.

I’ve always been sad that I’ve never managed to be at the requisite station for one of these polls.  I would love to get a free pencil – or even better a chocolate bar[4] – for lying about a password.  Or even, frankly, for giving them one of my actual passwords, which would be completely useless to them without some identifying information about me.  Which I obviously wouldn’t give them.  Or again, would pretend to give them, but lie.

The point of this exercise[5] is supposed to be to expose the fact that people are very bad about protecting their passwords.  What it actually identifies is that a good percentage of the British travelling public are either very bad about protecting their passwords, or are entirely capable of making informed (or false) statements in order to get a free pencil or chocolate bar[4]. Good on the British travelling public, say I. 

Now, everybody agrees that passwords are on their way out, as they have been on their way out for a good 15-20 years, so that’s nice.  People misuse them, reuse them, don’t change them often enough, etc., etc.  But it turns out that it’s not the passwords that are the real problem.  This week, more than one British MP admitted – seemingly without any realisation that they were doing anything wrong – that they share their passwords with their staff, or just leave their machines unlocked so that anyone on their staff can answer their email or perform other actions on their behalf.

This isn’t a password problem.  It’s a misunderstanding-of-what-accounts-are-for problem.

People seem to think that, in a corporate or government setting, the point of passwords is to stop people looking at things they shouldn’t.

That’s wrong.  The point of passwords is to allow different accounts for different people, so that the appropriate people can exercise the appropriate actions, and be audited as having done so.  It is, basically, a matching of authority to responsibility – as I discussed in last week’s post Explained: five misused security words – with a bit of auditing thrown in.
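Here’s a minimal sketch of that idea in Python – the accounts, permissions and actions are all invented, and a real system would sit on a proper identity and access management layer – showing why per-person accounts are what make both authority checks and auditing possible:

    # Each account carries its own authority, and every action is attributable.
    import datetime

    AUTHORITY = {
        "mp_jane": {"read_email", "send_email"},
        "staffer_sam": {"read_email"},
    }
    AUDIT_LOG = []

    def perform(account, action):
        if action not in AUTHORITY.get(account, set()):
            raise PermissionError(f"{account} has no authority to {action}")
        AUDIT_LOG.append((datetime.datetime.utcnow(), account, action))

    perform("staffer_sam", "read_email")    # allowed, and attributed to Sam
    # perform("staffer_sam", "send_email")  # refused -- unless everyone shares
                                            # one login, in which case the log
                                            # proves nothing about anybody

Share the password, and every entry in that log carries the same name: the authority check still runs, but the audit trail is worthless.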

Now, looking at things you shouldn’t is one action that a person may have responsibility for, certainly, but it’s not the main thing.  But if you misuse accounts in the way that has been exposed in the UK parliament, then worse things are going to happen.  If you willingly bypass accounts, you are removing the ability of those who have a responsibility to ensure correct responsibility-authority pairings to track and audit actions.  You are, in fact, setting yourself up with excuses before the fact, but also making it very difficult to prove wrongdoing by other people who may misuse an account.  A culture that allows such behaviour is one which doesn’t allow misuse to be tracked.  This is bad enough in a company or a school – but in our seat of government?  Unacceptable.  You just have to hope that there are free pencils.  Or chocolate bars[4].


1. I can’t remember which, and I’m not going to do them the service of referencing them, or even looking them up, for reasons that should be clear once you read the main text.[2]

2. I’m trialling a new form of footnote referencing. Please let me know whether you like it.

3. I guess their email password, but again, I can’t remember and I’m not going to look it up.

4. Or similar.

5. I say “point”…

What’s the point of security? 

Like it or not, your users are part of your system.

No, really: what’s the point of it? I’m currently at the OpenStack Summit in Sydney, and was just thrown out of the exhibition area as it’s closed for preparations for an event. A fairly large gentleman in a dark-coloured uniform made it clear that if I didn’t have a particular sticker on my badge, then I wasn’t allowed to stay.  Now, as Red Hat* is an exhibitor, and our booth manager is a buddy** of mine, she immediately offered me the relevant sticker, but as I needed a sit down and a cup of tea***, I made my way to the exit, feeling only slightly disgruntled.

“What’s the point of this story?” you’re probably asking. Well, the point is this: I’m 100% certain that if I’d asked the security guard why he was throwing me out, he wouldn’t have had a good reason. Oh, he’d probably have given me a reason, and it might have seemed good to him, but I’m really pretty sure that it wouldn’t have satisfied me. And I don’t mean the “slightly annoyed, jet-lagged me who doesn’t want to move”, but the rational, security-aware me who can hopefully reason sensibly about such things. There may even have been such a reason, but I doubt that he was privy to it.

My point, then, really comes down to this. We need to be able to explain the reasons for security policies in ways which:

  1. Can be expressed by those enforcing them;
  2. Can be understood by uninformed users;
  3. Can be understood and queried by informed users.

In the first point, I don’t even necessarily mean people: sometimes – often – it’s systems that are doing the enforcing. This makes things even more difficult in some ways: you need to think about UX***** in ways that are appropriate for two very, very different constituencies.

I’ve written before about how frustrating it can be when you can’t engage with the people who have come up with a security policy, or implemented a particular control. If there’s no explanation of why a control is in place – the policy behind it – and it’s an impediment to the user******, then it’s highly likely that people will either work around it, or stop using the system.

Unless. Unless you can prove that the security control is relevant, proportionate and beneficial to the user. This is going to be difficult in many cases.  But I have seen some good examples, which means that I’m hopeful that we can generally do it if we try hard enough.  The problem is that most designers of systems don’t bother trying. They usually don’t bother to provide any description at all, and as for making that description show the relevance, proportionality and benefits to the user? Rare, very rare.

This is another argument for security folks being more embedded in systems design because, like it or not, your users are part of your system, and they need to be included in the design.  Which means that you need to educate your UX people, too, because if you can’t convince them of the need to do security, you’re sure as heck not going to be able to convince your users.

And when you work with them on the personae and use case modelling, make sure that you include users who are like you: knowledgeable security experts who want to know why they should submit themselves to the controls that you’re including in the system.

Now, they’ve opened up the exhibitor hall again: I’ve got to go and grab myself a drink before they start kicking people out again.


*my employer.

**she’s American.

***I’m British****.

****and the conference is in Australia, which meant that I was actually able to get a decent cup of the same.

*****this, I believe, stands for “User eXperience”. I find it almost unbearably ironic that the people tasked with communicating clearly to us can’t fully grasp the idea of initial letters standing for something.

******and most security controls are.

The commonwealth of Open Source

This commonwealth does not apply to proprietary software: what stays hidden does not enlighten or enrich the world.

“But surely Open Source software is less secure, because everybody can see it, and they can just recompile it and replace it with bad stuff they’ve written?”

Hands up who’s heard this?*  I’ve been talking to customers – yes, they let me talk to customers sometimes – and to folks in the Field**, and this is one that comes up, it turns out, quite frequently.  I talked in a previous post (“Disbelieving the many eyes hypothesis“) about how Open Source software – particularly security software – doesn’t get to be magically more secure than proprietary software, and talked a little bit there about how I’d still go with Open Source over proprietary every time, but the way that I’ve heard the particular question – about OSS being less secure – suggests to me that there are times when we don’t just need to be able to explain how Open Source works, but also to be able to engage actively in Apologetics***.  So here goes.  I don’t expect it to be up to Newton’s or Wittgenstein’s levels of logic, but I’ll do what I can, and I’ll summarise at the bottom so that you’ve got a quick list of the points if you want it.

The arguments

First of all, we should accept that no software is perfect******.  Not proprietary software, not Open Source software.  Second, we should accept that there absolutely is good proprietary software out there.  Third, on the other hand, there is some very bad Open Source software.  Fourth, there are some extremely intelligent, gifted and dedicated architects, designers and software engineers who create proprietary software.

But here’s the rub.  Fifth – the pool of people who will work on or otherwise look at that proprietary software is limited.  And you can never hire all the best people.  Even in government and public sector organisations – who often have a larger talent pool available to them, particularly for *cough* security-related *cough* applications – the pool is limited.

Sixth – the pool of people available to look at, test, improve, break, re-improve, and roll out Open Source software is almost unlimited, and does include the best people.  Seventh – and I love this one: the pool also includes many of the people writing the proprietary software.  Eighth – many of the applications being written by public sector and government organisations are open sourced anyway these days.

Ninth – if you’re worried about running Open Source software which is unsupported, or comes from dodgy, un-provenanced sources, then good news: there are a bunch of organisations******* who will check the provenance of that code, support, maintain and patch it.  They’ll do it along the same type of business lines that you’d expect from a proprietary software provider.  You can also ensure that the software you get from them is the right software: the standard technique is for them to sign bundles of software so that you can check that what you’re installing isn’t just from some random bad person who’s taken that code and done Bad Things[tm] with it.
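As a sketch of the “check what you’re installing” step – the file name and digest below are made up, and real distributions use full cryptographic signatures (GPG-signed packages and repository metadata) rather than a bare checksum – the principle looks like this in Python:

    # Verify before you trust: refuse to install anything whose contents
    # don't match what the provider actually published.
    import hashlib

    EXPECTED_SHA256 = "0123abcd..."  # hypothetical digest, published over a trusted channel

    def matches_published_digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest() == EXPECTED_SHA256

    if not matches_published_digest("some-package.tar.gz"):
        raise SystemExit("refusing to install: not what was published")

A signature goes further than a checksum – it also tells you exactly who published the bundle – but the shape of the check is the same.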

Tenth – and here’s the point of this post – when you run Open Source software, when you test it, when you provide feedback on issues, when you discover errors and report them, you are tapping into, and adding to, the commonwealth of knowledge and expertise and experience that is Open Source.  And which is only made greater by your doing so.  If you do this yourself, or through one of the businesses who will support that Open Source software********, you are part of this commonwealth.  Things get better with Open Source software, and you can see them getting better.  Nothing is hidden – it’s, well, “open”.  Can things get worse?  Yes, they can, but we can see when that happens, and fix it.

This commonwealth does not apply to proprietary software: what stays hidden does not enlighten or enrich the world.

I know that I need to be careful about using the word “commonwealth” as a Briton: it has connotations of (faded…) empire which I don’t intend it to hold in this case.  It’s probably not what Cromwell********* had in mind when he talked about the “Commonwealth”, either, and anyway, he’s a somewhat … controversial historical figure.  What I’m talking about is a concept in which I think the words deserve concatenation – “common” and “wealth” – to show that we’re talking about something more than just money: shared wealth available to all of humanity.

I really believe in this.  If you want to take away a religious message from this blog, it should be this**********: the commonwealth is our heritage, our experience, our knowledge, our responsibility.  The commonwealth is available to all of humanity.  We have it in common, and it is an almost inestimable wealth.

 

A handy crib sheet

  1. (Almost) no software is perfect.
  2. There is good proprietary software.
  3. There is bad Open Source software.
  4. There are some very clever, talented and devoted people who create proprietary software.
  5. The pool of people available to write and improve proprietary software is limited, even within the public sector and government realm.
  6. The corresponding pool of people for Open Source is virtually unlimited…
  7. …and includes a goodly number of the talent pool of people writing proprietary software.
  8. Public sector and government organisations often open source their software anyway.
  9. There are businesses who will support Open Source software for you.
  10. Contribution – even usage – adds to the commonwealth.

*OK – you can put your hands down now.

**should this be capitalised?  Is there a particular field, or how does it work?  I’m not sure.

***I have a degree in English Literature and Theology – this probably won’t surprise some of the regular readers of this blog****.

****not, I hope, because I spout too much theology*****, but because it’s often full of long-winded, irrelevant Humanities (US Eng: “liberal arts”) references.

*****Emacs.  Every time.

******not even Emacs.  And yes, I know that there are techniques to prove the correctness of some software.  (I suspect that Emacs doesn’t pass many of them…)

*******hand up here: I’m employed by one of them, Red Hat, Inc.  Go have a look – fun place to work, and we’re usually hiring.

********assuming that they fully abide by the rules of the Open Source licence(s) they’re using, that is.

*********erstwhile “Lord Protector of England, Scotland and Ireland” – that Cromwell.

**********oh, and choose Emacs over vi variants, obviously.

Wow: autonomous agents!

The problem is not the autonomy. The problem isn’t even particularly with the intelligence…

Autonomous, intelligent agents offer some great opportunities for our digital lives*.  There, look, I said it.  They will book meetings for us, negotiate cheap holidays, order our children’s complete school outfit for the beginning of term, and let us know when it’s time to go to the nurse for our check-up.  Our business lives, our personal lives, our family relationships – they’ll all be revolutionised by autonomous agents.  Autonomous agents will learn our preferences, have access to our diaries, pay for items, be able to send messages to our friends.

This is all fantastic, and I’m very excited about it.  The problem is that I’ve been excited about it for nearly 20 years, when I was involved in a project around autonomous agents in Java.  It was very neat then, and it’s still very neat now***.

Of course, technology has moved on.  Some of the underlying capabilities are much more advanced now than then.  General availability of APIs, consistency of data formats, better Machine Learning (or Artificial Intelligence, if you must), less computationally expensive cryptography, and the rise of blockchains and distributed ledgers: they all bring the ability for us to build autonomous agents closer than ever before.  We talked about disintermediation back in the day, and that looked plausible.  We really can build scalable marketplaces now in ways which just weren’t as feasible two decades ago.

The problem, though, isn’t the technology.  It was never the technology.  We could have made the technology work 20 years ago, even if it wasn’t as fast, secure or wide-ranging as it could be today.  It isn’t even vested interests from the large platform players, who arguably own much of this space at the moment – though these interests are much more consolidated than they were when I was first looking at this issue.

The problem is not the autonomy.  The problem isn’t even particularly with the intelligence: you can program as much or as little in as you want, or as the technology allows.  The problem is with the agency.

How much of my life do I want to hand over to what’s basically a ‘bot?  Ignore***** the fact that these things will get hacked******, and assume we’re talking about normal, intended usage.  What does “agency” mean?  It means acting for someone: being their agent – think of what actors’ agents do, for example.  When I engage a lawyer or a builder or an accountant to do something for me, or when an actor employs an agent for that matter, we’re very clear about what they’ll be doing.  This is to protect both me and them from unintended consequences.  There’s a huge legal corpus around defining, in different fields, exactly the scope of work to be carried out by a person or a company who is acting as an agent.  There are contracts, and agreed restitutions – basically punishments – for when things go wrong.  Say that an accountant buys 500 shares in a bank, and then I turn round and say that she never had the authority to do so: if we’ve set up the relationship correctly, it should be entirely clear whether or not she did, and whose responsibility it is to deal with any fall-out from that purchase.

Now think about that in terms of autonomous, intelligent agents.  Write me that contract, and make it equivalent in software and the legal system.  Tell me what happens when things go wrong with the software.  Show me how to prove that I didn’t tell the agent to buy those shares.  Explain to me where the restitution lies.
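Here’s a toy version of what such a contract might look like in software – every name, action and limit below is invented for illustration, and a real mandate would need legal standing as well as code – where the agent carries an explicit, auditable scope from its principal, and anything outside it fails closed:

    # An agent that can only act within an explicit mandate from its principal.
    import datetime

    class Mandate:
        def __init__(self, principal, allowed_actions, spend_limit):
            self.principal = principal
            self.allowed_actions = allowed_actions
            self.spend_limit = spend_limit

    class Agent:
        def __init__(self, mandate):
            self.mandate = mandate
            self.audit_log = []  # enough to settle later arguments about who authorised what

        def act(self, action, cost=0):
            if action not in self.mandate.allowed_actions or cost > self.mandate.spend_limit:
                raise PermissionError(f"outside mandate from {self.mandate.principal}: {action}")
            self.audit_log.append((datetime.datetime.utcnow(), action, cost))

    agent = Agent(Mandate("me", {"book_meeting", "order_flowers"}, spend_limit=50))
    agent.act("book_meeting")             # within scope: proceeds, and is logged
    # agent.act("buy_shares", cost=5000)  # refused: provably never authorised

The code is the easy half, though: the problems that follow are about everything the code can’t capture.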

And these are arguably the simple problems.  How do I rebuild the business reputation that I’ve built up over the past 15 years when my agent posts a tweet about how I use a competitor’s products, when I’m just trialling them for interest?  How does an agent know not to let my wife see the diary entry for my meeting with that divorce lawyer********?  What aspects of my browsing profile are appropriate for suggesting – or even buying – online products or services with my personal or business credit card*********?  And there’s the classic “buying flowers for the mistress and having them sent to the wife” problem**********.

I don’t think we have an answer to these questions: not even close.  You know that virtual admin assistant we’ve been promised in sci-fi movies for decades now: the one with the futuristic haircut who appears as a hologram outside our office?  Holograms – nearly.  Technology behind it – pretty much.  Trust, reputation and agency?  Nowhere near.

 


*I hate this word: “digital”.  Well, not really, but it’s used far too much as a shorthand for “newest technology”**.

**”Digital businesses”.  You mean, unlike all the analogue ones?  Come on.

***this is one of those words that my kids hate me using.  There are two types of word that come into this category: old words and new words.  Either I’m showing how old I am, or I’m trying to be hip****, which is arguably worse.  I can’t win.

****yeah, they don’t say hip.  That’s one of the “old person words”.

*****for now, at least.  Let’s not forget it.

******_everything_  gets hacked*******.

*******I could say “cracked”, but some of it won’t be malicious, and hacking might be positive.

********I’m not.  This is an example.

*********this isn’t even about “dodgy” things I might have been browsing on home time.  I may have been browsing for analyst services, with the intent to buy a subscription: how sure am I that the agent won’t decide to charge these to my personal credit card when it knows that I perform other “business-like” actions like pay for business-related books myself sometimes?

**********how many times do I have to tell you, darling…?

Next generation … people

… security as a topic is one which is interesting, fast-moving and undeniably sexy…

DISCLAIMER/STATEMENT OF IGNORANCE: a number of regular readers have asked why I insist on using asterisks for footnotes, and whether I could move to actual links, instead.  The official reason I give for sticking with asterisks is that I think it’s a bit quirky and I like that, but the real reason is that I don’t know how to add internal links in WordPress, and can’t be bothered to find out.  Apologies.

I don’t know if you’ve noticed, but pretty much everything out there is “next generation”.  Or, if you’re really lucky “Next gen”.  What I’d like to talk about this week, however, is the actual next generation – that’s people.  IT people.  IT security people.  I was enormously chuffed* to be referred to on an IRC channel a couple of months ago as a “greybeard”***, suggesting, I suppose, that I’m an established expert in the field.  Or maybe just that I’m an old fuddy-duddy***** who ought to be put out to pasture.  Either way, it was nice to come across young(er) folks with an interest in IT security******.

So, you, dear reader, and I, your beloved protagonist, both know that security as a topic is one which is interesting, fast-moving and undeniably******** sexy – as are all its proponents.  However, it seems that this news has not yet spread as widely as we would like – there is a worldwide shortage of IT security professionals, as a quick check on your search engine of choice for “shortage of it security professionals” will tell you.

Last week, I attended the Open Source Summit and Linux Security Summit in LA, and one of the keynotes, as it always seems to be, was Jim Zemlin (head of the Linux Foundation) chatting to Linus Torvalds (inventor of, oh, I don’t know).  Linus doesn’t have an entirely positive track record in talking about security, so it was interesting that Jim specifically asked him about it.  Part of Linus’ reply was “We need to try to get as many of those smart people before they go to the dark side [sic: I took this from an article by the Register, and they didn’t bother to capitalise.  I mean: really?] and improve security that way by having a lot of developers.”  Apart from the fact that anyone who references Star Wars in front of a bunch of geeks is onto a winner, Linus had a pretty much captive audience just by nature of who he is, but even given that, this got a positive reaction.  And he’s right: we do need to make sure that we catch these smart people early, and get them working on our side.

Later that week, at the Linux Security Summit, one of the speakers asked for a show of hands to find out the number of first-time attendees.  I was astonished to note that maybe half of the people there had not come before.  And heartened.  I was also pleased to note that a good number of them appeared fairly young*********.  On the other hand, the number of women and other under-represented demographics seemed worse than in the main Open Source Summit, which was a pity – as I’ve argued in previous posts, I think that diversity is vital for our industry.

This post is wobbling to an end without any great insights, so let me try to come up with a couple which are, if not great, then at least slightly insightful:

  1. we’ve got a job to do.  The industry needs more young (and diverse) talent: if you’re in the biz, then go out, be enthusiastic, show what fun it can be.
  2. if showing people how much fun security can be doesn’t do the trick, encourage them to do a search for “IT security median salaries comparison”.  It’s amazing how a pay cheque********** can motivate.

*note to non-British readers: this means “flattered”**.

**but with an extra helping of smugness.

***they may have written “graybeard”, but I translate****.

****or even “gr4yb34rd”: it was one of those sorts of IRC channels.

*****if I translate each of these, we’ll be here for ever.  Look it up.

******I managed to convince myself******* that their interest was entirely benign though, as I mentioned above, it was one of those sorts of IRC channels.

*******the glass of whisky may have helped.

********well, maybe a bit deniably.

*********to me, at least.  Which, if you listen to my kids, isn’t that hard.

**********who actually gets paid by cheque (or check) any more?