Equality in volunteering and open source

Volunteering favours the socially privileged

Volunteering is “in”. Lots of companies – particularly tech companies, it seems – provide incentives to employees to volunteer for charities, NGOs and other “not-for-profits”. These incentives range from donation matching to paid volunteer days to matching hours worked for a charity with a cash donation.

Then there are other types of voluntary work: helping out at a local sports club, mowing a neighbour’s lawn or fetching their groceries, and, of course, open source, which we’ll be looking at in some detail. There are countless thousands of projects which could benefit from your time.

Let’s step back first and look at the benefits of volunteering. The most obvious, of course, is the direct benefit of your time and/or expertise to the organisation, group or individual. Then there are the benefits to the wider community. Having people volunteer their time to help out with various groups – particularly those with whom they would have little contact in other circumstances – helps social cohesion and encourages better understanding of differing points of view, as you meet people, and not just opinions.

Then there’s the benefit to you. Helping others feels great, looks good on your CV[1], can give you more skills, and can make you friends – quite apart from the benefit I mentioned above of helping you to understand differing points of view. On the issue of open source, it’s something that lots of companies – certainly the sorts of companies with which I’m generally involved – are interested in, or even expect to see on your CV. Your contributions to open source projects are visible – unlike whatever you’ve been doing in most other jobs – and they can be looked over; they show commitment, and are also a way of gauging your enthusiasm, expertise and knowledge in particular areas. All this seems to make lots of sense, and until fairly recently I was concerned when I was confronted with a CV which didn’t have any open source contributions that I could check.

The inequality of volunteering

And then I did some reading by a feminist open sourcer (I’m afraid that I can’t remember who it was[3]), and did a little more digging, and realised that it’s far from that simple. Volunteering is an activity which favours the socially privileged – whether that’s in terms of income, gender, language or any number of other indicators. That’s particularly true for software and open source volunteering.

Let me explain. We’ll start with the gender issue. On average, you’re much less likely to have spare time to be involved in an open source project if you’re a woman, because women, on average, have more responsibilities in the home and less free time. Globally, due to wage discrepancies, they are also less likely to have access to computing resources with which to contribute. Even beyond that, they are less likely to be welcomed into communities and have their contributions valued, whilst being more likely to attract abuse.

If you are in a low income bracket, you are less likely to have time to volunteer and, again, less likely to have access to the resources needed to contribute.

If your first language is not English, you are less likely to be able to find an accepting project, and more likely to receive abuse for not explaining what you are doing.

If your name reflects a particular ethnicity, you may not be made to feel welcome in some contexts online.

If you are not neurotypical (e.g. you have Asperger’s or are on the autism spectrum, or if you are dyslexic), you may face problems in engaging in the social activities – online and offline – which are important to full participation in many projects.

The list goes on. There are, of course, many welcoming projects and communities that attempt to address all of these issues, and we must encourage that. Some people who are disadvantaged in terms of some of the privilege-types that I’ve noted above may actually find that open source suits them very well, as their disadvantage can be hidden online in ways in which it could not be in other settings, and some communities make a special effort to be welcoming and accepting.

However, if we just assume – that’s unconscious bias, folks – that volunteering, and specifically open source volunteering, is a sine qua non for “serious” candidates for roles, or a foundational required expertise for someone we are looking to employ, then we set a dangerous precedent, and run a very real danger of reinforcing privilege, rather than reducing it.

What can we do?

First, we can make our open source projects more welcoming, and be aware of the problems that those from less privileged groups may face. Second, we must be aware, and make our colleagues aware, that when we are interviewing and hiring, lack of evidence of volunteering is not evidence that the person is not talented, enthusiastic or skilled. Third, and always, we should look for more ways to help those who are less privileged than us to overcome the barriers to accessing not only jobs but also volunteering opportunities, which will benefit not only them, but our communities as a whole.


1 – Curriculum vitae[2].

2 – Oh, you wanted the Americanism? It’s “resume” or something similar, but with more accents on it.

3 – a friend reminded me that it might have been this: https://www.ashedryden.com/blog/the-ethics-of-unpaid-labor-and-the-oss-community

Cryptographers arise!

Cryptography is a strange field, in that it’s concerned with keeping secrets, but also has a long history of being kept secret itself.  There are famous names from the early days, from Caesar (Julius, that is) to Vigenère, to more recent names like Diffie, Hellman[1], Rivest, Shamir and Adleman.  The trend even more recently has been away from naming cryptographic protocols after their creators, and towards snappy names like Blowfish or less snappy descriptions such as “ECC”.  Although I’m not generally a fan of glorifying individual talent over collective work, this feels like a bit of a pity in some ways.

In fact, over the past 80 years or so, even more effort has probably been put into keeping the work of teams in cryptanalysis – the study of breaking cryptography – secret, though there are some famous names from the past like Al-Kindi, Phelippes (or “Phillips”), Rejewski, Turing, Tiltman, Knox and Briggs[2].

Cryptography is difficult.  Actually, let me rephrase that: cryptography is easy to do badly, and difficult to do well.  “Anybody can design a cipher that they can’t break”, goes an old dictum, with the second half of the sentence, “and somebody else can easily break”, generally left unsaid.  Creating cryptographic primitives requires significant knowledge of mathematics – some branches of which are well within the grasp of an average high-school student, and some of which are considerably more arcane.  Putting those primitives together in ways that allow you to create interesting protocols for use in the real world doesn’t necessarily require that you understand the full depth of the mathematics of the primitives that you’re using[3], but it does require a good grounding in how they should be used, and how they should not be used.  Even then, a wise protocol designer, like a wise cryptographer[4], always gets colleagues and others to review his or her work.  This is one of the reasons that it’s so important that cryptography should be in the public domain, and preferably fully open source.

Why am I writing about this?  Well, partly because I think that, on the whole, the work of cryptographers is undervalued.  The work they do is not only very tricky, but also vital.  We need cryptographers and cryptanalysts to be working in the public realm, designing new algorithms and breaking old (and, I suppose, new) ones.  We should be recognising and celebrating their work.  Mathematics is not standing still, and, as I wrote recently, quantum computing is threatening to chip away at our privacy and secrecy.  The other reason that I’m writing about this is that I think we should be proud of our history and heritage, inspired to work on important problems, and to inspire those around us to work on them, too.

Oh, and if you’re interested in the t-shirt, drop me a line or put something in the comments.


1 – I’m good at spelling, really I am, but I need to check the number of ells and ens in his name every single time.

2 – I know that this list is heavily Bletchley-centric: it’s an area of history in which I’m particularly interested.  Bletchley was also an important training ground for some very important women in security – something of which we have maybe lost sight.

3 – good thing, too, as I’m not a mathematician, but I have designed the odd protocol here and there.

4 – that is, any cryptographer who recognises the truth of the dictum I quote above.

Will quantum computing break security?

Do you want J. Random Hacker to be able to pretend that they’re your bank?

Over the past few years, a new type of computer has arrived on the block: the quantum computer.  It’s arguably the sixth type of computer:

  1. humans – before there were artificial computers, people used, well, people.  And people with this job were called “computers”.
  2. mechanical analogue – devices such as the Antikythera mechanism, astrolabes or slide rules.
  3. mechanical digital – in this category I’d count anything that allowed discrete mathematics, but didn’t use electronics for the actual calculation: the abacus, Babbage’s Difference Engine, etc.
  4. electronic analogue – many of these were invented for military uses such as bomb sights, gun aiming, etc.
  5. electronic digital – I’m going to go out on a limb here, and characterise Colossus as the first electronic digital computer[1]: these are basically what we use today for anything from mobile phones to supercomputers.
  6. quantum computers – these are coming, and are fundamentally different to all of the previous generations.

What is quantum computing?

Quantum computing uses concepts from quantum mechanics to allow very different types of calculations to what we’re used to in “classical computing”.  I’m not even going to try to explain, because I know that I’d do a terrible job, so I suggest you try something like Wikipedia’s definition as a starting point.  What’s important for our purposes is to understand that quantum computers use qubits to do calculations, and for quite a few types of mathematical algorithms – and therefore computing operations – they can solve problems much faster than classical computers.

What’s “much faster”?  Much, much faster: orders of magnitude faster.  A calculation that might take years or decades with a classical computer could, in certain circumstances, take seconds.  Impressive, yes?  And scary.  Because one of the types of problems that quantum computers should be good at solving is decrypting encrypted messages, even without the keys.
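To give a sense of the scale involved – a rough sketch using the standard textbook complexity estimates, not figures for any particular machine – consider factoring a large number n, the hard problem underlying RSA.  The best known classical algorithm, the general number field sieve, takes sub-exponential time, while Shor’s quantum algorithm needs only polynomial time, roughly:

\[
T_{\mathrm{GNFS}}(n) = \exp\Bigl(\bigl((64/9)^{1/3} + o(1)\bigr)\,(\ln n)^{1/3}\,(\ln\ln n)^{2/3}\Bigr)
\qquad\text{versus}\qquad
T_{\mathrm{Shor}}(n) \approx O\bigl((\log n)^{3}\bigr).
\]

That jump from sub-exponential to polynomial growth in the size of n is where the “decades versus seconds” intuition comes from.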

This means that someone with a sufficiently powerful quantum computer should be able to read all of your current and past messages, decrypt any stored data, and maybe fake digital signatures.  Is this a big thing?  Yes.  Do you want J. Random Hacker to be able to pretend that they’re your bank[2]?  Do you want that transaction on the blockchain where you were sold a 10 bedroom mansion in Mayfair to be “corrected” to be a bedsit in Weston-super-Mare[3]?

Some good news

This is all scary stuff, but there’s good news, of various types.

The first is that in order to make any of this work at all, you need a quantum computer with a good number of qubits operating, and this is turning out to be hard[4].  The general consensus is that we’ve got a few years before anybody has a “big” enough quantum computer to do serious damage to classical encryption algorithms.

The second is that, even with a sufficient number of qubits to attack our existing algorithms, you still need even more in order to allow for error correction.

The third is that although there are theoretical models to show how to attack some of our existing algorithms, actually making them work is significantly harder than you or I[5] might expect.  In fact, some of the attacks may turn out to be infeasible, or just take more years to perfect than we’d feared.

The fourth is that there are clever people out there who are designing quantum-computation resistant algorithms (sometimes referred to as “post-quantum algorithms”) that we can use, at least for new encryption, once they’ve been tested and become widely available.

All-in-all, in fact, there’s a strong body of expert opinion that says that we shouldn’t be overly worried about quantum computing breaking our encryption in the next 5 or even 10 years.

And some bad news

It’s not all rosy, however.  Two issues stick out to me as areas of concern.

  1. People are still designing and rolling out systems which don’t consider the issue.  If you’re coming up with a system which is likely to be in use for ten or more years, or which will be encrypting or signing data which must remain confidential or attributable over those sorts of periods, then you should be considering what impact quantum computing may have on your system.
  2. Some of the new, quantum-computing resistant algorithms are proprietary.  This means that when you and I want to start implementing systems which are designed to be quantum-computing resistant, we’ll have to pay to do so.  I’m a big proponent of open source, and particularly of open source cryptography, and my big worry is that we just won’t be able to open source these things, and worse, that when new protocol standards are created – either de facto or through standards bodies – they will choose proprietary algorithms that exclude the use of open source, whether on purpose, through ignorance, or because few good alternatives are available.

What to do?

Luckily, there are things you can do to address both of the issues above.  The first is to think and plan, when designing a system, about what the impact of quantum computing might be on it.  Often – very often – you won’t actually need to implement anything explicit now (and it could be hard to, given the current state of the art), but you should at least embrace the concept of crypto-agility: designing protocols and systems so that you can swap out algorithms if required[7].
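To make the crypto-agility idea concrete, here’s a minimal sketch in Python – illustrative only, with a hypothetical registry and algorithm identifiers of my own invention, not any particular library’s API.  The point is simply that callers depend on an algorithm identifier carried in the protocol, not on a hard-coded algorithm, so swapping in a post-quantum replacement becomes a one-line registry change:

    import hashlib
    from typing import Callable, Dict

    # Hypothetical registry: algorithm identifiers mapped to implementations.
    # Swapping in a successor algorithm is a change here, not in every caller.
    HASH_REGISTRY: Dict[str, Callable[[bytes], bytes]] = {
        "sha256": lambda data: hashlib.sha256(data).digest(),
        "sha3-256": lambda data: hashlib.sha3_256(data).digest(),
        # "pq-hash-v1": ...  # a future post-quantum algorithm slots in here
    }

    def digest(algorithm_id: str, data: bytes) -> bytes:
        """Dispatch on the algorithm identifier carried in the message."""
        if algorithm_id not in HASH_REGISTRY:
            raise ValueError(f"unsupported algorithm: {algorithm_id}")
        return HASH_REGISTRY[algorithm_id](data)

    # Messages are tagged with the algorithm used, so they remain verifiable
    # even after the system-wide default has changed.
    message = {"alg": "sha3-256", "payload": b"hello"}
    checksum = digest(message["alg"], message["payload"])

The same pattern applies to signature and key-exchange algorithms, which are the ones most at risk from quantum computing.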

The second is a call to arms: get involved in the open source movement, and encourage everybody you know who has anything to do with cryptography to rally for open standards and for research into non-proprietary, quantum-computing resistant algorithms.  This is something that’s very much on my to-do list, and an area where pressure and lobbying is just as important as the research itself.


1 – I think it’s fair to call it the first electronic, programmable computer.  I know there were earlier non-programmable ones, and that some claim ENIAC, but I don’t have the space or the energy to argue the case here.

2 – no.

3 – see [2].  Don’t get me wrong, by the way – I grew up near Weston-super-Mare, and it’s got things going for it, but it’s not Mayfair.

4 – and if a quantum physicist says that something’s hard, then, to my mind, it’s hard.

5 – and I’m assuming that neither of us is a quantum physicist or mathematician[6].

6 – I’m definitely not.

7 – and not just for quantum-computing reasons: there’s a good chance that some of our existing classical algorithms may just fall to other, non-quantum attacks such as new mathematical approaches.

The 3 things you need to know about disk encryption

Use software encryption, preferably an open-source and audited solution.

It turns out that somebody – well, lots of people, in fact – failed to implement a cryptographic standard very well.  This isn’t a surprise, I’m afraid, but it’s bad news.  I’ve written before about how important it is to be using disk encryption, but it turns out that the advice I gave wasn’t sufficient, or detailed enough.

Here’s a bit of background.  There are two ways to do disk encryption:

  1. let the disk hardware (and firmware) manage it: HDD (hard disk drive), SSD (solid state drive) and hybrid (a mix of HDD and SSD technologies) manufacturers create drives which have encryption built in.
  2. allow your Operating System (e.g. Linux[1], OSX[2], Windows[3]) to do the job: the O/S will have a little bit of itself on the disk unencrypted, which will allow it to decrypt the rest of the disk (which is encrypted) when provided with a password or key.

You’d think, wouldn’t you, that option 1 would be the safest?  It should be quick, as it’s done in hardware, and, well, the companies who manufacture these disks will know what they’re doing, right?

No.

A paper (link opens a PDF file) written by some researchers in the Netherlands reveals some work that they did on several SSD drives to try to work out how good a job had been done on the encryption security.  They are all supposed to have implemented a fairly complex standard from the TCG[4] called Opal, but it seems that none of them did it right.  It turns out that someone with physical access to your hardware can, fairly trivially, decrypt what’s on your drive.  And they can do this without the password that you use to lock it or any associated key(s).  The simple lesson from this is that you shouldn’t trust hardware disk encryption.

So, software disk encryption is OK, then?

Also no.

Well, actually yes, as long as you’re not using Microsoft’s BitLocker in its default mode.  It turns out that, by default, BitLocker will simply use the drive’s hardware encryption if the drive supports it, unless you tell it not to do so.

What about other options?  Well, you can tell BitLocker not to use hardware encryption, but only for a new installation: it won’t change on an existing disk.  The best option[5] is to use a software encryption solution which is open source and audited by the wider community.  LUKS is the default for most Linux distributions.  One suggested by the paper’s authors for Windows is VeraCrypt.  Can we be certain that there are no holes or mistakes in the implementation of these solutions?  No, we can’t, but the chances of security issues being found and fixed are much, much higher than for proprietary software[6].
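If you want to check what you’ve actually got on a Linux box, here’s a small sketch – it assumes cryptsetup is installed, will probably need root, and /dev/sda3 is a placeholder that you’d replace with your own partition:

    import subprocess

    DEVICE = "/dev/sda3"  # placeholder: substitute your own partition

    # `cryptsetup isLuks` exits with 0 if the device is a LUKS container,
    # and non-zero otherwise.
    result = subprocess.run(["cryptsetup", "isLuks", DEVICE])
    if result.returncode == 0:
        print(f"{DEVICE} is a LUKS container (software encryption)")
    else:
        print(f"{DEVICE} is not LUKS: check how (and whether) it is encrypted")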

What, then, are my recommendations?

  1. Don’t use hardware disk encryption.  It’s been shown to be flawed in many implementations.
  2. Don’t use proprietary software.  For anything, honestly, particularly anything security-related, but specifically not for disk encryption.
  3. If you have to use Windows, and are using BitLocker, run with VeraCrypt on top.


1 – GNU Linux.

2 – I’m not even sure if this is the OS that Macs run anymore, to be honest.

3 – not my thing either, but I’m pretty sure this is what it’s called.  Couldn’t be certain of the version, though.

4 – Trusted Computing Group.

5 – as noted by the paper’s authors, and heartily endorsed by me.

6 – I’m not aware of any problems with Macintosh-based implementations, but open source is just better – read the article linked from earlier in the sentence.

On being acquired – a personal view

It’s difficult to think of a better fit than IBM.

First off, today is one of those days when I need to point you at the standard disclaimer that the views expressed in this post are my own, and not necessarily those of my employers.  That said, I think that many of them probably align, but better safe than sorry[1].  Another note: I believe that all of the information in this article is public knowledge.

The news came out two days ago (last Sunday, 2018-10-28) that Red Hat, my employer, is being acquired by IBM for $34bn.  I didn’t know about the deal in advance (I’m not that exalted within the company hierarchy, which is probably a good thing, as all those involved needed to keep very tight-lipped about it, and that would have been hard), so the first intimation I got was when people started sharing stories from various news sites on internal chat discussions.  They (IBM) are quite clear about the fact that they are acquiring us for the people, which means that each of us (including me!) is worth around $2.6m, based on our current headcount.  Sadly, I don’t think it works quite like that, and certainly nobody has (yet) offered to pay me that amount[2].  IBM have also said that they intend to keep Red Hat operating as a separate entity within IBM.

How do I feel?  My initial emotion was shock.  It’s always a surprise when you get news that you weren’t expecting, and the message that we’d carried for a long time was that Red Hat would attempt to keep ploughing its own furrow[3] for as long as possible.  But I’d always known that, as a public company, we were available to be bought, if the money was good enough.  It appears that on this occasion it was.  And that emotion turned to interest as to what was going to happen next.

And do you know what?  It’s difficult to think of a better fit than IBM.  I’m not going to enumerate the reasons that I feel that other possible acquirers would have been worse, but here are some of the reasons that IBM, at least in this arrangement, is good:

  • they “get” open source, and have a long history of encouraging its use;
  • they seem to understand that Red Hat has a very distinctive culture, and want to encourage that, post-acquisition;
  • they have a hybrid cloud strategy and products, Red Hat has a hybrid cloud strategy and products: they’re fairly well-aligned;
  • we’re complementary in a number of sectors and markets;
  • they’re a much bigger player than us, and suddenly, we’ll have access to more senior people in new and exciting companies.

What about the impact on me, though?  Well, IBM takes security seriously.  IBM has some fantastic research and academic connections.  The group in which I work has some really bright and interesting people in it, and it’s difficult to imagine IBM wanting to break it up.  A number of the things I’m working on will continue to align with both Red Hat’s direction and IBM’s.  The acquisition will take up to a year to complete – assuming no awkward regulatory hurdles along the way – and not much is going to change in the day-to-day.  Except that I hope to get even better access to my soon-to-be-colleagues working in similar fields to me, but within IBM.

Will there be issues along the way?  Yes.  Will there be uncertainty?  Yes.  But do I trust that the leadership within Red Hat and IBM have an honest commitment to making things work in a way that will benefit Red Hatters?  Yes.

And am I looking to jump ship?  Oh, no.  Far too much interesting stuff to be doing.  We’ve got an interesting few months and years ahead of us.  My future looked red, until Sunday night.  Then maybe blue.  But now I’m betting on something somewhere between the two: go Team Purple.


1 – because, well, lawyers, the SEC, etc., etc.

2 – if it does, then, well, could somebody please contact me?

3 – doing its own thing independently.

6 types of attack: learning from Supermicro, State Actors and silicon

… it could have happened, and it could be happening now.

Last week, Bloomberg published a story detailing how Chinese state actors had allegedly forced employees of Supermicro (or companies subcontracting to them) to insert a small chip – the silicon in the title – into motherboards destined for Apple and Amazon.  The article talked about how an investigation into these boards had uncovered this chip and the steps that Apple, Amazon and others had taken.  The story was vigorously denied by Supermicro, Apple and Amazon, but that didn’t stop Supermicro’s stock price from tumbling by over 50%.

I have heard strong views expressed by people with expertise in the topic on both sides of the argument: that it probably didn’t happen, and that it probably did.  One side argues that the denials by Apple and Amazon, for instance, might have been influenced by legal “gagging orders” from the US government.  An opposing argument suggests that the Bloomberg reporters might have confused this story with a similar one that occurred a few months ago.  Whether this particular story is correct in every detail, or a fabrication – intentional or unintentional – is not what interests me here: the clear message is that it could have happened, and it could be happening now.

I’ve written before about State Actors, and whether you should worry about them.  There’s another question which this story brings up, which is possibly even more germane: what can you do about it if you are worried about them?  This breaks down further into two questions:

  • how can I tell if my systems have been compromised?
  • what can I do if I discover that they have?

The first of these is more than enough to keep us occupied for now[1], so let’s spend some time on that.  First, let’s define six types of compromise, think about how they might be carried out, and then consider the questions above for each:

  • supply-chain hardware compromise;
  • supply-chain firmware compromise;
  • supply-chain software compromise;
  • post-provisioning hardware compromise;
  • post-provisioning firmware compromise;
  • post-provisioning software compromise.

There isn’t space in this article to go into detail on each of these types of attack, so instead I’ll provide an overview of each[2].

Terms

  • Supply-chain – all of the steps up to when you start actually running a system.  From manufacture through installation, including vendors of all hardware components and all software, OEMs, integrators and even shipping firms that have physical access to any pieces of the system.  For all supply-chain compromises, the key question is the extent to which you, the owner of a system, can trust every single member of the supply chain[3].
  • Post-provisioning – any point after which you have installed the hardware, put all of the software you want on it, and started running it: the time during which you might consider the system “under your control”.
  • Hardware – the physical components of a system.
  • Software – software that you have installed on the system and over which you have some control: typically the Operating System and application software.  The amount of control depends on factors such as whether you use proprietary or open source software, and how much of it is produced, compiled or checked by you.
  • Firmware – special software that controls how the hardware interacts with the standard software on the machine, the hardware that comprises the system, and external systems.  It is typically provided by hardware vendors, and its operation is opaque to owners and operators of the system.

Compromise types

See the table at the bottom of this article for a short summary of the points below.

  1. Supply-chain hardware – there are multiple opportunities in the supply chain to compromise hardware, but the harder such compromises are to detect, the more difficult they are to perform.  The attack described in the Bloomberg story would be extremely difficult both to perform and to detect, whereas the addition of a keyboard logger to a keyboard just before delivery (for instance) would be correspondingly simpler on both counts.
  2. Supply-chain firmware – of all the options, this has the best return on investment for an attacker.  Assuming good access to an appropriate part of the supply chain, inserting firmware that (for instance) impacts network performance or leaks data over a wifi connection is relatively simple.  The difficulty in detection comes from the fact that although it is possible for the owner of the system to measure the firmware and check that it is what they think it is, all that measurement confirms is that the firmware matches what the vendor says was supplied (see the sketch after this list).  So the “medium” rating relates only to firmware that was implanted by members of the supply chain who did not source the original firmware: otherwise, it’s “high”.
  3. Supply-chain software – by this, I mean software that comes installed on a system when it is delivered.  Some organisations will insist on “clean” systems being delivered to them[4], and will install everything from the Operating System upwards themselves.  This means that they basically now have to trust their Operating System vendor[5], which is maybe better than trusting other members of the supply chain to have installed the software correctly.  I’d say that it’s not too simple to mess with this in the supply chain, if only because checking isn’t too hard for the legitimate members of the chain.
  4. Post-provisioning hardware – this is where somebody with physical access to your hardware – after it’s been set up and is running – inserts or attaches hardware to it.  I nearly gave this a “high” rating for difficulty below, assuming that we’re talking about servers, rather than laptops or desktop systems, as one would hope that your servers are well-protected, but the ease with which attackers have shown that they can typically get physical access to systems using techniques like social engineering means that I’ve downgraded this to “medium”.  Detection, on the other hand, should be fairly simple given sufficient resources (hence the “medium” rating), and although I don’t believe anybody who says that a system is “tamper-proof”, tamper-evidence is a much simpler property to achieve.
  5. Post-provisioning firmware – when you patch your Operating System, it will often also patch firmware on the rest of your system.  This is generally a good thing to do, as patches may provide security, resilience or performance improvements, but you’re stuck with the same problem as with supply-chain firmware: you need to trust the vendor – in fact, you need to trust both your Operating System vendor and their relationship with the firmware vendor.
  6. Post-provisioning software – is it easy to compromise systems via their Operating System and/or application software?  Yes: this we know.  Luckily – though depending on the sophistication of the attack – there are generally good tools and mechanisms for detecting such compromises, including behavioural monitoring.
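As an aside, here’s a toy sketch of the measurement problem I mentioned for supply-chain firmware (point 2 above).  The file name and the published hash are hypothetical, and real firmware measurement is done by mechanisms such as a TPM rather than a script, but the logical limitation is the same:

    import hashlib

    FIRMWARE_IMAGE = "firmware.bin"   # hypothetical dump of the installed firmware
    VENDOR_PUBLISHED_SHA256 = "..."   # hypothetical value from the vendor's website

    with open(FIRMWARE_IMAGE, "rb") as f:
        measured = hashlib.sha256(f.read()).hexdigest()

    if measured == VENDOR_PUBLISHED_SHA256:
        # This confirms only that the image matches the vendor's claim -
        # not that the vendor's firmware is itself benign.
        print("firmware matches the vendor-published hash")
    else:
        print("MISMATCH: the firmware was altered somewhere in the chain")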

Table

Compromise type             | Attacker difficulty | Detection difficulty
Supply-chain hardware       | High                | High
Supply-chain firmware       | Low                 | Medium
Supply-chain software       | Medium              | Medium
Post-provisioning hardware  | Medium              | Medium
Post-provisioning firmware  | Medium              | Medium
Post-provisioning software  | Low                 | Low

Conclusion

What are your chances of spotting a compromise on your system?  I would argue that they are generally pretty much in line with the difficulty of performing the attack in the first place, with the glaring exception of supply-chain firmware.  We’ve seen attacks of this type, and they’re very difficult to detect.  The good news is that there is some good work going on to help detection of these types of attacks, particularly in the world of Linux[7] and open source.  In the meantime, I would argue our best forms of defence are currently:

  • for supply-chain: build close relationships, use known and trusted suppliers.  You may want to restrict as much as possible of your supply chain to “friendly” regimes if you’re worried about State Actor attacks, but this is very hard in the global economy.
  • for post-provisioning: lock down your systems as much as possible – both physically and logically – and use behavioural monitoring to try to detect anomalies in what you expect them to be doing (see the toy sketch below).
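To illustrate what I mean by behavioural monitoring – a toy sketch only, with made-up addresses standing in for what a real monitoring system would derive from proper telemetry – the essence is a baseline of expected behaviour plus an alert on anything outside it:

    # Baseline of outbound connections we expect this system to make
    # (hypothetical examples - a real baseline would be learned or configured).
    EXPECTED_DESTINATIONS = {("10.0.0.5", 443), ("10.0.0.9", 5432)}

    # Connections observed, e.g. parsed from flow logs (also hypothetical).
    observed = [("10.0.0.5", 443), ("203.0.113.77", 6667)]

    for host, port in observed:
        if (host, port) not in EXPECTED_DESTINATIONS:
            print(f"anomaly: unexpected outbound connection to {host}:{port}")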

1 – I’ll try to write something on this other topic in a different article.

2 – depending on interest, I’ll also consider a series of articles to go into more detail on each.

3 – how certain are you, for instance, that your delivery company won’t give your own government’s security services access to the boxes containing your equipment before they deliver them to you?

4 – though see above: what about the firmware?

5 – though you can always compile your own Operating System if you use open source software[6].

6 – oh, you didn’t compile your compiler yourself?  All bets off, then…

7 – yes, “GNU Linux”.

On conversation and the benefits of boasting

On Monday and Tuesday this week I’m attending DevSecCon in Boston – a city which is much more pleasant when it’s not raining or snowing, which it often seems to be doing while I’m here.  There are a bunch of interesting talks[1] and workshops, and I was asked, at the last minute, to facilitate an “Open Space Discussion” at the end of the first day (as two people hadn’t arrived as expected).  Facilitating discussions is about not talking all the time, but encouraging other people to talk[2]: my approach to this is to tell a story, and then encourage them to share stories.

People enjoy listening to stories, and people enjoy telling stories, and there is a type of story that is particularly useful and important in the world of work: “war-stories”.  Within the IT industry, at least, this refers to stories about experiences – usually bad experiences – from our day-to-day working lives.  They are often used to illustrate a point or lend experiential weight to an opinion being put forward. But they are also great learning experiences.

What I learned yesterday – or re-learned – is the immense value of conversation with our peers in a neutral setting, with no formal bounds or difference in “rank”.  We had at least one participant who was only two years out of college, participants with 25-30 years of experience, a CISO of a major healthcare provider, a CEO, DevOps engineers, customer-facing people, security people, non-security people, people with Humanities[4] degrees, people with Computer Science degrees.  We were about twelve people, and everybody contributed, to greater or lesser degrees.  I hope that we managed to maintain a conversation where age and numbers of years in the industry were unimportant, but the experiences shared were.

And I learned about other people’s opinions, their viewpoints, their experiences, their tips for what works – and doesn’t work – and made, I hope, some new friends.  Certainly some new peers.  What we talked about isn’t vitally important to this article[5]: the important thing was the conversation, and the stories people told that brought their shared wisdom to the table.  I felt, by the end of the session, that we had added something to the commonwealth of knowledge within the industry.

I was looking for a way to close the session as we moved towards the end, and hit upon something which seemed to work: I encouraged everybody to spend 30 seconds or so telling the group about an incident in their career that they were proud of.  We got some great stories, and not only did we learn from them, but I think it’s really important that we get the chance to express our pride in the things that we’ve done.  We rarely get the chance to boast, or to let people outside our general circle know why we think we should be valued.  There’s nothing wrong with being proud of the things we’ve done, but we’re often – usually – discouraged from doing so.  It was great to have people share their various experiences of personal expertise, and to think about how they would use them to further their careers.  I didn’t force everybody to speak – and was thanked by one of the silent participants later – and it’s important to realise that not everybody will be happy doing so.  But I think that the rapport we’d built as a group meant that more people were happy to contribute something than would have considered it at the beginning of the session.  I left with respect for all of the participants, and a realisation of the importance of shared experience.


1 – I gave a talk based on my blog article “Why I love technical debt”.  I found it interesting…

2 – based on this definition, it may surprise regular readers – and people who know me IRL[3] – that I’d even consider participating, let alone facilitating.

3 – does anybody use this term anymore?

4 – Liberal Arts/Social Sciences.

5 – but included:

  • the impact of different open source licences
  • how legal teams engage with open source questions
  • how to encourage more conversation between technical and legal folks
  • the importance of systems engineering
  • how to talk to customers and vendors
  • how to build teams through social participation[6]
  • the NIST 800 series and other models to consider security
  • risk: how to talk about it, measure it, discuss it with other functions within the organisation.

6 – the word “beer” came up.  From somebody else, on this occasion.