Recruiting on ability and potential

Let’s not just hire people who look and think and talk (and take exams) like us

This is one of those more open-ended posts, in that I don’t have any good answers, but I’ve got a bunch of questions. I’d love to have feedback, comments and thoughts if you have any.

This week, it turns out, is “results week” in the UK. Here, there are two major sets of exams which most students take: GCSEs and A levels. The former generally are taken by 16 year olds, with around 8-12 in total, and the latter by 18 year olds, with 2-4 in total (3 being the strong median). A level results are one of the major criteria for entry into universities, and GCSE results typically determine what A levels students will take, and therefore have an impact on university course choice in the future. In normal years, A level results are released a week before the GCSE results, but (Covid having messed things up), A level results day is Tuesday, 2021-08-10, and GCSE results day is Thursday, 2021-08-12.

Both GCSEs and A levels are determined, in most cases, mainly by examination, with even subjects like PE (Physical Education), Dance and Food Technology having fairly large examination loads. This was not always the case, particularly for GCSEs, which, when they were launched in the mid 1980s, had much larger coursework components than now, aiming to provide opportunities to those who may have learned and retained lots of information, but for whom examinations are not a good way of showing their expertise or acquired knowledge base.

At this point, I need to come out with an admission: exams suit me just fine. I’ve always been able to extricate information from my memory in these sorts of situations and put it down on paper, whether that’s in a school setting, university setting or professional setting (I took the CISSP examination ages ago). But this isn’t true for everybody. Exams are very artificial situations, and they just don’t work for many people. The same is true for other types of attempts to extract information out of people, such as classic coding tests. Now, I know that coding tests, when designed and applied well, can do a good job in finding out how people think, as well as what they know – in fact, that should be true for many well-designed examinations – but not only are they not always well administered, but the very context in which they are taken can severely disadvantage folks whose preferred mechanism of coding is more solitary and interactive than performative.

A family friend, currently applying for university places, expressed surprise the other day that some of the Computer Science courses for which he was applying didn’t require any previous experience of programming. I reflected that for some candidates, access to computing resources might be very limited, putting them at a disadvantage. I’ve made a similar argument when discussing how to improve diversity in security (see Is homogeneity bad for security?): we mustn’t make assumptions about people’s potential based on their previous ability to access resources which would allow that ability to be expressed and visible.

What does this have to do with recruitment, particularly? Everything. I’ve been involved in recruiting some folks recently, and the more I think about it, the more complex I realise the task is. Finding the right people – the people who will grow into roles, and become really strong players – is really tricky. I’m not just talking about finding candidates and encouraging them to apply (both difficult tasks in their own rights), but choosing from the candidates that do apply. I don’t want just people who examine well (looking solely at their academic grades), people who perform well (who shine at coding exercises), or people who interview well (those who are confident in face-to-face or video conference settings). I want to find people who have the ability to work in a team, grow the projects they’re working on, and contribute creatively to what they’re doing. That may include people who fall apart in exams, who freeze during coding exercises, or whose social anxiety leads them to interview “badly”.

I’m not sure how to address these problems completely, but there are some things we can try. Here are my thoughts, and I’d love to hear more:

  1. Look beyond exams, and rate experience equally with academic attainment
  2. Accommodate different working styles
  3. Be aware that the candidate may not share your preferred interaction style
  4. Think “first language” and work with those whose native language is not yours
  5. Look beyond the neurotypical

None of these is rocket science, and I hope that the recent changes in working habits brought about by the Covid pandemic have already started people thinking about these issues, but let’s work harder to find the right people – not just the people who look and think and talk (and take exams) just like us.

Silencing my voice

Just links. I didn’t write these: others did.

No security without an architecture

Your diagrams don’t need to be perfect. But they do need to be there.

I attended a virtual demo this week. It didn’t work, but none of us was stressed by that: it was an internal demo, and these things happen. Luckily, the members of the team presenting the demo had lots of information about what it would have shown us, and a particularly good architectural diagram to discuss. We’ve all been in the place where the demo doesn’t work, and all felt for the colleague who was presenting the slidedeck, and on whose screen a message popped up a few slides in, saying “Demo NO GO!” from one of her team members.

After apologies, she asked if we wanted to bail completely, or to discuss the information they had to hand. We opted for the latter – after all, most demos which aren’t foregrounding user experience components don’t show much beyond terminal windows that most of us could fake up in half an hour or so anyway. She answered a couple of questions, and then I piped up with one about security.

This article could have been about the failures in security in a project which was showing an early demo: another example of security being left till late (often too late) in the process, at which point it’s difficult and expensive to integrate. However, it’s not. It was clear that thought had been given to specific aspects of security, both on the network (in transit) and in storage (at rest), and though there was probably room for improvement (and when isn’t there?), a team member messaged me more documentation during the call which allowed me to understand some of the choices the team had made.

What this article is about is the fact that we were able to have a discussion at all. The slidedeck included an architecture diagram showing all of the main components, with arrows showing the direction of data flows. It was clear, colour-coded to show the provenance of the different components, which were sourced from external projects, which from internal, and which were new to this demo. The people on the call – all technical – were able to see at a glance what was going on, and the team lead, who was providing the description, had a clear explanation for the various flows. Her team members chipped in to answer specific questions or to provide more detail on particular points. This is how technical discussions should work, and there was one thing in particular which pleased me (beyond the fact that the project had thought about security at all!): that there was an architectural diagram to discuss.

There are not enough security experts in the world to go around, which means that not every project will have the opportunity to get every stage of their design pored over by a member of the security community. But when it’s time to share, a diagram is invaluable. I hate to think about the number of times I’ve been asked to look at a project in order to give my thoughts about security aspects, only to find that all that’s available is a mix of code and component documentation, with no explanation of how it all fits together and, worse, no architecture diagram.

When you’re building a project, you and your team are often so into the nuts and bolts that you know how it all fits together, and can hold it in your head, or describe the key points to a colleague. The problem comes when someone needs to ask questions of a different type, or review the architecture and design from a different slant. A picture – an architectural diagram – is a great way to educate external parties (or new members of the project) in what’s going on at a technical level. It also has a number of extra benefits:

  • it forces you to think about whether everything can be described in this way;
  • it forces you to consider levels of abstraction, and what should be shown at what levels;
  • it can reveal assumptions about dependencies that weren’t previously clear;
  • it is helpful to show data flows between the various components;
  • it allows for simpler conversations with people whose first language is not that of your main documentation.
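
The diagram doesn’t even have to live in a drawing tool. Here’s a minimal sketch – in Python, with invented component names and flows, not anything from a real project – that describes the architecture as data and emits Graphviz DOT, so the picture can be versioned and regenerated alongside the code:

```python
# Hypothetical example: components, their provenance and the data flows between
# them described as data, then emitted as Graphviz DOT. Names are invented.
components = {
    "client":   "external",   # sourced from an external project
    "gateway":  "internal",
    "keystore": "new",        # new for this demo
}

flows = [
    ("client", "gateway", "TLS request"),
    ("gateway", "keystore", "key lookup"),
]

colours = {"external": "lightblue", "internal": "lightgrey", "new": "lightgreen"}

def to_dot(components, flows):
    lines = ["digraph architecture {", "  rankdir=LR;"]
    for name, provenance in components.items():
        lines.append(
            f'  {name} [style=filled, fillcolor={colours[provenance]}, label="{name}\\n({provenance})"];'
        )
    for src, dst, label in flows:
        lines.append(f'  {src} -> {dst} [label="{label}"];')
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(to_dot(components, flows))  # pipe into `dot -Tpng -o architecture.png`
```

Nothing fancy – colour-coded provenance and labelled arrows – but enough to anchor exactly the sort of conversation described above, and easy to keep up to date as the design changes.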

To be clear, this isn’t just a security problem – the same can go for other non-functional requirements such as high-availability, data consistency, performance or resilience – but I’m a security guy, and this is how I experience the issue. I’m also aware that I have a very visual mind, and this is how I like to get my head around something new, but even for those who aren’t visually inclined, a diagram at least offers the opportunity to orient yourself and work out where you need to dive deeper into code or execution. I also believe that it’s next to impossible for anybody to consider all the security implications (or any of the higher-order emergent characteristics and qualities) of a system of any significant complexity without architectural diagrams. And that includes the people who designed the system, because no system exists on its own (or there’s no point to it), so you can’t hold all of those pieces in your head for any length of time.

I’ve written before about the book Building Evolutionary Architectures, which does a great job in helping projects think about managing requirements which can morph or change their priority, and which, unsurprisingly, makes much use of architectural diagrams. Enarx, a project with which I’m closely involved, has always had lots of diagrams, and I’m aware that there’s an overhead involved here, both in updating diagrams as designs change and in considering which abstractions to provide for different consumers of our documentation, but I truly believe that it’s worth it. Whenever we introduce new people to the project or give a demo, we ensure that we include at least one diagram – often more – and when we get questions at the end of a presentation, they are almost always preceded with a phrase such as, “could you please go back to the diagram on slide x?”.

I nearly published this article without adding another point: this is part of being “open”. I’m a strong open source advocate, but source code isn’t enough to make a successful project, or even, I would add, to be a truly open source project: your documentation should not just be available to everybody, but accessible to everyone. If you want to get people involved, then providing a way in is vital. But beyond that, I think we have a responsibility (and opportunity!) towards diversity within open source. Providing diagrams helps address four types of diversity (at least!):

  • people whose first language is not the same as that of your main documentation (noted above);
  • people who have problems reading lots of text (e.g. those with dyslexia);
  • people who think more visually than textually (like me!);
  • people who want to understand your project from different points of view (e.g. security, management, legal).

If you’ve ever visited a project on github (for instance), with the intention of understanding how it fits into a larger system, you’ll recognise the sigh of relief you experience when you find a diagram or two on (or easily reached from) the initial landing page.

And so I urge you to create diagrams, both for your benefit, and also for anyone who’s going to be looking at your project in the future. They will appreciate it (and so should you). Your diagrams don’t need to be perfect. But they do need to be there.

5 security tips from Santa

Have you been naughty or nice this year?

If you’re reading this in 2019, it’s less than a month to Christmas (as celebrated according to the Western Christian calendar), or Christmas has just passed.  Let’s assume that it’s the former, and that, like all children and IT professionals, it’s time to write your letter to Santa/St Nick/Father Christmas.  Don’t forget, those who have been good get nice presents, and those who haven’t get coal.  Coal is not a clean-burning fuel these days, and with climate change well and truly upon us[1], you don’t want to be going for the latter option.

Think back to all of the good security practices you’ve adopted over the past 11 or so months.  And then think back to all the bad security practices you’ve adopted when you should have been doing the right thing.  Oh, dear.  It’s not looking good for you, is it?

Here’s the good news, though: unless you’re reading this very, very close to Christmas itself[2], then there’s time to make amends.  Here’s a list of useful security tips and practices that Santa follows, and which are therefore bound to put you on his “good” side.

Use a password manager

Santa is very careful with his passwords.  Here’s a little secret: from time to time, rather than have his elves handcraft every little present, he sources his gifts from other parties.  I’m not suggesting that he pays market rates (he’s ordering in bulk, and he has a very, very good credit rating), but he uses lots of different suppliers, and he’s aware that not all of them take security as seriously as he does.  He doesn’t want all of his account logins to be leaked if one of his suppliers is hacked, so he uses separate passwords for each account.  Now, Santa, being Santa, could remember all of these details if he wanted to, and even generate passwords that meet all the relevant complexity requirements for each site, but he uses an open source password manager for safety, and for succession planning[3].
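
For the rest of us, who can’t hold hundreds of strong passwords in our heads, the “one password per account” habit is easy to automate. A minimal sketch using Python’s standard secrets module (the length, alphabet and supplier names are arbitrary illustrations – a real password manager also stores and encrypts the results, which this doesn’t):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct password per supplier account, as Santa does.
suppliers = ["toys-r-elves.example", "reindeer-feed.example"]
credentials = {site: generate_password() for site in suppliers}

for site, password in credentials.items():
    print(f"{site}: {password}")
```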

Manage personal information properly

You may work for a large company, organisation or government, and you may think that you have lots of customers and associated data, but consider Santa.  He manages, or has managed, names, dates of birth, addresses, hobbies, shoe sizes, colour preferences and other personal data for literally every person on Earth.  That’s an awful lot of sensitive data, and it needs to be protected.  When people grow too old for presents from Santa[4], he needs to delete their data securely.  Santa may well have been the archetypal GDPR Data Controller, and he needs to be very careful who and what can access the data that he holds.  Of course, he encrypts all the data, and is very careful about key management.  He’s also very aware of the dangers associated with Cold Boot Attacks (given the average temperature around his residence), so he ensures that data is properly wiped before shutdown.
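
Encrypting data at rest doesn’t have to be exotic, either. Here’s a minimal sketch using the Fernet recipe from the Python cryptography package (the record is invented, and the hard part – looking after the key – is deliberately left out):

```python
from cryptography.fernet import Fernet

# In real life the key lives in a key management system, not next to the data.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"Rudolph, North Pole, nose colour: red"   # made-up personal record
token = fernet.encrypt(record)      # what gets written to storage ("at rest")
recovered = fernet.decrypt(token)   # only possible while you hold the key

assert recovered == record
print(token.decode()[:20] + "...")
```

Securely deleting the key is then one way to render the stored records unreadable when someone grows too old for presents.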

Measure and mitigate risk

Santa knows all about risk.  He has complex systems for ordering, fulfilment, travel planning, logistics and delivery that are the envy of most of the world.  He understands what impact failure in any particular part of the supply chain can have on his customers: mainly children and IT professionals.  He quantifies risk, recalculating on a regular basis to ensure that he is up to date with possible vulnerabilities, and ready with mitigations.
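
Quantifying risk can start very simply: score likelihood and impact for each part of the chain, multiply, and revisit the scores regularly. A toy sketch, with entirely invented figures:

```python
# Toy risk register: likelihood and impact on a 1-5 scale; the values are made up.
risks = {
    "supplier breach":        {"likelihood": 3, "impact": 4},
    "sleigh navigation fail": {"likelihood": 1, "impact": 5},
    "naughty/nice DB outage": {"likelihood": 2, "impact": 5},
}

def score(risk):
    return risk["likelihood"] * risk["impact"]

# Re-run whenever likelihoods change (new vulnerabilities, new mitigations).
for name, risk in sorted(risks.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(risk)}")
```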

Patch frequently, but carefully

Santa absolutely cannot afford for his systems to go down, particularly around his most busy period.  He has established processes to ensure that the concerns of security are balanced with the needs of the business[5].  He knows that sometimes, business continuity must take priority, and that on other occasions, the impact of a security breach would be so major that patches just have to be applied.  He tells people what he wants, and listens to their views, taking them into account where he can. In other words, he embraces open management, delegating decisions, where possible, to the sets of people who are best positioned to make the call, and only intervenes when asked for an executive decision, or when exceptions arise.  Santa is a very enlightened manager.
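
One way to stop that balance being re-argued for every patch is to write the policy down, however roughly. A toy sketch (the thresholds and wording are invented, not a recommendation):

```python
def patch_decision(severity: float, freeze_window: bool) -> str:
    """Toy patching policy: severity is a CVSS-like score from 0 to 10."""
    if severity >= 9.0:
        return "patch immediately, even during a change freeze"
    if freeze_window:
        return "defer until the freeze lifts, with compensating controls"
    if severity >= 7.0:
        return "patch in the next maintenance window"
    return "batch with routine updates"

print(patch_decision(9.8, freeze_window=True))    # critical issue in late December
print(patch_decision(5.4, freeze_window=False))
```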

Embrace diversity

One of the useful benefits of running a global operation is that Santa values diversity.  Old or young (at heart), male, female or gender-neutral, neuro-typical or neuro-diverse, of whatever culture, sexuality, race, ability, creed or nose-colour, Santa takes into account his stakeholders and their views on what might go wrong.  What a fantastic set of viewpoints Santa has available to him.  And, for an Aging White Guy, he’s surprisingly hip to the opportunities for security practices that a wide and diverse set of opinions and experiences can bring[6].

Summary

Here’s my advice.  Be like Santa, and adopt at least some of his security practices yourself.  You’ll have a much better opportunity of getting onto his good side, and that’s going to go down well not just with Santa, but also your employer, who is just certain to give you a nice bonus, right?  And if not, well, it’s not too late to write that letter directly to Santa himself.


1 – if you have a problem with this statement, then either you need to find another blog, or you’re reading this in the far future, where all our climate problems have been solved. I hope.

2 – or you dwell in one of those cultures where Santa visits quite early in December.

3 – a high-flying goose in the face can do terrible damage to a fast-moving reindeer, and if the sleigh were to crash, what then…?

4 – not me!

5 – Santa doesn’t refer to it as a “business”, but he’s happy for us to call it that so that we can model our own experience on his.  He’s nice like that.

6 – though Santa would never use the phrase “hip to the opportunities”.  He’s way too cool for that.

Is homogeneity bad for security?

Can it really be good for security to have such a small number of systems out there?

For the last three years, I’ve attended the Linux Security Summit (though it’s not solely about Linux, actually), and that’s where I am for the first two days of this week – the next three days are taken up with the Open Source Summit.  This year, both are being run in North America and in Europe – and there was a version of the Open Source Summit in Asia, too.  This is all good, of course: the more people, and the more diversity we have in the community, the stronger we’ll be.

The question of diversity came up at the Linux Security Summit today, but not in the way you might necessarily expect.  As with most of the industry, this very technical conference (there’s a very strong Linux kernel developer bias) has very poor representation of women, ethnic minorities and people with disabilities.  It’s a pity, and something we need to address, but when a question came up after someone’s talk, it wasn’t diversity of people’s background that was being questioned, but of the systems we deploy around the world.

The question was asked of a panel who were talking about open firmware and how making it open source will (hopefully) increase the security of the system.  We’d already heard how most systems – laptops, servers, desktops and beyond – come with a range of different pieces of firmware from a variety of different vendors.  And when we talk about a variety, this can easily hit over 100 different pieces of firmware per system.  How are you supposed to trust a system with so many different pieces?  And, as one of the panel members pointed out, many of the vendors are quite open about the fact that they don’t see themselves as security experts, and are actually asking the members of open source projects to design APIs, make recommendations about design, etc.

This self-knowledge is clearly a good thing, and the main focus of the panel’s efforts has been to try to define a small core of well-understood and better designed elements that can be deployed in a more trusted manner.   The question that was asked from the audience was in response to this effort, and seemed to me to be a very fair one.  It was (to paraphrase slightly): “Can it really be good for security to have such a small number of systems out there?”  The argument – and it’s a good one in general – is that if you have a small number of designs which are deployed across the vast majority of installations, then there is a real danger that a small number of vulnerabilities can impact on a large percentage of that install base.

It’s a similar problem in the natural world: a population with a restricted genetic pool is at risk from a successful attacker: a virus or fungus, for instance, which can attack many individuals due to their similar genetic make-up.

In principle, I would love to see more diversity of design within computing, and particularly security, but there are two issues with this:

  1. management: there is a real cost to managing multiple different implementations and products, so organisations prefer to have a smaller number of designs, reducing the number of tools needed to manage them, and the number of people who need to be trained.
  2. scarcity of resources: there is a scarcity of resources within IT security.  There just aren’t enough security experts around to design good security into systems, to support them and then to respond to attacks as vulnerabilities are found and exploited.

To the first issue, I don’t see many easy answers, but to the second, there are three responses:

  1. find ways to scale the impact of your resources: if you open source your code, then the number of expert resources available to work on it expands enormously.  I wrote about this a couple of years ago in Disbelieving the many eyes hypothesis.  If your code is proprietary, then the number of experts you can leverage is small: if it is open source, you have access to almost the entire worldwide pool of experts.
  2. be able to respond quickly: if attacks on systems are found, and vulnerabilities identified, then the ability to move quickly to remedy them allows you to mitigate significantly the impact on the installation base.
  3. design in defence in depth: rather than relying on one defence to an attack or type of attack, try to design your deployment in such a way that you have layers of defence (see the sketch after this list). This means that you have some time to fix a problem that arises before catastrophic failure affects your deployment.
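
To make that third point a little more concrete, here’s a toy sketch of layered checks: no single layer is trusted to be perfect, so a bug or bypass in one is caught by the next. The layers, names and values are invented purely for illustration:

```python
def network_allowlist(request):
    # Layer 1: only accept traffic from known addresses (values invented).
    return request["source_ip"] in {"10.0.0.5", "10.0.0.6"}

def authenticated(request):
    # Layer 2: placeholder credential check.
    return request.get("token") == "expected-token"

def input_validated(request):
    # Layer 3: basic sanity checks on the payload.
    return isinstance(request.get("payload"), str) and len(request["payload"]) < 1024

LAYERS = [network_allowlist, authenticated, input_validated]

def handle(request):
    # Every layer must pass; a failure in one layer is caught by the next.
    for layer in LAYERS:
        if not layer(request):
            return "rejected by " + layer.__name__
    return "accepted"

print(handle({"source_ip": "10.0.0.5", "token": "expected-token", "payload": "hi"}))
print(handle({"source_ip": "203.0.113.9", "token": "expected-token", "payload": "hi"}))
```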

I’m hesitant to overplay the biological analogy, but the second and third of these seem quite similar to defences we see in nature.  The equivalent to quick response is to have multiple generations in a short time, giving a species the opportunity to develop immunity to a particular attack, and defence in depth is a typical defence mechanism in nature – think of the human ability to recognise bad meat by its smell, taste its “off-ness” and then vomit it up if swallowed.  I’m not quite sure how this particular analogy would map to the world of IT security (though some of the practices you see in the industry can turn your stomach), but while we wait to have a bigger – and more diverse – pool of security experts, let’s keep being open source, let’s keep responding quickly, and let’s make sure that we design for defence in depth.

 

Next generation … people

… security as a topic is one which is interesting, fast-moving and undeniably sexy…

DISCLAIMER/STATEMENT OF IGNORANCE: a number of regular readers have asked why I insist on using asterisks for footnotes, and whether I could move to actual links, instead.  The official reason I give for sticking with asterisks is that I think it’s a bit quirky and I like that, but the real reason is that I don’t know how to add internal links in WordPress, and can’t be bothered to find out.  Apologies.

I don’t know if you’ve noticed, but pretty much everything out there is “next generation”.  Or, if you’re really lucky “Next gen”.  What I’d like to talk about this week, however, is the actual next generation – that’s people.  IT people.  IT security people.  I was enormously chuffed* to be referred to on an IRC channel a couple of months ago as a “greybeard”***, suggesting, I suppose, that I’m an established expert in the field.  Or maybe just that I’m an old fuddy-duddy***** who ought to be put out to pasture.  Either way, it was nice to come across young(er) folks with an interest in IT security******.

So, you, dear reader, and I, your beloved protagonist, both know that security as a topic is one which is interesting, fast-moving and undeniably******** sexy – as are all its proponents.  However, it seems that this news has not yet spread as widely as we would like – there is a worldwide shortage of IT security professionals, as a quick check on your search engine of choice for “shortage of it security professionals” will tell you.

Last week, I attended the Open Source Summit and Linux Security Summit in LA, and one of the keynotes, as it always seems to be, was Jim Zemlin (head of the Linux Foundation) chatting to Linus Torvalds (inventor of, oh, I don’t know).  Linus doesn’t have an entirely positive track record in talking about security, so it was interesting that Jim specifically asked him about it.  Part of Linus’ reply was “We need to try to get as many of those smart people before they go to the dark side [sic: I took this from an article by the Register, and they didn’t bother to capitalise.  I mean: really?] and improve security that way by having a lot of developers.”  Apart from the fact that anyone who references Star Wars in front of a bunch of geeks is onto a winner, Linus had a pretty much captive audience just by nature of who he is, but even given that, this got a positive reaction.  And he’s right: we do need to make sure that we catch these smart people early, and get them working on our side.

Later that week, at the Linux Security Summit, one of the speakers asked for a show of hands to find out the number of first-time attendees.  I was astonished to note that maybe half of the people there had not come before.  And heartened.  I was also pleased to note that a good number of them appeared fairly young*********.  On the other hand, the number of women and other under-represented demographics seemed worse than in the main Open Source Summit, which was a pity – as I’ve argued in previous posts, I think that diversity is vital for our industry.

This post is wobbling to an end without any great insights, so let me try to come up with a couple which are, if not great, then at least slightly insightful:

  1. we’ve got a job to do.  The industry needs more young (and diverse) talent: if you’re in the biz, then go out, be enthusiastic, show what fun it can be.
  2. if showing people how much fun security can be isn’t enough, encourage them to do a search for “IT security median salaries comparison”.  It’s amazing how a pay cheque********** can motivate.

*note to non-British readers: this means “flattered”**.

**but with an extra helping of smugness.

***they may have written “graybeard”, but I translate****.

****or even “gr4yb34rd”: it was one of those sorts of IRC channels.

*****if I translate each of these, we’ll be here for ever.  Look it up.

******I managed to convince myself******* that their interest was entirely benign though, as I mentioned above, it was one of those sorts of IRC channels.

*******the glass of whisky may have helped.

********well, maybe a bit deniably.

*********to me, at least.  Which, if you listen to my kids, isn’t that hard.

**********who actually gets paid by cheque (or check) any more?

Diversity in IT security: not just a canine issue

“People won’t listen to you or take you seriously unless you’re an old white man, and since I’m an old white man I’m going to use that to help the people who need it.” —Patrick Stewart, Actor

A couple of weeks ago, I wrote an April Fool’s post: “Changing the demographic in IT security: a radical proposal“.  It was “guest-written” by my dog, Sherlock, and suggested that dogs would make a good demographic from which to recruit IT professionals.  It went down quite well, and I had a good spike of hits, which was nice, but I wish that it hadn’t been necessary, or resonated so obviously with where we really are in the industry*.

Of special interest to me is the representation of women within IT security, and particularly within technical roles.  This is largely due to the fact that I have two daughters of school age**, and although I wouldn’t want to force them into a technical career, I want to be absolutely sure that they have as much chance both to try it – and then to succeed – as anybody else does, regardless of their gender.

But I think that other issues of under-representation should be of equal concern.  Professionals from ethnic minorities and with disabilities are also under-represented within IT security, and this is, without a doubt, a Bad Thing[tm].  I suspect that the same goes for people in the LGBTQ+ demographics.  From my perspective, diversity is something which is an unalloyed good within pretty much any organisation.  Different viewpoints don’t just allow us to reflect what our customers see and do, but also bring different perspectives to anything from perimeter defence to user stories, from UX to threat models.  Companies and organisations are just more flexible – and therefore more resilient – if they represent a wide-ranging set of perspectives and views.  Not only because they’re more likely to be able to react positively if they come under criticism, but because they are less likely to succumb to groupthink and the “yes-men”*** mentality.

Part of the problem is that we hire ourselves.  I’m a white male in a straight marriage with a Western university education and a nuclear family.  I’ve got all of the privilege starting right there, and it’s really, really easy to find people like me to work with, and to hire, and to trust in business relationships.  And I know that I sometimes get annoyed with people who approach things differently to me, whose viewpoint leads them to consider alternative solutions or ideas.  And whether there’s a disproportionate percentage of annoyances associated with people who come from a different background to me, or whether I’m just less likely to notice such annoyances when they come from someone who shares my background, there’s a real danger of prejudice kicking in and privilege – my privilege – taking over.

So, what can we do?  Here are some ideas:

  • Go out of our way to read, listen to and engage with people from different backgrounds to our own, particularly if we disagree with them, and particularly if they’re in our industry
  • Make a point of including the views of non-majority members of teams and groups in which you participate
  • Mentor and encourage those from disparate backgrounds in their careers
  • Consider positive discrimination – this is tricky, particularly with legal requirements in some contexts, but it’s worth considering, if only to recognise what a difference it might make.
  • Encourage our companies to engage in affirmative groups and events
  • Encourage our companies only to sponsor events with positive policies on harassment, speaker and panel selection, etc.
  • Consider refusing to speak on industry panels made up of people who are all in our demographic****
  • Interview outliers
  • Practice “blind CV” selection

These are my views. The views of someone with privilege.  I’m sure they’re not all right.  I’m sure they’re not all applicable to everybody’s situation.  I’m aware that there’s a danger of my misappropriating a fight which is not mine, and of the dangers of intersectionality.

But if I can’t stand up from my position of privilege***** and say something, then who can?


*Or, let’s face it, society.

**I’m also married to a very strong proponent of equal rights and feminism.  It’s not so much that it rubbed off on me, but that I’m pretty sure she’d have little to do with me if I didn’t feel the same way.

***And I do mean “men” here, yes.

****My wife challenged me to put this in.  Because I don’t do it, and I should.

*****“People won’t listen to you or take you seriously unless you’re an old****** white man, and since I’m an old white man I’m going to use that to help the people who need it.” —Patrick Stewart, Actor

******Although I’m not old.*******

*******Whatever my daughters may say.

Diversity – redux

One of the recurring arguments against affirmative action from majority-represented groups is that it’s unfair that the under-represented group has comparatively special treatment.

Fair warning: this is not really a blog post about IT security, but about issues which pertain to our industry.  You’ll find social sciences and humanities – “soft sciences” – referenced.  I make no excuses (and I should declare previous form*).

Warning two: many of the examples I’m going to be citing are to do with gender discrimination and imbalances.  These are areas that I know the most about, but I’m very aware of other areas of privilege and discrimination, and I’d specifically call out LGBTQ, ethnic minority, age, disability and non-neurotypical discrimination.  I’m very happy to hear (privately or in comments) from people with expertise in other areas.

You’ve probably read the leaked internal document (a “manifesto”) from a Google staffer challenging affirmative action aimed at addressing diversity, and complaining about a liberal/left-leaning monoculture at the company.  If you haven’t, you should: take the time now.  It’s well-written, with some interesting points, but I have some major problems with it that I think are worth addressing.  (There’s a very good rebuttal of certain aspects available from an ex-Google staffer.)  If you’re interested in where I’m coming from on this issue, please feel free to read my earlier post: Diversity in IT security: not just a canine issue**.

There are two issues that concern me specifically:

  1. no obvious attempt to acknowledge the existence of privilege and power imbalances;
  2. the attempt to advance the gender essentialism argument by alleging an overly leftist bias in the social sciences.

I’m not sure whether these approaches are intentional or unconscious, but they’re both insidious, and if ignored, allow more weight to be given to the broader arguments put forward than I believe they merit.  I’m not planning to address those broader issues: there are other people doing a good job of that (see the rebuttal I referenced above, for instance).

Before I go any further, I’d like to record that I know very little about Google, its employment practices or its corporate culture: pretty much everything I know has been gleaned from what I’ve read online***.  I’m not, therefore, going to try to condone or condemn any particular practices.  It may well be that some of the criticisms levelled in the article/letter are entirely fair: I just don’t know.  What I’m interested in doing here is addressing those areas which seem to me not to be entirely open or fair.

Privilege and power imbalances

One of the recurring arguments against affirmative action from majority-represented groups is that it’s unfair that the under-represented group has comparatively special treatment.  “Why is there no march for heterosexual pride?”  “Why are there no men-only colleges in the UK?”  The generally accepted argument is that until there is equality in the particular sphere in which a group is campaigning, then the power imbalance and privilege afforded to the majority-represented group means that there may be a need for action to help members of the under-represented group achieve parity.  That doesn’t mean that members of that group are necessarily unable to reach positions of power and influence within that sphere, just that, on average, the effort required will be greater than that for those in the majority-privileged group.

What does all of the above mean for women in tech, for example?  That it’s generally harder for women to succeed than it is for men.  Not always.  But on average.  So if we want to make it easier for women (in this example) to succeed in tech, we need to find ways to help.

The author of the Google piece doesn’t really address this issue.  He (and I’m just assuming it’s a man who wrote it) suggests that women (who seem to be the key demographic with whom he’s concerned) don’t need to be better represented in all parts of Google, and therefore affirmative action is inappropriate.  I’d say that even if the first part of that thesis is true (and I’m not sure it is: see below), then affirmative action may still be required for those who do.

The impact of “leftist bias”

Many of the arguments presented in the manifesto are predicated on the following thesis:

  • the corporate culture at Google**** is generally leftist-leaning
  • many social sciences are heavily populated by leftist-leaning theorists
  • these social scientists don’t accept the theory of gender essentialism (that women and men are suited to different roles)
  • THEREFORE corporate culture is overly inclined to reject gender essentialism
  • HENCE if a truly diverse culture is to be encouraged within corporate culture, leftist theories such as gender essentialism should be rejected.

There are several flaws here, one of which is the assumption that diversity means accepting views which are anti-diverse.  It’s a reflection of a similar right-leaning fallacy that in order to show true tolerance, the views of intolerant people should be afforded the same privilege as the views of those who are aiming for greater tolerance.*****

Another flaw is the argument that just because a set of theories is espoused by a political movement to which one doesn’t subscribe, it is therefore suspect.

Conclusion

As I’ve noted above, I’m far from happy with much of the so-called manifesto from what I’m assuming is a male Google staffer.  This post hasn’t been an attempt to address all of the arguments, but to attack a couple of the underlying arguments, without which I believe the general thread of the document is extremely weak.  As always, I welcome responses either in comments or privately.

 


*my degree is in English Literature and Theology.  Yeah, I know.

**it’s the only post on which I’ve had some pretty negative comments, which appeared on the reddit board from which I linked it.

***and is probably therefore just as far off the mark as anything else that you or I read online.

****and many other tech firms, I’d suggest.

*****an appeal is sometimes made to the left’s perceived poster child of postmodernism: “but you say that all views are equally valid”.  That’s not what postmodern (deconstructionist, post-structuralist) theory actually says.  I’d characterise it more as:

  • all views are worthy of consideration;
  • BUT we should treat with suspicion those views held by those with privilege, or which privilege those with power.