The commonwealth of Open Source

“But surely Open Source software is less secure, because everybody can see it, and they can just recompile it and replace it with bad stuff they’ve written?”

Hands up who’s heard this?*  I’ve been talking to customers – yes, they let me talk to customers sometimes – and to folks in the Field**, and this is one that comes up, it turns out, quite frequently.  I talked in a previous post (“Disbelieving the many eyes hypothesis”) about how Open Source software – particularly security software – doesn’t get to be magically more secure than proprietary software, and talked a little bit there about how I’d still go with Open Source over proprietary every time, but the way that I’ve heard this particular question – about OSS being less secure – suggests to me that there are times when we don’t just need to be able to explain why Open Source needs work, but also to be able to engage actively in Apologetics***.  So here goes.  I don’t expect it to be up to Newton’s or Wittgenstein’s levels of logic, but I’ll do what I can, and I’ll summarise at the bottom so that you’ve got a quick list of the points if you want it.

The arguments

First of all, we should accept that no software is perfect******.  Not proprietary software, not Open Source software.  Second, we should accept that there absolutely is good proprietary software out there.  Third, on the other hand, there is some very bad Open Source software.  Fourth, there are some extremely intelligent, gifted and dedicated architects, designers and software engineers who create proprietary software.

But here’s the rub.  Fifth – the pool of people who will work on or otherwise look at that proprietary software is limited.  And you can never hire all the best people.  Even in government and public sector organisations – who often have a larger talent pool available to them, particularly for *cough* security-related *cough* applications – the pool is limited.

Sixth – the pool of people available to look at, test, improve, break, re-improve, and roll out Open Source software is almost unlimited, and does include the best people.  Seventh – and I love this one: the pool also includes many of the people writing the proprietary software.  Eighth – many of the applications being written by public sector and government organisations are open sourced anyway these days.

Ninth – if you’re worried about running Open Source software which is unsupported, or comes from dodgy, un-provenanced sources, then good news: there are a bunch of organisations******* who will check the provenance of that code, support, maintain and patch it.  They’ll do it along the same type of business lines that you’d expect from a proprietary software provider.  You can also ensure that the software you get from them is the right software: the standard technique is for them to sign bundles of software so that you can check that what you’re installing isn’t just from some random bad person who’s taken that code and done Bad Things[tm] with it.
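
To give a flavour of what that kind of check looks like in practice – and this is a deliberately simplified, hypothetical sketch, not the process any particular vendor follows – here’s the digest-comparison step in Python.  The file name and published digest below are made up for illustration; real distributions go further and cryptographically sign their packages or checksum files, and your package manager will normally verify those signatures for you.

    import hashlib
    import sys

    # Hypothetical names: in practice the bundle and its published digest
    # would both come from your vendor's download site.
    BUNDLE = "example-bundle.tar.gz"
    PUBLISHED_SHA256 = "digest-published-by-your-vendor-goes-here"

    def sha256_of(path):
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        if sha256_of(BUNDLE) == PUBLISHED_SHA256:
            print("Digest matches the published value.")
        else:
            print("Digest MISMATCH - do not install this bundle.")
            sys.exit(1)

The point isn’t the dozen lines of code: it’s that because the process is open and the published values are public, anyone can perform – and automate – exactly this check.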

Tenth – and here’s the point of this post – when you run Open Source software, when you test it, when you provide feedback on issues, when you discover errors and report them, you are tapping into, and adding to, the commonwealth of knowledge and expertise and experience that is Open Source.  And which is only made greater by your doing so.  If you do this yourself, or through one of the businesses who will support that Open Source software********, you are part of this commonwealth.  Things get better with Open Source software, and you can see them getting better.  Nothing is hidden – it’s, well, “open”.  Can things get worse?  Yes, they can, but we can see when that happens, and fix it.

This commonwealth does not apply to proprietary software: what stays hidden does not enlighten or enrich the world.

As a Briton, I know that I need to be careful about using the word “commonwealth”: it has connotations of (faded…) empire which I don’t intend it to hold in this case.  It’s probably not what Cromwell********* had in mind when he talked about the “Commonwealth”, either, and anyway, he’s a somewhat … controversial historical figure.  What I’m talking about is a concept in which I think the words deserve concatenation – “common” and “wealth” – to show that we’re talking about something more than just money: shared wealth, available to all of humanity.

I really believe in this.  If you want to take away a religious message from this blog, it should be this**********: the commonwealth is our heritage, our experience, our knowledge, our responsibility.  The commonwealth is available to all of humanity.  We have it in common, and it is an almost inestimable wealth.

A handy crib sheet

  1. (Almost) no software is perfect.
  2. There is good proprietary software.
  3. There is bad Open Source software.
  4. There are some very clever, talented and devoted people who create proprietary software.
  5. The pool of people available to write and improve proprietary software is limited, even within the public sector and government realm.
  6. The corresponding pool of people for Open Source is virtually unlimited…
  7. …and includes a goodly number of the talent pool of people writing proprietary software.
  8. Public sector and government organisations often open source their software anyway.
  9. There are businesses who will support Open Source software for you.
  10. Contribution – even usage – adds to the commonwealth.

*OK – you can put your hands down now.

**should this be capitalised?  Is there a particular field, or how does it work?  I’m not sure.

***I have a degree in English Literature and Theology – this probably won’t surprise some of the regular readers of this blog****.

****not, I hope, because I spout too much theology*****, but because it’s often full of long-winded, irrelevant Humanities (US Eng: “liberal arts”) references.

*****Emacs.  Every time.

******not even Emacs.  And yes, I know that there are techniques to prove the correctness of some software.  (I suspect that Emacs doesn’t pass many of them…)

*******hand up here: I’m employed by one of them, Red Hat, Inc.  Go have a look – fun place to work, and we’re usually hiring.

********assuming that they fully abide by the rules of the Open Source licence(s) they’re using, that is.

*********erstwhile “Lord Protector of England, Scotland and Ireland” – that Cromwell.

**********oh, and choose Emacs over vi variants, obviously.

Disbelieving the many eyes hypothesis

Writing code is hard.  Writing secure code is harder: much harder.  And before you get there, you need to think about design and architecture.  When you’re writing code to implement security functionality, it’s often based on architectures and designs which have been pored over and examined in detail.  They may even reflect standards which have gone through worldwide review processes and are generally considered perfect and unbreakable*.

However good those designs and architectures are, though, there’s something about putting things into actual software that’s, well, special.  With the exception of software proven to be mathematically correct**, being able to write software which accurately implements the functionality you’re trying to realise is somewhere between a science and an art.  This is no surprise to anyone who’s actually written any software, tried to debug software or divine software’s correctness by stepping through it.  It’s not the key point of this post either, however.

Nobody*** actually believes that the software that comes out of this process is going to be perfect, but everybody agrees that software should be made as close to perfect and bug-free as possible.  It is for this reason that code review is a core principle of software development.  And luckily – in my view, at least – much of the code that we use these days in our day-to-day lives is Open Source, which means that anybody can look at it, and it’s available for tens or hundreds of thousands of eyes to review.

And herein lies the problem.  There is a view that because Open Source Software is subject to review by many eyes, all the bugs will be ironed out of it.  This is a myth.  A dangerous myth.  The problems with this view are at least twofold.  The first is the “if you build it, they will come” fallacy.  I remember when there was a list of all the websites in the world, and if you added your website to that list, people would visit it****.  In the same way, the number of Open Source projects was (maybe) once so small that there was a good chance that people might look at and review your code.  Those days are past – long past.  Second, for many areas of security functionality – crypto primitives implementation is a good example – the number of suitably qualified eyes is low.
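
A tiny example of why those qualified eyes matter – a hypothetical sketch in Python, not drawn from any real project – might help.  The two verification functions below return exactly the same results, and both would sail through ordinary functional review and testing; only someone who knows what to look for will flag that the first may leak timing information, because a plain equality check can stop at the first differing byte, which is why the standard library provides hmac.compare_digest.

    import hashlib
    import hmac

    SECRET_KEY = b"hypothetical-shared-key"  # illustrative value only

    def tag_for(message):
        """Compute an HMAC-SHA256 authentication tag for a message."""
        return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

    def verify_naive(message, tag):
        # Functionally correct, but '==' may return as soon as a byte
        # differs, so an attacker measuring response times can learn
        # something about the expected tag.
        return tag_for(message) == tag

    def verify_constant_time(message, tag):
        # compare_digest takes time independent of where the inputs differ.
        return hmac.compare_digest(tag_for(message), tag)

    if __name__ == "__main__":
        msg = b"example message"
        tag = tag_for(msg)
        print(verify_naive(msg, tag), verify_constant_time(msg, tag))

Neither more eyes nor more tests will catch that class of problem by themselves: you need eyes belonging to people who have seen it before.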

Don’t think that I am in any way suggesting that the problem is any lesser in proprietary code: quite the opposite.  Not only are the designs and architectures in proprietary software often hidden from review, but you have fewer eyes available to look at the code, and the dangers of hierarchical pressure and groupthink are dramatically increased.  “Proprietary code is more secure” is less myth, more fake news.  I completely understand why companies like to keep their security software secret – and I’m afraid that the “it’s to protect our intellectual property” line is too often a platitude they tell themselves, when really, it’s just unsafe to release it.  So for me, it’s Open Source all the way when we’re looking at security software.

So, what can we do?  Well, companies and other organisations that care about security functionality can – and have, I believe a responsibility to – expend resources on checking and reviewing the code that implements that functionality.  That is part of what Red Hat, the organisation for whom I work, is committed to doing.  Alongside that, we, the Open Source community, can – and are – finding ways to support critical projects and improve the amount of review that goes into that code*****.  And we should encourage academic organisations to train students in the black art of security software writing and review, not to mention highlighting the importance of Open Source Software.

We can do better – and we are doing better.  Because what we need to realise is that the reason the “many eyes hypothesis” is a myth is not that many eyes won’t improve code – they will – but that we don’t have enough expert eyes looking.  Yet.


* Yeah, really: “perfect and unbreakable”.  Let’s just pretend that’s true for the purposes of this discussion.

** …and which still relies on the design and architecture actually to do what you want – or think you want – of course, so good luck.

*** nobody who’s actually written more than about 5 lines of code (or more than 6 characters of Perl)

**** I added one.  They came.  It was like some sort of magic.

***** see, for instance, the Linux Foundation’s Core Infrastructure Initiative