Confidential computing – the new HTTPS?

Security by default hasn’t arrived yet.

Over the past few years, it’s become difficult to find a website which is just “http://…”.  This is because the industry has finally realised that security on the web is “a thing”, and also because it has become easy for both servers and clients to set up and use HTTPS connections.  A similar shift may be on its way in computing across cloud, edge, IoT, blockchain, AI/ML and beyond.  We’ve known for a long time that we should encrypt data at rest (in storage) and in transit (on the network), but encrypting it in use (while processing) has been difficult and expensive.  Confidential computing – providing this type of protection for data and algorithms in use, using hardware capabilities such as Trusted Execution Environments (TEEs) – protects data on hosted systems and in vulnerable environments.

I’ve written several times about TEEs and, of course, the Enarx project of which I’m a co-founder with Nathaniel McCallum (see Enarx for everyone (a quest) and Enarx goes multi-platform for examples).  Enarx uses TEEs, and provides a platform- and language-independent deployment platform to allow you safely to deploy sensitive applications or components (such as micro-services) onto hosts that you don’t trust.  Enarx is, of course, completely open source (we’re using the Apache 2.0 licence, for those with an interest).  Being able to run workloads on hosts that you don’t trust is the promise of confidential computing, which extends normal practice for sensitive data at rest and in transit to data in use:

  • storage: you encrypt your data at rest because you don’t fully trust the underlying storage infrastructure;
  • networking: you encrypt your data in transit because you don’t fully trust the underlying network infrastructure;
  • compute: you encrypt your data in use because you don’t fully trust the underlying compute infrastructure.
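The gap described in the third bullet can be made concrete with a toy sketch (the XOR “cipher” below is purely illustrative and not real cryptography): data can remain encrypted at rest and in transit, but a conventional host has to decrypt it into memory before it can compute on it, and that plaintext window is what confidential computing aims to close.

```python
import secrets

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy XOR "cipher" for illustration only -- NOT real cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

toy_decrypt = toy_encrypt  # XOR is its own inverse

key = secrets.token_bytes(16)
record = b"salary: 100000"

at_rest = toy_encrypt(key, record)     # what the storage layer sees
in_transit = toy_encrypt(key, record)  # what the network sees

# To *compute* on the data, a conventional host must first decrypt it:
in_use = toy_decrypt(key, at_rest)     # plaintext in memory -- the gap
assert in_use == record                # that confidential computing closes
```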

I’ve got a lot to say about trust, and the word “fully” in the statements above is important (I actually added it on re-reading what I’d written).  In each case, you have to trust the underlying infrastructure to some degree, whether it’s to deliver your packets or store your blocks, for instance.  In the case of the compute infrastructure, you’re going to have to trust the CPU and associated firmware, just because you can’t really do computing without trusting them (there are techniques such as homomorphic encryption which are beginning to offer some opportunities here, but they’re limited, and the technology is still immature).

Questions sometimes come up about whether you should fully trust CPUs, given some of the security problems that have been found with them and also whether they are fully secure against physical attacks on the host in which they reside.

The answer to both questions is “no”, but this is the best technology we currently have available at scale and at a price point to make it generally deployable.  To address the second question, nobody is pretending that this (or any other technology) is fully secure: what we need to do is consider our threat model and decide whether TEEs (in this case) provide sufficient security for our specific requirements.  In terms of the first question, the model that Enarx adopts is to allow decisions to be made at deployment time as to whether you trust a particular set of CPUs.  So, for example, if vendor Q’s generation R chips are found to contain a vulnerability, it will be easy to say “refuse to deploy my workloads to R-type CPUs from Q, but continue to deploy to S-type, T-type and U-type chips from Q and any CPUs from vendors P, M and N.”
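That deployment-time decision could, in spirit, be sketched like this (the names and structures below are entirely hypothetical; Enarx’s real policy mechanism is not specified here):

```python
# Hypothetical deployment-time policy sketch -- Enarx's actual policy
# interface may look nothing like this.
DENYLIST = {("Q", "R")}  # vendor Q's generation-R chips have a known flaw

def may_deploy(vendor: str, generation: str) -> bool:
    """Return True if the workload may be deployed to this CPU type."""
    return (vendor, generation) not in DENYLIST

assert not may_deploy("Q", "R")  # refuse the vulnerable parts...
assert may_deploy("Q", "S")      # ...but still use Q's other generations
assert may_deploy("P", "R")      # and other vendors' chips
```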


Of projects, products and (security) community

Not all open source is created (and maintained) equal.

Open source is a good thing.  Open source is a particularly good thing for security.  I’ve written about this before (notably in Disbelieving the many eyes hypothesis and The commonwealth of Open Source), and I’m going to keep writing about it.  In this article, however, I want to talk a little more about a feature of open source which is arguably both a possible disadvantage and a benefit: the difference between a project and a product.  I’ll come down firmly on one side (spoiler alert: for organisations, it’s “product”), but I’d like to start with a little disclaimer.  I am employed by Red Hat, and we are a company which makes money from supporting open source.  I believe this is a good thing, and I approve of the model that we use, but I wanted to flag any potential bias early in the article.

The main reason that open source is good for security is that you can see what’s going on when there’s a problem, and you have a chance to fix it.  Or, more realistically, unless you’re a security professional with particular expertise in the open source project in which the problem arises, somebody else has a chance to fix it. We hope that there are sufficient security folks with the required expertise to fix security problems and vulnerabilities in software projects about which we care.

It’s a little more complex than that, however.  As an organisation, there are two main ways to consume open source:

  • as a project: you take the code, choose which version to use, compile it yourself, test it and then manage it.
  • as a product: a vendor takes the project, chooses which version to package, compiles it, tests it, and then sells support for the package, typically including docs, patching and updates.

Now, there’s no denying that consuming a project “raw” gives you more options.  You can track the latest version, compiling and testing as you go, and you can take security patches more quickly than the product version may supply them, selecting those which seem most appropriate for your business and use cases.  On the whole, this seems like a good thing.  There are, however, downsides which are specific to security.  These include:

  1. some security fixes come with an embargo, to which only a small number of organisations (typically the vendors) have access.  Although you may get access to fixes at the same time as the wider ecosystem, you will still need to check and test them (unless you blindly apply them – don’t do that), work which the vendors will already have performed.
  2. the huge temptation to make changes to the code that don’t necessarily – or immediately – make it into the upstream project means that you are likely to be running a fork of the code.  Even if you do manage to get these upstream in time, during the period that you’re running the changes but they’re not upstream, you run a major risk that any security patches will not be immediately applicable to your version (this is, of course, true for non-security patches, but security patches are typically more urgent).  One option, of course, if you believe that your version is likely to be consumed by others, is to make an official fork of the project, and try to encourage a community to grow around that, but in the end, you will still have to decide whether to support the new version internally or externally.
  3. unless you ensure that all instances of the software are running the same version in your deployment, any back-porting of security fixes to older versions will require you to invest in security expertise equal or close to equal to that of the people who created the fix in the first place.  In this case, you are giving up the “commonwealth” benefit of open source, as you need to pay experts who duplicate the skills of the community.

What you are basically doing, by choosing to deploy a project rather than a product, is taking the decision to do internal productisation of the project.  You lose not only the commonwealth benefit of security fixes, but also the significant economies of scale that are intrinsic to the vendor-supported product model.  There may also be economies of scope that you miss: many vendors will have multiple products that they support, and will be able to apply security expertise across those products in ways which may not be possible for an organisation whose core focus is not on product support.

These economies are reflected in another possible benefit to the commonwealth of using a vendor: the very fact that multiple customers are consuming their products means that they have an incentive and a revenue stream to spend on security fixes and general features.  There are other types of fixes and improvements on which they may apply resources, but the relative scarcity of skilled security experts means that the principle of comparative advantage suggests that they should be in the best position to apply them for the benefit of the wider community[1].

What if a vendor you use to provide a productised version of an open source project goes bust, or decides to drop support for that product?  Well, this is a problem in the world of proprietary software as well, of course.  But in the case of proprietary software, there are three likely outcomes:

  • you now have no access to the software source, and therefore no way to make improvements;
  • you are provided access to the software source, but it is not available to the wider world, and therefore you are on your own;
  • everyone is provided with the software source, but no existing community exists to improve it, and it either dies or takes significant time for a community to build around it.

In the case of open source, however, if the vendor you have chosen goes out of business, there is always the option to use another vendor, encourage a new vendor to take it on, productise it yourself (and supply it to other organisations) or, if the worst comes to the worst, take the internal productisation route while you search for a scalable long-term solution.

In the modern open source world, we (the community) have got quite good at managing these options, as the growth of open source consortia[2] shows.  In a consortium, groups of organisations and individuals cluster around a software project or set of related projects to encourage community growth, alignment around feature and functionality additions, general security work and productisation for use cases which may as yet be ill-defined, all the while trying to exploit the economies of scale and scope outlined above.  An example of this would be the Linux Foundation’s Confidential Computing Consortium, to which the Enarx project aims to be contributed.

Choosing to consume open source software as a product instead of as a project involves some trade-offs, but from a security point of view at least, the economics for organisations are fairly clear: unless you are in a position to employ ample security experts yourself, products are most likely to suit your needs.


1 – note: I’m not an economist, but I believe that this holds in this case.  Happy to have comments explaining why I’m wrong (if I am…).

2 – “consortiums” if you really must.

Enarx for everyone (a quest)

In your backpack, the only tool that you have to protect you is Enarx…

You are stuck in a deep, dark wood, with spooky noises and roots that seem to move and trip you up.  Behind every tree malevolent eyes look out at you.  You look in your backpack and realise that the only tool that you have for your protection is Enarx, the trusty open source project given you by the wizened old person at the beginning of your quest.  Hard as you try, you can’t remember what it does, or how to use it.  You realise that now is the time to find out.

What do you do next?

  • If you are a business person, go to 1. Why I need Enarx to reduce business risk.
  • If you are an architect, go to 2. How I can use Enarx to protect sensitive data.
  • If you are a techy, go to 3. Tell me more about Enarx technology (I can take it).

1. Why I need Enarx to reduce business risk

You are the wise head upon which your business relies to consider and manage risk.  One of the problems that you run into is that you have sensitive data that needs to be protected.  Financial data, customer data, legal data, payroll data: it’s all at risk of compromise if it’s not adequately protected.  Who can you trust, however?  You want to be able to use public clouds, but the risks of keeping and processing information on systems which are not under your direct control are many and difficult to quantify.  Even your own systems are vulnerable to outdated patches, insider attacks or compromises: confidentiality is difficult to ensure, but vital to your business.

Enarx is a project which allows you to run applications in the public cloud, on your premises – or wherever else – with significantly reduced and better quantifiable risk.  It uses hardware-based security called “Trusted Execution Environments” from CPU manufacturers, and cuts out many of the layers that can be compromised.  The only components that do need to be trusted are fully open source software, which means that they can be examined and audited by industry experts and your own teams.

Well done: you found out about Enarx.  Continue to 6. Well, what’s next?


2. How I can use Enarx to protect sensitive data

You are the expert architect who has to consider the best technologies and approaches for your organisation.  You worry about where best to deploy sensitive applications and data, given the number of layers in the stack that may have been compromised, and the number of entities – human and machine – that have the opportunity to peek into or mess with the integrity of your applications.  You can’t control the public cloud, nor know exactly what stack it’s running, but equally, the resources required to ensure that you can run sufficient numbers of hardened systems on premises are growing.

Enarx is an open source project which uses TEEs (Trusted Execution Environments) to allow you to run applications within “Keeps” on systems that you don’t trust.  Enarx manages the creation of these Keeps, providing cryptographic confidence that the Keeps are using valid CPU hardware and then encrypting and provisioning your applications and data to the Keep using one-time cryptographic keys.  Your applications run without any of the layers in the stack (e.g. hypervisor, kernel, user-space, middleware) being able to look into the Keep.  The Keep’s run-time can accept applications written in many different languages, including Rust, C, C++, C#, Go, Java, Python and Haskell.  It allows you to run on TEEs from various CPU manufacturers without having to worry about portability: Enarx manages that for you, along with attestation and deployment.
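As a rough illustration of that attest-then-provision flow (every function name and detail below is invented for the sketch; the real Enarx interfaces differ, and a real implementation would verify a CPU-signed attestation report and use authenticated encryption):

```python
# Illustrative sketch of an attest-then-provision flow.
# All names are hypothetical; this is not the Enarx API.
import hashlib
import secrets

def attest(keep_report: bytes, expected_measurement: bytes) -> bool:
    # In reality this checks a report signed by the CPU; here a hash
    # simply stands in for the Keep's measurement.
    return hashlib.sha256(keep_report).digest() == expected_measurement

def provision(workload: bytes, session_key: bytes) -> bytes:
    # A one-time key protects the workload for delivery into the Keep
    # (toy XOR again -- a real system uses authenticated encryption).
    return bytes(b ^ session_key[i % len(session_key)]
                 for i, b in enumerate(workload))

report = b"keep-v1"
measurement = hashlib.sha256(report).digest()
if attest(report, measurement):               # 1. verify the Keep
    key = secrets.token_bytes(32)             # 2. one-time session key
    ciphertext = provision(b"wasm-app", key)  # 3. encrypted delivery
```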

Well done: you found out about Enarx.  Continue to 6. Well, what’s next?


3. Tell me more about Enarx technology (I can take it)

You are a wily developer with technical skills beyond the ken of most of your peers.  A quick look at the github pages tells you more: Enarx is an open source project to allow you to deploy applications within TEEs (Trusted Execution Environments).

  • If you’d like to learn about how to use Enarx, proceed to 4. I want to use Enarx.
  • If you’d like to learn about contributing to the Enarx project, proceed to 5. I want to contribute to Enarx.

Well done: you found out about Enarx.  Continue to 6. Well, what’s next?


4. I want to use Enarx

You learn good news: Enarx is designed to be easy to use!

If you want to run applications that process sensitive data, or which implement sensitive algorithms themselves, Enarx is for you.  Enarx is a deployment framework for applications, rather than a development framework.  What this means is that you don’t have to write to particular SDKs, or manage the tricky attestation steps required to use TEEs.  You write your application in your favourite language, and as long as it has WebAssembly as a compile target, it should run within an Enarx “Keep”.  Enarx even manages portability across hardware platforms, so you don’t need to worry about that, either.  It’s all open source, so you can look at it yourself, audit it, or even contribute (if you’re interested in that, you might want to proceed to 5. I want to contribute to Enarx).

Well done: you found out about Enarx.  Continue to 6. Well, what’s next?


5. I want to contribute to Enarx

Enarx is an open source project (under the Apache 2.0 licence), and we welcome contributions, whether you are a developer, tester, documentation guru or other enthusiastic bod with an interest in providing a way for the rest of the world to up the security level of the applications they’re running with minimal effort.  There are various components to Enarx, including attestation, hypervisor work, uni-kernel and WebAssembly run-time pieces.  We want to provide a simple and flexible framework to allow developers and operations folks to deploy applications to TEEs on any supported platform without recompilation, having to choose an obscure language or write to a particular SDK.  Please have a look around our github site and get in touch if you’re in a position to contribute.

Well done: you found out about Enarx.  Continue to 6. Well, what’s next?


6. Well, what’s next?

You now know enough to understand how Enarx can help you: well done!  At time of writing, Enarx is still in development, but we’re working hard to make it available to all.

We’ve known for a long time that we need encryption for data at rest and in transit: Enarx helps you do encryption for data in use.

For more information, you may wish to visit:

Open source and – well, bad people

For most people writing open source, it – open source software – seems like an unalloyed good.  You write code, and other nice people, like you, get to add to it, test it, document it and use it.  Look what good it can do to the world!  Even the Organisation-Formerly-Known-As-The-Evil-Empire has embraced open source software, and is becoming a happy and loving place, supporting the community and both espousing and proselytising the Good Thing[tm] that is open source.  Many open source licences are written in such a way that it’s well-nigh impossible for an organisation to make changes to open source and profit from it without releasing the code they’ve changed.  The very soul of open source – the licence – is doing our work for us: improving the world.

And on the whole, I’d agree.  But when I see uncritical assumptions being peddled – about anything, frankly – I start to wonder.  Because I know, and you know, when you think about it, that not everybody who uses open source is a good person.  Crackers (that’s “bad hackers”) use open source.  Drug dealers use open source.  People traffickers use open source.  Terrorists use open source.  Maybe some of them contribute patches and testing and documentation – I suppose it’s even quite likely that a few actually do – but they are, by pretty much anyone’s yardstick, not good people.  These are the sorts of people you probably shrug your shoulders about and say, “well, there’s only a few of them compared to all the others, and I can live with that”.  You’re happy to continue contributing to open source because many more people are helped by it than harmed by it.  The alternative – not contributing to open source – would fail to help as many people, and so the first option is the lesser of two evils and should be embraced. This is, basically, a utilitarian argument – the one popularised by John Stuart Mill: “classical utilitarianism”[1].  This is sometimes described as:

“Actions are right in proportion as they tend to promote overall human happiness.”

I certainly hope that open source does tend to promote overall human happiness.  The problem is that criminals are not the only people who will be using open source – your open source – code.  There will be businesses whose practices are shady, governments that oppress their detractors, police forces that spy on the citizens they watch.  This is your code, being used to do bad things.

But what even are bad things?  This is one of the standard complaints about utilitarian philosophies – it’s difficult to define objectively what is good, and, by extension, what is bad.  We (by which I mean law-abiding citizens in most countries) may be able to agree that people trafficking is bad, but there are many areas that we could call grey[2]:

  • tobacco manufacturers;
  • petrochemical and fracking companies;
  • plastics manufacturers;
  • organisations who don’t support LGBTQ+ people;
  • gun manufacturers.

There’s quite a range here, and that’s intentional.  Also, the last example is carefully chosen.  One of the early movers in what would become the open source movement is Eric Raymond (known to one and all by his initials “ESR”), who is a long-standing supporter of gun rights[3].  He has, as he has put it, “taken some public flak in the hacker community for vocally supporting firearms rights”.  For ESR, “it’s all about freedom”.  I disagree, although I don’t feel the need to attack him for it.  But it’s clear that his view about what constitutes good is different to mine.  I take a very liberal view of LGBTQ+ rights, but I know people in the open source community who wouldn’t take the same view.  Although we tend to characterise the open source community as liberal, this has never been a good generalisation.  According to the Jargon File (later published as “The Hacker’s Dictionary”), the politics of the average hacker are:

Vaguely liberal-moderate, except for the strong libertarian contingent which rejects conventional left-right politics entirely. The only safe generalization is that hackers tend to be rather anti-authoritarian; thus, both conventional conservatism and ‘hard’ leftism are rare. Hackers are far more likely than most non-hackers to either (a) be aggressively apolitical or (b) entertain peculiar or idiosyncratic political ideas and actually try to live by them day-to-day.

This may be somewhat out of date, but it still feels as though this description would resonate with many in the open source community who self-consciously consider themselves part of that community.  Still, it’s clear that we, as a community, are never going to be able to agree on what counts as a “good use” of open source code by a “good” organisation.  Even if we could, the chances of anybody being able to create a set of licences that would stop the people that might be considered bad are fairly slim.

I still think, though, that I’m not too worried.  I think that we can extend the utilitarian argument to say that the majority of use of open source software would be considered good by most open source contributors, or at least that the balance of “good” over “bad” would be generally considered to lean towards the good side. So – please keep contributing: we’re doing good things (whatever they might be).


1 – I am really not an ethicist or a philosopher, so apologies if I’m being a little rough round the edges here.

2 – you should be used to this by now: UK spelling throughout.

3 – “Yes, I cheerfully refer to myself as a gun nut.” – Eric’s Gun Nut Page

Why Chatham House Rulez for security

Security sometimes requires sharing – but not attribution

In June 1927, someone had a brilliant idea.  Or, at least, that’s when the idea was first codified, at a meeting of the Royal Institute of International Affairs at Chatham House in London.  The idea was this: all attendees of the meeting could quote comments made at the meeting, but they weren’t allowed to say who had made the comment.

This became known as the Chatham House Rule, and the most recent incarnation is defined thus:

When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.

This is brilliantly clever.  It allows at least two things:

  1. for the sharing of information which might be sensitive to a particular entity when associated with that entity, but which is still useful when applied without that attribution;
  2. for the sharing of views or opinions which, when associated with a particular person or organisation, might cause wider issues or problems.

The upshot of this is that if somebody (say, Person A) values the expertise, opinion and experience of another person (say, Person B), then they can share that other person’s views with people who may not know Person B, or whose views on Person B may be biased by their background or associations.  This is a form of transitive trust, and situations where transitive trust is made explicit are, in my opinion, to be lauded (such trust relationships are too often implicit, rather than explicit).

The Chatham House Rule and open source

What has this got to do with open source, though?  My answer is: a lot.

Security is one of those areas which can have an interesting relationship with open source.  I’m passionately devoted to the principle that open-ness is vital to security, but there are times when this is difficult.  The first is to do with data, and the second is to do with perceived expertise.

Why data is difficult

While we all (hopefully) want to ensure that all our security-related code is open source, the same cannot be said for data.  There is absolutely a place for open data – citizen-related data is the most obvious, e.g. bus timetables, town planning information – and there’s data that we’d like to be more open, but not if it can be traced to particular entities – aggregated health information is great, but people aren’t happy about their personal health records being exposed.  The same goes for financial data – aggregated information about people’s spending and saving habits is extremely useful, but I, for one, don’t want my bank records revealed to all and sundry.

Moving specifically to security, what about data such as the number of cyber-attacks – successful and unsuccessful – against companies?  The types that were most successful?  The techniques that were used to mitigate them?  All of these are vastly useful to the wider community, and there’s a need to share them more widely.  We’re seeing some initiatives to allow this already, and aggregation of this data is really important.

There comes a time, however, when particular examples are needed.  And as soon as you have somebody stand up and say “This is what happened to us”, then they’re likely to be in trouble from a number of directions, which may include: their own organisation, their lawyers, their board, their customers and future attackers, who can use that information to their advantage.  This is where the Chatham House Rule can help: it allows experts to give their view and be listened to without so much danger from the parties listed above.

It also allows for other people to say “we hadn’t thought of that”, or “we’re not ready for that” or similar without putting their organisations – or their reputations – on the line.  Open source needs this, and there are times when those involved in open source security, in particular, need to be able to share the information they know in a way which doesn’t put their organisations in danger.

Why expertise is difficult

Another area of difficulty is expertise, or more specifically, trust in expertise.  Most organisations aim for a meritocratic approach – or say they do – at least within that organisation.  But the world is full of bias, particularly between organisations.  I may be biased against views held or expressed by a particular organisation, just because of their past history and my interactions with that company, but it is quite possible that there are views held and expressed by individuals from that company which, if separated from their attribution, I might take seriously.  I may be biased against a particular person, based on my previous interactions with them, or just on my underlying prejudices.  It only takes one person who does not hold my biases, and who personally trusts the organisation – or even just the person – expressing those views, to represent them in a way that allows me to process and value them myself, gaining valuable insight from them.  The Chatham House Rule can allow that to happen.

In fact, the same goes for intra-organisation biases: maybe product management isn’t interested in the views of marketing, but what if there are important things to learn from within that department, that product management can’t hear because of that bias?  The Chatham House Rule allows an opportunity to get past that.

To return to open source, many contributors are employed by a particular organisation, and it can be very difficult for them to express opinions around open source when that organisation may not hold the same views, however carefully they try to separate themselves from the official line.  Even more important, in terms of security, it may very well be that they can bring insights which are relevant to a particular security issue which their company is not happy about being publicly known, but which could benefit one or more open source projects.  To be clear: I’m not talking, here, about exposing information which is specifically confidential, but about sharing information with the permission of the organisation, but within specific constraints.

More on open source

There are all sorts of biases within society, and open source is, alas, not without its own.  When a group of people gets to know each other well, however, it is often the case that members of that group can forge a respect for each other which goes beyond gender, age, academic expertise, sexuality, race or the like.  This is a perfect opportunity for meetings under the Chatham House Rule: it gives this group the chance to discuss and form opinions which can be represented to their peers – or the rest of the world – without having to worry so much about any prejudices or biases that might be aimed at particular members.

Finally – a note of caution

The Chatham House Rule provides a great opportunity to share expertise and knowledge, but there is also a danger that it can allow undue weight to be given to anecdotes.  Stories are a great way of imparting information, but without data to back them up, they are not as trustworthy as they might be.  The fact that the Chatham House Rule inhibits external attribution does not mean that due diligence should not be applied within such a meeting to ensure that information is backed up by data.

Is homogeneity bad for security?

Can it really be good for security to have such a small number of systems out there?

For the last three years, I’ve attended the Linux Security Summit (though it’s not solely about Linux, actually), and that’s where I am for the first two days of this week – the next three days are taken up with the Open Source Summit.  This year, both are being run in North America and in Europe – and there was a version of the Open Source Summit in Asia, too.  This is all good, of course: the more people, and the more diversity we have in the community, the stronger we’ll be.

The question of diversity came up at the Linux Security Summit today, but not in the way you might necessarily expect.  As with most of the industry, women, ethnic minorities and people with disabilities are very under-represented at this very technical conference (there’s a very strong Linux kernel developer bias).  It’s a pity, and something we need to address, but when a question came up after someone’s talk, it wasn’t diversity of people’s background that was being questioned, but of the systems we deploy around the world.

The question was asked of a panel who were talking about open firmware and how making it open source will (hopefully) increase the security of the system.  We’d already heard how most systems – laptops, servers, desktops and beyond – come with a range of different pieces of firmware from a variety of different vendors.  And when we talk about a variety, this can easily hit over 100 different pieces of firmware per system.  How are you supposed to trust a system with so many different pieces?  And, as one of the panel members pointed out, many of the vendors are quite open about the fact that they don’t see themselves as security experts, and are actually asking the members of open source projects to design APIs, make recommendations about design, etc.

This self-knowledge is clearly a good thing, and the main focus of the panel’s efforts has been to try to define a small core of well-understood and better-designed elements that can be deployed in a more trusted manner.  The question asked from the audience was in response to this effort, and seemed to me to be a very fair one.  It was (to paraphrase slightly): “Can it really be good for security to have such a small number of systems out there?”  The argument – and it’s a good one in general – is that if you have a small number of designs which are deployed across the vast majority of installations, then there is a real danger that a small number of vulnerabilities can affect a large percentage of that install base.

It’s a similar problem in the natural world: a population with a restricted genetic pool is at risk from a successful attacker: a virus or fungus, for instance, which can attack many individuals due to their similar genetic make-up.

In principle, I would love to see more diversity of design within computing, and particularly within security, but there are two issues with this:

  1. management: there is a real cost to managing multiple different implementations and products, so organisations prefer to have a smaller number of designs, reducing the number of tools needed to manage them and the number of people who need to be trained.
  2. scarcity of resources: there is a scarcity of resources within IT security.  There just aren’t enough security experts around to design good security into systems, to support them and then to respond to attacks as vulnerabilities are found and exploited.

To the first issue, I don’t see many easy answers, but to the second, there are three responses:

  1. find ways to scale the impact of your resources: if you open source your code, then the number of expert resources available to work on it expands enormously.  I wrote about this a couple of years ago in Disbelieving the many eyes hypothesis.  If your code is proprietary, then the number of experts you can leverage is small: if it is open source, you have access to almost the entire worldwide pool of experts.
  2. be able to respond quickly: if attacks on systems are found, and vulnerabilities identified, then the ability to move quickly to remedy them allows you to mitigate significantly the impact on the installation base.
  3. design in defence in depth: rather than relying on one defence to an attack or type of attack, try to design your deployment in such a way that you have layers of defence. This means that you have some time to fix a problem that arises before catastrophic failure affects your deployment.
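The third response can be sketched in code.  This is a purely hypothetical illustration (the check names and request shape are invented for the example, not taken from any real system): several independent layers guard an operation, so bypassing or breaking any one layer still leaves the others standing, buying you time to fix the problem.

```python
# Hypothetical sketch of defence in depth: three independent checks
# guard a request, so a bug in one layer does not on its own lead to
# catastrophic failure.

def authenticated(request):
    # Layer 1: is the caller who they claim to be?
    return request.get("token") == "expected-token"

def within_rate_limit(request, seen_counts, limit=100):
    # Layer 2: even an authenticated caller can't hammer the service.
    user = request.get("user", "anonymous")
    seen_counts[user] = seen_counts.get(user, 0) + 1
    return seen_counts[user] <= limit

def input_is_valid(request):
    # Layer 3: even a well-behaved caller's data gets validated.
    payload = request.get("payload")
    return isinstance(payload, str) and len(payload) < 1024

def handle(request, seen_counts):
    # The request must pass every layer; because each check is
    # independent, disabling one does not disable the others.
    return (authenticated(request)
            and within_rate_limit(request, seen_counts)
            and input_is_valid(request))
```

The point of the structure is that each function knows nothing about the others: an attacker who forges a token still faces rate limiting and input validation.
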

I’m hesitant to overplay the biological analogy, but the second and third of these seem quite similar to defences we see in nature.  The equivalent of quick response is having multiple generations in a short time, giving a species the opportunity to develop immunity to a particular attack, and defence in depth is a typical defence mechanism in nature – think of humans’ ability to recognise bad meat by its smell, taste its “off-ness”, and then vomit it up if swallowed.  I’m not quite sure how this particular analogy would map to the world of IT security (though some of the practices you see in the industry can turn your stomach), but while we wait for a bigger – and more diverse – pool of security experts, let’s keep being open source, let’s keep responding quickly, and let’s make sure that we design for defence in depth.

 

The commonwealth of Open Source

This commonwealth does not apply to proprietary software: what stays hidden does not enlighten or enrich the world.

“But surely Open Source software is less secure, because everybody can see it, and they can just recompile it and replace it with bad stuff they’ve written?”

Hands up who’s heard this?*  I’ve been talking to customers – yes, they let me talk to customers sometimes – and to folks in the Field**, and this is one that comes up, it turns out, quite frequently.  I talked in a previous post (“Disbelieving the many eyes hypothesis”) about how Open Source software – particularly security software – doesn’t get to be magically more secure than proprietary software, and a little about how I’d still go with Open Source over proprietary every time, but the way that I’ve heard this particular question – about OSS being less secure – suggests to me that there are times when we don’t just need to be able to explain how Open Source works, but also to be able to engage actively in Apologetics***.  So here goes.  I don’t expect it to be up to Newton’s or Wittgenstein’s levels of logic, but I’ll do what I can, and I’ll summarise at the bottom so that you’ve got a quick list of the points if you want it.

The arguments

First of all, we should accept that no software is perfect******.  Not proprietary software, not Open Source software.  Second, we should accept that there absolutely is good proprietary software out there.  Third, on the other hand, there is some very bad Open Source software.  Fourth, there are some extremely intelligent, gifted and dedicated architects, designers and software engineers who create proprietary software.

But here’s the rub.  Fifth – the pool of people who will work on or otherwise look at that proprietary software is limited.  And you can never hire all the best people.  Even in government and public sector organisations – who often have a larger talent pool available to them, particularly for *cough* security-related *cough* applications – the pool is limited.

Sixth – the pool of people available to look at, test, improve, break, re-improve, and roll out Open Source software is almost unlimited, and does include the best people.  Seventh – and I love this one: the pool also includes many of the people writing the proprietary software.  Eighth – many of the applications being written by public sector and government organisations are open sourced anyway these days.

Ninth – if you’re worried about running Open Source software which is unsupported, or comes from dodgy, un-provenanced sources, then good news: there are a bunch of organisations******* who will check the provenance of that code, support, maintain and patch it.  They’ll do it along the same type of business lines that you’d expect from a proprietary software provider.  You can also ensure that the software you get from them is the right software: the standard technique is for them to sign bundles of software so that you can check that what you’re installing isn’t just from some random bad person who’s taken that code and done Bad Things[tm] with it.
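As a rough sketch of what that checking looks like in practice: real distributors use GPG or similar public-key signatures, which additionally prove *who* published the bundle, but this minimal, hypothetical example shows just the integrity half of the job – comparing a downloaded file against a published digest.

```python
import hashlib
import hmac

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading in chunks so that
    large software bundles don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def bundle_matches(path, published_digest):
    """Check a downloaded bundle against the digest the vendor publishes.
    hmac.compare_digest does a constant-time comparison, avoiding
    leaking information through timing."""
    return hmac.compare_digest(sha256_of(path), published_digest)
```

A signature check goes one step further: the vendor signs the digest file with their private key, so you verify both that the bytes are intact and that they came from the vendor you think they did.
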

Tenth – and here’s the point of this post – when you run Open Source software, when you test it, when you provide feedback on issues, when you discover errors and report them, you are tapping into, and adding to, the commonwealth of knowledge and expertise and experience that is Open Source.  And which is only made greater by your doing so.  If you do this yourself, or through one of the businesses who will support that Open Source software********, you are part of this commonwealth.  Things get better with Open Source software, and you can see them getting better.  Nothing is hidden – it’s, well, “open”.  Can things get worse?  Yes, they can, but we can see when that happens, and fix it.

This commonwealth does not apply to proprietary software: what stays hidden does not enlighten or enrich the world.

I know that I need to be careful about my use of the word “commonwealth” as a Briton: it has connotations of (faded…) empire which I don’t intend it to hold in this case.  It’s probably not what Cromwell********* had in mind when he talked about the “Commonwealth”, either, and anyway, he’s a somewhat … controversial historical figure.  What I’m talking about is a concept in which I think the words deserve concatenation – “common” and “wealth” – to show that we’re talking about something more than just money: shared wealth available to all of humanity.

I really believe in this.  If you want to take away a religious message from this blog, it should be this**********: the commonwealth is our heritage, our experience, our knowledge, our responsibility.  The commonwealth is available to all of humanity.  We have it in common, and it is an almost inestimable wealth.

 

A handy crib sheet

  1. (Almost) no software is perfect.
  2. There is good proprietary software.
  3. There is bad Open Source software.
  4. There are some very clever, talented and devoted people who create proprietary software.
  5. The pool of people available to write and improve proprietary software is limited, even within the public sector and government realm.
  6. The corresponding pool of people for Open Source is virtually unlimited…
  7. …and includes a goodly number of the talent pool of people writing proprietary software.
  8. Public sector and government organisations often open source their software anyway.
  9. There are businesses who will support Open Source software for you.
  10. Contribution – even usage – adds to the commonwealth.

*OK – you can put your hands down now.

**should this be capitalised?  Is there a particular field, or how does it work?  I’m not sure.

***I have a degree in English Literature and Theology – this probably won’t surprise some of the regular readers of this blog****.

****not, I hope, because I spout too much theology*****, but because it’s often full of long-winded, irrelevant Humanities (US Eng: “liberal arts”) references.

*****Emacs.  Every time.

******not even Emacs.  And yes, I know that there are techniques to prove the correctness of some software.  (I suspect that Emacs doesn’t pass many of them…)

*******hand up here: I’m employed by one of them, Red Hat, Inc.  Go have a look – fun place to work, and we’re usually hiring.

********assuming that they fully abide by the rules of the Open Source licence(s) they’re using, that is.

*********erstwhile “Lord Protector of England, Scotland and Ireland” – that Cromwell.

**********oh, and choose Emacs over vi variants, obviously.