Confidential computing – the new HTTPS?

Security by default hasn’t arrived yet.

Over the past few years, it’s become difficult to find a website which is just “http://…”.  This is because the industry has finally realised that security on the web is “a thing”, and also because it has become easy for both servers and clients to set up and use HTTPS connections.  A similar shift may be on its way in computing across cloud, edge, IoT, blockchain, AI/ML and beyond.  We’ve known for a long time that we should encrypt data at rest (in storage) and in transit (on the network), but encrypting it in use (while processing) has been difficult and expensive.  Confidential computing – providing this type of protection for data and algorithms in use, using hardware capabilities such as Trusted Execution Environments (TEEs) – protects data on hosted systems or in vulnerable environments.

I’ve written several times about TEEs and, of course, the Enarx project of which I’m a co-founder with Nathaniel McCallum (see Enarx for everyone (a quest) and Enarx goes multi-platform for examples).  Enarx uses TEEs, and provides a platform- and language-independent deployment platform to allow you safely to deploy sensitive applications or components (such as micro-services) onto hosts that you don’t trust.  Enarx is, of course, completely open source (we’re using the Apache 2.0 licence, for those with an interest).  Being able to run workloads on hosts that you don’t trust is the promise of confidential computing, which extends normal practice for sensitive data at rest and in transit to data in use:

  • storage: you encrypt your data at rest because you don’t fully trust the underlying storage infrastructure;
  • networking: you encrypt your data in transit because you don’t fully trust the underlying network infrastructure;
  • compute: you encrypt your data in use because you don’t fully trust the underlying compute infrastructure.

I’ve got a lot to say about trust, and the word “fully” in the statements above is important (I actually added it on re-reading what I’d written).  In each case, you have to trust the underlying infrastructure to some degree, whether it’s to deliver your packets or store your blocks, for instance.  In the case of the compute infrastructure, you’re going to have to trust the CPU and associated firmware, just because you can’t really do computing without trusting them (there are techniques such as homomorphic encryption which are beginning to offer some opportunities here, but they’re limited, and the technology is still immature).

Questions sometimes come up about whether you should fully trust CPUs, given some of the security problems that have been found with them, and about whether they are fully secure against physical attacks on the host in which they reside.

The answer to both questions is “no”, but this is the best technology we currently have available at scale and at a price point to make it generally deployable.  To address the second question, nobody is pretending that this (or any other technology) is fully secure: what we need to do is consider our threat model and decide whether TEEs (in this case) provide sufficient security for our specific requirements.  In terms of the first question, the model that Enarx adopts is to allow decisions to be made at deployment time as to whether you trust a particular set of CPUs.  So, for example, if vendor Q’s generation R chips are found to contain a vulnerability, it will be easy to say “refuse to deploy my workloads to R-type CPUs from Q, but continue to deploy to S-type, T-type and U-type chips from Q and any CPUs from vendors P, M and N.”
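
To make that deployment-time decision a little more concrete, here’s a minimal sketch (in Python) of the sort of policy check involved.  It’s purely illustrative: Enarx doesn’t expose exactly this interface, and the vendors and chip generations are the hypothetical ones from the paragraph above.

```python
# Hypothetical policy: refuse to deploy to CPU generations we no longer trust.
# Vendor and generation names follow the example above and are not real products.
BLOCKED = {("Q", "R")}  # vendor Q's generation R chips have a known vulnerability

def may_deploy(vendor: str, generation: str) -> bool:
    """Return True if a workload may be deployed to this CPU type."""
    return (vendor, generation) not in BLOCKED

candidate_hosts = [("Q", "R"), ("Q", "S"), ("Q", "T"), ("P", "A"), ("M", "B")]
eligible = [host for host in candidate_hosts if may_deploy(*host)]
print(eligible)  # everything except ("Q", "R")
```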


5 security tips from Santa

Have you been naughty or nice this year?

If you’re reading this in 2019, it’s less than a month to Christmas (as celebrated according to the Western Christian calendar), or Christmas has just passed.  Let’s assume that it’s the former, and that, like all children and IT professionals, it’s time to write your letter to Santa/St Nick/Father Christmas.  Don’t forget, those who have been good get nice presents, and those who haven’t get coal.  Coal is not a clean-burning fuel these days, and with climate change well and truly upon us[1], you don’t want to be going for the latter option.

Think back to all of the good security practices you’ve adopted over the past 11 or so months.  And then think back to all the bad security practices you’ve adopted when you should have been doing the right thing.  Oh, dear.  It’s not looking good for you, is it?

Here’s the good news, though: unless you’re reading this very, very close to Christmas itself[2], then there’s time to make amends.  Here’s a list of useful security tips and practices that Santa follows, and which are therefore bound to put you on his “good” side.

Use a password manager

Santa is very careful with his passwords.  Here’s a little secret: from time to time, rather than have his elves handcraft every little present, he sources his gifts from other parties.  I’m not suggesting that he pays market rates (he’s ordering in bulk, and he has a very, very good credit rating), but he uses lots of different suppliers, and he’s aware that not all of them take security as seriously as he does.  He doesn’t want all of his account logins to be leaked if one of his suppliers is hacked, so he uses separate passwords for each account.  Now, Santa, being Santa, could remember all of these details if he wanted to, and even generate passwords that meet all the relevant complexity requirements for each site, but he uses an open source password manager for safety, and for succession planning[3].
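
To show why this is practical, here’s a minimal sketch of what a password manager does when it generates credentials: one long, random, independent password per account, so that a breach at one supplier tells an attacker nothing about any other account.  It’s a toy example using Python’s standard library, not a substitute for a real, audited password manager, and the supplier names are invented.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    # Cryptographically strong randomness; one independent password per account.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Hypothetical supplier accounts: each gets its own password.
vault = {site: generate_password() for site in ("sleigh-parts.example", "toy-supplier.example")}
for site, password in vault.items():
    print(site, password)
```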

Manage personal information properly

You may work for a large company, organisation or government, and you may think that you have lots of customers and associated data, but consider Santa.  He manages, or has managed, names, dates of birth, addresses, hobbies, shoe sizes, colour preferences and other personal data for literally every person on Earth.  That’s an awful lot of sensitive data, and it needs to be protected.  When people grow too old for presents from Santa[4], he needs to delete their data securely.  Santa may well have been the archetypal GDPR Data Controller, and he needs to be very careful who and what can access the data that he holds.  Of course, he encrypts all the data, and is very careful about key management.  He’s also very aware of the dangers associated with Cold Boot Attacks (given the average temperature around his residence), so he ensures that data is properly wiped before shutdown.
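
One common way to make “delete their data securely” tractable is crypto-shredding: encrypt each person’s record under its own key, and “forget” them by destroying that key.  The sketch below, which assumes the Python cryptography library’s Fernet interface and uses in-memory dictionaries as stand-ins for real key management, is only a conceptual illustration, not a description of how Santa (or anyone else) actually does it.

```python
from cryptography.fernet import Fernet

keys = {}     # stand-in for a real key management system
records = {}  # encrypted personal data, keyed by person

def store_record(person_id, data):
    key = Fernet.generate_key()  # one key per person
    keys[person_id] = key
    records[person_id] = Fernet(key).encrypt(data)

def read_record(person_id):
    return Fernet(keys[person_id]).decrypt(records[person_id])

def forget(person_id):
    # Destroying the key renders the ciphertext useless,
    # even if copies of it linger in backups.
    del keys[person_id]
    records.pop(person_id, None)

store_record("alice", b"shoe size: 7, colour preference: red")
print(read_record("alice"))
forget("alice")
```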

Measure and mitigate risk

Santa knows all about risk.  He has complex systems for ordering, fulfilment, travel planning, logistics and delivery that are the envy of most of the world.  He understands what impact failure in any particular part of the supply chain can have on his customers: mainly children and IT professionals.  He quantifies risk, recalculating on a regular basis to ensure that he is up to date with possible vulnerabilities, and ready with mitigations.
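
“Quantifies risk, recalculating on a regular basis” can be as simple as keeping a register of likelihood and impact estimates and re-scoring it whenever they change.  The entries and figures below are invented purely for illustration.

```python
# A toy risk register: risk score = likelihood x impact.
register = {
    "supplier data breach":   {"likelihood": 0.30, "impact": 8},
    "delivery system outage": {"likelihood": 0.05, "impact": 10},
    "reindeer GPS spoofing":  {"likelihood": 0.10, "impact": 6},
}

def score(entry):
    return entry["likelihood"] * entry["impact"]

# Re-run whenever estimates change, and tackle the biggest risks first.
for name, entry in sorted(register.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(entry):.2f}")
```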

Patch frequently, but carefully

Santa absolutely cannot afford for his systems to go down, particularly around his most busy period.  He has established processes to ensure that the concerns of security are balanced with the needs of the business[5].  He knows that sometimes, business continuity must take priority, and that on other occasions, the impact of a security breach would be so major that patches just have to be applied.  He tells people what he wants, and listens to their views, taking them into account where he can. In other words, he embraces open management, delegating decisions, where possible, to the sets of people who are best positioned to make the call, and only intervenes when asked for an executive decision, or when exceptions arise.  Santa is a very enlightened manager.

Embrace diversity

One of the useful benefits of running a global operation is that Santa values diversity.  Old or young (at heart), male, female or gender-neutral, neuro-typical or neuro-diverse, of whatever culture, sexuality, race, ability, creed or nose-colour, Santa takes into account his stakeholders and their views on what might go wrong.  What a fantastic set of viewpoints Santa has available to him.  And, for an Aging White Guy, he’s surprisingly hip to the opportunities for security practices that a wide and diverse set of opinions and experiences can bring[6].

Summary

Here’s my advice.  Be like Santa, and adopt at least some of his security practices yourself.  You’ll have a much better opportunity of getting onto his good side, and that’s going to go down well not just with Santa, but also your employer, who is just certain to give you a nice bonus, right?  And if not, well, it’s not too late to write that letter directly to Santa himself.


1 – if you have a problem with this statement, then either you need to find another blog, or you’re reading this in the far future, where all our climate problems have been solved. I hope.

2 – or you dwell in one of those cultures where Santa visits quite early in December.

3 – a high-flying goose in the face can do terrible damage to a fast-moving reindeer, and if the sleigh were to crash, what then…?

4 – not me!

5 – Santa doesn’t refer to it as a “business”, but he’s happy for us to call it that so that we can model our own experience on his.  He’s nice like that.

6 – though Santa would never use the phrase “hip to the opportunities”.  He’s way too cool for that.

Who do you trust on trust?

(I’m hoping it’s me.)

I’ve been writing about trust on this blog for a little over two years now. It’s not the only topic, but it’s one about which I’m passionate. I’ve been thinking about issues around trust, particularly in regards to computing and security, for nearly 20 years, and it’s something I care about a lot. I care about it so much that I’m writing a book about it.

In fact, I care about it maybe a little too much. I was at a conference earlier this year and – in a move that will come as little surprise to regular readers of this blog[1] – actually ended up getting quite cross about it. The problem is that lots of people talk about trust, but they either don’t really know what they’re talking about, or they really don’t know what they’re talking about. To be clear, I mean different things by those two statements. Some people know their subject, but their subject isn’t really trust. Other people don’t know their subject, but then again, the thing they think they’re talking about often isn’t trust either. Some people talk about “zero trust”, when what they really need to do is look beyond that concept and discuss implicit vs explicit trust. People ignore the importance of establishing trust. People ignore the importance of decaying trust. People assume that transitive trust is the same as direct trust. People ignore context. All of these are important, and arguably, it’s not their fault. There’s actually very little detailed writing about trust outside the social sciences. Given how much discussion there is of trust, trusted computing, trusted systems and the like within the world of IT security, there’s astonishingly little theoretical underpinning of the concept, which means that there’s very little agreement as to what is really meant. And, it turns out, although it seems that trust within the social sciences is quite like trust within computing, it really isn’t.

Anyway, there were people at this conference earlier this year who said things about trust which strongly suggested to me that it would be helpful if there were a good underpinning that people could read and discuss and disagree with: a book, in fact, about trust in computing. I got so annoyed that I made a decision to tell two people – my boss and one of the editors of Opensource.com – that I planned to write a book about it. I’m not sure whether they really believed me, but I ended up putting together a Table of Contents. And then looking for a publisher, and then sending several publishers a copy of the ToC and some further thoughts about what a book might look like, and word count estimates, and a list of possible reader types and markets.

And then someone offered me a contract. This was a little bit of a surprise, but after some discussion and negotiation, I’m now contracted to write a book on trust for Wiley. I’m absolutely going to continue to publish this blog, and I’ll continue to write about trust here. And, on occasion, something a little bit more random. I don’t pretend to know everything about the subject, and writing about it here allows me to explore some of the more tricky issues. I hope you’ll join me for the ride – and if you have suggestions or questions, I’d love to hear about them.


1 – or my wife and kids.

“Unhackability” or just poor journalism?

An over-extended analogy about seat belts and passwords.

I recently saw a tagline for a brief article in a very reputable British newspaper which was “Four easy steps to unhackability”. It did two things to me:

  1. it made me die a little inside;
  2. it made me really quite angry.

The latter could be partly related to the fact that it was a Friday evening and I felt that I deserved a beer, and the former to the amount of time I’d spent during the week mastering our new expenses system, but whatever. The problem is that there is no “unhackable”. Just as there is no “secure”.

This, I suppose, is really what made me die a little inside. If journalists are going to write these sorts of articles, then they should know better. And if they don’t, the editor shouldn’t let them write the article. And if they didn’t write the tagline, then whoever did should be contacted, shouted at, and forced to rewrite it. And provide an apology.  Preferably a public one.

The article was about good password practice, and though short, contained sensible advice. For a more complete (and, dare I say, wittier) guide, see my article The gift that keeps on giving: passwords.  I was happy about the advice, but far, far from happy about the title.  Let’s employ that most dangerous of techniques: an analogy.  If, say, someone wrote an article on motoring about how to use seat belts with the tagline “Four steps to uninjureability”, anyone who knew anything about cars would be up in arms, because it’s clear that seat belts, useful as they are, and injury-reducing as they are, do not protect you from all injury when driving, even if employed perfectly correctly.  This is what made me angry, because the password article seemed to suggest that good passwords would stop you being technologically injured (see: here’s why we don’t let people play with analogies).

Because, although most people might understand about seat belts, fewer people – many fewer people – have a good idea about computer security.  Even the people who do understand lots about computer security aren’t immune from being hacked, however well they pursue good practice (and, to reiterate, the advice in the article was good practice).  It’s the same with motoring – even people who use their seat belts assiduously, and drive within the speed limit, and follow all the rules of the road, aren’t immune from injury.  In fact, no: motoring is better, by at least one measure, which is that (in most cases at least), there aren’t a whole bunch of people whose main aim in life is to injure as many other motorists as they can.  As opposed to the world of technology, where there really is a goodly number of not-so-goodly people out there on the Internet whose main aim in life is to hack[1] other people’s computers and do bad things with their data and resources.

As my friend Cathy said, “it gives people a false sense of security”.

Some actual advice

Computer security is about several things, about which the following come immediately to mind:

  • layers: the more measures or layers of security that you have in place, the better your chances of not being hacked;
  • timeliness: I’m not sure how many times I’ve said this, but you need to keep your systems up-to-date.  This may seem like an unnecessary hassle, but the older your software is, the more likely that there are known vulnerabilities, and the more likely that a hacker will be able to compromise your system (there’s a small sketch of an update check after this list);
  • awareness: sometimes we just need to be aware that emails can be malicious, or that that phone-call purporting to be from your Internet Service Provider may in fact be from someone trying to do bad things to your computer[2];
  • reaction: if you realise something’s wrong, don’t keep doing it.  It’s usually best to step away from the keyboard and turn off the machine before more damage is done.
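
On the timeliness point, even a small script can tell you how far behind you are.  Here’s a minimal sketch that assumes a Debian-style system where “apt list --upgradable” is available; adapt the command to whatever package manager you actually use.

```python
import subprocess

def upgradable_packages():
    # `apt list --upgradable` prints one line per package with a pending update.
    out = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split("/")[0] for line in out.splitlines() if "[upgradable from:" in line]

pending = upgradable_packages()
if pending:
    print(f"{len(pending)} packages need updating: {', '.join(sorted(pending))}")
else:
    print("Nothing pending - well done.")
```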

There: a set of pieces of advice, with no ridiculous claims about how well they’ll serve you.  I’ll save that for another, lazier article (or hopefully not).


1 – I mean “crack”, but I’ve pretty much given up on trying to enforce this distinction now.  If you’re with me and feel sad about this, nod quietly to yourself and go and enjoy that beer I mentioned at the beginning of the article: you deserve it.

2 – don’t even start me on using random USB drives – I even had an anxiety dream about this last night.

Enarx goes multi-platform

Now with added SGX!

Yesterday, Nathaniel McCallum and I presented a session “Confidential Computing and Enarx” at Open Source Summit Europe. As well as some new information on the architectural components for an Enarx deployment, we had a new demo. What’s exciting about this demo is that it shows off attestation and encryption on Intel’s SGX. Our initial work focussed on AMD’s SEV, so this is our first working multi-platform workflow. We’re very excited, particularly as this week a number of the team will be attending the first face-to-face meetings of the Confidential Computing Consortium, at which we’ll be submitting Enarx as a project for contribution to the Consortium.

The demo had been the work of several people, but I’d like to call out Lily Sturmann in particular, who got things working late at night her time, with little time to spare.

What’s particularly important about this news is that SGX has a very different approach to providing a TEE compared with the other technology on which Enarx was previously concentrating, SEV. Whereas SEV provides a VM-based model for a TEE, SGX works at the process level. Each approach has different advantages and offers different challenges, and the very different models that they espouse mean that developers wishing to target TEEs have some tricky decisions to make about which to choose: the run-time models are so different that developing for both isn’t really an option. Add to that the significant differences in attestation models, and there’s no easy way to address more than one silicon platform at a time.

Which is where Enarx comes in. Enarx will provide platform independence both for attestation and run-time, on process-based TEEs (like SGX) and VM-based TEEs (like SEV). Our work on SEV and SGX is far from done, but we also plan to support more silicon platforms as they become available. On the attestation side (which we demoed yesterday), we’ll provide software to abstract away the different approaches. On the run-time side, we’ll provide a W3C-standardised WebAssembly environment to allow you to choose at deployment time what host you want to execute your application on, rather than having to choose at development time where you’ll be running your code.
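
To give a flavour of the run-time side, here’s a minimal sketch of loading and calling a WebAssembly function from a host application, using the wasmtime Python bindings.  Treat it as illustrative rather than definitive – the exact API varies between versions of the bindings, and this is not Enarx itself, which wraps the WebAssembly environment in the TEE and attestation layers described above.

```python
from wasmtime import Engine, Instance, Module, Store, wat2wasm

# A tiny WebAssembly module, written in the text format (WAT), exporting one function.
WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

engine = Engine()
module = Module(engine, wat2wasm(WAT))  # the same module can run on any host with a suitable runtime
store = Store(engine)
instance = Instance(store, module, [])
add = instance.exports(store)["add"]
print(add(store, 2, 3))  # 5
```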

This article has sounded a little like a marketing pitch, for which I apologise. As one of the founders of the project, alongside Nathaniel, I’m passionate about Enarx, and would love you, the reader, to become passionate about it, too. Please visit enarx.io for more information – we’d love to tell you more about our passion.

What’s a Trusted Compute Base?

Tamper-evidence, auditability and measurability are three important properties.

A few months ago, in an article called “Turtles – and chains of trust”, I briefly mentioned Trusted Compute Bases, or TCBs, but then didn’t go any deeper.  I had a bit of a search across the articles on this blog, and realised that I’ve never gone into this topic in much detail, which feels like a mistake, so I’m going to do it now.

First of all, let’s think about computer systems.  When I talk about systems, I’m being both quite specific (see Systems security – why it matters) and quite broad (I don’t just mean the computer that sits on your desk or in a data centre, but also phones, routers, aircraft navigation devices – pretty much anything that has a set of chips inside it).  There are surely some systems that you don’t rely on too much to do important things, but in most cases, you’re going to care if they go wrong, or, more relevant to this discussion, if they get compromised.  Even the most benign of systems – a smart light-bulb, for instance – can become a nightmare if compromised.  Even if you don’t particularly care whether you can continue to use it in the way it was intended, there are still worries about its misuse in the case of compromise:

  1. it may become a “jumping off point” for malicious attacks into your network or other systems;
  2. it may be used as part of a botnet, piggybacking on your network to attack other systems (leading to sanctions against your legitimate systems from outside);
  3. it may be used as part of a botnet, using up resources such as network bandwidth, storage or electricity (leading to resource constraints or increased charges).

For any systems dealing with sensitive data – anything from your messages to loved ones on your phone, through intellectual property secrets for a manufacturing organisation, through to National Security data for a government department – these issues are compounded.  In order to protect your system, you can’t just say “this system is secure” (lovely as that would be).  What can you do to start making statements about the general security of a system?

The stack

Systems consist of multiple components, and modern computing systems are typically composed from multiple layers (one of my favourite xkcd comics, Stack, shows some of them).  What’s relevant from the point of view of this article is that, on the whole, the different layers of the stack start up – boot up – from the bottom upwards.  This means, following the “bottom turtle” rule (see the Turtles article referenced above), that we need to ensure that the bottom layer is as secure as possible.  In fact, in order to build a system in which we can have assurance that it will behave as expected and designed (in other words, a system in which we can have a trust relationship), we need to build a Trusted Compute Base.  This should have at least the following set of properties: tamper-evidence, auditability and measurability, all of which are related to each other.

Tamper-evidence

We want to know if the TCB – on which we are building everything else – has a problem.  Specifically, we need a set of layers or components that we are pretty sure have not been compromised, or which, if compromised, will be tamper-evident:

  • fail in expected ways,
  • refuse to start, or
  • flag that they have been compromised.

It turns out that this is not easy, and typically becomes more difficult as you move up the stack – partly because you’re adding more layers, and partly because those layers tend to get more complex.

Our TCB should have the properties listed above (around failure, refusing to start or compromise-flagging), and be as small as possible.  This seems the wrong way around: surely you would want to ensure that as much of your system was trusted as possible?  In fact, what you want is a small, easily measurable and easily auditable TCB on which you can build the rest of your system – from which you can build a “chain of trust” to the other parts of your system about which you care.  Auditability and measurability are the other two properties that you want in a TCB, and these two properties are why open source is a very useful tool in your toolkit when building a TCB.

Auditability (and open source)

Auditability means that you – or someone else who you trust to do the job – can look into the various components of the TCB and assure yourself that they have been written and compiled correctly, and are executing properly.  As I explained in Of projects, products and (security) community, the person may not always be you, or even someone in your organisation, but if you’re using widely deployed open source software, the rest of the community can be doing that auditing for you, which is a win for you and – if you contribute your knowledge back into the community – for everybody else as well.

Auditability typically gets harder the further you go down the stack – partly because you’re getting closer and closer to bits – ones and zeros – and to actual electrons, and partly because there is very little truly open source hardware out there at the moment.  However, the more that we can see and audit of the TCB, the more confidence we can have in it as a building block for the rest of our system.

Measurability (and open source)

The other thing you want to be able to do is ensure that your TCB really is your TCB.  Tamper-evidence is related to this, but that’s a run-time property only (for software components, at least).  Being able to measure when you provision your system and then to check that what you originally loaded is still what you think it should be when you boot it is a very important property of a TCB.  If what you’re running is open source, you can check it yourself, against your own measurements and those of the community, and if changes are made – by you or others – those changes can be checked (as part of auditing) and then propagated through measurement checking to the rest of the community.  Equally important – and much more difficult – is run-time measurability.  This turns out to be very difficult to do, although there are some techniques emerging which are beginning to get traction – for now, we tend to rely on tamper-evidence, which is easier in hardware than software.
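
As a simple illustration of provisioning-time measurement, the sketch below hashes a set of components, records the results, and later checks that nothing has changed.  The file names and the manifest location are hypothetical, and a real TCB would anchor these measurements in hardware (a TPM or a TEE attestation report, for instance) rather than in a file on disk.

```python
import hashlib
import json

def measure(path):
    # The "measurement" of a component is a cryptographic hash of its contents.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def provision(paths, manifest="measurements.json"):
    # Record golden values when the system is first provisioned.
    with open(manifest, "w") as f:
        json.dump({p: measure(p) for p in paths}, f, indent=2)

def verify(manifest="measurements.json"):
    # At boot, refuse to proceed if any component no longer matches its recorded measurement.
    with open(manifest) as f:
        expected = json.load(f)
    return all(measure(path) == digest for path, digest in expected.items())

# provision(["bootloader.bin", "kernel.img"])   # hypothetical component files
# assert verify(), "TCB measurement mismatch - do not trust this system"
```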

Summary

Trusted Compute Bases (TCBs) are a key concept in building systems that we hope will behave in ways we expect – or allow us to find out when they are not.  Tamper-evidence, auditability and measurability are three important properties that they should display, and it turns out that open source is an important factor in helping us ensure two of those.


Of projects, products and (security) community

Not all open source is created (and maintained) equal.


Open source is a good thing.  Open source is a particularly good thing for security.  I’ve written about this before (notably in Disbelieving the many eyes hypothesis and The commonwealth of Open Source), and I’m going to keep writing about it.  In this article, however, I want to talk a little more about a feature of open source which is arguably both a possible disadvantage and a benefit: the difference between a project and a product.  I’ll come down firmly on one side (spoiler alert: for organisations, it’s “product”), but I’d like to start with a little disclaimer.  I am employed by Red Hat, and we are a company which makes money from supporting open source.  I believe this is a good thing, and I approve of the model that we use, but I wanted to flag any potential bias early in the article.

The main reason that open source is good for security is that you can see what’s going on when there’s a problem, and you have a chance to fix it.  Or, more realistically, unless you’re a security professional with particular expertise in the open source project in which the problem arises, somebody else has a chance to fix it. We hope that there are sufficient security folks with the required expertise to fix security problems and vulnerabilities in software projects about which we care.

It’s a little more complex than that, however.  As an organisation, there are two main ways to consume open source:

  • as a project: you take the code, choose which version to use, compile it yourself, test it and then manage it.
  • as a product: a vendor takes the project, chooses which version to package, compiles it, tests it, and then sells support for the package, typically including docs, patching and updates.

Now, there’s no denying that consuming a project “raw” gives you more options.  You can track the latest version, compiling and testing as you go, and you can take security patches more quickly than the product version may supply them, selecting those which seem most appropriate for your business and use cases.  On the whole, this seems like a good thing.  There are, however, downsides which are specific to security.  These include:

  1. some security fixes come with an embargo, to which only a small number of organisations (typically the vendors) have access.  Although you may get access to fixes at the same time as the wider ecosystem, you will need to check and test these (unless you blindly apply them – don’t do that) – work which the vendors will already have performed.
  2. the huge temptation to make changes to the code that don’t necessarily – or immediately – make it into the upstream project means that you are likely to be running a fork of the code.  Even if you do manage to get these upstream in time, during the period that you’re running the changes but they’re not upstream, you run a major risk that any security patches will not be immediately applicable to your version (this is, of course, true for non-security patches, but security patches are typically more urgent).  One option, of course, if you believe that your version is likely to be consumed by others, is to make an official fork of the project, and try to encourage a community to grow around that, but in the end, you will still have to decide whether to support the new version internally or externally.
  3. unless you ensure that all instances of the software are running the same version in your deployment, any back-porting of security fixes to older versions will require you to invest in security expertise equal or close to equal to that of the people who created the fix in the first place.  In this case, you are giving up the “commonwealth” benefit of open source, as you need to pay experts who duplicate the skills of the community (there’s a small sketch of this kind of version check just below).
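
As a trivial illustration of that third point, the sketch below checks which deployed instances already contain an upstream security fix and which would need a back-port (or an upgrade).  The version numbers and host names are invented.

```python
# Version in which the upstream project fixed a (hypothetical) vulnerability.
FIXED_IN = (2, 4, 1)

deployed = {
    "web-frontend": (2, 4, 3),
    "batch-worker": (2, 1, 9),
    "reporting":    (1, 8, 0),
}

for host, version in deployed.items():
    status = "already fixed" if version >= FIXED_IN else "needs back-port or upgrade"
    print(f"{host} ({'.'.join(map(str, version))}): {status}")
```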

What you are basically doing, by choosing to deploy a project rather than a product, is taking the decision to do internal productisation of the project.  You lose not only the commonwealth benefit of security fixes, but also the significant economies of scale that are intrinsic to the vendor-supported product model.  There may also be economies of scope that you miss: many vendors will have multiple products that they support, and will be able to apply security expertise across those products in ways which may not be possible for an organisation whose core focus is not on product support.

These economies are reflected in another possible benefit to the commonwealth of using a vendor: the very fact that multiple customers are consuming their products means that they have an incentive and a revenue stream to spend on security fixes and general features.  There are other types of fixes and improvements on which they may apply resources, but the relative scarcity of skilled security experts means that the principle of comparative advantage suggests that they should be in the best position to apply them for the benefit of the wider community[1].

What if a vendor you use to provide a productised version of an open source project goes bust, or decides to drop support for that product?  Well, this is a problem in the world of proprietary software as well, of course.  But in the case of proprietary software, there are three likely outcomes:

  • you now have no access to the software source, and therefore no way to make improvements;
  • you are provided access to the software source, but it is not available to the wider world, and therefore you are on your own;
  • everyone is provided with the software source, but no existing community exists to improve it, and it either dies or takes significant time for a community to build around it.

In the case of open source, however, if the vendor you have chosen goes out of business, there is always the option to use another vendor, encourage a new vendor to take it on, productise it yourself (and supply it to other organisations) or, if the worst comes to the worst, take the internal productisation route while you search for a scalable long-term solution.

In the modern open source world, we (the community) have got quite good at managing these options, as the growth of open source consortia[2] shows.  In a consortium, groups of organisations and individuals cluster around a software project or set of related projects to encourage community growth, alignment around feature and functionality additions, general security work and productisation for use cases which may as yet be ill-defined, all the while trying to exploit the economies of scale and scope outlined above.  An example of this would be the Linux Foundation’s Confidential Computing Consortium, to which the Enarx project aims to be contributed.

Choosing to consume open source software as a product instead of as a project involves some trade-offs, but from a security point of view at least, the economics for organisations are fairly clear: unless you are in a position to employ ample security experts yourself, products are most likely to suit your needs.


1 – note: I’m not an economist, but I believe that this holds in this case.  Happy to have comments explaining why I’m wrong (if I am…).

2 – “consortiums” if you really must.

Humans and (being bad at) trust

Why “signing parties” were never a good idea.

I went to a party recently, and it reminded me of quite how bad humans are at trust. It was a work “mixer”, and an attempt to get people who didn’t know each other well to chat and exchange some information. We were each given two cards to hang around our necks: one on which to write our own name, and the other on which we were supposed to collect the initials of those to whom we spoke (in their own hand). At the end of the event, the plan was to hand out rewards whose value was related to the number of initials collected. Pens/markers were provided.

I gamed the system by standing by the entrance, giving out the cards, controlling the markers and ensuring that everybody signed my card, hence ending up with easily the largest number of initials of anyone at the party. But that’s not the point. Somebody – a number of people, in fact – pointed out the similarities between this and “key signing parties”, and that got me thinking. For those of you not old enough – or not security-geeky enough – to have come across these, they were events which were popular in the late nineties and early parts of the first decade of the twenty-first century[1] where people would get together, typically at a tech show, and sign each other’s PGP keys. PGP keys are an interesting idea whereby you maintain a public-private key pair which you use to sign emails, assert your identity, etc., in the online world. In order for this to work, however, you need to establish that you are who you say you are, and to do that, you need to convince someone of this fact.

There are two easy ways to do this:

  1. meet someone IRL[2], get them to validate your public key, and sign it with theirs;
  2. have someone who knows the person you met in step 1 agree that they can probably trust you, because the person in step 1 does, and they trust that person.

This is a form of trust based on reputation, and it turns out that it is a terrible model for trust. Let’s talk about some of the reasons for it not working. There are four main ones:

  • context
  • decay
  • transitive trust
  • peer pressure.

Let’s evaluate these briefly.

Context

I can’t emphasise this enough: trust is always, always contextual (see “What is trust?” for a quick primer). When people signed other people’s key-pairs, all they should really have been saying was “I believe that the identity of this person is as stated”, but signatures and encryption based on these keys was (and is) frequently misused to make statements about, or claim access to, capabilities that were not necessarily related to identity.

I lay some of the fault for this at the door of US alcohol consumption policy. Many (US) Americans use their driving licence/license as a form of authorisation: I am over this age, and am therefore entitled to purchase alcohol. It was designed to prove that they were authorised to drive, and nothing more than that, but you can now get a US driving licence to prove your age even if you can’t drive, and it can be used, for instance, as security identification for getting on aircraft at airports. This is crazy, but partly explains why there is such a confusion between identification, authentication and authorisation.

Decay

Trust, as I’ve noted before in many articles, decays. Just because I trust you now (within a particular context) doesn’t mean that I should trust you in the future (in that or any other context). Mechanisms exist within the PGP framework to expire keys, but it was (I believe) typical for someone to re-sign a new set of keys just because they’d signed the previous set. If they were only being used for identity, then that’s probably OK – most people rarely change their identity, after all – but, as explained above, these key pairs were often used more widely.
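
If you wanted to model decay rather than just acknowledge it, even a toy exponential decay of confidence makes the point: an endorsement made years ago should carry much less weight than one made yesterday, unless it has been re-established.  The half-life below is entirely arbitrary.

```python
import math
import time

HALF_LIFE_DAYS = 365.0  # arbitrary: confidence halves every year unless re-established
SECONDS_PER_DAY = 86_400

def current_trust(initial, established_at, now=None):
    now = time.time() if now is None else now
    age_days = (now - established_at) / SECONDS_PER_DAY
    return initial * math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)

# An endorsement made two years ago retains only about a quarter of its original weight.
two_years_ago = time.time() - 2 * 365 * SECONDS_PER_DAY
print(round(current_trust(1.0, two_years_ago), 2))  # ~0.25
```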

Transitive trust

This is the whole “trusting someone because I trust you” problem. Again, if this were only about identity, then I’d be less worried, but given people’s lack of ability to specify context, and their equal inability to communicate that to others, the “fuzziness” of the trust relationships being expressed was only going to increase with the level of transitiveness, reducing the efficacy of the system as a whole.

Peer pressure

Honestly, this occurred to me due to my gaming of the system, as described in the second paragraph at the top of this article. I remember meeting people at events and agreeing to endorse their key-pairs basically because everybody else was doing it. I didn’t really know them, though (I hope) I had at least heard of them (“oh, you’re Denny’s friend, I think he mentioned you”), and I certainly shouldn’t have been signing their key-pairs. I am certain that I was not the only person to fall into this trap, and it’s a trap because humans are generally social animals[3], and they like to please others. There was ample opportunity for people to game the system much more cynically than I did at the party, and I’d be surprised if this didn’t happen from time to time.

Stepping back a bit

To be fair, it is possible to run a model like this properly. It’s possible to avoid all of these by insisting on proper contextual trust (with multiple keys for different contexts), by re-evaluating trust relationships on a regular basis, by being very careful about trusting people just due to their trusting someone else (or refusing to do so at all), and by refusing just to agree to trust someone because you’ve met them and they “seem nice”. But I’m not aware of anyone – anyone – who kept to these rules, and it’s why I gave up on this trust model over a decade ago. I suspect that I’m going to get some angry comments from people who assert that they used (and use) the system properly, and I’m sure that there are people out there who did and do: but as a widespread system, it was only going to work if the large majority of all users treated it correctly, and given human nature and failings, that never really happened.

I’m also not suggesting that we have many better models – but we really, really need to start looking for some, as this is important, and difficult stuff.


1 – I refuse to refer to these years as the “aughts”.

2 – In Real Life – this used to be an actual distinction to online.

3 – even a large enough percentage of IT folks to make this a problem.