Track and trace failure: a systems issue

The problem was not Excel, but that somebody used the wrong tools in a system.

Like many other IT professionals in the UK – and, having spoken to colleagues elsewhere, across the world – I was first surprised and then horrified as I found out more about the failures of the UK testing and track and trace systems. What was most shocking was not that the failure was caused by some alleged problem with Microsoft Excel, but that anyone thought it was a problem with Excel at all. The problem was not Excel, but that somebody used the wrong tools in a system which was not designed properly, tested properly or run properly. I have written many words about systems, and one article in particular seems relevant: If it isn’t tested, it doesn’t work. In it, I assert that a system cannot be said to work properly if it has not been tested: testing is part of what it means for a system to be “working”.

In many software and hardware projects, in order for a piece of work to be complete, it has to meet one or more of a set of tests which allow it to be described as “done”. These tests may be actual software tests, or documentation, or just checks done by members of the team (other than the person who did the piece of work!), but the list needs to be considered and made part of the work definition. This “done” definition is as much a part of the work as the issue being addressed, the functionality added or the documentation written.
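To make this concrete, here’s a minimal sketch (in Rust, and entirely invented for illustration – the function, the record type and the limit are all hypothetical) of what a small part of a “done” definition might look like in code. The point is that behaviour – including the failure modes – isn’t “done” until checks like these exist and pass.

```rust
/// Hypothetical limit: the maximum number of records one batch may hold.
const MAX_RECORDS: usize = 65_536;

/// Ingest a batch of records, rejecting batches that exceed the limit
/// rather than silently truncating them.
fn ingest(records: &[String]) -> Result<usize, String> {
    if records.len() > MAX_RECORDS {
        return Err(format!(
            "batch of {} records exceeds limit of {}",
            records.len(),
            MAX_RECORDS
        ));
    }
    Ok(records.len())
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn accepts_batch_within_limit() {
        let batch = vec!["record".to_string(); 100];
        assert_eq!(ingest(&batch), Ok(100));
    }

    #[test]
    fn rejects_batch_over_limit() {
        // If no test pins down behaviour at the limit, silent data loss
        // can go unnoticed - which is exactly the sort of failure at issue.
        let batch = vec!["record".to_string(); MAX_RECORDS + 1];
        assert!(ingest(&batch).is_err());
    }
}
```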

I find it difficult to believe that there was any such definition for the track and trace system. If there was, then it was not, I’m afraid, defined by someone who is an expert in distributed or large-scale systems. This may not have been the fault of the person who chose Excel for the task of recording information, but it is the fault of the person who was in charge of the system, because Excel is not, and never was, a fit application for what it was being used for. It does not have the scalability characteristics, the integrity characteristics or the versioning characteristics required. This is not the fault of Microsoft, any more than it would be the fault of Porsche if a 911T broke down because its owner filled it with diesel fuel, rather than petrol[1]. Any competent systems architect or software engineer, qualified to be creating such a system, would have known this: the only conclusion that seems possible is that whoever put together the system was unqualified to do so.

There seem to be several options here:

  1. the person putting together the system did not know they were unqualified;
  2. the person putting together the system realised that they were unqualified, but did not feel able to tell anyone;
  3. the person putting together the system realised that they were unqualified, but lied.

In any of the above, where was the oversight? Where was the testing? Where were the requirements? This was a system intended to safeguard the health of millions – millions – of people.

Who can we blame for this? In the end, the government needs to take some large measure of responsibility: they commissioned the system, which means that they should have come up with realistic and appropriate requirements. Requirements of this type may change over the life-cycle of a project, and there are ways to manage this: I recommend a useful book in another article, Building Evolutionary Architectures – for security and for open source. These are not new problems, and they are not unsolved problems: we know how to do this as a profession, as a society.

And then again, should we blame someone? Normally, I’d consider this a question out of scope for this blog, but people may die because of this decision – the decision not to define, design, test and run a system which was fit for purpose. At the very least, there are people who are anxious and worried about whether they have Covid-19, whether they need to self-isolate, whether they may have infected vulnerable friends or family. Blame is a nasty thing, but if it’s about holding people to account, then that’s what should happen.

IT systems are important. Particularly when they involve people’s health, but in many other areas, too: banking, critical infrastructure, defence, energy, even retail and entertainment, where people’s jobs will be at stake. It is appropriate for those of us with a voice to speak out, to remind the IT community that we have a responsibility to society, and to hold those who commission IT systems to account.


1 – or “gasoline” in some geographies.

8 tips on how to do open source (badly)

Generally, a “build successful” or “build failed” message should be sufficient.

A while ago, I published my wildly popular[1] article How not to make a cup of tea. Casting around for something to write this week, it occurred to me that I might write about something that I believe is almost as important to world peace, the forward march of progress and brotherly/sisterly love: open source projects. There are so many guides out there around how to create an open source project that it’s become almost too easy to start a new, successful, community-supported one. I think it’s time to redress the balance, and give you some clues about how not to do it.

Throw it over the wall

You know how it is: you’re a large corporation, you’ve had a team of developers working on a project for several years now, and you’re very happy with it. It’s quite expensive to maintain and improve the code, but luckily, it’s occurred to you that other people might want to use it – or bits of it. And recently, some of your customers have complained that it’s difficult to get improvements and new features, and partners complain that your APIs are obscure, ill-defined and subject to undocumented change.

Then you hit on a brilliant idea: why not open source it and tell them their worries are over? All you need to do is take the existing code, create a GitHub or GitLab[2] project (preferably under a Game of Thrones-themed username that happens to belong to one of your developers), make a public repository, upload all of the code, and put out a press release announcing: a) the availability of your project; and b) what a great open source citizen you are.

People will be falling over themselves to contribute to your project, and you’ll suddenly have hundreds of developers basically working for you for free, providing new features and bug fixes!

Keep a tight grip on the project

There’s a danger, however, when you make your project open source, that other people will think that they have a right to make changes to it. The way it’s supposed to work is that your product manager comes up with a bunch of new features that need implementing, and posts them as issues on the repository. Your lucky new contributors then get to write code to satisfy the new features, you get to test them, and then, if they’re OK, you can accept them into the project! Free development! Sometimes, if you’re lucky, customers or partners (the only two parties of any importance in this process, apart from you) will raise issues on the project repository, which, when appropriately subjected to your standard waterfall development process and vetted by the appropriate product managers, can be accepted as “approved” issues and earmarked for inclusion into the project.

There’s a danger that, as you’ve made the project code open source, some people might see this as an excuse to write irrelevant features and fixes for bugs that none of your customers have noticed (and therefore, you can safely assume, don’t care about). Clearly, in this case, you should close any issues related to such features or fixes, and reject (or just ignore) any related patches.

Worse yet, you’ll sometimes find developers[3] complaining about how you run the project. They may “fork” the project, making their own version. If they do this, beware of setting your legal department on them: there’s a possibility that, as your project is open source, they might be able to argue that they have a right to create a new version. Much better is to set your PR department on them, rubbishing the new project, launching ad hominem[4] attacks on them and showing everybody that you hold the moral high ground.

Embrace diversity (in licensing)

There may be some in your organisation who say that an open source project doesn’t need a licence[5]. Do not listen to their siren song: they are wrong. What your open source project needs is lots of licences! Let your developers choose their favourite for each file they touch, or, even better, let the project manager choose. The OSI maintains a useful list, but consider this just a starting point: why not liberally sprinkle different licences through your project? Diversity, we keep hearing, is good, so why not apply it to your open source project code?
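To get you started, here’s a sketch (hypothetical, and – for anyone skim-reading – deliberately terrible advice) of what this looks like in practice: two files from the same imaginary Rust project, each proudly bearing its own SPDX licence identifier. Anyone who asks what licence “the project” is under can be directed to the source to work it out for themselves.

```rust
// src/parser.rs
// SPDX-License-Identifier: GPL-3.0-only

/// Split input into whitespace-separated tokens.
pub fn parse(input: &str) -> Vec<&str> {
    input.split_whitespace().collect()
}
```

```rust
// src/formatter.rs
// SPDX-License-Identifier: Apache-2.0

/// Join tokens back together - under a different licence, naturally.
pub fn format_tokens(tokens: &[&str]) -> String {
    tokens.join(" ")
}
```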

Avoid documentation

Some people suggest that documentation can be useful for open source projects, and they are right. What few of them seem to understand is that their expectations for the type of documentation are likely to be skewed by their previous experience. You, on the other hand, have a wealth of internal project documentation and external product marketing material that you accrued before deciding to make your project open source: this is great news. All you need to do is to create a “docs” folder and copy all of the PDF files there. Don’t forget to update them whenever you do a new product version!

Avoid tooling

All you need is code (and docs – see above). Your internal developers have carefully constructed and maintained build environments, and they should therefore have no problems building and testing any parts of the project. Much of this tooling, being internal, could be considered proprietary, and the details must therefore be kept confidential. Any truly useful contributors will be able to work out everything they need for themselves, and shouldn’t need any help, so providing any information about how actually to build the code in the repository is basically redundant: don’t bother.

An alternative, for more “expert” organisations, is to provide build environments which allow contributors to submit batch builds to see if they compile or not (whilst avoiding giving them access to the tooling itself, obviously). While this can work, beware of providing too much in the way of output for the developer/tester, as this might expose confidential information. Generally, a “build successful” or “build failed” message should be sufficient.

Avoid diagrams

Despite what some people think, diagrams are dangerous. They can give away too much information about your underlying assumptions for the project (and, therefore, the product you’re selling which is based on it), and serious developers should be able to divine all they need from the 1,500 source files that you’ve deposited in the repository anyway.

A few “marketiture” diagrams from earlier iterations of the project may be acceptable, but only if they are somewhat outdated and don’t provide any real insight into the existing structure of the code.

Keep quiet

Sometimes, contributors – or those hoping to become contributors, should you smile upon their requests – will ask questions. In the old days, these questions tended to be sent to email lists[6], where they could be safely ignored (unless they were from an important customer). More recently, there are other channels that developers expect to use to contact members of the project team, such as issues or chat.

It’s important that you remember that you have no responsibility or duty to these external contributors: they are supposed to be helping you. What’s more, your internal developers will be too busy writing code to answer the sort of uninformed queries that are likely to be raised (and as for so-called “vulnerability disclosures”, you can just fix those in your internal version of the product, or at least reassure your customers that you have). Given that most open source projects will come with an issue database, and possibly even a chat channel, what should you do?

The answer is simple: fall back to email. Insist that the only channel which is guaranteed attention[7] is email. Don’t make the mistake of failing to create an email address to which people can send queries: contributors are much more likely to forget that they’re expecting an answer to an email if they get a generic auto-response (“Thanks for your email: a member of the team should get back to you shortly”) than if they receive a bounce message. Oh, and close any issues that people create without your permission for “failing to follow project process”[8].

Post huge commits

Nobody[9] wants to have to keep track of lots of tiny changes to code (or, worse, documentation – see above), or have contributors picking holes in it. There’s a useful way to avoid much of this, however, which is to train your developers only to post large commits to the open source project. You need to ensure that your internal developers understand that code should only be posted to the external repository when the project team (or, more specifically, the product team) deems it ready. Don’t be tempted to use the open source repository as your version control system: you should have perfectly good processes internally, and, with a bit of automation, you can set them up to copy batches of updates to the external repository on a regular basis[10].


1 – well, lots of you read it, so I’m assuming you like it.

2 – other public repositories may be available, but you won’t have heard of them, so why should you care?

3 – the canonical term for such people is “whingers”: they are invariably “experts”. According to them (and their 20 years of security experience, etc., etc.).

4 – or ad mulierem – please don’t be sexist in your attacks.

5 – or license, depending on your spelling choice.

6 – where they existed – a wise organisation could carefully avoid creating them.

7 – “attention” can include a “delete all” filter.

8 – you don’t actually need to define what the process is anywhere, obviously.

9 – in your product organisation, at least.

10 – note that “regular” does not equate to “frequent”. Aim for a cadence of once every month or two.

More Rusty thoughts

I do feel that I’m now actually programming in Rust. And I like it.

I wrote an article a couple of weeks ago – 5 Rust reflections (from Java) – about learning Rust, and specifically, about moving to Rust from Java. I talked about five particular points:

  1. Rust feels familiar
  2. References make sense
  3. Ownership will make sense
  4. Cargo is helpful
  5. The compiler is amazing

I absolutely stand by all of these, but I’ve got a little more to say, because I now feel like a Rustacean[1], in that:

  • I don’t feel like programming in anything else ever again;
  • I’ve moved away from simple incantations.

What do I mean by these two statements? Well, the first is pretty simple: Rust feels like the place to be. It’s well-structured, it’s expressive, it helps you do the right thing[2], it’s got great documentation and tools, and there’s a fantastic community. And, of course, it’s all open source, which is something that I care about deeply.

And the second thing? Well, I decided that in order to learn Rust properly, I should take an existing project that I had originally written in Java and reimplement it in hopefully fairly idiomatic Rust. Sometime in the middle of last week, I started fixing mistakes – and making mistakes – around implementation, rather than around syntax. And I wasn’t just copying text from tutorials or making minor, seemingly random changes to my code based on the compiler output. In other words, I was getting things to compile, understanding why they compiled, and then just making programming mistakes[3].

This is a big step forward. When you start learning a language, it’s easy just to copy and paste text that you’ve seen elsewhere, or fiddle with unfamiliar constructs until they – sort of – work. Using code – or producing code – that you don’t really understand, but seems to work, is sometimes referred to as “using incantations” (from the idea that most magicians in fiction, film and gaming recite collections of magic words which “just work” without really understanding what they’re doing or what the combination of words actually means). Some languages[4] are particularly prone to this sort of approach, but many – most? – people learning a new language will be prone to doing this when they start out, just because they want things to work.
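To give a concrete (and entirely invented) Rust flavour of the difference: the incantation approach sprinkles .clone() around until the borrow checker stops complaining; understanding means knowing why a shared borrow was all you needed in the first place.

```rust
// Both of these compile - but for different reasons.

fn count_long_words_incantation(words: &Vec<String>, min_len: usize) -> usize {
    // The "incantation" version: the compiler once complained about a
    // move, cloning made the error go away, and so the clone stays -
    // copying every String in the vector for no reason.
    let copy = words.clone();
    copy.iter().filter(|w| w.len() > min_len).count()
}

fn count_long_words(words: &[String], min_len: usize) -> usize {
    // The "understood" version: we only read the data, so a shared
    // borrow is enough - no ownership transfer, no allocation.
    words.iter().filter(|w| w.len() > min_len).count()
}

fn main() {
    let words = vec!["ownership".to_string(), "borrow".to_string()];
    assert_eq!(count_long_words_incantation(&words, 6), 1);
    assert_eq!(count_long_words(&words, 6), 1);
}
```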

And last night, I was up till 1am implementing a new feature – accepting command-line input – which I really couldn’t get my head round. I’d spent quite a lot of time on it (including looking for, and failing to find, some appropriate incantations), and then asked for some help on a rust-lang channel inhabited by some people I know. A number of people had made some suggestions about what had been going wrong, and one person in particular was enormously helpful in picking apart some of the suggestions so that I understood them better. He explained quite a lot, but finished with “I don’t know the return type of the hash function you’re calling – I think this is a good spot for you to figure this piece out on your own.”

This was just what I needed – and what any learner of anything, including programming languages, needs. So when I had to go downstairs at midnight to let the dog out, I decided to stay down and see if I could work things out for myself. And I did. I took the suggestions that people had made, understood what they were doing, tried to divine what they should be doing, worked out how they should be doing it, and then found the right way of making it happen.
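For flavour, here’s a minimal sketch of the kind of thing I was wrestling with – to be clear, this is a reconstruction for illustration, not my actual code, and it assumes the standard library’s DefaultHasher rather than whatever hash function you may be using. The “aha” is in the types: Hasher::finish() hands back a plain u64.

```rust
use std::collections::hash_map::DefaultHasher;
use std::env;
use std::hash::{Hash, Hasher};

fn main() {
    // Take the first command-line argument, or exit with a usage message.
    let input = env::args().nth(1).unwrap_or_else(|| {
        eprintln!("usage: hashdemo <value>");
        std::process::exit(1);
    });

    // Feed the input to the hasher...
    let mut hasher = DefaultHasher::new();
    input.hash(&mut hasher);

    // ...and pull out the result: finish() returns a u64, not some
    // opaque hash object - the piece I had to figure out for myself.
    let digest: u64 = hasher.finish();

    println!("{} hashes to {:x}", input, digest);
}
```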

I’ve still got lots to learn, and I’ll make lots of mistakes still, but I now feel that I’m in a place to find my way through those mistakes (with a little help along the way, probably – thanks to everyone who’s already pointed me in the right direction). But I do feel that I’m now actually programming in Rust. And I like it.


1 – this is what Rust programmers call themselves.

2 – it’s almost impossible to stop people doing the wrong thing entirely, but encouraging people to do the right thing is great. In fact, Rust goes further, and actually makes it difficult to do the wrong thing in many situations. You really have to try quite hard to do bad things in Rust.

3 – I found a particularly egregious off-by-one error in my code, for instance, which had nothing to do with Rust, and everything to do with my not paying enough attention to the program flow.

4 – *cough* Perl *cough*

Your job is unimportant (keep doing it anyway)

Keep going, but do so with a sense of perspective.

I work in IT – like many of the readers of this blog. Also like many of the readers of this blog, I’m now working from home (which is actually normal for me), but with all travel pretty much banned for the foreseeable future (which isn’t). My children’s school is still open (unlike many other countries, the UK has yet to order schools closed), but when the time does come for them to be at home, my kids are old enough that they will be able to look after themselves without constant input from me. I work for Red Hat, a global company with resources to support its staff and keep its business running during the Covid-19 crisis. In many ways, I’m very lucky.

My wife left the house at 0630 this morning to go into London. She works for a medium-sized charity which provides a variety of types of care for adults and children. Some of the adults for whom they provide services, in particular, are extremely vulnerable – both in terms of their day-to-day lives and in terms of the possible effects of serious illness. She is planning the charity’s responses, coordinating with worried staff and working out how they’re going to weather the storm. Charities and organisations like this across the world are working to manage their staff and service users and to continue provision at levels that will keep their service users safe and alive, in a context where it’s likely that the availability of back-up help from other quarters – agency staff, other charities, public or private health services or government departments – will be severely limited in scope, or totally lacking.

In comparison to what my wife is doing, the impact of my job on society seems minimal, and my daily work irrelevant. Many of my readers may be in a similar situation: it is spouses, family members or other people in the community who are doing the obviously important – often life-preserving – work, while we sit at home, appearing on video conferences, writing documents, cutting code, doing things which don’t seem to have much impact.

I think it’s important, sometimes, to look at what we do with a different eye, and this is one of those times. However, I’m going to continue working, and here are some of the reasons:

  • I expect to continue to bring in a salary, which is going to be difficult for many people in the coming months. I hope to be able to spend some of that salary in local businesses, keeping them afloat or easing their transition back into normality in the future;
  • it’s my turn to keep the household running: my wife has often had to keep things going while I’ve been abroad, and I’m grateful for the opportunity to look after the children, shop for groceries and do more cooking;
  • while I’m not sick, there are going to be ways in which I can help our local community, with food deliveries, checks on elderly neighbours and the like.

Finally, the work that I – and the readers of this blog – do is, while obviously less important and critical than that of my wife and others on the front line of this crisis, still relevant. My wife spent several hours at work creating an online survey to help work out which of her charity’s staff and volunteers could be deployed to which services. Without the staff who run that service, she would be without that capability. Online banking will continue to be important. Critical national infrastructure like power and water needs to be kept going; logistics services for food delivery are vital; messaging and conferencing services will provide important means for communication; gaming, broadcast and online entertainment services will keep those who are in isolation occupied; and, at the very least, we need to keep businesses going so that when things recover, we can get the economy going again. That, and there are going to be lots of charities, businesses and schools who need the services that we provide right now.

So, my message today is: keep going, but do so with a sense of perspective. And be ready to use your skills to help out. Keep safe.

2019: a year of Enarx

We have big plans for demos and more in 2020

This year has, for me, been pretty much all about the Enarx project.  I’ve had other work that I’ve been doing, including meeting with customers, participating in work with IBM (who acquired the company I work for, Red Hat, in July), looking at Kubernetes security, interacting with partners and a variety of other important pieces, but it’s been Enarx that has defined 2019 for me from a work point of view.

We started off the year with a belief that we could do something, and a challenge from our internal leadership to prove that it was possible.  We did that with a demo on AMD’s SEV chipset at Red Hat Summit in Boston, MA in May, and an announcement of the project on this blog.  We followed up with a demo on Intel’s SGX chipset at Open Source Summit Europe in Lyon in October.  I thought I would mention some of the most important components for the development (in the broadest sense) of Enarx this year.

Team

Enarx is not mine: far from it.  I’m proud to be counted one of the co-founders of the project with Nathaniel McCallum, but we wouldn’t be where we are without a broader team, and as an open source project, it belongs to everyone who contributes and to everyone who uses it.  You’ll find many of the members on the contributors page, but not everybody is up there yet, and there have been some very important people whose contribution has been advice, support and sponsorship of the project both within Red Hat and outside it.  I don’t have permission to mention everybody’s name, so I’m going to play it safe and mention none of them.  You know who you are, and we really appreciate your time.

Use cases – and partners

One of the most important things that we’ve done this year is to work out how people might want to use Enarx “in the wild”, as it were, and to perform some fairly detailed analysis and write-ups.  Not enough of these are externally available yet, which is down to me, but the fact that we had done the work was vital in finding partners who are actually interested in using Enarx for real.  I can’t talk about any of these in public yet, but we have some really interesting use cases from a number of multi-national organisations of whom you will definitely have heard, as well as some smaller start-ups about whom you may well be hearing more in the future.  Having this kind of interest was vital to get buy-in to the project and showed that Enarx wasn’t just a flight of fancy by a bunch of enthusiastic engineers.

Looking outside

The most significant event in the project’s year was the announcement of the Confidential Computing Consortium at the Linux Foundation’s Open Source Summit this year.  We at Red Hat realised that Enarx was a great match for this new group, and were very pleased to be a premier member at the official launch in October.  At time of writing, there are 21 members, and it’s becoming clear that the consortium has identified an area of concern and interest for the wider industry: this is another great endorsement of the aims and principles of Enarx.

Joining the Consortium hasn’t been the only activity in which we’ve been involved this year.  We’ve spoken at conferences, had articles published (on Alice, Eve and Bob, on now + Next and on Opensource.com), spoken to press, recorded webcasts and more.  Most important (arguably), we have hex stickers (if you’re interested, get in touch!).

Last, but not least, we’ve gone external.  From being an internal project (though we always had our code as open source), we’ve taken a number of measures to try to encourage and simplify involvement by non-Red Hat contributors – see 7 tips for kicking off an open source project for a little more information.

Architecture and code

What else?  Oh, there’s code, and an increasingly mature set of architectures for the various components.  We absolutely plan to make all of this externally visible; the reason we haven’t yet is that we’re just running to stand still at the moment: there’s just so much to do.  Our focus is on getting code out there for people to use and contribute to themselves and, without giving anything away, we have some pretty big plans for demos and more in 2020.

Finally

There’s one other thing that’s been important, of course, and that’s the fact that I’m writing a book for Wiley on trust, but I actually see that as very much related to Enarx.  Fundamentally, although the technology is cool, and we think that the Enarx project meets an existing need, both Nathaniel and I believe that there’s a real opportunity for it to change how people manage trust for workloads in the cloud, in IoT, at the Edge and wherever else sensitive data and algorithms need to be executed.

This blog is supposed to be about security, and I’m strongly of the opinion that trust is a very important part of that.  Enarx fits into that, so don’t be surprised to see more posts around trust and about Enarx over the coming year.  Please keep an eye out here and at https://enarx.io for the latest information.

Of projects, products and (security) community

Not all open source is created (and maintained) equal.

Open source is a good thing.  Open source is a particularly good thing for security.  I’ve written about this before (notably in Disbelieving the many eyes hypothesis and The commonwealth of Open Source), and I’m going to keep writing about it.  In this article, however, I want to talk a little more about a feature of open source which is arguably both a possible disadvantage and a benefit: the difference between a project and a product.  I’ll come down firmly on one side (spoiler alert: for organisations, it’s “product”), but I’d like to start with a little disclaimer.  I am employed by Red Hat, and we are a company which makes money from supporting open source.  I believe this is a good thing, and I approve of the model that we use, but I wanted to flag any potential bias early in the article.

The main reason that open source is good for security is that you can see what’s going on when there’s a problem, and you have a chance to fix it.  Or, more realistically, unless you’re a security professional with particular expertise in the open source project in which the problem arises, somebody else has a chance to fix it. We hope that there are sufficient security folks with the required expertise to fix security problems and vulnerabilities in software projects about which we care.

It’s a little more complex than that, however.  For an organisation, there are two main ways to consume open source:

  • as a project: you take the code, choose which version to use, compile it yourself, test it and then manage it.
  • as a product: a vendor takes the project, chooses which version to package, compiles it, tests it, and then sells support for the package, typically including docs, patching and updates.

Now, there’s no denying that consuming a project “raw” gives you more options.  You can track the latest version, compiling and testing as you go, and you can take security patches more quickly than the product version may supply them, selecting those which seem most appropriate for your business and use cases.  On the whole, this seems like a good thing.  There are, however, downsides which are specific to security.  These include:

  1. some security fixes come with an embargo, to which only a small number of organisations (typically the vendors) have access.  Although you may get access to fixes at the same time as the wider ecosystem, you will need to check and test these yourself (unless you blindly apply them – don’t do that), work which the vendors will already have performed.
  2. the huge temptation to make changes to the code that don’t necessarily – or immediately – make it into the upstream project means that you are likely to be running a fork of the code.  Even if you do manage to get these changes upstream in time, during the period that you’re running the changes but they’re not upstream, you run a major risk that any security patches will not be immediately applicable to your version (this is, of course, true for non-security patches, but security patches are typically more urgent).  One option, of course, if you believe that your version is likely to be consumed by others, is to make an official fork of the project, and try to encourage a community to grow around that, but in the end, you will still have to decide whether to support the new version internally or externally.
  3. unless you ensure that all instances of the software are running the same version in your deployment, any back-porting of security fixes to older versions will require you to invest in security expertise equal or close to equal to that of the people who created the fix in the first place.  In this case, you are giving up the “commonwealth” benefit of open source, as you need to pay experts who duplicate the skills of the community.

What you are basically doing, by choosing to deploy a project rather than a product, is taking the decision to do internal productisation of the project.  You lose not only the commonwealth benefit of security fixes, but also the significant economies of scale that are intrinsic to the vendor-supported product model.  There may also be economies of scope that you miss: many vendors will have multiple products that they support, and will be able to apply security expertise across those products in ways which may not be possible for an organisation whose core focus is not on product support.

These economies are reflected in another possible benefit to the commonwealth of using a vendor: the very fact that multiple customers are consuming their products means that vendors have an incentive and a revenue stream to spend on security fixes and general features.  There are other types of fixes and improvements on which they may apply resources, but the relative scarcity of skilled security experts means that the principle of comparative advantage suggests that they should be in the best position to apply them for the benefit of the wider community[1].

What if a vendor you use to provide a productised version of an open source project goes bust, or decides to drop support for that product?  Well, this is a problem in the world of proprietary software as well, of course.  But in the case of proprietary software, there are three likely outcomes:

  • you now have no access to the software source, and therefore no way to make improvements;
  • you are provided access to the software source, but it is not available to the wider world, and therefore you are on your own;
  • everyone is provided with the software source, but no community exists to improve it, and it either dies or takes significant time for a community to build around it.

In the case of open source, however, if the vendor you have chosen goes out of business, there is always the option to use another vendor, encourage a new vendor to take it on, productise it yourself (and supply it to other organisations) or, if the worst comes to the worst, take the internal productisation route while you search for a scalable long-term solution.

In the modern open source world, we (the community) have got quite good at managing these options, as the growth of open source consortia[2] shows.  In a consortium, groups of organisations and individuals cluster around a software project or set of related projects to encourage community growth, alignment around feature and functionality additions, general security work and productisation for use cases which may as yet be ill-defined, all the while trying to exploit the economies of scale and scope outlined above.  An example of this would be the Linux Foundation’s Confidential Computing Consortium, to which the Enarx project aims to be contributed.

Choosing to consume open source software as a product instead of as a project involves some trade-offs, but from a security point of view at least, the economics for organisations are fairly clear: unless you are in a position to employ ample security experts yourself, products are most likely to suit your needs.


1 – note: I’m not an economist, but I believe that this holds in this case.  Happy to have comments explaining why I’m wrong (if I am…).

2 – “consortiums” if you really must.

What is confidential computing?

Industry interest has been high, and overwhelmingly positive.

On Wednesday, 21st August, 2019 (just under a week ago, at time of writing), Jim Zemlin of the Linux Foundation announced the intent to form the Confidential Computing Consortium, with members including Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom and Tencent.  I’m particularly proud as Red Hat (my employer) is one of those[1], and I spent the preceding few weeks and days working very hard to ensure that we would be listed as one of the planned founding members.

“Confidential Computing” sounds like a lofty goal, and it is.  We’ve known for ages that you should encrypt sensitive data at rest (in storage) and in transit (on the network), but confidential computing, as defined by the consortium, is about doing the same for sensitive data – and algorithms – in use.  The consortium plans to encourage industry to use hardware technologies generally called Trusted Execution Environments (TEEs) to allow applications and processes to be encrypted as they are running.

This may sound somewhat familiar to those who follow my blog, and it should: Enarx, an open source project launched by Red Hat, was announced as one of the projects that should be part of the initial launch.  I’ve written about Enarx in several places on this blog and elsewhere.

Additionally, you’ll find lots of information on the introduction page of the Enarx wiki.

The press release from the Linux Foundation lists the following goals for the Confidential Computing Consortium:

The Confidential Computing Consortium will bring together hardware vendors, cloud providers, developers, open source experts and academics to accelerate the confidential computing market; influence technical and regulatory standards; and build open source tools that provide the right environment for TEE development. The organization will also anchor industry outreach and education initiatives.

Enarx, of course, fits perfectly into this description – particularly the goal of building open source tools that provide the right environment for TEE development.  Beyond that, however, is the alignment with the other aims of the Enarx project, and the opportunities that a wider consortium presents.  The addition of hardware vendors gives us – and the other participants – opportunities to discuss implementations (hardware and software) in an open environment, cloud providers and other users will give us great use cases, and academic involvement broadens the likelihood of quick access to new ideas and research.

We also expect industry and regulatory standards to be forthcoming, and a need for education as more sectors and industries engage with confidential computing: the consortium provides a framework for engaging in related activities.

It’s early days for the Confidential Computing Consortium, but I’m really hopeful and optimistic.  Already, the openness displayed between the planned members on both technical and non-technical collaboration has gone far beyond what I would have expected.  The industry interest – as evidenced by press and community activities – has been high, and overwhelmingly positive. Fans of Enarx – and confidential computing generally – should be excited by the prospect of greater visibility and collaboration.  After all, isn’t that what open source is about in the first place?


1 – this seems like a good place to point out that the views in this article and blog are my own, and may not represent those of my employer, of the Confidential Computing Consortium, the Linux Foundation or any other body.

Equality in volunteering and open source

Volunteering favours the socially privileged

Volunteering is “in”. Lots of companies – particularly tech companies, it seems – provide incentives to employees to volunteer for charities, NGOs and other “not-for-profits”. These incentives range from donation matching to paid volunteer days to matching hours worked for a charity with a cash donation.

Then there are other types of voluntary work: helping out at a local sports club, mowing a neighbour’s lawn or fetching their groceries, and, of course, open source, which we’ll be looking at in some detail. There are countless thousands of projects which could benefit from your time.

Let’s step back first and look at the benefits of volunteering. The most obvious, of course, is the direct benefit to the organisation, group or individual of your time and/or expertise. Then there’s the benefit to the wider community. Having people volunteer their time to help out with various groups – particularly those with whom they would have little contact in other circumstances – helps social cohesion and encourages better understanding of differing points of view, as you meet people, and not just opinions.

Then there’s the benefit to you. Helping others feels great, looks good on your CV[1], can give you more skills and can make you friends – quite apart from the benefit I mentioned above about helping you to understand differing points of view. On the issue of open source, it’s something that lots of companies – certainly the sorts of companies with which I’m generally involved – are interested in, or even expect to see on your CV. Your contributions to open source projects are visible – unlike whatever you’ve been doing in most other jobs – and can be looked over; they show commitment, and are also a way of gauging your enthusiasm, expertise and knowledge in particular areas. All this seems to make lots of sense, and until fairly recently, I was concerned when I was confronted with a CV which didn’t have any open source contributions that I could check.

The inequality of volunteering

And then I did some reading by a feminist open sourcer (I’m afraid that I can’t remember who it was[3]), and did a little more digging, and realised that it’s far from that simple. Volunteering is an activity which favours the socially privileged – whether in terms of income, gender, language or any number of other indicators. That’s particularly true for software and open source volunteering.

Let me explain. We’ll start with the gender issue. On average, you’re much less likely to have spare time to be involved in an open source project if you’re a woman, because women, on average, have more responsibilities in the home and less free time. They are also, globally, less likely to have access to computing resources with which to contribute, due to wage discrepancies. Even beyond that, they are less likely to be welcomed into communities and have their contributions valued, whilst being more likely to attract abuse.

If you are in a low income bracket, you are less likely to have time to volunteer, and again, to have access to the resources needed to contribute.

If your first language is not English, you are less likely to be able to find an accepting project, and more likely to receive abuse for not explaining what you are doing.

If your name reflects a particular ethnicity, you may not be made to feel welcome in some contexts online.

If you are not neurotypical (e.g. you have Asperger’s, are on the autism spectrum or are dyslexic), you may face problems in engaging in the social activities – online and offline – which are important to full participation in many projects.

The list goes on. There are, of course, many welcoming projects and communities that attempt to address all of these issues, and we must encourage that. Some people who are disadvantaged in terms of some of the privilege-types that I’ve noted above may actually find that open source suits them very well, as their lack of privilege can be hidden online in ways in which it could not be in other settings, and some communities make a special effort to be welcoming and accepting.

However, if we just assume – that’s unconscious bias, folks – that volunteering, and specifically open source volunteering, is a sine qua non for “serious” candidates for roles, or a foundational required expertise for someone we are looking to employ, then we set a dangerous precedent, and run a very real danger of reinforcing privilege, rather than reducing it.

What can we do?

First, we can make our open source projects more welcoming, and be aware of the problems that those from less privileged groups may face. Second, we must be aware, and make our colleagues aware, that when we are interviewing and hiring, lack of evidence of volunteering is not evidence that the person is not talented, enthusiastic or skilled. Third, and always, we should look for more ways to help those who are less privileged than us to overcome the barriers to accessing not only jobs but also volunteering opportunities, which will benefit not only them, but our communities as a whole.


1 – Curriculum vitae[2].

2 – Oh, you wanted the Americanism? It’s “resume” or something similar, but with more accents on it.

3 – a friend reminded me that it might have been this: https://www.ashedryden.com/blog/the-ethics-of-unpaid-labor-and-the-oss-community

On conversation and the benefits of boasting

On Monday and Tuesday this week I’m attending DevSecCon in Boston – a city which is much more pleasant when it’s not raining or snowing, which it often seems to be doing while I’m here.  There are a bunch of interesting talks[1] and workshops, and I was asked, at the last minute, to facilitate an “Open Space Discussion” at the end of the first day (as two people hadn’t arrived as expected).  Facilitating discussions is about not talking all the time, but encouraging other people to talk[2]: my approach to this is to tell a story, and then encourage them to share stories.

People enjoy listening to stories, and people enjoy telling stories, and there is a type of story that is particularly useful and important in the world of work: “war-stories”.  Within the IT industry, at least, this refers to stories about experiences – usually bad experiences – from our day-to-day working lives.  They are often used to illustrate a point or lend experiential weight to an opinion being put forward. But they are also great learning experiences.

What I learned yesterday – or re-learned – is the immense value of conversation with our peers in a neutral setting, with no formal bounds or difference in “rank”.  We had at least one participant who was only two years out of college, participants with 25-30 years of experience, a CISO of a major healthcare provider, a CEO, DevOps engineers, customer-facing people, security people, non-security people, people with Humanities[4] degrees, people with Computer Science degrees.  We were about twelve people, and everybody contributed, to greater or lesser degrees.  I hope that we managed to maintain a conversation where age and numbers of years in the industry were unimportant, but the experiences shared were.

And I learned about other people’s opinions, their viewpoints, their experiences, their tips for what works – and doesn’t work – and made, I hope, some new friends.  Certainly some new peers.  What we talked about isn’t vitally important to this article[5]: the important thing was the conversation, and the stories people told that brought their shared wisdom to the table.  I felt, by the end of the session, that we had added something to the commonwealth of knowledge within the industry.

I was looking for a way to close the session as we were moving to the end, and hit upon something which seemed to work: I encouraged everybody to spend 30 seconds or so to tell the group about an incident in their career that they are proud of.  We got some great stories, and not only did we learn from them, but I think it’s really important that we get the chance to express our pride in the things that we’ve done.  We rarely get the chance to boast, or to let people outside our general circle know why we think we should be valued.  There’s nothing wrong with being proud of the things we’ve done, but we’re often – usually – discouraged from doing so.  It was great to have people share their various experiences of personal expertise, and to think about how they would use them to further their career.  I didn’t force everybody to speak – and was thanked by one of the silent participants later – and it’s important to realise that not everybody will be happy doing so.  But I think that the rapport that we’d built as a group meant that more people were happy to contribute something than would have considered it at the beginning of the session.  I left with a respect for all of the participants, and a realisation of the importance of shared experience.


1 – I gave a talk based on my blog article Why I love technical debt. I found it interesting…

2 – based on this definition, it may surprise regular readers – and people who know me IRL[3] – that I’d even consider participating, let alone facilitating.

3 – does anybody use this term anymore?

4 – Liberal Arts/Social Sciences.

5 – but included:

  • the impact of different open source licences
  • how legal teams engage with open source questions
  • how to encourage more conversation between technical and legal folks
  • the importance of systems engineering
  • how to talk to customers and vendors
  • how to build teams through social participation[6]
  • the NIST 800 series and other models to consider security
  • risk: how to talk about it, measure it, discuss it with other functions within the organisation.

6 – the word “beer” came up.  From somebody else, on this occasion.