2019: a year of Enarx

We have big plans for demos and more in 2020!

 

For me, 2019 has been pretty much all about the Enarx project.

I've had other work to do as well, such as customer meetings, work with IBM (which acquired Red Hat, the company I work for, in July), Kubernetes security, collaboration with partner companies and various other important things, but Enarx has been the highlight of 2019 for me.

 

We started the year with the belief that there was something we could achieve, and a challenge from our internal leadership team to prove that it was possible.

We answered that challenge with a demo on AMD's SEV chipset at Red Hat Summit in Boston in May, and an announcement of the project on this blog.

We followed up with a demo on Intel's SGX chipset at Open Source Summit in Lyon in October. I see these as some of the most important moments in Enarx's development in 2019.

 

Team

 

Enarx is, of course, not mine alone. I'm very proud to be one of the co-founders of the project along with Nathaniel McCallum, but we wouldn't have got this far without many team members, and, as an open source project, without everyone who contributes to it and uses it. You'll find many of their names on the contributors page, though not everyone is listed there yet. The advice, support and sponsorship that several people, both inside and outside Red Hat, have given the project has also been invaluable. I don't have permission to name them, so I'll play it safe and not mention them here. We're very grateful for your support and your time.

 

Use cases and partners

 

One of the most important things we achieved in 2019 was working out how people might want to use Enarx "in the wild", and performing and writing up some fairly detailed analysis of that.

Not all of it has been published yet (that one is on me), but this work was essential in finding partners who actually want to use Enarx. I can't talk about them publicly yet, but we have some very interesting use cases from several global companies you will certainly have heard of, as well as from start-ups you may well hear more about in the future. This kind of interest is vital for getting buy-in to the project, and shows that Enarx isn't just a flight of fancy by a bunch of enthusiastic engineers.

 

Looking outside

 

The most significant event of 2019 was the announcement of the Confidential Computing Consortium at the Linux Foundation's Open Source Summit. We at Red Hat felt that Enarx was a great match for this new group, and were delighted to become a premier member at its official launch in October. At the time of writing (31st December 2019) there are 21 members, and it has become clear that the consortium has identified an area of concern and interest across a broad section of the industry. That is a strong endorsement of Enarx's aims and principles.

 

Joining the consortium wasn't the only thing we achieved in 2019. We spoke at conferences, published articles on this blog, on Next.redhat.com and on Opensource.com, talked to the press, recorded webcasts and more. Most importantly, we made hexagonal stickers! (If you'd like one, please get in touch.)

 

Last, but not least: we have gone public with the project. We've been working to move from an internal project to one that encourages participation from outside Red Hat. See the blog post of 17th December for details.

 

Architecture and code

 

What else is there? Ah yes, code. And an increasingly mature set of architectures for the various components.

We absolutely plan to make all of this externally visible, but we're not there yet: there's just so much to do. We're focused on getting code out there for people to use, and we have some big plans for demos and more in 2020.

 

Finally

 

There's one other important thing, of course: I'm writing a book on trust for Wiley, and it's closely related to Enarx. Fundamentally, although the technology is very cool and the Enarx project meets an existing need, Nathaniel and I believe this is a real opportunity to change how people manage workloads in the cloud, in IoT, at the edge, and wherever else sensitive data and algorithms are deployed.

 

This blog is about security, and I believe trust is a very important part of that. Enarx fits right in, so expect more posts about trust and Enarx over the coming year. Keep an eye on Enarx.io for the latest information.

 

Original article: https://aliceevebob.com/2019/12/31/2019-a-year-of-enarx/

31st December 2019, Mike Bursell

 

Tags: security, Enarx, open source, cloud

 

2019: a year of Enarx

We have big plans for demos and more in 2020


This year has, for me, been pretty much all about the Enarx project.  I’ve had other work that I’ve been doing, including meeting with customers, participating in work with IBM (who acquired the company I work for, Red Hat, in July), looking at Kubernetes security, interacting with partners and a variety of other important pieces, but it’s been Enarx that has defined 2019 for me from a work point of view.

We started off the year with a belief that we could do something, and a challenge from our internal leadership to prove that it was possible.  We did that with a demo on AMD’s SEV chipset at Red Hat Summit in Boston, MA in May, and an announcement of the project on this blog.  We followed up with a demo on Intel’s SGX chipset at Open Source Summit Europe in Lyon in October.  I thought I would mention some of the most important components for the development (in the broadest sense) of Enarx this year.

Team

Enarx is not mine: far from it.  I’m proud to be counted one of the co-founders of the project with Nathaniel McCallum, but we wouldn’t be where we are without a broader team, and as an open source project, it belongs to everyone who contributes and to everyone who uses it.  You’ll find many of the members on the contributors page, but not everybody is up there yet, and there have been some very important people whose contribution has been advice, support and sponsorship of the project both within Red Hat and outside it.  I don’t have permission to mention everybody’s name, so I’m going to play it safe and mention none of them.  You know who you are, and we really appreciate your time.

Use cases – and partners

One of the most important things that we’ve done this year is to work out how people might want to use Enarx “in the wild”, as it were, and to perform some fairly detailed analysis and write-ups.  Not enough of these are externally available yet, which is down to me, but the fact that we had done the work was vital in finding partners who are actually interested in using Enarx for real.  I can’t talk about any of these in public yet, but we have some really interesting use cases from a number of multi-national organisations of whom you will definitely have heard, as well as some smaller start-ups about whom you may well be hearing more in the future.  Having this kind of interest was vital to get buy-in to the project and showed that Enarx wasn’t just a flight of fancy by a bunch of enthusiastic engineers.

Looking outside

The most significant event in the project’s year was the announcement of the Confidential Computing Consortium at the Linux Foundation’s Open Source Summit this year.  We at Red Hat realised that Enarx was a great match for this new group, and were very pleased to be a premier member at the official launch in October.  At time of writing, there are 21 members, and it’s becoming clear that the consortium has identified an area of concern and interest for the wider industry: this is another great endorsement of the aims and principles of Enarx.

Joining the Consortium hasn’t been the only activity in which we’ve been involved this year.  We’ve spoken at conferences, had articles published (on Alice, Eve and Bob, on now + Next and on Opensource.com), spoken to press, recorded webcasts and more.  Most important (arguably), we have hex stickers (if you’re interested, get in touch!).

Last, but not least, we’ve gone external.  From being an internal project (though we always had our code as open source), we’ve taken a number of measures to try to encourage and simplify involvement by non-Red Hat contributors – see 7 tips for kicking off an open source project for a little more information.

Architecture and code

What else?  Oh, there’s code, and an increasingly mature set of architectures for the various components.  We absolutely plan to make all of this externally visible, and the reason we haven’t yet is that we’re just running to stand still at the moment: there’s just so much to do.  Our focus is on getting code out there for people to use and contribute to themselves and, without giving anything away, we have some pretty big plans for demos and more in 2020.

Finally

There’s one other thing that’s been important, of course, and that’s the fact that I’m writing a book for Wiley on trust, but I actually see that as very much related to Enarx.  Fundamentally, although the technology is cool, and we think that the Enarx project meets an existing need, both Nathaniel and I believe that there’s a real opportunity for it to change how people manage trust for workloads in the cloud, in IoT, at the Edge and wherever else sensitive data and algorithms need to be executed.

This blog is supposed to be about security, and I’m strongly of the opinion that trust is a very important part of that.  Enarx fits into that, so don’t be surprised to see more posts around trust and about Enarx over the coming year.  Please keep an eye out here and at https://enarx.io for the latest information.

 

 

7 tips for kicking off an open source project

It’s not really about project mechanics at all


I’m currently involved – heavily involved – in Enarx, an open source (of course!) project to allow you to run sensitive workloads on untrusted hosts.  I’ve had involvement in various open source projects over the years, but this is the first for which I’m one of the founders.  We’re at the stage now where we’ve got a fair amount of code, quite a lot of documentation, a logo and (important!) stickers.  The project should hopefully be included in a Linux Foundation group – the Confidential Computing Consortium – so things are going very well indeed.  We’re at the stage where I thought it might be useful to reflect on some of the things we did to get things going.  To be clear, Enarx is a particular type of project: one that we believe has commercial and enterprise applications.  It’s also not mature yet, and we’ll have hurdles and challenges along the way.  What’s more, the route we’ve taken won’t be right for all projects, but hopefully there’s enough here to give a few pointers to other projects, or people considering starting one up.

The first thing I’d say is that there’s lots of help to be had out there.  I’d start with Opensource.com, where you’ll find lots of guidance.  I’d then follow up by saying that however much of it you follow, you’ll still get things wrong.  Anyway, here’s my list of things to consider.

1. Aim for critical mass

I’m very lucky to work at the amazing Red Hat, where everything we do is open source, and where we take open source and community very seriously.  I’ve heard it called a “critical mass” company: in order to get something taken seriously, you need to get enough people interested in it that it’s difficult to ignore. The two co-founders – Nathaniel McCallum and I – are both very enthusiastic about the project, and have spent a lot of time gaining sponsors within the organisation (you know who you are, and we thank you – we also know we haven’t done a good enough job with you on all occasions!), and “selling” it to engineers to get them interested enough that it was difficult to stop.  Some projects just bobble along with one or two contributors, but if you want to attract people and attention, getting a good set of people together who can get momentum going is a must.

2. Create a demo

If you want to get people involved, then a demo is great.  It doesn’t necessarily need to be polished, but it does need to show that what you’re doing is possible, and that you know what you’re doing.  For early demos, you may be talking to command line output: that’s fine, if what you’re providing isn’t a UI product.  Being able to talk to what you’re doing, and convey both your passion and the importance of the project, is a great boon.  People like to be able to see or experience something, and it’s much easier to communicate your enthusiasm if they have something that’s real which expresses that.

3. Choose a licence

Once you have code, and it’s open source, you want other people to be able to contribute.  This may seem like an unimportant step, but selecting an appropriate open source licence[1] will allow other people to contribute on well-understood and defined terms, making it easier for them, and easier for the organisations for which they work to allow them to be involved.

4. Get documentation

You might think that developer documentation is the most important to get out there – otherwise, how will other people get involved and start coding?  I’d disagree, at least to start with.  For a small project, you can probably scale to a few more people just by explaining what the code does, what it should do, and what’s missing.  However, if there’s no documentation available to explain what it’s for, and how it’s going to help people, then why would anyone bother even looking at it?  This doesn’t need to be polished marketing copy, and it doesn’t need to be serious, but it does need to convey to people why they should care.  It’s also going to help you with the first point I mentioned, attaining critical mass, as being able to point to documentation, use cases and the rest will help convince people that you’ve thought through the point of your project.  We’ve used a GitHub wiki as our main documentation hub, and we try to update that with new information as we generate it.  This is an area, to be clear, where we could do better.  But at least we know that.

5. Be visible

People aren’t going to find out about you unless you’re visible.  We were incredibly lucky in that just as we were beginning to get to a level of critical mass, the Confidential Computing Consortium was formed, and we immediately had a platform to increase our exposure.  We have a Twitter account, I publish articles on my blog and at Opensource.com, we’ve been lucky enough to have the chance to publish on Red Hat’s now + Next blog, I’ve done interviews with the press and we speak at conferences wherever and whenever we can.  We’re very lucky to have these opportunities, and it’s clear that not all these approaches are appropriate for all projects, but make use of what you can: the more people that know about you, the more people can contribute.

6. Be welcoming

Let’s assume that people have found out about you: what next?  Well, they’re hopefully going to want to get involved.  If they don’t feel welcome, then any involvement they have will taper off soon.  Yes, you need documentation (and, after a while, technical documentation, no matter what I said above), but you need ways for them to talk to you, and for them to feel that they are valued.  We have Gitter channels (https://gitter.im/enarx/), and our daily stand-ups are open to anyone who wants to join.  Recently, someone opened an issue on our issues database, and during the conversation on that thread, it transpired that our daily stand-up time doesn’t work for them given their timezone, so we’re going to ensure that at least one stand-up a week does, and we’ve assured them that we’ll accommodate them.

7. Work with people you like

I really, really enjoy meeting and working with the members of the Enarx project team.  We get on well, we joke, we laugh and we share a common aim: to make Enarx successful.  I’m a firm believer in doing things you enjoy, where possible.  Particularly for the early stages of a project, you need people who are enthusiastic and enjoy working closely together – even if they’re separated by thousands of kilometres[2] geographically.  If they don’t get on, there’s a decent chance that your and their enthusiasm for the project will falter, that the momentum will be lost, and that the project will end up failing.  You won’t always get the chance to choose those with whom you work, but if you can, then choose people you like and get on with.

Conclusion – “people”

I didn’t realise it when I started writing this article, but it’s not really about project mechanics at all: it’s about people.  If you read back, you’ll find the importance of people visible in every tip, even including the one about choosing a licence.  Open source projects aren’t really about code: they’re about people, how they share, how they work together, and how they interact.

I’m certain that your experience of open source projects will vary, and I’d be very surprised if everyone agrees about the top seven things you should do for project success.  Arguably, Enarx isn’t a success yet, and I shouldn’t be giving advice at this stage of our maturity.  But when I think back to all of the open source projects that I can think of which are successful, people feature strongly, and I don’t think that’s a surprise at all.


1 – or “license”, if you’re from the US.

2 – or, in fact, miles.

 

7 tips for kicking off an open source project

It's not really about project mechanics at all…

I'm currently involved, deeply involved, in Enarx, an open source project (of course!) that allows you to run sensitive workloads on untrusted hosts.

 

I've been involved in various open source projects over the years, but this is the first one of which I'm a co-founder.

We're now at the stage where we have a fair amount of code, quite a lot of documentation, a logo and (importantly!) stickers.

 

The project should be included in a Linux Foundation group, the Confidential Computing Consortium, so things are going well, and it seemed a good moment to reflect on some of the things we did to get it going. To be clear, Enarx is a particular kind of project: one we believe has commercial and enterprise applications. It isn't mature yet, and there will still be hurdles and challenges along the way. What's more, the route we've taken won't be right for every project, but hopefully there's enough here to give some pointers to other projects, or to people thinking of starting one.

 

First of all, let me say that there's a lot of help to be had out there.

I'd start with Opensource.com, which has lots of guidance, though however much of it you follow, you'll still get things wrong. Anyway, here's my list of things to consider.

 

1. Aim for critical mass

 

I'm lucky enough to work at Red Hat, an amazing company where everything we do is open source and where open source and community are taken very seriously. I've heard it called a "critical mass" company: to get something taken seriously, you need enough people interested in it that it becomes difficult to ignore. Nathaniel McCallum, my co-founder, and I are both passionate about the project, and we spent a lot of time gaining sponsors within the organisation (you know who you are, and we thank you!) and "selling" it to engineers to get them interested enough that it was hard to stop.

Some projects bumble along with only one or two contributors, but if you want to attract people and attention, you have to bring together enough people to build momentum.

 

2. Create a demo

 

If you want to get people involved, a demo is a great help. It doesn't need to be polished, but it does need to show that what you're trying to do is possible and that you know what you're doing. Early demos may be nothing more than command-line output; if what you're building isn't a UI product, that's fine. Being able to talk about what you're doing, and to convey both your passion and the importance of the project, is a great asset. People like to see or experience something, and it's much easier to communicate your enthusiasm when there's something real that expresses it.

 

3. Choose a licence

Once you have open source code, you'll want other people to be able to contribute. This may seem like an unimportant step, but choosing an appropriate open source licence lets people contribute on well-understood, defined terms, making it easier for them, and for the organisations they work for, to get involved.

 

4. Get documentation

 

You might think developer documentation is the most important thing to get out there: without it, how will other people get involved and write code?

 

I'd disagree, at least to start with. For a small project, you can probably scale to a few more people just by explaining what the code does, what it should do, and what's missing.

But if there's no documentation explaining what the project is for and how it will help people, why would anyone bother spending time on it?

 

The documentation doesn't need to be polished marketing copy, and it doesn't need to be formal, but it does need to tell people why they should care.

This also helps with the first point, reaching critical mass: being able to point to documentation, use cases and the rest helps convince people that you've thought through the point of your project.

 

We use a GitHub wiki as our main documentation hub, and we try to keep it updated as we generate new information. This is an area where we could do better, but at least we know that.

 

5. Be visible

 

A project won't be found unless it's visible. We were incredibly lucky: just as we were beginning to reach critical mass, the Confidential Computing Consortium was formed, and we immediately had a platform to increase our exposure.

 

We have a Twitter account (@enarxproject), and we publish articles on this blog and on Opensource.com. We've also been lucky enough to publish on Red Hat's https://next.redhat.com/ blog, we've given press interviews, and we speak at conferences whenever we can. We've been fortunate to have these opportunities, and not every approach will suit every project, but make use of what you can: the more people who know about you, the more people can contribute.

 

6. Be welcoming

Let's assume people have found out about you: what next? You'll want them to get involved, but if they don't feel welcome, their involvement will soon taper off. Yes, you need documentation (and, after a while, technical documentation, whatever I said above), but you also need ways for people to talk to you, and to feel that they are valued.

 

In our case, we have Gitter channels (https://gitter.im/enarx/), and our daily stand-ups are open to anyone who wants to join. Recently, someone opened an issue on our issues database (https://github.com/enarx/enarx/issues), and during the conversation on that thread it became clear that our daily stand-up time doesn't work for them because of their timezone, so we're going to make sure that at least one stand-up a week is at a time that suits them.

 

7. Work with people you like

 

I really, really enjoy working with the members of the Enarx project team. We get on well, we joke, we laugh, and we share a common aim: making Enarx successful. I'm a firm believer in doing things you enjoy, where possible. Particularly in the early stages of a project, you need people who are enthusiastic and who enjoy working closely together, even if they're separated by thousands of kilometres (or miles) geographically. If they don't get on, there's a good chance that their enthusiasm for the project will fade, the momentum will be lost, and the project will end up failing. You won't always get to choose the people you work with, but when you can, choose people you like and get on with.

 

Conclusion: it's about people.

 

I didn't realise it when I started writing this article, but it isn't really about project mechanics at all.

 

It's about people.

 

If you read back, you'll find the importance of people in every tip, even the one about choosing a licence. Open source projects aren't really about code: they're about people, how they share, how they work together, and how they interact.

 

Every open source project is different, so I'd be surprised if all seven of these tips applied equally to every one of them. Arguably, Enarx isn't a success yet, and perhaps I shouldn't be giving advice at this stage of our maturity. But when I think back over the successful open source projects I can recall, people always feature strongly, and I don't think that's a surprise at all.

 

Original article: https://aliceevebob.com/2019/12/17/7-tips-for-kicking-off-an-open-source-project/

17th December 2019, Mike Bursell

 

Tags: open source

 

Confidential computing – the new HTTPS?

Security by default hasn’t arrived yet.

Over the past few years, it’s become difficult to find a website which is just “http://…”.  This is because the industry has finally realised that security on the web is “a thing”, and also because it has become easy for both servers and clients to set up and use HTTPS connections.  A similar shift may be on its way in computing across cloud, edge, IoT, blockchain, AI/ML and beyond.  We’ve known for a long time that we should encrypt data at rest (in storage) and in transit (on the network), but encrypting it in use (while processing) has been difficult and expensive.  Confidential computing – providing this type of protection for data and algorithms in use, using hardware capabilities such as Trusted Execution Environments (TEEs) – protects data on hosted systems or in vulnerable environments.

I’ve written several times about TEEs and, of course, the Enarx project of which I’m a co-founder with Nathaniel McCallum (see Enarx for everyone (a quest) and Enarx goes multi-platform for examples).  Enarx uses TEEs, and provides a platform- and language-independent deployment platform to allow you safely to deploy sensitive applications or components (such as micro-services) onto hosts that you don’t trust.  Enarx is, of course, completely open source (we’re using the Apache 2.0 licence, for those with an interest).  Being able to run workloads on hosts that you don’t trust is the promise of confidential computing, which extends normal practice for sensitive data at rest and in transit to data in use:

  • storage: you encrypt your data at rest because you don’t fully trust the underlying storage infrastructure;
  • networking: you encrypt your data in transit because you don’t fully trust the underlying network infrastructure;
  • compute: you encrypt your data in use because you don’t fully trust the underlying compute infrastructure.

I’ve got a lot to say about trust, and the word “fully” in the statements above is important (I actually added it on re-reading what I’d written).  In each case, you have to trust the underlying infrastructure to some degree, whether it’s to deliver your packets or store your blocks, for instance.  In the case of the compute infrastructure, you’re going to have to trust the CPU and associated firmware, just because you can’t really do computing without trusting them (there are techniques such as homomorphic encryption which are beginning to offer some opportunities here, but they’re limited, and the technology is still immature).

Questions sometimes come up about whether you should fully trust CPUs, given some of the security problems that have been found with them and also whether they are fully secure against physical attacks on the host in which they reside.

The answer to both questions is “no”, but this is the best technology we currently have available at scale and at a price point to make it generally deployable.  To address the second question, nobody is pretending that this (or any other technology) is fully secure: what we need to do is consider our threat model and decide whether TEEs (in this case) provide sufficient security for our specific requirements.  In terms of the first question, the model that Enarx adopts is to allow decisions to be made at deployment time as to whether you trust a particular set of CPUs.  So, for example, if vendor Q’s generation R chips are found to contain a vulnerability, it will be easy to say “refuse to deploy my workloads to R-type CPUs from Q, but continue to deploy to S-type, T-type and U-type chips from Q and any CPUs from vendors P, M and N.”
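
To make that deployment-time choice concrete, here is a minimal sketch, in Rust, of what such a policy check might look like. The CpuClass type, the vendor and generation names and the allow_deployment function are hypothetical illustrations of the idea, not Enarx's actual configuration format or API.

```rust
// A minimal sketch of a deployment-time trust policy for CPU types.
// The types and names here are hypothetical illustrations, not Enarx APIs.

#[derive(Debug, PartialEq, Eq)]
struct CpuClass {
    vendor: &'static str,     // e.g. "Q", "P", "M", "N"
    generation: &'static str, // e.g. "R", "S", "T", "U"
}

/// Decide whether to deploy a workload to a host with the given CPU,
/// given a deny-list of CPU classes we no longer trust.
fn allow_deployment(host_cpu: &CpuClass, denied: &[CpuClass]) -> bool {
    !denied.iter().any(|d| d == host_cpu)
}

fn main() {
    // "Refuse to deploy my workloads to R-type CPUs from Q, but continue
    // to deploy to S-type chips from Q and anything from vendor P."
    let denied = vec![CpuClass { vendor: "Q", generation: "R" }];

    let vulnerable = CpuClass { vendor: "Q", generation: "R" };
    let fine = CpuClass { vendor: "Q", generation: "S" };

    assert!(!allow_deployment(&vulnerable, &denied));
    assert!(allow_deployment(&fine, &denied));
    println!("policy check passed");
}
```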


Enarx goes multi-platform

Now with added SGX!

Yesterday, Nathaniel McCallum and I presented a session “Confidential Computing and Enarx” at Open Source Summit Europe. As well as some new information on the architectural components for an Enarx deployment, we had a new demo. What’s exciting about this demo is that it shows off attestation and encryption on Intel’s SGX. Our initial work focussed on AMD’s SEV, so this is our first working multi-platform work flow. We’re very excited, particularly as this week a number of the team will be attending the first face to face meetings of the Confidential Computing Consortium, at which we’ll be submitting Enarx as a project for contribution to the Consortium.

The demo had been the work of several people, but I’d like to call out Lily Sturmann in particular, who got things working late at night her time, with little time to spare.

What’s particularly important about this news is that SGX has a very different approach to providing a TEE compared with the other technology on which Enarx was previously concentrating, SEV. Whereas SEV provides a VM-based model for a TEE, SGX works at the process level. Each approach has different advantages and offers different challenges, and the very different models that they espouse mean that developers wishing to target TEEs have some tricky decisions to make about which to choose: the run-time models are so different that developing for both isn’t really an option. Add to that the significant differences in attestation models, and there’s no easy way to address more than one silicon platform at a time.

Which is where Enarx comes in. Enarx will provide platform independence both for attestation and run-time, on process-based TEEs (like SGX) and VM-based TEEs (like SEV). Our work on SEV and SGX is far from done, but also we plan to support more silicon platforms as they become available. On the attestation side (which we demoed yesterday), we’ll provide software to abstract away the different approaches. On the run-time side, we’ll provide a W3C standardised WebAssembly environment to allow you to choose at deployment time what host you want to execute your application on, rather than having to choose at development time where you’ll be running your code.
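
As a small illustration of the "choose your host at deployment time" idea, here is a trivial workload built once for a WebAssembly/WASI target (wasm32-wasi, as shipped by Rust toolchains) rather than for a particular machine. This is only a sketch of the portability principle; it does not show Enarx's actual attestation or deployment flow.

```rust
// A trivial "workload" that knows nothing about the host it will run on.
// Build it once for WebAssembly/WASI instead of a specific machine, e.g.:
//
//   rustup target add wasm32-wasi
//   cargo build --target wasm32-wasi --release
//
// The same .wasm binary can then be scheduled onto whichever host you decide
// to trust at deployment time. (Illustration only: not the Enarx tool chain.)

fn main() {
    let result = 6 * 7; // stand-in for a sensitive computation
    println!("result computed inside the workload: {}", result);
}
```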

This article has sounded a little like a marketing pitch, for which I apologise. As one of the founders of the project, alongside Nathaniel, I’m passionate about Enarx, and would love you, the reader, to become passionate about it, too. Please visit enarx.io for more information – we’d love to tell you more about our passion.

What’s a Trusted Compute Base?

Tamper-evidence, auditability and measurability are three important properties.

A few months ago, in an article called “Turtles – and chains of trust“, I briefly mentioned Trusted Compute Bases, or TCBs, but then didn’t go any deeper.  I had a bit of a search across the articles on this blog, and realised that I’ve never gone into this topic in much detail, which feels like a mistake, so I’m going to do it now.

First of all, let’s think about computer systems.  When I talk about systems, I’m being both quite specific (see Systems security – why it matters) and quite broad (I don’t just mean the computer that sits on your desk or in a data centre, but also phones, routers, aircraft navigation devices – pretty much anything that has a set of chips inside it).  There are surely some systems that you don’t rely on too much to do important things, but in most cases, you’re going to care if they go wrong, or, more relevant to this discussion, if they get compromised.  Even the most benign of systems – a smart light-bulb, for instance – can become a nightmare if compromised.  Even if you don’t particularly care whether you can continue to use it in the way it was intended, there are still worries about its misuse in the case of compromise:

  1. it may become a “jumping off point” for malicious attacks into your network or other systems;
  2. it may be used as part of a botnet, piggybacking on your network to attack other systems (leading to sanctions against your legitimate systems from outside);
  3. it may be used as part of a botnet, using up resources such as network bandwidth, storage or electricity (leading to resource constraints or increased charges).

For any systems dealing with sensitive data – anything from your messages to loved ones on your phone through intellectual property secrets for a manufacturing organisation through to National Security data for a government department – these issues are compounded.  In order to protect your system, you can’t just say “this system is secure” (lovely as that would be).  What can you do to start making statements about the general security of a system?

The stack

Systems consist of multiple components, and modern computing systems are typically composed from multiple layers (one of my favourite xkcd comics, Stack, shows some of them).  What’s relevant from the point of view of this article is that, on the whole, the different layers of the stack start up – boot up – from the bottom upwards.  This means, following the “bottom turtle” rule (see the Turtles article referenced above), that we need to ensure that the bottom layer is as secure as possible.  In fact, in order to build a system in which we can have assurance that it will behave as expected and designed (in other words, a system in which we can have a trust relationship), we need to build a Trusted Compute Base.  This should have at least the following set of properties: tamper-evidence, auditability and measurability, all of which are related to each other.

Tamper-evidence

We want to know if the TCB – on which we are building everything else – has a problem.  Specifically, we need a set of layers or components that we are pretty sure have not been compromised, or which, if compromised, will be tamper-evident:

  • fail in expected ways,
  • refuse to start, or
  • flag that they have been compromised.

It turns out that this is not easy, and typically becomes more difficult as you move up the stack – partly because you’re adding more layers, and partly because those layers tend to get more complex.

Our TCB should have the properties listed above (around failure, refusing to start or compromise-flagging), and be as small as possible.  This seems the wrong way around: surely you would want to ensure that as much of your system was trusted as possible?  In fact, what you want is a small, easily measurable and easily auditable TCB on which you can build the rest of your system – from which you can build a “chain of trust” to the other parts of your system about which you care.  Auditability and measurability are the other two properties that you want in a TCB, and these two properties are why open source is a very useful tool in your toolkit when building a TCB.

Auditability (and open source)

Auditability means that you – or someone else who you trust to do the job – can look into the various components of the TCB and assure yourself that they have been written, compiled and  are executing properly.  As I explained in Of projects, products and (security) community, the person may not always be you, or even someone in your organisation, but if you’re using widely deployed open source software, the rest of the community can be doing that auditing for you, which is a win for you and – if you contribute your knowledge back into the community – for everybody else as well.

Auditability typically gets harder the further you go down the stack – partly because you’re getting closer and closer to bits – ones and zeros – and to actual electrons, and partly because there is very little truly open source hardware out there at the moment.  However, the more that we can see and audit of the TCB, the more confidence we can have in it as a building block for the rest of our system.

Measurability (and open source)

The other thing you want to be able to do is ensure that your TCB really is your TCB.  Tamper-evidence is related to this, but that’s a run-time property only (for software components, at least).  Being able to measure when you provision your system and then to check that what you originally loaded is still what you think it should be when you boot it is a very important property of a TCB.  If what you’re running is open source, you can check it yourself, against your own measurements and those of the community, and if changes are made – by you or others – those changes can be checked (as part of auditing) and then propagated through measurement checking to the rest of the community.  Equally important – and much more difficult – is run-time measurability.  This turns out to be very difficult to do, although there are some techniques emerging which are beginning to get traction – for now, we tend to rely on tamper-evidence, which is easier in hardware than software.
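
As a toy illustration of the provision-time measurement idea described above, the sketch below hashes a component and compares the result against a previously recorded value. It assumes the widely used sha2 crate; the file path and the expected digest are made-up placeholders, and this is not how any particular TCB or attestation implementation does it.

```rust
// Toy provision-time measurement check: hash a component and compare it with
// the value recorded when the system was provisioned.
// Assumes `sha2 = "0.10"` in Cargo.toml; the path and digest are placeholders.

use sha2::{Digest, Sha256};
use std::fs;

fn measure(path: &str) -> std::io::Result<String> {
    let bytes = fs::read(path)?;
    let digest = Sha256::digest(&bytes);
    // Render the digest as lower-case hex for easy comparison and logging.
    Ok(digest.iter().map(|b| format!("{:02x}", b)).collect())
}

fn main() -> std::io::Result<()> {
    // Measurement recorded at provisioning time (placeholder value).
    let expected = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";

    let actual = measure("/boot/component.bin")?;
    if actual == expected {
        println!("measurement matches: the component is what we provisioned");
    } else {
        println!("MEASUREMENT MISMATCH: expected {}, got {}", expected, actual);
    }
    Ok(())
}
```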

Summary

Trusted Compute Bases (TCBs) are a key concept in building systems that we hope will behave in ways we expect – or allow us to find out when they are not.  Tamper-evidence, auditability and measurability are three important properties that they should display, and it turns out that open source is an important factor in helping us ensure two of those.

 

 

 

Of projects, products and (security) community

Not all open source is created (and maintained) equal.


Open source is a  good thing.  Open source is a particularly good thing for security.  I’ve written about this before (notably in Disbelieving the many eyes hypothesis and The commonwealth of Open Source), and I’m going to keep writing about it.  In this article, however, I want to talk a little more about a feature of open source which is arguably both a possible disadvantage and a benefit: the difference between a project and a product.  I’ll come down firmly on one side (spoiler alert: for organisations, it’s “product”), but I’d like to start with a little disclaimer.  I am employed by Red Hat, and we are a company which makes money from supporting open source.  I believe this is a good thing, and I approve of the model that we use, but I wanted to flag any potential bias early in the article.

The main reason that open source is good for security is that you can see what’s going on when there’s a problem, and you have a chance to fix it.  Or, more realistically, unless you’re a security professional with particular expertise in the open source project in which the problem arises, somebody else has a chance to fix it. We hope that there are sufficient security folks with the required expertise to fix security problems and vulnerabilities in software projects about which we care.

It’s a little more complex than that, however.  As an organisation, there are two main ways to consume open source:

  • as a project: you take the code, choose which version to use, compile it yourself, test it and then manage it.
  • as a product: a vendor takes the project, chooses which version to package, compiles it, tests it, and then sells support for the package, typically including docs, patching and updates.

Now, there’s no denying that consuming a project “raw” gives you more options.  You can track the latest version, compiling and testing as you go, and you can take security patches more quickly than the product version may supply them, selecting those which seem most appropriate for your business and use cases.  On the whole, this seems like a good thing.  There are, however, downsides which are specific to security.  These include:

  1. some security fixes come with an embargo, to which only a small number of organisations (typically the vendors) have access.  Although you may get access to fixes at the same time as the wider ecosystem, you will need to check and test these (unless you blindly apply them – don’t do that), which will already have been performed by the vendors.
  2. the huge temptation to make changes to the code that don’t necessarily – or immediately – make it into the upstream project means that you are likely to be running a fork of the code.  Even if you do manage to get these upstream in time, during the period that you’re running the changes but they’re not upstream, you run a major risk that any security patches will not be immediately applicable to your version (this is, of course, true for non-security patches, but security patches are typically more urgent).  One option, of course, if you believe that your version is likely to be consumed by others, is to make an official fork of the project, and try to encourage a community to grow around that, but in the end, you will still have to decide whether to support the new version internally or externally.
  3. unless you ensure that all instances of the software are running the same version in your deployment, any back-porting of security fixes to older versions will require you to invest in security expertise equal or close to equal to that of the people who created the fix in the first place.  In this case, you are giving up the “commonwealth” benefit of open source, as you need to pay experts who duplicate the skills of the community.

What you are basically doing, by choosing to deploy a project rather than a product is taking the decision to do internal productisation of the project.  You lose not only the commonwealth benefit of security fixes, but also the significant economies of scale that are intrinsic to the vendor-supported product model.  There may also be economies of scope that you miss: many vendors will have multiple products that they support, and will be able to apply security expertise across those products in ways which may not be possible for an organisation whose core focus is not on product support.

These economies are reflected in another possible benefit to the commonwealth of using a vendor: the very fact that multiple customers are consuming their products means that they have an incentive and a revenue stream to spend on security fixes and general features.  There are other types of fixes and improvements on which they may apply resources, but the relative scarcity of skilled security experts means that the principle of comparative advantage suggests that they should be in the best position to apply them for the benefit of the wider community[1].

What if a vendor you use to provide a productised version of an open source project goes bust, or decides to drop support for that product?  Well, this is a problem in the world of proprietary software as well, of course.  But in the case of proprietary software, there are three likely outcomes:

  • you now have no access to the software source, and therefore no way to make improvements;
  • you are provided access to the software source, but it is not available to the wider world, and therefore you are on your own;
  • everyone is provided with the software source, but no existing community exists to improve it, and it either dies or takes significant time for a community to build around it.

In the case of open source, however, if the vendor you have chosen goes out of business, there is always the option to use another vendor, encourage a new vendor to take it on, productise it yourself (and supply it to other organisations) or, if the worst comes to the worst, take the internal productisation route while you search for a scalable long-term solution.

In the modern open source world, we (the community) have got quite good at managing these options, as the growth of open source consortia[2] shows.  In a consortium, groups of organisations and individuals cluster around a software project or set of related projects to encourage community growth, alignment around feature and functionality additions, general security work and productisation for use cases which may as yet be ill-defined, all the while trying to exploit the economies of scale and scope outlined above.  An example of this would be the Linux Foundation’s Confidential Computing Consortium, to which the Enarx project aims to be contributed.

Choosing to consume open source software as a product instead of as a project involves some trade-offs, but from a security point of view at least, the economics for organisations are fairly clear: unless you are in a position to employ ample security experts yourself, products are most likely to suit your needs.


1 – note: I’m not an economist, but I believe that this holds in this case.  Happy to have comments explaining why I’m wrong (if I am…).

2 – “consortiums” if you really must.

Breaking the security chain(s)

Your environment is n-dimensional – your trust must be, too.

One of the security principles by which we[1] live[2] is that security is only as strong as the weakest link in a chain.  That link is variously identified as:

  • your employees
  • external threat actors
  • all humans
  • lack of training
  • cryptography
  • logging
  • anti-virus
  • auditing capabilities
  • the development lifecycle
  • waterfall methodology
  • passwords
  • any other authentication mechanisms
  • electrical wiring
  • hurricanes
  • earthquakes
  • and pestilence.

Actually, I don’t think I’ve ever seen the last one mentioned, but it’s only a matter of time.  However, very rarely does anybody bother to identify exactly what the chain is that is being broken when the weakest link splinters into a thousand pieces.

There are a number of candidates that spring to mind:

  1. your application flow.  This is rather an old-fashioned way of thinking of applications: that a program is started, goes through a set of actions, and then terminates, but to think more broadly about it, any action which causes an application to behave in unexpected or unintended ways is a possible security flaw, whether that is a monolithic application, a set of microservices or an app on a mobile device.
  2. your software stack.  Depending on how you think about your stack, there are likely to be at least 5, maybe a dozen or maybe even scores of layers in your software stack (for an example with just a few simple layers, see Announcing Enarx).  However you think about it, you need to trust all of those layers to do what you expect them to do.  If one of them is compromised, malicious, or just poorly implemented or maintained, then you have a security issue.
  3. your hardware stack.  There was a time, barely five years ago, when most people (excepting us[1], of course), assumed that hardware did what we thought it was supposed to do, all of the time.  In fact, we should all have known better, given the Clipper Chip and the Pentium bug (to name just two famous examples), but with Spectre, Meltdown and a growing realisation that hardware isn’t as trustworthy as was previously thought, everybody needs to decide exactly what security they can trust in which components.
  4. your operational processes.  You can have the best software and hardware in the world, but if you don’t maintain it and operate it properly, it’s going to be full of holes.  Failing to invest in operations, monitoring, logging, auditing and the rest leaves you wide open.
  5. your supply chain. There’s a growing understanding in the industry that our software and hardware supply chains are possible points of failure[3].  Whether your vendor is entirely proprietary (in which case their security is largely opaque) or open source (in which case you’ve got a chance to be able to see what’s going on), errors or maliciousness in the supply chain can scupper any hopes you had of security for your deployment.
  6. your software and hardware lifecycle.  Developing software?  Patching it?  Upgrading hardware (or software)?  Unit testing?  We all know that a failure to manage the lifecycle of our environment can lead to security problems.

The point I’m trying to make above is that there’s no single chain.  Your environment is n-dimensional – your trust must be, too.  If you don’t think about all of these contexts – and there will be more beyond the half-dozen that I’ve just noted – then you can’t have a good chance of managing security in your environment.  I honestly don’t think that there’s any single weakest link in the chain, because there are always already multiple chains in play: our job is to think about as many of them as possible, and then manage and mitigate the risks associated with each.


1 – the mythical “IT security community”.

2 – you’re right: “which we live by” would sound much more natural.

3 – and a growing industry to try to provide fixes.

How not to make a cup of tea

This is an emotive topic.

A few weeks ago, I wrote an article entitled “My 7 rules for remote-work sanity“. This was wildly successful, caused at least one email storm on our company servers, and caused a number of readers to ask me to tackle the question of how to make a cup of tea. Because that isn’t contentious at all.

I’ve resisted that, partly because this is (supposed to be) mainly a security blog, and partly because it would be too easy. Instead, I’ve written this article, which is full of advice and guidance about how not to make a cup of tea or, more accurately, how to make a bad cup of tea. This isn’t hugely related to security either, but as I’ve been writing it, it’s reminded me of how strong the idea of “anti-patterns” is, particularly in security. Sometimes it’s hard to explain to people how they should apply security controls to a system, and it’s easier to explain what not to do, so that people can learn from that. This is my (very tenuous) link to security in this article. I hope it (and the rest of the article) serve you well.

I am British, but I travel quite frequently to the United States and to Canada. During my travels, I have been privileged to witness many, many attempts to make tea which have been catastrophic failures. This article is an attempt to celebrate these failures so that people, like me, who have learned over many years how to make tea properly, can celebrate the multitude of incorrect ways in which tea can be ruined.

This is an emotive topic. There are shades of opinion across many issues (upon some of which we will touch during this article), across class lines, geographical lines and even moral lines (I’m thinking here in particular of the question of whether to use cow’s milk – an ethical point for vegans). And, so you understand that I’m not overstating the divides that can occur over this important skill, I’d point out that an entire war was started solely over the belief of a certain group of people that the correct way to make tea was to dump an entire shipment of leaves into a salt-water harbour[1] in a small settlement named after a town in Lincolnshire[2].

Without further ado, let us launch into our set of instructions for how not to make tea (by which I mean “hot black tea” in this context), but with a brief (yet important) digression.

De minimis

There are certain activities associated with the making of tea which I am going to declare “de minimis” – in other words, of minimal importance to the aim of creating a decent cup of tea. These may be areas of conflict within the tea drinking community, but I would argue that the majority of Real Tea Drinkers[tm] would accept that a cup of tea would not be ruined by following any of the options for any of the practices noted below. As this article aims to provide a beginner’s guide to how not to make tea, I contend that your choice of any of the options below will not (at least on its own) cause you to be making not-Tea.

  • Milk, not milk or lemon: I drink my tea with milk. Some drink it without, and some with lemon. Tea can be tea in any of these states.
  • Sugar or not sugar: I used to drink my tea with sugar, but now do not. Tea can be tea with or without sugar.
  • Cup or mug: I prefer a mug for most tea drinking opportunities, but a cup (with saucer) can make a nice change.
  • Bag or loose leaf: we’re getting into contentious territory here. I have recently moved to a strong preference for loose leaf, partly because it tends to be higher quality than bags, but if bags are what you’ve got, then you can make a decent cup of tea with them.
  • In the pot or in the mug: if you’re using loose leaf tea, then it’s in the pot. If it’s in a mug, then it’s a bag. You can, however, make tea in a pot with a bag.
  • Milk first or afterwards: there’s evidence to suggest that putting the milk in first means that it won’t scald, but I’m with George Orwell[3] on this: if you put the tea in first, you can add milk slowly to ensure the perfect strength.
  • Warming the pot: you only need to warm the pot if you’re using bone china. End of.

You might think that I’ve just covered all the important issues of contention above, and that there’s not much point in continuing with this article, but please remember: this article is not aimed at those who wish to make an acceptable cup of tea, but to those who do not wish to make the same. Those who wish to make a cup of non-tea will not be swayed by the points above, and neither should they be. If you belong in this group: whether (for example) a person of North American descent looking to cement your inability to make a cup of tea, or (again, for example) a person of British descent looking for guidance in how to be accepted into North American society by creating a representative cup of non-tea, then, dear reader, please read on.

Instructions

Tea (actual)

The easiest way to fail to make a cup of tea is to choose something which isn’t tea. I’m going to admit to a strong preference to black tea here, and this article is based on how to make a cup of black tea, but I admit to the existence of various other tea types (yellow, white, green, oolong). All of them, however, come from the tea plant, Camellia sinensis. If it doesn’t, then it isn’t tea. Retailers and manufacturers can label it “herbal tea”[4]. Anything which doesn’t come from the tea plant is an infusion or a tisane.

To be clear, I absolutely include Rooibos or “Red Bush” tea. I tried this once, under the mistaken belief that it was actual tea. The experience did not end well. I’m sure there’s a place for it, but not in my mug (or cup), and it’s not tea. I mean: really.

There are some teas which are made from the tea plant, and have flavourings added. I’m thinking in particular of Earl Grey tea, which has oil of bergamot added to it. I’m not a fan, but that’s partly due to a very bad experience one morning after the imbibing of a significant amount of beverages which definitely weren’t ever pretending to be tea, the results of which also lead to my not drinking gin and tonic anymore[5].

To clarify, then:

  • tea: yes, tea.
  • Earl Grey: yes, tea.
  • rooibos: not tea.
  • fruit tea: not tea.
  • lemon tea: not tea.
  • rhubarb and orange tea: not tea.
  • raspberry and old sticks tea: not tea.
  • vanilla tea: not tea.
  • etc., etc.

The very easiest way not to make tea, then, is to choose something which isn’t derived directly from the Camellia sinensis plant. If you’re looking for more adventurous or expert ways not to make tea, however, carry on.

Use non-boiling water

Beyond the obvious “not actually using tea” covered above, the easiest way to fail to make a cup of tea is to use water which is not sufficiently hot. And by “sufficiently hot”, I mean “has just come off the boil”. There’s actual science[6] showing that lots of the important tea-making processes cannot take place at temperatures below 90°C (194°F). To be sure that you’re going to make a cup of bad tea, just be sure not to go with boiling water.

I’ve recently visited a number of establishments in the USA where they offered to make tea with water designed for coffee-making at temperatures of 175°F and 183°F. That’s 79.4°C and 83.9°C. A member of staff at one of these establishments explained that it would be “dangerous” to use water at a higher temperature. I kid you not. They knew that they didn’t need to aim for just off the boil: well under 99°C should do nicely for something almost but not quite like a cup of tea. A warning: many (but not all) outlets of Peet’s Coffee do provide water which is hot enough (often 211-212°F), if you ask them. You are in significant danger of getting a proper cup of tea if you’re not very careful. This, in my experience, is entirely unique for a coffee chain anywhere in the USA.

Many US hotels will offer to make you a cup of tea in the morning, if you ask nicely enough and sound (passive-aggressively) British enough. You might think that you would be guaranteed a proper cup, but worry not: they will almost certainly use really bad tea, the water will be insufficiently hot and they will definitely not bother to pour it directly over the tea-bag anyway. Hotels are great places to find a really bad cup of tea in the USA.

As a minor corollary of this, another way to fail to make a cup of tea is to boil the water at altitude (e.g. up a mountain) or in an airplane. Because air pressure is reduced in both cases, the boiling point of water is reduced, and so you’re not likely to be able to get the water hot enough, so that’s another great option.

Boil the water incorrectly

What: you didn’t know that you could boil water incorrectly? Oh, but you can. Proper tea is made from water boiled once in a kettle, because if you boil it more than once, something happens: you’ve removed oxygen that was originally dissolved in the water, and this turns out to make it taste bad.

Use the wrong type(s) of milk

The easy way out is to use 2% (semi-skimmed) or 4% (whole fat) cow’s milk, but you don’t want to do that: that’ll give you proper tea. In order not to make tea properly, try UHT milk, almond milk, soy milk or similar. I haven’t tried ewe’s or goat’s milk, but I’m sure it’s easy to mess up a good cup of tea with those.

Condensed milk is an interesting option. I’ve had this in India – where I think they add it instead of sugar – and, well, it didn’t taste like tea to me.

Let it go cold

I’m not even talking about so called “iced tea” here. I’m just talking about tea which has been brewed, then poured, then left to go cold. Yuck.

Brew for too long

Quite apart from the fact that it may go cold, it’s possible to make tea just too strong. There’s a range that most people will accept as proper tea, but you can leave most tea too long, and it will be too strong when you pour it. How long that is will depend partly on preference, and partly on the type of tea you’re using. If you want to ensure you make a bad cup of tea, tune these carefully.

Don’t brew for long enough

Again, it’s easy to get this wrong. Some types of tea brew very quickly, and you’re in danger, for these types, of pouring a decent cup despite your best interests. Use this technique with care, and preferably in conjunction with others of the approaches listed.

Use a little strainer

It is possible to make a decent cuppa with one of those little strainers into which you place your tea leaves, but it’s difficult. The problem seems to be that in order to make a properly strong cup of tea, you’re going to need to put quite a lot of tea leaves into the strainer. This means that the water won’t be able to interact with most of them, as they’ll be stuffed in, and not able to circulate properly. You can try swishing it around a bit, but by the time you’ve got to this, the water will probably be too cold (see above).

Microwave

Microwave your water. This is wrong on so many levels (do a search online).

Use bad tea

Mainly, I think, because most US citizens have no idea how to make a cup of tea properly, they are willing to accept the stuff that passes for “tea bags” in the US as actual tea. There are exceptions, but the standard fare that you’ll find on supermarket shelves isn’t anything like actual tea, so it’s very simple to make a bad cup even if you’ve done everything else right.

My preferred cup of tea

After all of the above, you may be wondering what my preferred cup of tea looks like. Here’s a brief algorithm, which I’m happy to open source:

  1. freshly drawn water in the kettle
  2. kettle boiled
  3. a teaspoon of tea into a small pot (how heaped depends on the type), current favourites include:
  4. pour the boiling water into pot (I tend to use pots with built-in strainers)
  5. allow to brew (how long depends on the type of tea)
  6. pour from the pot into a large mug (not too large, or the tea will get too cold to drink before you get to the bottom)
  7. add semi-skimmed (2%) milk (not UHT)
  8. drink as soon as it’s cooled enough not to blister the inside of the mouth
  9. leave the bottom few millimetres to avoid ingesting any leaves which may have made their way into the mug.

So there you have it. That’s how I make a cup of tea.
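
Since the list above is described as an algorithm the author is happy to open source, here is a tongue-in-cheek sketch of it as code. The tea named in main and the brewing time are illustrative stand-ins only; as the article says, the right values depend on the tea.

```rust
// Tongue-in-cheek encoding of the tea "algorithm" above.
// The tea name and brewing time below are illustrative stand-ins.

use std::{thread, time::Duration};

fn make_tea(tea: &str, brew: Duration) {
    println!("1-2. boil freshly drawn water in the kettle (once only!)");
    println!("3.   put a teaspoon of {} into a small pot", tea);
    println!("4.   pour the just-boiled water into the pot");
    thread::sleep(brew); // 5. allow to brew (how long depends on the tea)
    println!("6.   pour from the pot into a large mug");
    println!("7.   add semi-skimmed (2%) milk, never UHT");
    println!("8-9. drink once cool enough, leaving the last few millimetres");
}

fn main() {
    make_tea("loose-leaf black tea", Duration::from_secs(240));
}
```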


1 – yes, that’s how to spell it.

2 – I’m assuming this was the sole reason. I’m a little hazy on the details, but it all seems to have turned out OK for both sides.

3 – who, given that he wrote both the books Animal Farm and 1984, knew a thing or two about how not to do things.

4 – yes, there’s an “h”, yes it’s aspirated. “‘erbal?” Not unless dropping aitches is part of your general dialect and accent, in which case, fine.

5 – I still drink gin, only not with tonic. It’s the tonic water that I’ve managed to convince my taste buds was the cause of the resulting … problem.

6 – Really, look!