Open source and, well, bad people

This article is a translation of
https://aliceevebob.com/2019/07/30/open-source-and-well-bad-people/
To people who write open source code, open source software looks like an honest thing.
You write the code, and good people like you add to it, test it, document it and use it.
How much better open source makes the world!

Even organisations and companies once called “evil empires” have embraced open source, become happier, more loving places, supported communities, and adopted and evangelised open source as a good thing.

Many open source licences are designed so that it is almost impossible for a single organisation to modify the code and profit from it without releasing those changes.
The licence is the soul of open source, and it really does make the world better.

I agree with most of this.

But I do wonder when I see these assumptions (whatever form they take) being swallowed whole and spread around.
Because, as you all know, not everyone who uses open source is a good person.

Crackers (hackers who do bad things) use open source.
Drug dealers use open source.
Human traffickers use open source.
Terrorists probably use it too.
They may even contribute patches, tests and documentation – though in practice I suspect they rarely do – but in general, they are not good people.

Some people will shrug and say, “Well, maybe there are a few people like that, but it’s a tolerable amount.” Far more people are helped by open source than are harmed by those who use it for bad ends, so they will happily keep contributing.
Or perhaps, rather than contributing to open source, they will simply accept the option with the lesser degree of harm, on the grounds that it is not going to benefit people anyway.

This actually works, and it is known as utilitarianism, after John Stuart Mill. (I’m not a philosopher, so please accept that I’m speaking very loosely.) It is sometimes explained like this:

“Actions are right in proportion as they tend to promote the overall happiness of humanity.”

I, too, hope that open source “promotes the overall happiness of humanity”. The problem is that it is not only criminals who use your open source code.

There are businesses that do distinctly “shady” things; there are governments that oppress their critics; there are police forces that surveil their citizens.
This, too, is your code, and it gets used for bad ends.

What is a bad end, though?
This is a common objection to utilitarian philosophy: it is hard to define what is good and what is bad.
Law-abiding people like us consider human trafficking to be bad, but some cases sit in a grey zone.

For example: tobacco manufacturers, petrochemical companies, plastics producers, organisations that do not support LGBTQ+ people, firearms manufacturers.
I have deliberately made the range broad, and the last example is chosen on purpose.

One of the early figures in open source activism is Eric Raymond, known as ESR. He has long supported the right to bear arms, has brought those publicly stated views into the hacker community, and vocally supports the right to own firearms. To ESR, this is “freedom”.

I don’t intend to criticise him here, but I don’t agree.
What it does make clear, though, is that what he considers good is different from what I consider good.
I am very liberal and accepting where LGBTQ+ people are concerned, but not everybody in the open source community will think the same way.
The open source community is often described as liberal, but that is not accepted as a general truth.

The Jargon File, published as a dictionary for hackers, describes the average hacker as:

Vaguely liberal-moderate, except for the strong libertarian contingent which rejects conventional left-right politics entirely. The only safe generalization is that hackers tend to be rather anti-authoritarian; thus, both conventional conservatism and “hard” leftism are rare. Hackers are far more likely than most non-hackers to
a) be aggressively apolitical, or
b) entertain peculiar or idiosyncratic political ideas and actually try to live by them on a day-to-day basis.

– The Jargon File
The description may be a little dated, but it is one that many people who see themselves as part of open source communities would recognise.

But what is clear is that we, as a community, cannot agree that “good organisations” should be the ones to decide what counts as a “good use” of open source code.
And even if we could, the chances of crafting a licence that could stop the people deemed bad are very low.
Original article:
https://aliceevebob.com/2019/07/30/open-source-and-well-bad-people/
30 July 2019 Mike Bursell

Immutability: my favourite superpower

As a security guy, I approve of defence in depth.

I’m a recent but dedicated convert to Silverblue, which I run on my main home laptop and which I’ll be putting onto my work laptop when I’m due a hardware upgrade in a few months’ time.  I wrote an article about Silverblue over at Enable Sysadmin, and over the weekend, I moved the laptop that one of my kids has over to it as well.  You can learn more about Silverblue over at the main Silverblue site, but in terms of usability, look and feel, it’s basically a version of Fedora.  There’s one key difference, however, which is that the operating system is mounted read-only, meaning that it’s immutable.

What does “immutable” mean?  It means that it can’t be changed.  To be more accurate, in a software context, it generally means that something can’t be changed during run-time.

Important digression – constant immutability

I realised as I wrote that final sentence that it might be a little misleading.  Many  programming languages have the concept of “constants”.  A constant is a variable (or set, or data structure) which is constant – that is, not variable.  You can assign a value to a constant, and generally expect it not to change.  But – and this depends on the language you are using – it may be that the constant is not immutable.  This seems to go against common sense[1], but that’s just the way that some languages are designed.  The bottom line is this: if you have a variable that you intend to be immutable, check the syntax of the programming language you’re using and take any specific steps needed to maintain that immutability if required.
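
To make the digression concrete, here’s a minimal sketch in Python, where “constants” are purely a convention checked by type checkers, not enforced at run-time:

```python
from typing import Final

# A type checker treats LIMITS as a constant: rebinding the name is flagged.
LIMITS: Final = [10, 20, 30]

# ...but the object it refers to is a mutable list, so this succeeds at
# run-time: a "constant" which is not immutable.
LIMITS.append(40)

# For actual immutability, choose an immutable type as well:
FROZEN_LIMITS: Final = (10, 20, 30)  # a tuple cannot be modified in place
```

The same trap exists elsewhere – a final reference in Java, for example, can still point at a mutable object – which is exactly why it’s worth checking the rules of the language you’re using.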

Operating System immutability

In Silverblue’s case, it’s the operating system that’s immutable.  You install applications in containers (of which more later), using Flatpak, rather than onto the root filesystem.  This means not only that the installation of applications is isolated from the core filesystem, but also that the ability for malicious applications to compromise your system is significantly reduced.  It’s not impossible[2], but the risk is significantly lower.

How do you update your system, then?  Well, what you do is create a new boot image which includes any updated packages that are needed, and when you’re ready, you boot into that.  Silverblue provides simple tools to do this: it’s arguably less hassle than the standard way of upgrading your system.  This approach also makes it very easy to maintain different versions of an operating system, or installations with different sets of packages.  If you need to test an application in a particular environment, you boot into the image that reflects that environment, and do the testing.  Another environment?  Another image.

We’re more interested in the security properties that this offers us, however.  Not only is it very difficult to compromise the core operating system as a standard user[3], but you are always operating in a known environment, and knowability is very much a desirable property for security, as you can test, monitor and perform forensic analysis from a known configuration.  From a security point of view (let alone what other benefits it delivers), immutability is definitely an asset in an operating system.

Container immutability

This isn’t the place to describe containers (also known as “Linux containers” or, less frequently or accurately these days, “Docker containers”) in detail, but they are basically collections of software that you create as images and then run as workloads on a host server (sometimes known as a “pod”).  One of the great things about containers is that they’re generally very fast to spin up (provision and execute) from an image, and another is that the format of that image – the packaging format – is well-defined, so it’s easy to create the images themselves.

From our point of view, however, what’s great about containers is that you can choose to use them immutably.  In fact, that’s the way they’re generally used: using mutable containers is generally considered an anti-pattern.  The standard (and “correct”) way to use containers is to bundle each application component and required dependencies into a well-defined (and hopefully small) container, and deploy that as required.  The way that containers are designed doesn’t mean that you can’t change any of the software within the running container, but the way that they run discourages you from doing that, which is good, as you definitely shouldn’t.  Remember: immutable software gives better knowability, and improves your resistance to run-time compromise.  Instead, given how lightweight containers are, you should design your application in such a way that if you need to, you can just kill the container instance and replace it with an instance from an updated image.
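
As a sketch of that pattern – using the Python Docker SDK, with image tags invented for the example – an update means replacing the instance, not patching it. Note the non-root user, which anticipates the next point:

```python
import docker

client = docker.from_env()

# Run the workload from a (hopefully small, well-defined) immutable image.
old = client.containers.run("example/web:1.0", detach=True, user="1000")

# An update is *not* "exec in and patch the running container". Instead,
# kill the instance and start a fresh one from the updated image.
old.stop()
old.remove()
new = client.containers.run("example/web:1.1", detach=True, user="1000")
```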

This brings us to two of the reasons that you should never run containers with root privilege:

  • there’s a temptation for legitimate users to use that privilege to update software in a running container, reducing knowability, and possibly introducing unexpected behaviour;
  • there are many more opportunities for compromise if a malicious actor – human or automated – can change the underlying software in the container.

Double immutability with Silverblue

I mentioned above that Silverblue runs applications in containers.  This means that you have two levels of security provided as default when you run applications on a Silverblue system:

  1. the operating system immutability;
  2. the container immutability.

As a security guy, I approve of defence in depth, and this is a classic example of that property.  I also like the fact that I can control what I’m running – and what versions – with a great deal more ease than if I were on a standard operating system.


1 – though, to be fair, the phrases “programming language” and “common sense” are rarely used positively in the same sentence in my experience.

2 – we generally try to avoid the word “impossible” when describing attacks or vulnerabilities in security.

3 – as with many security issues, once you have sudo or root access, the situation is significantly degraded.


Turtles – and chains of trust

There’s a story about turtles that I want to tell.

One of the things that confuses Brits is that many Americans[1] don’t know the difference between tortoises and turtles[2], whereas we (who have no species of either type which are native to our shores) seem to have no problem differentiating them[3].  This is the week when Americans[1] like to bash us Brits over the little revolution they had a couple of centuries ago, so I don’t feel too bad about giving them a little hassle about this.

As it happens, there’s a story about turtles that I want to tell. It’s important to security folks, to the extent that you may hear a security person just say “turtles” to a colleague in criticism of a particular scheme, which will just elicit a nod of agreement: they don’t like it. There are multiple versions of this story[4]: here’s the one I tell:

A learned gentleman[5] is giving a public lecture.  He has talked about the main tenets of modern science, such as the atomic model, evolution and cosmology.  At the end of the lecture, an elderly lady comes up to him.

“Young man,” she says.

“Yes,” says he.

“That was a very interesting lecture,” she continues.

“I’m glad you enjoyed it,” he replies.

“You are, however, completely wrong.”

“Really?” he says, somewhat taken aback.

“Yes.  All that rubbish about the Earth hovering in space, circling the sun.  Everybody knows that the Earth sits on the back of a turtle.”

The lecturer, spotting a hole in her logic, replies, “But madam, what does the turtle sit on?”

The elderly lady looks at him with a look of disdain.  “What a ridiculous question!  It’s turtles all the way down, of course!”

The problem with the elderly lady’s assertion, of course, is one of infinite regression: there has to be something at the bottom. The reason that this is interesting to security folks is that they know that systems need to have a “bottom turtle” at some point.  If you are to trust a system, it needs to sit on something: this is typically called the “TCB”, or Trusted Compute Base, and, in most cases, needs to be rooted in hardware.  Even saying “rooted in hardware” is not enough: exactly what hardware you trust, and where, depends on a number of factors, including what you know about your hardware supply chain; what you feel about motherboards; what your security posture is; how realistic it is that State Actors might try to attack you; how deeply you want to delve into the hardware stack; and, ultimately, just how paranoid you are.

Principles for chains of trust

When you are building a system which you need to have some trust in, you will typically talk about the chain of trust, from the bottom up.  This idea of a chain of trust is very important, and very pervasive, within security.  It allows for some important principles:

  • there has to be a root of trust somewhere (the “bottom turtle”);
  • the chain is only as strong as its weakest link (and attackers will find it);
  • be explicit about each of the links in the chain;
  • realise that some of the links in the chain may change (e.g. if software is updated);
  • be aware that once you have lost trust in a chain, you need to rebuild it from at least the layer below the one in which you have lost trust;
  • simple chains (with no “joins” with other chains of trust) are much, much simpler to validate and monitor than more complex ones.

Software/hardware systems are not the only place in which you will encounter chains of trust: in fact, you come across them every time you make a TLS[6] connection to a web site (you know: that green padlock icon in the address bar).  In this case, there’s a chain (sometimes short, sometimes long) of certificates from a “root CA” (a trusted party that your browser knows about) down to the organisation (or department or sub-organisation) running the web site to which you’re connecting.  Assuming that each link in the chain trusts the next link to be who they say they are, the chain of signatures (turned into a certificate) can be checked, to give an indication, at least, that the site you’re visiting isn’t a spoof one by somebody pretending to be, for example, your bank.  In this case, the bottom turtle is the root CA[7], and its manifestation in the chain of trust is its root certificate.
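
You can watch this particular chain-of-trust check happen from code. In this Python sketch (the hostname is arbitrary), create_default_context() loads the root CA certificates your platform trusts – the bottom turtle – and the handshake fails if the server’s chain doesn’t link back to one of them:

```python
import socket
import ssl

ctx = ssl.create_default_context()  # loads the trusted root CAs

with socket.create_connection(("example.com", 443)) as sock:
    # wrap_socket performs the TLS handshake and validates the certificate
    # chain up to a trusted root, raising ssl.SSLCertVerificationError if
    # any link in the chain doesn't hold.
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print("issuer:", tls.getpeercert()["issuer"])
```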

And chains of trust aren’t restricted to the world of IT, either: supply chains care a lot about chains of trust.  Can you be sure that the diamond in the ring you bought from your local jewellery store, who got it from an artisan goldsmith, who got it from a national diamond chain, did not originally come from a “blood diamond” nation?  Can you be sure that the replacement part for your car, which you got from your local independent dealership, is an original part, and can the manufacturer be sure of the quality of the materials they used?  Blockchains are offering some interesting ways to help track these sorts of supply chains, and can even be applied to supply chains in software.

Chains of trust are everywhere we look.  Some are short, and some are long.  In most cases, there will be a need to employ transitive trust – I need to believe that whoever created my browser checked the root CA, just as you need to believe that your local dealership verified that the replacement part came from the right place – because the number of links that we can verify ourselves is typically low.  This may be due to a variety of factors, including time, expertise and visibility.  But the more we are aware of the fact that there is a chain of trust in any particular situation, the more we can make conscious decisions about the amount of trust we should put in it, rather than making assumptions about the safety, security or validation of something we are buying or using.


1 – citizens of the US of A.

2 – have a look on a stock photography site like Pixabay if you don’t believe me.

3 – tortoises are land-based, turtles are aquatic, I believe.

4 – Wikipedia has a good article explaining both the concept and the story’s etymology.

5 – the genders of the protagonists are typically as I tell, which tells you a lot about the historical context, I’m afraid.

6 – this used to be “SSL”, but if you’re still using SSL, you’re in trouble: it’s got lots of holes in it!

7 – or is it?  You could argue that the HSM that (hopefully) houses the root CA, or the processes that protect it, could be considered the bottom turtle.  For the purposes of this discussion, however, the extent of the “system” is the certificate chain and its signers.

Building Evolutionary Architectures – for security and for open source

Consider the fitness functions, state them upfront, have regular review.

Ford, N., Parsons, R. & Kua, P. (2017) Building Evolutionary Architectures: Support Constant Change. Sebastopol, CA: O’Reilly Media.

https://www.oreilly.com/library/view/building-evolutionary-architectures/9781491986356/

This is my first book review on this blog, I think, and although I don’t plan to make a habit of it, I really like this book, and the approach it describes, so I wanted to write about it.  Initially, this article was simply a review of the book, but as I got into it, I realised that I wanted to talk about how the approach it describes is applicable to a couple of different groups (security folks and open source projects), and so I’ve gone with it.

How, then, did I come across the book?  I was attending a conference a few months ago (DeveloperWeek San Diego), and decided to go to one of the sessions because it looked interesting.  The speaker was Dr Rebecca Parsons, and I liked what she was talking about so much that I ordered this book, whose subject was the topic of her talk, to arrive at home by the time I would return a couple of days later.

Building Evolutionary Architectures is not a book about security, but it deals with security as one application of its approach, and very convincingly.  The central issue that the authors – all employees of Thoughtworks – identify is, simplified, that although we’re good at creating features for applications, we’re less good at creating, and then maintaining, broader properties of systems. This problem is compounded, they suggest, by the fast and ever-changing nature of modern development practices, where “enterprise architects can no longer rely on static planning”.

The alternative that they propose is to consider “fitness functions”, “objectives you want your architecture to exhibit or move towards”.  Crucially, these are properties of the architecture – or system – rather than features or specific functionality.  Tests should be created to monitor the specific functions, but they won’t be your standard unit tests, nor will they necessarily be “point in time” tests.  Instead, they will measure a variety of issues, possibly over a period of time, to let you know whether your system is meeting the particular fitness functions you are measuring.  There’s a lot of discussion of how to measure these fitness functions, but I would have liked even more: from my point of view, it was one of the most valuable topics covered.
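
To give a feel for the difference from a unit test, here is a hedged Python sketch of one possible fitness function (the URL, sample size and latency budget are invented for the example): it measures a property of a running system, possibly repeatedly over its lifetime, rather than a single behaviour at a single point in time:

```python
import statistics
import time
import urllib.request

def latency_fitness(url: str, samples: int = 20, budget_ms: float = 200.0) -> bool:
    """Fitness function: does the service keep its median latency within budget?

    Unlike a unit test, this measures a *property* of the system, and would
    typically be re-run throughout the system's lifetime, not just once.
    """
    timings_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        timings_ms.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings_ms) <= budget_ms

# e.g. wired into CI or monitoring:
# assert latency_fitness("https://service.example/health")
```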

Frankly, the above might be enough to recommend the book, but there’s more.  They advocate strongly for creating incremental change to meet your requirements (gradual, rather than major changes) and “evolvable architectures”, encouraging you to realise that:

  1. you may not meet all your fitness functions at the beginning;
  2. applications which may have met the fitness functions at one point may cease to meet them later on, for various reasons;
  3. your architecture is likely to change over time;
  4. your requirements, and therefore the priority that you give to each fitness function, will change over time;
  5. that even if your fitness functions remain the same, the ways in which you need to monitor them may change.

All of these are, in my view, extremely useful insights for anybody designing and building a system: combining them with architectural thinking is even more valuable.

As is standard for modern O’Reilly books, there are examples throughout, including a worked fake consultancy journey of a particular company with specific needs, leading you through some of the practices in the book.  At times, this felt a little contrived, but the mechanism is generally helpful.  There were times when the book seemed to stray from its core approach – which is architectural, as per the title – into explanations through pseudo code, but these support one of the useful aspects of the book, which is giving examples of what architectures are more or less suited to the principles expounded in the more theoretical parts.  Some readers may feel more at home with the theoretical, others with the more example-based approach (I lean towards the former), but all in all, it seems like an appropriate balance.  Relating these to the impact of “architectural coupling” was particularly helpful, in my view.

There is a useful grounding in some of the advice in Conway’s Law (“Organizations [sic] which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”) which led me to wonder how we could model open source projects – and their architectures – based on this perspective.  There are also (as is also standard these days) patterns and anti-patterns: I would generally consider these a useful part of any book on design and architecture.

Why is this a book for security folks?

The most important thing about this book, from my point of view as a security systems architect, is that it isn’t about security.  Security is mentioned, but is not considered core enough to the book to merit a mention in the index.  The point, though, is that the security of a system – an embodiment of an architecture – is a perfect example of a fitness function.  Taking this as a starting point for a project will help you do two things:

  • avoid focussing on features and functionality, and look at the bigger picture;
  • consider what you really need from security in the system, and how that translates into issues such as the security posture to be adopted, and the measurements you will take to validate it through the lifecycle.

Possibly even more important than those two points is that it will force you to consider the priority of security in relation to other fitness functions (resilience, maybe, or ease of use?) and how the relative priorities will – and should – change over time.  A realisation that we don’t live in a bubble, and that our priorities are not always the same as those of other stakeholders in a project, is always useful.

Why is this a book for open source folks?

Very often – and for quite understandable and forgiveable reasons – the architectures of open source projects grow organically at first, needing major overhauls and refactoring at various stages of their lifecycles.  This is not to say that this doesn’t happen in proprietary software projects as well, of course, but the sometimes frequent changes in open source projects’ emphasis and requirements, the ebb and flow of contributors and contributions and the sometimes, um, reduced levels of documentation aimed at end users can mean that features are significantly prioritised over what we could think of as the core vision of the project.  One way to remedy this would be to consider the appropriate fitness functions of the project, to state them upfront, and to have a regular cadence of review by the community, to ensure that they are:

  • still relevant;
  • correctly prioritised at this stage in the project;
  • actually being met.

If any of the above come into question, it’s a good time to consider a wider review by the community, and maybe a refactoring or partial redesign of the project.

Open source projects have – quite rightly – various different models of use and intended users.  One of the happenstances that can negatively affect a project is when it is identified as a possible fit for a use case for which it was not originally intended.  Academic software which is designed for accuracy over performance might not be a good fit for corporate research, for instance, in the same way that a project aimed at home users which prioritises minimal computing resources might not be appropriate for a high-availability enterprise roll-out.  One of the ways of making this clear is by being very clear up-front about the fitness functions that you expect your project to meet – and, vice versa, about the fitness functions you are looking to fulfil when you are looking to select a project.  It is easy to focus on features and functionality, and to overlook the more non-functional aspects of a system, and fitness functions allow us to make some informed choices about how to balance these decisions.

Trust & choosing open source

Your impact on open source can be equal to that of others.


A long time ago, in a standards body far, far away, I was involved in drafting a document about trust and security. That document rejoices in the name ETSI GS NFV-SEC 003: Network Functions Virtualisation (NFV);NFV Security; Security and Trust Guidance[1], and section 5.1.6.3[2] talks about “Transitive trust”.  Convoluted and lengthy as the document is, I’m very proud of it[3], and I think it tackles a number of  very important issues including (unsurprisingly, given the title), a number of issues around trust.  It defines transitive trust thus:

“Transitive trust is the decision by an entity A to trust entity B because entity C trusts it.”

It goes on to disambiguate transitive trust from delegated trust, where C knows about the trust relationship.
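
A toy sketch of the distinction (the entity names are invented): the point is that C contributes to A’s decision without necessarily knowing about it, whereas delegated trust would add that knowledge on C’s part.

```python
# The direct trust relationships that actually exist.
trusts = {("A", "C"), ("C", "B")}

def trusts_directly(x: str, y: str) -> bool:
    return (x, y) in trusts

def trusts_transitively(a: str, b: str) -> bool:
    """A decides to trust B because some C that A trusts, trusts B."""
    entities = {e for pair in trusts for e in pair}
    return any(trusts_directly(a, c) and trusts_directly(c, b) for c in entities)

assert not trusts_directly("A", "B")
assert trusts_transitively("A", "B")  # via C, who need not know of A's reliance
```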

At no point in the document does it mention open source software.  To be fair, we were trying to be even-handed and show no favour towards any type of software or vendors – many of the companies represented on the standards body were focused on proprietary software – and I wasn’t even working for Red Hat at the time.

My move to Red Hat, and, as it happens, generally away from the world of standards, has led me to think more about open source.  It’s also led me to think more about trust, and how people decide whether or not to use open source software in their businesses, organisations and enterprises.  I’ve written, in particular, about how, although open source software is not ipso facto more secure than proprietary software, the chances of it being more secure, or made more secure, are higher (in Disbelieving the many eyes hypothesis).

What has this to do with trust, specifically transitive trust?  Well, I’ve been doing more thinking about how open source and trust are linked together, and distributed trust is a big part of it.  Distributed trust and blockchain are often talked about in the same breath, and I’m glad, because I think that all too often we fall into the trap of ignoring the fact that there are definitely trust relationships associated with blockchain – they are just often implicit, rather than well-defined.

What I’m interested in here, though, is the distributed, transitive trust as a way of choosing whether or not to use open source software.  This is, I think, true not only when talking about non-functional properties such as the security of open source but also when talking about the software itself.  What are we doing when we say “I trust open source software”?  We are making a determination that enough of the people who have written and tested it have similar requirements to mine, and that their expertise, combined, is such that the risk to my using the software is acceptable.

There’s actually a lot going on here, some of which is very interesting:

  • we are trusting architects and designers to design software to meet our use cases and requirements;
  • we are trusting developers to implement code well, to those designs;
  • we are trusting developers to review each others’ code;
  • we are trusting documentation folks to document the software correctly;
  • we are trusting testers to write, run and check tests which are appropriate to my use cases;
  • we are trusting those who deploy the code to run it in ways which are similar to my use cases;
  • we are trusting those who deploy the code to report bugs;
  • we are trusting those who receive bug reports to fix them as expected.

There’s more, of course, but that’s definitely enough to get us going.  Of course, when we choose to use proprietary software, we’re trusting people to do that, but in this case, the trust relationship is much clearer, and much tighter: if I don’t get what I expect, I can choose another vendor, or work with the original vendor to get what I want.

In the case of open source software, it’s all more nebulous: I may be able to identify at least some of the entities involved (designers, software engineers and testers, for example), but the amount of power that I as a consumer of the software have over their work is likely to be low.  There’s a weird almost-paradox here, though: you can argue that for proprietary software vendors, my power over the direction of the software is higher (I’m paying them or not paying them), but my direct visibility into what actually goes on, and my ability to ensure that I get what I want is reduced when compared to the open source case.

That’s because, for open source, I can be any of the entities outlined above.  I – or those in my organisation – can be architect, designer, document writer, tester, and certainly deployer and bug reporter.  When you realise that your impact on open source can be equal to that of others, the distributed trust becomes less transitive.  You understand that you have a say equal to that of all the other entities in the creation, maintenance, requirements and quality of the software which you are running, and then you become part of a network of trust relationships which are distributed, but at less of a remove than that which you’ll experience when buying proprietary software.

Why, then, would anybody buy or license open source software from a vendor?  Because that way, you can address other risks – around support, patching, training, etc. – whilst still enjoying the benefits of the distributed trust network that I’ve outlined above.  There’s a place for those who consume directly from the source, but it doesn’t suit the risk appetite of all software consumers – including those who are involved in the open source community themselves.

Trust is a complex issue, and the ways in which we trust other things and other people is complex, too (you’ll find a bit of an introduction in Of different types of trust), but I think it’s desperately important that we examine and try to understand the sorts of decisions we make, and why we make them, in order to allow us to make informed choices around risk.


1 – if you’ve not been involved in standards creation, this may fill you with horror, but if you have been so involved, this sort of title probably feels normal.  You may need help.

2 – see 1.

3 – I was one of two “rapporteurs”, or editors of the document, and wrote a significant part of it, particularly the sections around trust.

What’s an HSM?

HSMs are not right for every project, but form an important part of our armoury.


Another week, another TLA[1].  This time round, it’s Hardware Security Module: an HSM.  What, then, is an HSM, what is it used for, and why should I care?  Before we go there, let’s think a bit about keys: specifically, cryptographic keys.

The way that most cryptography works these days is that the algorithms to implement a particular primitive[3] are public, and it’s generally accepted that it doesn’t matter whether you know what the algorithm is, or how it works, as it’s the security of the keys that matters.  To give an example: I plan to encrypt a piece of data under the AES algorithm[4], which allows for a particular type of (symmetric) encryption.  There are two pieces of data which are fed into the algorithm:

  1. the data you want to encrypt (the cleartext);
  2. a key that you’ve chosen to encrypt it.

Out comes one piece of data:

  1. the encrypted text (the ciphertext).

In order to decrypt the ciphertext, you feed that and the key into the AES algorithm, and the original cleartext comes out.  Everything’s great – until somebody gets hold of the key.
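
The flow just described can be sketched with the Python cryptography package’s AES-GCM primitive. (This mode also needs a nonce – one of the nuances glossed over above – which doesn’t change the point: everything hinges on the key.)

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the secret everything depends on
nonce = os.urandom(12)                      # AES-GCM also needs a unique nonce

ciphertext = AESGCM(key).encrypt(nonce, b"attack at dawn", None)
cleartext = AESGCM(key).decrypt(nonce, ciphertext, None)
assert cleartext == b"attack at dawn"

# Anyone holding `key` (plus the non-secret nonce) can do exactly the same.
```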

This is where HSMs come in.  Keys are vital, and they are vulnerable:

  • at creation time – if I can trick you into creating a key some of whose bits I can guess, I increase my chances of being able to decrypt your ciphertext;
  • during use – while you’re doing the encryption or decryption of your data, your key will be in memory, which means that if I can snoop into that memory, I can get it (see also below for information on “side channel attacks”);
  • while stored – unless you protect your key while it’s “at rest”, and waiting to be used, I may have opportunities to get it;
  • while being transferred – if you store your keys somewhere different to the place in which you’re using it, I may have an opportunity to intercept it as it moves to the place it will be used.

HSMs can help in one way or another with all of these pieces, but why do we need them?  The key reason is that there are times when you can’t be certain that the system(s) you are using for creating, using, storing and transferring keys are as secure as you’d like.  If the keys we’re talking about are for encrypting a few emails between you and your spouse, well, you might find it embarrassing if they were compromised, but if these keys are ones from which, say, you derive all of the credit card chip keys for an entire bank, then you have a rather larger problem.  When it comes down to it, somebody with sufficient privilege on a standard computing system can look at any part of memory – unless there’s a TEE[5] (Oh, how I love my TEE (or do I?)) – and if they can look at the memory, they can see the key.

Worse than this, there are occasions when even if you can’t see into memory, you might be able to derive enough information about a key – or the ciphertext or cleartext – to be able to mount an attack on it.  Attacks of this type are generally called “side channel attacks”, and you can think of them as a little akin to being able to work out the number of cylinders and valves a car[6] engine has by listening to it through the bonnet[7].  The engine leaks information about itself, even though it’s not designed with that in mind.  HSMs are (generally) good at preventing both types of attacks: it’s what they’re designed to do.

Here, then, is a definition:

An HSM is a piece of hardware with protected storage which can perform cryptographic operations, attached to a system – via a network connection or other connection such as PCI – and which has physical protection from various attacks, from side attacks to somebody physically levering open the case and attaching wires to important components so that they can read the electrical signals.

Many HSMs undergo testing to get certification against certain standards such as “FIPS 140” to show their ability to withstand various types of attack.

Here are the main uses for HSMs.

Key creation

Creation of keys is, as alluded to above, a very important operation, and one where side attacks have proved very effective in the past.  HSMs can provide safe(r) key generation, and ensure appropriate levels of randomness (entropy) for the required strength of key.
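
An HSM does this with dedicated hardware entropy sources; the software analogue of the same principle, in Python, is choosing a cryptographically secure generator for key material in the first place:

```python
import random
import secrets

# NOT for key material: the default generator (Mersenne Twister) is
# deterministic and predictable once enough output has been observed.
weak = random.randbytes(32)  # Python 3.9+

# For key material: secrets draws from the OS CSPRNG.
strong = secrets.token_bytes(32)
```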

Key storage

HSMs are typically designed so that if somebody tries to break into them, they will delete any keys which are stored within them, so they’re a good place to store your keys.

Cryptographic operations

Rather than putting your keys at risk by transferring them to another system, and away from the safety of the HSM, why not move the cleartext to the HSM (encrypted under a transport key, preferably), get the HSM to do the encryption with the keys that it already holds, and then send the ciphertext back (encrypted under a transport key[8])?  This reduces opportunities for attacks during transport and during use, and is a key use for HSMs.
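
The HSM’s side of that exchange happens inside the device, but the transport-key idea itself is simple enough to sketch – a Python toy, assuming a transport key has already been agreed with the HSM:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_for_transport(transport_key: bytes, payload: bytes) -> bytes:
    """Protect data for the trip to or from the HSM."""
    nonce = os.urandom(12)
    return nonce + AESGCM(transport_key).encrypt(nonce, payload, None)

def unwrap_from_transport(transport_key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(transport_key).decrypt(nonce, ciphertext, None)

# Cleartext goes to the HSM wrapped; ciphertext comes back wrapped; the
# working keys themselves never leave the HSM.
```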

General computing operations

Not all HSMs support this use (almost all will support the others), but if you have sensitive operations with lots of keys and algorithms – which, in the case of AI/ML, for instance, may be sensitive (unlike the cryptographic primitives we were talking about before) – then it is possible to write applications specifically to run on an HSM.  This is not a simple undertaking, however, as the execution environment provided is likely to be constrained.  It is difficult to do “right”, and easy to make mistakes which may leave you with a significantly less secure environment than you had thought.

Conclusion – should I use HSMs?

HSMs are excellent as roots of trust for PKI [9] projects and similar.  Using them can be difficult, but most these days should provide a PKCS#11 interface which simplifies the most common operations.  If you have sensitive key or cryptographic requirements, designing HSM use into your system can be a sensible step, but knowing how best to use them must be part of the architecture and design stages, well before implementation.  You should also take into account that operation of HSMs must be managed very carefully, from provisioning through everyday use to de-provisioning.  Use of an HSM in the cloud may make sense, but they are expensive and do not scale particularly well.
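
As a sketch of what “a PKCS#11 interface simplifies the most common operations” can look like from Python, here is the shape of it using the python-pkcs11 package; the module path, token label and PIN below are assumptions for the example, with SoftHSM standing in for a real device:

```python
import pkcs11

# Path, label and PIN are illustrative; a real deployment points at the
# vendor's PKCS#11 module for the actual HSM.
lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")
token = lib.get_token(token_label="demo-token")

with token.open(user_pin="1234") as session:
    # The AES key is generated *inside* the token and never leaves it.
    key = session.generate_key(pkcs11.KeyType.AES, key_length=256)
    iv = session.generate_random(128)  # bits of randomness from the token
    ciphertext = key.encrypt(b"sensitive data", mechanism_param=iv)
```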

HSMs, then, are suited to very particular use cases of highly sensitive data and operations – it is no surprise that their deployment is most common within military, government and financial settings. HSMs are not right for every project, by any means, but form an important part of our armoury for the design and operation of sensitive systems.


1 – Three Letter Acronym[2]

2 – keep up, or we’ll be here for some time.

3 – cryptographic building block.

4 – let’s pretend there’s only one type of AES for the purposes of this example.  In fact, there are a number of nuances around this example which I’m going to gloss over, but which shouldn’t be important for the point I’m making.

5 – Trusted Execution Environment.

6 – automobile, for our North American friends.

7 – hood.  Really, do we have to do this every time?

8 – why do you need to encrypt something that’s already encrypted?  Because you shouldn’t use the same key for two different operations.

9 – Public Key Infrastructure.

Announcing Enarx

With luck, this post will be published to coincide with a demo at Red Hat Summit 2019 in Boston.

The demo, given by my colleague Nathaniel McCallum, is an early incarnation of Enarx, a project which a few of us at Red Hat have been working on for the past few months and which is now ready to be announced to the world. We have code, we have a demo, we have a GitHub repository, and we have a logo: what more could a project need?

Well: people.

What’s the problem?

When you run software (a workload) on a system (a host), in the cloud or on premises, there are lots of layers. You rarely think about them, but they really are there. Here is an example of the layers in a typical cloud virtualisation architecture:

The different colours represent different entities which “own” different layers or sets of layers.

classic-cloud-virt-arch

Below is a similar diagram, this time of a typical cloud container architecture:

As above, the colours represent the different “owners” of the various layers or sets of layers.

cloud-container-arch

These owners are different types of entity: hardware vendors, CSPs (cloud service providers), OEMs, operating system vendors, middleware vendors, application vendors – and then there is the owner of the workload. The layers are actually different for each workload that runs on a host; and even where they are the same, the versions of the layer instances may differ: a different BIOS version, a different bootloader, a different kernel version.

Now, in many ways you may not care about any of this: a CSP may abstract away the differences in layers and versions so that you never notice. But this is a blog about security, for security people, which means that you, reading it, do care.

The reason we have to care is not just the different versions and layers, but the different entities: we have to be able to run sensitive workloads on a stack like this with confidence, trusting it.

We have to trust every layer, and its owner, not only to do what they say they will do, but also not to be compromised. That is a very big ask when it comes to running sensitive workloads.

What is Enarx?

Enarx is a project which aims to address this problem of having to trust all of those layers.

We want to allow those who run workloads to reduce the number of layers – and owners – that they must trust to a minimum.

Using TEEs (Trusted Execution Environments; see https://aliceevebob.com/2019/02/26/oh-how-i-love-my-tee-or-do-i/), we plan to provide something like the following architecture:

reduced-arch

In this world, you have to trust the CPU and the firmware, and you need to trust some middleware.

That middleware is the Enarx piece.

But you don’t need to trust all of the other layers, because we will leverage the capabilities of the TEE to ensure the integrity and confidentiality of your application. The Enarx project will provide attestation of the TEE, so that you can be sure you are running on a real, trusted TEE; the code is open source and auditable, so that you can trust the layer directly below your application.
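
This is not Enarx’s actual interface – purely a conceptual Python sketch of what attestation boils down to for the party deploying a workload: a measurement of the environment, signed by a key you already trust. (Real TEEs, AMD SEV included, have their own key hierarchies and report formats.)

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def environment_is_trusted(vendor_key: Ed25519PublicKey,
                           measurement: bytes,
                           signature: bytes,
                           expected_measurement: bytes) -> bool:
    """Deploy the workload only if a key we already trust signed the
    measurement, and the measurement is the one we expected."""
    try:
        vendor_key.verify(signature, measurement)
    except InvalidSignature:
        return False
    return measurement == expected_measurement
```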

The initial code is public, and – at this point it runs on AMD’s SEV TEE – it works well enough for us to be able to show it to you.

Remember, though: whether your application meets your security requirements is still up to you 🙂

Want to know more?

The easiest way is to check out the Enarx GitHub: https://github.com/enarx

For now it’s just code, but we will be adding more information as we go.

Please bear with us, though: there are only a few people on the project at this point. A blog was on our to-do list, so that’s where I’ve started.

We would love members of the community to join the project. Right now we are working at a very low layer, building up the knowledge we need, and specific hardware is required. Oh, and if you are an early-boot or low-level KVM hacker, we particularly need you.

If you comment on this article, we will of course respond.

Original article:
https://aliceevebob.com/2019/05/07/announcing-enarx/
7 May 2019 Mike Bursell

Tags: security, Enarx, open source, cloud

Of different types of trust

What is doing the trusting, and what does the word actually even mean?

As you may have noticed if you regularly read this blog, it’s not uncommon for me to talk about trust.  In fact, one of the earliest articles that I posted – over two years ago, now – was entitled “What is trust?”.  I started thinking about this topic seriously nearly twenty years ago, specifically when thinking about peer to peer systems, and how they might establish trust relationships, and my interest has continued since, with a particular fillip during my time on the Security Working Group for ETSI NFV[1], where we had some very particular issues that we wanted to explore and I had the opportunity to follow some very interesting lines of thought.  More recently, I introduced Enarx, whose main starting point is that we want to reduce the number of trust relationships that you need to manage when you deploy software.

It was around the time of the announcement that I realised quite how much of my working life I’ve spent thinking and talking about trust;

  • how rarely most other people seem to have done the same;
  • how little literature there is on the subject; and
  • how interested people often are to talk about it when it comes up in a professional setting.

I’m going to clarify the middle bullet point in a minute, but let me get to my point first, which is this: I want to do a lot more talking about trust via this blog, with the possible intention of writing a book[2] on the subject.

Here’s the problem, though.  When you use the word trust, people think that they know what you mean.  It turns out that they almost never do.  Let’s try to tease out some of the reasons for that by starting with four fairly innocuous, simple-looking statements:

  1. I trust my brother and my sister.
  2. I trust my bank.
  3. My bank trusts its IT systems.
  4. My bank’s IT systems trust each other.

When you make four statements like this, it quickly becomes clear that something different is going on in each case.  I stand by my definition of trust and the three corollaries, as expressed in “What is trust?”.  I’ll restate them here in case you can’t be bothered to follow the link:

  • “Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation.”
  • My first corollary: “Trust is always contextual.”
  • My second corollary: “One of the contexts for trust is always time.”
  • My third corollary: “Trust relationships are not symmetrical.”
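
One way to see what the definition and corollaries rule in and out is to sketch them as a data structure – a Python toy, with field names of my own invention:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Trust:
    truster: str           # the entity holding the assurance...
    trustee: str           # ...about this entity
    expectation: str       # the particular actions expected
    context: str           # first corollary: trust is always contextual
    valid_until: datetime  # second corollary: one context is always time

# Third corollary: nothing here is symmetrical. A Trust(a, b, ...) record
# implies nothing about the existence of any Trust(b, a, ...) record.
```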

These all hold true for each of the statements above – although they may not be self-evident in the rather bald way that I’ve put them.  What’s more germane to the point I want to make today, however, and hopefully obvious to you, dear reader[4], is that the word “trust” signifies something very different in each of the four statements.

  • Case 1 – my trusting my brother and sister.  This is about trust between individual humans – specifically my trust relationship to my brother, and my trust relationship to my sister.
  • Case 2 – my trusting my bank.  This is about trust between an individual and an organisation: specifically a legal entity with particular services and structure.
  • Case 3 – the bank trusting its IT systems.  This is about an organisation trusting IT systems, and it suddenly feels like we’ve moved into a very different place from the initial two cases.  I would argue that there’s a huge difference between the first and second case as well, actually, but we are often lulled into a false sense of equivalence because when we interact with a bank, it’s staffed by people, and also has many of the legal protections afforded to an individual[5]. There are still humans in this case, though, in that one assumes that it is the intention of certain humans who represent the bank to have a trust relationship with certain IT systems.
  • Case 4 – the IT systems trusting each other.  We’re really not in Kansas anymore with this statement[6].  There are no humans involved in this set of trust relationships, unless you’re attributing agency to specific systems, and if so, which? What, then, is doing the trusting, and what does the word actually even mean?

It’s clear, then, that we can’t just apply the same word, “trust” to all of these different contexts and assume that it means the same thing in each case.  We need to differentiate between them.

I stated, above, that I intended to clarify my statement about the lack of literature around trust.  Actually, there’s lots and lots of literature around trust, but it deals almost exclusively with cases 1 and 2 above.  This is all well and good, but we spend so much time talking about trust with regards to systems (IT or computer systems) that we deserve, as a community, some clarity about what we mean, what assumptions we’re making, and what the ramifications of those assumptions are.

That, then, is my mission.  It’s certainly not going to be the only thing that I write about on this blog, but when I do write about trust, I’m going to try to set out my stall and add some better definition and clarification to what I – and we – are talking about.


0 – apropos of nothing in particular, I often use pixabay for my images.  This is one of the suggestions if you search on “trust”, but what exactly is going on here?  The child is trusting the squirrel thing to do what?  Not eat its nose?  Not stick its claws up its left nostril?  I mean, really?

1 – ETSI is a telco standards body, NFV is “Network Function Virtualisation”.

2 – which probably won’t just consist of a whole bunch of these articles in a random order, with the footnotes taken out[3].

3 – because, if nothing else, you know that I’m bound to keep the footnotes in.

4 – I always hope that there’s actually more than one of you, but maybe it’s just me, the solipsist, writing for a world conjured by my own brain.

5 – or it may do, depending on your jurisdiction.

6 – I think I’ve only been to Kansas once, actually.