7 tips for kicking off an open source project

It’s not really about project mechanics at all


I’m currently involved – heavily involved – in Enarx, an open source (of course!) project to allow you to run sensitive workloads on untrusted hosts.  I’ve been involved in various open source projects over the years, but this is the first for which I’m one of the founders.  We’re at the stage now where we’ve got a fair amount of code, quite a lot of documentation, a logo and (important!) stickers.  The project should hopefully be included in a Linux Foundation group – the Confidential Computing Consortium – so things are going very well indeed.  It seemed like a useful point to reflect on some of the things we did to get things going.  To be clear, Enarx is a particular type of project: one that we believe has commercial and enterprise applications.  It’s also not mature yet, and we’ll have hurdles and challenges along the way.  What’s more, the route we’ve taken won’t be right for all projects, but hopefully there’s enough here to give a few pointers to other projects, or to people considering starting one up.

The first thing I’d say is that there’s lots of help to be had out there.  I’d start with Opensource.com, where you’ll find lots of guidance.  I’d then follow up by saying that however much of it you follow, you’ll still get things wrong.  Anyway, here’s my list of things to consider.

1. Aim for critical mass

I’m very lucky to work at the amazing Red Hat, where everything we do is open source, and where we take open source and community very seriously.  I’ve heard it called a “critical mass” company: in order to get something taken seriously, you need to get enough people interested in it that it’s difficult to ignore. The two co-founders – Nathaniel McCallum and I – are both very enthusiastic about the project, and have spent a lot of time gaining sponsors within the organisation (you know who you are, and we thank you – we also know we haven’t done a good enough job with you on all occasions!), and “selling” it to engineers to get them interested enough that it was difficult to stop.  Some projects just bobble along with one or two contributors, but if you want to attract people and attention, getting a good set of people together who can get momentum going is a must.

2. Create a demo

If you want to get people involved, then a demo is great.  It doesn’t necessarily need to be polished, but it does need to show that what you’re doing is possible, and that you know what you’re doing.  For early demos, you may just be talking people through some command line output: that’s fine, if what you’re providing isn’t a UI product.  Being able to talk about what you’re doing, and to convey both your passion and the importance of the project, is a great boon.  People like to be able to see or experience something, and it’s much easier to communicate your enthusiasm if there’s something real that expresses it.

3. Choose a licence

Once you have code, and it’s open source, you want other people to be able to contribute.  This may seem like an unimportant step, but selecting an appropriate open source licence[1] will allow other people to contribute on well-understood and defined terms, making it easier for them, and easier for the organisations for which they work to allow them to be involved.

4. Get documentation

You might think that developer documentation is the most important to get out there – otherwise, how will other people get involved and start coding?  I’d disagree, at least to start with.  For a small project, you can probably scale to a few more people just by explaining what the code does, what it should do, and what’s missing.  However, if there’s no documentation available to explain what it’s for, and how it’s going to help people, then why would anyone bother even looking at it?  This doesn’t need to be polished marketing copy, and it doesn’t need to be serious, but it does need to convey to people why they should care.  It’s also going to help you with the first point I mentioned, attaining critical mass, as being able to point to documentation, use cases and the rest will help convince people that you’ve thought through the point of your project.  We’ve used a GitHub wiki as our main documentation hub, and we try to update it with new information as we generate it.  This is an area, to be clear, where we could do better.  But at least we know that.

5. Be visible

People aren’t going to find out about you unless you’re visible.  We were incredibly lucky in that just as we were beginning to reach a level of critical mass, the Confidential Computing Consortium was formed, and we immediately had a platform to increase our exposure.  We have a Twitter account, I publish articles on my blog and at Opensource.com, we’ve been lucky enough to have the chance to publish on Red Hat’s now + Next blog, I’ve done interviews with the press and we speak at conferences wherever and whenever we can.  We’re very lucky to have these opportunities, and it’s clear that not all these approaches are appropriate for all projects, but make use of what you can: the more people who know about you, the more people can contribute.

6. Be welcoming

Let’s assume that people have found out about you: what next?  Well, they’re hopefully going to want to get involved.  If they don’t feel welcome, then any involvement they have will soon taper off.  Yes, you need documentation (and, after a while, technical documentation, no matter what I said above), but you also need ways for them to talk to you, and for them to feel that they are valued.  We have Gitter channels (https://gitter.im/enarx/), and our daily stand-ups are open to anyone who wants to join.  Recently, someone opened an issue on our issues database, and during the conversation on that thread, it transpired that our daily stand-up time doesn’t work for them given their timezone, so we’re going to ensure that at least one stand-up a week does, and we’ve assured them that we’ll accommodate them.

7. Work with people you like

I really, really enjoy meeting and working with the members of the Enarx project team.  We get on well, we joke, we laugh and we share a common aim: to make Enarx successful.  I’m a firm believer in doing things you enjoy, where possible.  Particularly for the early stages of a project, you need people who are enthusiastic and enjoy working closely together – even if they’re separated by thousands of kilometres[2] geographically.  If they don’t get on, there’s a decent chance that your and their enthusiasm for the project will falter, that the momentum will be lost, and that the project will end up failing.  You won’t always get the chance to choose those with whom you work, but if you can, then choose people you like and get on with.

Conclusion – “people”

I didn’t realise it when I started writing this article, but it’s not really about project mechanics at all: it’s about people.  If you read back, you’ll find the importance of people visible in every tip, even including the one about choosing a licence.  Open source projects aren’t really about code: they’re about people, how they share, how they work together, and how they interact.

I’m certain that your experience of open source projects will vary, and I’d be very surprised if everyone agrees about the top seven things you should do for project success.  Arguably, Enarx isn’t a success yet, and I shouldn’t be giving advice at this stage of our maturity.  But when I think back to all of the open source projects that I can think of which are successful, people feature strongly, and I don’t think that’s a surprise at all.


1 – or “license”, if you’re from the US.

2 – or, in fact, miles.

 


 

Timely risk or risky times?

Being aware of “the long game”.

On Friday, 29th November 2019, Jack Merritt and Saskia Jones were killed in a terrorist attack.  A number of members of the public (some with improvised weapons) and of the emergency services acted with great heroism.  I wanted to mention the names of the victims and to praise those involved in stopping him before mentioning the name of the attacker: Usman Khan.  The victims and the attacker were taking part in an offender rehabilitation conference to help offenders released from prison to reintegrate into society: Khan had been sentenced to 16 years in prison for terrorist offences.

There’s an important formula that everyone involved in risk – and given that IT security is all about mitigating risk, that’s anyone involved in security – should know. It’s usually expressed thus:

Risk = likelihood x impact

Likelihood is sometimes expressed as “probability”, impact as “consequence” or “loss”, and I’ve seen some other variants as well, but the version above is generally sufficient for most purposes.

Using the formula

How should you use the formula? Well, it’s most useful for comparing risks and deciding how to mitigate them. Humans are terrible at calculating risk, and any tools that help them[1] are good.  In order to use this formula correctly, you want to compare risks over the same time period.  You could say that almost any eventuality may come to pass over the lifetime of the universe, but comparing the risk of losing broadband access to the risk of your lead developer quitting for another company between the Big Bang and the eventual heat death of the universe is probably not going to give you much actionable information.

Let’s look at the two variables that we need to have in order to calculate risk.  We’ll start with the impact, because I want to devote most of this article to the other part: likelihood.

Impact is what the damage will be if the risk happens.  In a business context, you want to look at the risk of your order system being brought down for a week by malicious attackers.  You might calculate that you would lose £15,000 in orders.  On top of that, there might be a loss of reputation which you might calculate at £30,000.  Fixing the problem might add £10,000.  Add these together, and the impact is £55,000.

What’s the likelihood?  Well, remember that we need to consider a particular time period.  What you choose will depend on what you’re interested in, but a classic use is for budgeting, and so the length of time considered is often a year.  “What is the likelihood of my order system being brought down for a week by malicious attackers over the next twelve months?” is the question you want to ask.  If you decide that it’s 0.005 (or 0.5%), then your risk is calculated thus:

Risk = 0.005 x 55,000

Risk = 275

The units don’t really matter, because what you want to do is compare risks.  If the risk of your order system being brought down through hardware failure is higher (say 500), then you should probably balance the amount of resources you assign to mitigate these risks accordingly.
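
To make the comparison concrete, here is a minimal sketch in Python using the illustrative figures above; the breakdown for the hardware-failure case is an assumption of mine, chosen purely so that it comes out at the “say 500” level mentioned:

    # Toy comparison of risks using risk = likelihood x impact over the same
    # twelve-month period. The malicious-attack figures are those from the text;
    # the hardware-failure figures are assumed, purely for illustration.
    risks = {
        "order system down for a week (malicious attack)": (0.005, 55_000),
        "order system down for a week (hardware failure)": (0.01, 50_000),
    }

    for name, (likelihood, impact) in risks.items():
        print(f"{name}: risk = {likelihood * impact:.0f}")
    # malicious attack: risk = 275
    # hardware failure: risk = 500 - so that is where more mitigation effort goes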

Time, reputation, trust and risk

What I’m interested in is a set of rather more complicated risks, however: those associated with human behaviour.  I’m very interested in trust, and one of the interesting things about trust is how we decide to trust people.  One way is by their reputation: if someone keeps behaving well over a long period, then we tend to trust them more – or if badly, then to trust them less[2].  If we trust someone more, our calculation of risk is likely to be strongly based on that trust, as our view of the likelihood of a behaviour at odds with the reputation that person holds will be informed by that.

This makes sense: in the absence of perfect information about humans, their motivations and intentions, our view of risk must be based on something, and reputation is actually a fairly good measure for that.  We might say that the likelihood of a customer defaulting on payment terms reduces year by year as we start to think of them as a “trusted customer”.  As the likelihood reduces, we may decide to increase the amount we lend to them – and thereby the impact of defaulting – to keep the risk about the same, year on year.
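
A short sketch of that trade-off, with invented figures: hold the risk roughly constant from year to year by raising the credit limit (the impact of a default) as the estimated likelihood of default falls:

    # Toy illustration (invented figures): keep risk = likelihood x impact
    # roughly constant while the estimated likelihood of default falls,
    # by raising the credit limit to match.
    target_risk = 250      # arbitrary units: the risk we are prepared to carry
    likelihood = 0.05      # year-one estimate of default over the next 12 months

    for year in range(1, 6):
        credit_limit = target_risk / likelihood
        print(f"year {year}: likelihood {likelihood:.3f}, credit limit {credit_limit:,.0f}")
        likelihood *= 0.8  # reputation improves, so our estimate of default drops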

The risk here is what is sometimes called “playing the long game”.  Humans sometimes manipulate their reputation, or build up a reputation, in order to perform an action once they have gained trust.  Online sellers may make lots of “good” sales in order to get a 5 star rating over time, only to wait and then make a set of “bad” sales, where they don’t ship goods at all, and then just pocket the money.  Or, they may make many small sales in order to build up a good reputation, and then use that reputation to make one big sale which they have no intention of fulfilling.  Online selling sites are wise to some of these tricks, and have algorithms to try to protect buyers (in fact, the same behaviour can be used by sellers in some cases), but these are not perfect.

I’d like to come back to the London Bridge attack.  In this case, it seems likely that the attacker bided his time over many years, behaving well, and raising his “reputation” among those who knew him – the prison staff, parole board, rehabilitation conference organisers, etc. – so that he had the opportunity to perform one major action at odds with that reputation.  The heroism of those around him stopped him being as successful as he may have hoped, but still at the cost of two innocent lives and several serious injuries.

There is no easy way to deal with such issues.  We need reputation, and we need to allow people to show that they have changed and can be integrated into society, but when we make risk calculations based on reputation in any sphere, we should take care to consider whether actors are playing a long game, and what the possible ramifications would be if they were to act at odds with that reputation.

I noted above that humans are bad at calculating risk, and to follow our example of the non-defaulting customer, one mistake might be to increase the credit we give to that customer beyond what the improvement in their reputation justifies: actually accepting higher risk than we would have done previously, because we consider them trustworthy.  If we do this, we’ve ceased to use the risk formula, and have started to act irrationally.  Don’t do that.

 


1 – OK, then: “us”.

2 – I’m writing this in the lead up to a UK General Election, and it occurs to me that we actually don’t apply this to most of our politicians.

Of projects, products and (security) community

Not all open source is created (and maintained) equal.

Open source is a  good thing.  Open source is a particularly good thing for security.  I’ve written about this before (notably in Disbelieving the many eyes hypothesis and The commonwealth of Open Source), and I’m going to keep writing about it.  In this article, however, I want to talk a little more about a feature of open source which is arguably both a possible disadvantage and a benefit: the difference between a project and a product.  I’ll come down firmly on one side (spoiler alert: for organisations, it’s “product”), but I’d like to start with a little disclaimer.  I am employed by Red Hat, and we are a company which makes money from supporting open source.  I believe this is a good thing, and I approve of the model that we use, but I wanted to flag any potential bias early in the article.

The main reason that open source is good for security is that you can see what’s going on when there’s a problem, and you have a chance to fix it.  Or, more realistically, unless you’re a security professional with particular expertise in the open source project in which the problem arises, somebody else has a chance to fix it. We hope that there are sufficient security folks with the required expertise to fix security problems and vulnerabilities in software projects about which we care.

It’s a little more complex than that, however.  As an organisation, there are two main ways to consume open source:

  • as a project: you take the code, choose which version to use, compile it yourself, test it and then manage it.
  • as a product: a vendor takes the project, chooses which version to package, compiles it, tests it, and then sells support for the package, typically including docs, patching and updates.

Now, there’s no denying that consuming a project “raw” gives you more options.  You can track the latest version, compiling and testing as you go, and you can take security patches more quickly than the product version may supply them, selecting those which seem most appropriate for your business and use cases.  On the whole, this seems like a good thing.  There are, however, downsides which are specific to security.  These include:

  1. some security fixes come with an embargo, to which only a small number of organisations (typically the vendors) have access.  Although you may get access to fixes at the same time as the wider ecosystem, you will need to check and test these (unless you blindly apply them – don’t do that), which will already have been performed by the vendors.
  2. the huge temptation to make changes to the code that don’t necessarily – or immediately – make it into the upstream project means that you are likely to be running a fork of the code.  Even if you do manage to get these upstream in time, during the period that you’re running the changes but they’re not upstream, you run a major risk that any security patches will not be immediately applicable to your version (this is, of course, true for non-security patches, but security patches are typically more urgent).  One option, of course, if you believe that your version is likely to be consumed by others, is to make an official fork of the project, and try to encourage a community to grow around that, but in the end, you will still have to decide whether to support the new version internally or externally.
  3. unless you ensure that all instances of the software are running the same version in your deployment, any back-porting of security fixes to older versions will require you to invest in security expertise equal or close to equal to that of the people who created the fix in the first place.  In this case, you are giving up the “commonwealth” benefit of open source, as you need to pay experts who duplicate the skills of the community.

What you are basically doing, by choosing to deploy a project rather than a product, is taking the decision to do internal productisation of the project.  You lose not only the commonwealth benefit of security fixes, but also the significant economies of scale that are intrinsic to the vendor-supported product model.  There may also be economies of scope that you miss: many vendors will have multiple products that they support, and will be able to apply security expertise across those products in ways which may not be possible for an organisation whose core focus is not on product support.

These economies are reflected in another possible benefit to the commonwealth of using a vendor: the very fact that multiple customers are consuming their products means that they have an incentive and a revenue stream to spend on security fixes and general features.  There are other types of fixes and improvements to which they may apply resources, but the relative scarcity of skilled security experts means that the principle of comparative advantage suggests that they should be in the best position to apply them for the benefit of the wider community[1].

What if a vendor you use to provide a productised version of an open source project goes bust, or decides to drop support for that product?  Well, this is a problem in the world of proprietary software as well, of course.  But in the case of proprietary software, there are three likely outcomes:

  • you now have no access to the software source, and therefore no way to make improvements;
  • you are provided access to the software source, but it is not available to the wider world, and therefore you are on your own;
  • everyone is provided with the software source, but no community exists to improve it, and it either dies or takes significant time for a community to build around it.

In the case of open source, however, if the vendor you have chosen goes out of business, there is always the option to use another vendor, encourage a new vendor to take it on, productise it yourself (and supply it to other organisations) or, if the worst comes to the worst, take the internal productisation route while you search for a scalable long-term solution.

In the modern open source world, we (the community) have got quite good at managing these options, as the growth of open source consortia[2] shows.  In a consortium, groups of organisations and individuals cluster around a software project or set of related projects to encourage community growth, alignment around feature and functionality additions, general security work and productisation for use cases which may as yet be ill-defined, all the while trying to exploit the economies of scale and scope outlined above.  An example of this would be the Linux Foundation’s Confidential Computing Consortium, to which the Enarx project aims to be contributed.

Choosing to consume open source software as a product instead of as a project involves some trade-offs, but from a security point of view at least, the economics for organisations are fairly clear: unless you are in a position to employ ample security experts yourself, products are most likely to suit your needs.


1 – note: I’m not an economist, but I believe that this holds in this case.  Happy to have comments explaining why I’m wrong (if I am…).

2 – “consortiums” if you really must.

Humans and (being bad at) trust

Why “signing parties” were never a good idea.

I went to a party recently, and it reminded me of quite how bad humans are at trust. It was a work “mixer”, and an attempt to get people who didn’t know each other well to chat and exchange some information. We were each given two cards to hang around our necks: one on which to write our own name, and the other on which we were supposed to collect the initials of those to whom we spoke (in their own hand). At the end of the event, the plan was to hand out rewards whose value was related to the number of initials collected. Pens/markers were provided.

I gamed the system by standing by the entrance, giving out the cards, controlling the markers and ensuring that everybody signed my card, hence ending up with easily the largest number of initials of anyone at the party. But that’s not the point. Somebody – a number of people, in fact – pointed out the similarities between this and “key signing parties”, and that got me thinking. For those of you not old enough – or not security-geeky enough – to have come across these, they were events which were popular in the late nineties and early parts of the first decade of the twenty-first century[1] where people would get together, typically at a tech show, and sign each other’s PGP keys. PGP keys are an interesting idea whereby you maintain a public-private key pair which you use to sign emails, assert your identity, etc., in the online world. In order for this to work, however, you need to establish that you are who you say you are, and in order to do that, you need to convince someone of this fact.

There are two easy ways to do this:

  1. meet someone IRL[2], get them to validate your public key, and sign it with theirs;
  2. have someone who knows the person you met in step 1 agree that they can probably trust you, because the person in step 1 does, and they trust that person.

This is a form of trust based on reputation, and it turns out that it is a terrible model for trust. Let’s talk about some of the reasons for it not working. There are four main ones:

  • context
  • decay
  • transitive trust
  • peer pressure.

Let’s evaluate these briefly.

Context

I can’t emphasise this enough: trust is always, always contextual (see “What is trust?” for a quick primer). When people signed other people’s key-pairs, all they should really have been saying was “I believe that the identity of this person is as stated”, but signatures and encryption based on these keys were (and are) frequently misused to make statements about, or claim access to, capabilities that were not necessarily related to identity.

I lay some of the fault for this at the door of US alcohol consumption policy. Many (US) Americans use their driving licence/license as a form of authorisation: I am over this age, and am therefore entitled to purchase alcohol. It was designed to prove that they were authorised to drive, and nothing more than that, but you can now get a US driving licence to prove your age even if you can’t drive, and it can be used, for instance, as security identification for getting on aircraft at airports. This is crazy, but it partly explains why there is such confusion between identification, authentication and authorisation.

Decay

Trust, as I’ve noted before in many articles, decays. Just because I trust you now (within a particular context) doesn’t mean that I should trust you in the future (in that or any other context). Mechanisms exist within the PGP framework to expire keys, but it was (I believe) typical for someone to re-sign a new set of keys just because they’d signed the previous set. If they were only being used for identity, then that’s probably OK – most people rarely change their identity, after all – but, as explained above, these key pairs were often used more widely.
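
As a rough way to picture that decay – purely illustrative, and not a mechanism that PGP itself provides – you could imagine a trust score with a half-life, so that an endorsement which isn’t re-validated gradually becomes worthless:

    # Toy time-decay of trust (illustration only, not part of PGP): a confidence
    # score that halves every half_life_days since it was last (re-)established.
    from math import exp, log

    def decayed_trust(initial: float, days_since_check: float,
                      half_life_days: float = 365.0) -> float:
        return initial * exp(-log(2) * days_since_check / half_life_days)

    for days in (0, 180, 365, 730):
        print(f"{days:4d} days: trust {decayed_trust(0.9, days):.2f}")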

Transitive trust

This is the whole “trusting someone because I trust you” problem. Again, if this were only about identity, then I’d be less worried, but given people’s lack of ability to specify context, and their equal inability to communicate that to others, the “fuzziness” of the trust relationships being expressed was only going to increase with the level of transitiveness, reducing the efficacy of the system as a whole.
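
A rough way to see why transitivity makes things worse is to model each endorsement as a confidence value between 0 and 1 and combine them multiplicatively along the chain. This is a toy model, not how PGP’s web of trust actually scores keys, but it shows how quickly confidence drops with each hop:

    # Toy model of transitive trust - NOT how PGP's web of trust works.
    # Each hop in the chain contributes a confidence between 0 and 1;
    # overall confidence is the product, so it drops quickly with chain length.
    def chain_confidence(per_hop: float, hops: int) -> float:
        return per_hop ** hops

    for hops in range(1, 6):
        print(f"{hops} hop(s): confidence {chain_confidence(0.9, hops):.2f}")
    # 1 hop: 0.90 ... 5 hops: 0.59 - and 0.9 per hop is being generous.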

Peer pressure

Honestly, this occurred to me due to my gaming of the system, as described in the second paragraph at the top of this article. I remember meeting people at events and agreeing to endorse their key-pairs basically because everybody else was doing it. I didn’t really know them, though (I hope) I had at least heard of them (“oh, you’re Denny’s friend, I think he mentioned you”), and I certainly shouldn’t have been signing their key-pairs. I am certain that I was not the only person to fall into this trap, and it’s a trap because humans are generally social animals[3], and they like to please others. There was ample opportunity for people to game the system much more cynically than I did at the party, and I’d be surprised if this didn’t happen from time to time.

Stepping back a bit

To be fair, it is possible to run a model like this properly. It’s possible to avoid all of these by insisting on proper contextual trust (with multiple keys for different contexts), by re-evaluating trust relationships on a regular basis, by being very careful about trusting people just due to their trusting someone else (or refusing to do so at all), and by refusing just to agree to trust someone because you’ve met them and they “seem nice”. But I’m not aware of anyone – anyone – who kept to these rules, and it’s why I gave up on this trust model over a decade ago. I suspect that I’m going to get some angry comments from people who assert that they used (and use) the system properly, and I’m sure that there are people out there who did and do: but as a widespread system, it was only going to work if the large majority of all users treated it correctly, and given human nature and failings, that never really happened.

I’m also not suggesting that we have many better models – but we really, really need to start looking for some, as this is important, and difficult stuff.


1 – I refuse to refer to these years as the “aughts”.

2 – In Real Life – this used to be an actual distinction from online.

3 – even a large enough percentage of IT folks to make this a problem.

Breaking the security chain(s)

Your environment is n-dimensional – your trust must be, too.

One of the security principles by which we[1] live[2] is that security is only as strong as the weakest link in a chain.  That link is variously identified as:

  • your employees
  • external threat actors
  • all humans
  • lack of training
  • cryptography
  • logging
  • anti-virus
  • auditing capabilities
  • the development lifecycle
  • waterfall methodology
  • passwords
  • any other authentication mechanisms
  • electrical wiring
  • hurricanes
  • earthquakes
  • and pestilence.

Actually, I don’t think I’ve ever seen the last one mentioned, but it’s only a matter of time.  However, very rarely does anybody bother to identify exactly what the chain is that is being broken when the weakest link splinters into a thousand pieces.

There are a number of candidates that spring to mind:

  1. your application flow.  This is rather an old-fashioned way of thinking of applications: that a program is started, goes through a set of actions, and then terminates; but to think more broadly about it, any action which causes an application to behave in unexpected or unintended ways is a possible security flaw, whether that is a monolithic application, a set of microservices or an app on a mobile device.
  2. your software stack.  Depending on how you think about your stack, there are likely to be at least 5, maybe a dozen or maybe even scores of layers in your software stack (for an example with just a few simple layers, see Announcing Enarx).  However you think about it, you need to trust all of those layers to do what you expect them to do.  If one of them is compromised, malicious, or just poorly implemented or maintained, then you have a security issue.
  3. your hardware stack.  There was a time, barely five years ago, when most people (excepting us[1], of course) assumed that hardware did what we thought it was supposed to do, all of the time.  In fact, we should all have known better, given the Clipper Chip and the Pentium bug (to name just two famous examples), but with Spectre, Meltdown and a growing realisation that hardware isn’t as trustworthy as was previously thought, everybody needs to decide exactly what security they can trust in which components.
  4. your operational processes.  You can have the best software and hardware in the world, but if you don’t maintain it and operate it properly, it’s going to be full of holes.  Failing to invest in operations, monitoring, logging, auditing and the rest leaves you wide open.
  5. your supply chain. There’s a growing understanding in the industry that our software and hardware supply chains are possible points of failure[3].  Whether your vendor is entirely proprietary (in which case their security is largely opaque) or open source (in which case you’ve got a chance to be able to see what’s going on), errors or maliciousness in the supply chain can scupper any hopes you had of security for your deployment.
  6. your software and hardware lifecycle.  Developing software?  Patching it?  Upgrading hardware (or software)?  Unit testing?  We all know that a failure to manage the lifecycle of our environment can lead to security problems.

The point I’m trying to make above is that there’s no single chain.  Your environment is n-dimensional – your trust must be, too.  If you don’t think about all of these contexts – and there will be more beyond the half-dozen that I’ve just noted – then you can’t have a good chance of managing security in your environment.  I honestly don’t think that there’s any single weakest link in the chain, because there are always multiple chains in play: our job is to think about as many of them as possible, and then manage and mitigate the risks associated with each.
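
One way to make that “multiple chains” point operational is to keep a simple risk register per chain and look for the weakest link within each, rather than hunting for a single weakest link overall. A minimal sketch – the chains, links and figures below are placeholders, not a recommendation:

    # Toy risk register across several "chains" (placeholder names and figures).
    # Each link is scored as likelihood x impact; the weakest link - the one
    # carrying the highest risk - is identified per chain, not globally.
    chains = {
        "software stack": {"kernel": 0.02 * 80, "runtime": 0.05 * 40, "library X": 0.10 * 30},
        "supply chain":   {"vendor A": 0.01 * 90, "mirror B": 0.08 * 20},
        "operations":     {"patching": 0.20 * 25, "logging": 0.15 * 10},
    }

    for chain, links in chains.items():
        weakest, score = max(links.items(), key=lambda kv: kv[1])
        print(f"{chain}: weakest link is '{weakest}' (risk {score:.1f})")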


1 – the mythical “IT security community”.

2 – you’re right: “which we live by” would sound much more natural.

3 – and a growing industry to try to provide fixes.

“Unlawful, void, and of no effect”

The news from the UK is amazing today: the Supreme Court has ruled that the Prime Minister has failed to “prorogue” Parliament – in other words, that the members of the House of Commons and the House of Lords are still in session. The words in the title come from the judgment that they have just handed down.

I’m travelling this week, and wasn’t expecting to write a post today, but this triggered a thought in me: what provisions are in place in your organisation to cope with abuses of power and possible illegal actions by managers and executives?

Having a whistle-blowing policy and an independent appeals process is vital. This is true for all employees, but having specific rules in place for employees who are involved in areas such as compliance and implementations involving regulatory requirements is particularly important. Robust procedures protect not only an employee who finds themself in a difficult position but, in the long view, the organisation itself. They can also act as a deterrent to managers and executives considering actions which, in the absence of such procedures, might well go unreported.

Such procedures are not enough on their own – they fall into the category of “necessary, but not sufficient” – and a culture of ethical probity also needs to be encouraged. But without such a set of procedures, your organisation is at real risk.