E2E encryption in danger (again) – sign the petition

You should sign the petition on the official UK government site.

The UK Government is at it again: trying to require technical changes to products and protocols which will severely impact (read “destroy”) the security of users. This time, it’s the Online Safety Bill and, like pretty much all similar attempts, it requires the implementation of backdoors to things like messaging services. Lots of people have stood up and made the point that this is counter-productive and dangerous.

This isn’t the first time I’ve written about this (The Backdoor Fallacy: explaining it slowly for governments and Helping our governments – differently, for a start), and I fear that it won’t be the last. The problem is that none of these technical approaches work: none of them can work. Privacy and backdoors (and this is a backdoor, make no mistake about it) are fundamentally incompatible. And everyone with an ounce (or gram, I suppose) of technical expertise agrees: we know (and we can prove) that what’s being suggested won’t and can’t work.

We gain enormous benefits from technology, and with those benefits come risks, and some downsides which malicious actors exploit. The problem is that you can’t have one without the other. If you try to fix (and this approach won’t fix – it might reduce, but not fix) the problem that malicious actors and criminals use online messaging services, you open up a huge number of opportunities for other actors, including malicious governments (now or in the future), to do very, very bad things, whilst significantly reducing the benefits to private individuals, businesses, human rights organisations, charities and the rest. This is not a zero-sum game.

What can you do? You can read up about the problem, you can add your voice to the technical discussions, and, if you’re a British citizen or resident, you can sign the petition on the official UK government site. This needs 10,000 signatures, so please get signing!

What’s a secure channel?

Always beware of products and services which call themselves “secure”. Because they’re not.

A friend asked me a couple of months ago what I considered a secure channel, and it made me think. Many of us have information that we wish to communicate which we’d rather other people couldn’t look at, for all sorts of reasons. These might range from present ideas for my spouse or partner, sent by a friend to my phone, to diplomatic communications about espionage targets sent between embassies over the Internet, with lots in between: intellectual property discussions, bank transactions and much else. Sometimes, we want to ensure that people can’t change what’s in the messages we send: it might be OK for other people to know that I pay £300 in rent, but not for them to be able to change the amount (or the bank account into which it goes). These two properties are referred to as confidentiality (keeping information secret) and integrity (keeping information unchangeable), and often you want to combine them – in the case of our espionage plans, I’d prefer that my enemies don’t know which targets are at risk, but also that they can’t change the targets I’ve selected to something less bothersome for them.
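
To make these two properties concrete, here’s a minimal sketch in Python (using the third-party cryptography library; the key handling and the messages are purely illustrative) of an authenticated encryption scheme which provides both at once:

    import os
    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # secret shared by the two parties
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)  # must never be reused with the same key

    # Confidentiality: the rent amount is unreadable without the key.
    ciphertext = aesgcm.encrypt(nonce, b"I pay 300 GBP in rent", None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == b"I pay 300 GBP in rent"

    # Integrity: flipping even one bit of the ciphertext is detected.
    tampered = bytes([ciphertext[0] ^ 1]) + ciphertext[1:]
    try:
        aesgcm.decrypt(nonce, tampered, None)
    except InvalidTag:
        print("tampering detected - message rejected")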

Modern encryption systems generally provide both confidentiality and integrity for messages and data, so I’m going to treat these as standard properties for an encrypted channel. Which means that if I use encryption on a channel, it’s secure, right?

Hmm. Let’s step back a bit, because, unfortunately, there’s rather a lot more to unpack than that. Three of the questions we need to tackle should give us pause. They are: “secure from whom?”, “secure for how long?” and “secure where?”. The answers we give to these questions will be important, and though they are all somewhat intertwined, I’m going to deal with them in order, and I’m going to use the examples of the espionage message and the present ideas to discuss them. I’m also going to talk more about confidentiality than integrity – though we’ll assume that both properties are important to what we mean by “secure”.

Secure from whom?

In our examples, we have very different sets of people wanting to read our messages – a nation state and my spouse. Unless my spouse has access to skills and facilities of which I’m unaware (and I wouldn’t put it past her), the resources that she has at her disposal to try to break the security of my communication are both fewer and less powerful than those of the nation state. A nation state may be able to apply cryptologic attacks to messages, attack the software (and even firmware or hardware) implementations of the encryption system, mess with the amount of entropy available for key generation at either or both ends of the channel, perform interception (e.g. Person-In-The-Middle) attacks, coerce the sender or recipient of the message, and more. I’m hoping that most of the above are not options for my wife (though coercion might be, I suppose!). The choice of encryption system – including entropy sources, cipher suite(s) and the hardware and software implementations – is vital in the diplomatic message case, as are vetting of staff and many other issues. In the case of gift ideas for my wife’s birthday, I’m assuming that a standard implementation of a commercial messaging system should be enough.

Secure for how long?

It’s only a few days till my wife’s birthday (yes, I have got her a present, though that does remind me: I need a card…), so I only have to keep the gift ideas secure for a little longer. It turns out that, in this case, the time sensitivity of the integrity of the message is different to that of the confidentiality: even if she managed to change what the gift idea in the message was, it wouldn’t make a difference to what I’ve got her at this point. However, I’d still prefer it if she didn’t know what the gift ideas are.

In the case of the diplomatic espionage message, we can assume that confidentiality and the integrity are both important for a much longer time, but we’ll concentrate on the confidentiality. Obviously an attacking country would prefer it if the target were unaware of an attack before it happened, but if the enemy managed to prove an attack was performed by the message sender’s or recipient’s country, even a decade or more in the future, this could also lead to major (and negative) consequences. We want to ensure that whatever steps we take to protect the message are sufficient that access to a copy of the message taken when it was sent (via wire-tapping, for instance) or retrieved at a later date (via access to a message store in the future), is insufficient to allow it to be cracked. This is tricky, and the history of cryptologic attacks on encryption schemes, not to mention human failures (such as leaks) and advances in computation (such as quantum computing) should serve as a strong warning that we need to consider very carefully what mechanisms we should use to protect our messages.

Secure where?

Are the embassies secure? Are all the machines between the embassies secure? Is the message stored before delivery? If so, is it stored on a machine within the embassy or on a server elsewhere? Is it end-to-end encrypted, or is it decrypted before delivery and then re-encrypted (I really, really hope not)? While the latter is unlikely in the case of diplomatic messages, a good number of commercially sensitive messages (including much email) are not end-to-end encrypted, leading to vulnerabilities if someone trying to break the security can get access to the system where they are stored, or intercept them between decryption and re-encryption.

Typically, we have better control over some parts of the infrastructure which carries or hosts our communications than we do over others. For most of the article above, I’ve generally assumed that the nation state trying to read the embassy messages has more relevant resources with which to breach the security of the message than my wife does, but there’s a significant weakness in protecting my wife’s gift idea: she has easy access to my phone. I tend to keep it locked, and it has a PIN, but, if I’m honest, I don’t tend to go out of my way to keep her out: the PIN is to deter someone who might steal it. Equally, it’s entirely possible that I may be sharing some material (a video or news article) with her at exactly the time that the gift idea message arrives from our mutual friend, leading her to see the notification. In either case, there’s a good chance that the property of confidentiality is not that strong after all.

Conclusion

I’ve said it before, and I plan to say it again (and again, and again): there is no “secure”. When we talk about secure channels, what we mean should be “channels secured with measures appropriate to the risks associated with the security being compromised”. This is a long way of saying “if I’m protecting diplomatic messages, I need to make greater efforts than if I’m trying to stop my wife finding out ahead of time what she’s getting for her birthday”, but it’s important to understand this. Part of the problem is that we’re bombarded with words like “secure”, which are unqualified, and may lead us to think that they’re absolute, when they’re absolutely not. Another part of the problem is the assumption that once we’ve put one type of security in place, particularly when it’s sold or marketed as “best in breed” or “best practice”, it addresses all of the issues we might have. This is clearly not the case – using the strongest encryption possible for messages between my friend and me isn’t going to stop my wife from finding out what I’ve bought her if she knows the PIN for my phone. Please, please, consider what you need when you’re protecting your communications (and other data, of course), and always beware of products and services which call themselves “secure”. Because they’re not.

In praise of … the Community Manager

I am not – and could never be – a community manager

This is my first post in a while. Since Hanging up my Red Hat I’ve been busy doing … stuff. Stuff which I hope to be able to speak about soon. But in the meantime, I wanted to start blogging regularly again. Here’s my first post back, a celebration of an important role associated with open source projects: the community manager.

Open source communities don’t just happen. They require work. Sometimes the technical interest in an open source project is enough to attract a group of people to get involved, but after some time, things are going to get too big for those with a particular bent (documentation, coding, testing) to manage the interactions between the various participants, moderate awkward (or downright aggressive) communications, help encourage new members to contribute, raise the visibility of the project into new areas or market sectors and all the other pieces that go into keeping a project healthy.

Enter the Community Manager. The typical community manager is in that awkward position of having lots of responsibility, but no direct authority. Open source projects being what they are, few of them have empowered “officers”, and even when there are governance structures, they tend to operate by consent of those involved – by negotiated, rather than direct, authority. That said, by the point a community manager is appointed, it’s likely that at least one commercial entity is sufficiently deep into the project to fund or part-fund the position. This means that the community manager will hopefully have some support from at least one set of contributors, but will still need to build consensus across the rest of the community. There may also be tricky times when the community manager will need to decide whether their loyalties lie with their employer or with the community. A wise employer should set expectations about how to deal with such situations before they arise!

What does the community manager need to do, then? The answer to this will depend on a number of issues, and there is likely to be a balance between these tasks, but here’s a list of some that come to mind[1].

  • marketing/outreach – this is about raising the visibility of the project, either in areas where it is already known, or in new markets/sectors, but there are lots of sub-tasks such as branding, swag ordering (and distribution!), and analyst and press relations.
  • event management – setting up meetups, hackathons, booths at larger events or, for really big projects, organising conferences.
  • community growth – spotting areas where the project could use more help (docs, testing, outreach, coding, diverse and inclusive representation, etc.) and finding ways to recruit contributors to help improve the project.
  • community lubrication – this is about finding ways to keep community members talking to each other, celebrate successes, mourn losses and generally keep conversations civil at least and enthusiastically friendly at best.
  • project strategy – there are times in a project when new pastures may beckon (a new piece of functionality might make the project exciting to the healthcare or the academic astronomy community for instance), and the community manager needs to recognise such opportunities, present them to the community, and help the community steer a path.
  • product management – in conjunction with project strategy, situations are likely to occur when a set of features or functionality is presented to the community which requires decisions about its priority or the ability of the community to resource it. These decisions may even create tensions between various parts of the community, including involved commercial interests. The community manager needs to help the community reason about how to make choices, and may even be called upon to lead the decision-making process.
  • partner management – as a project grows, partners (open source projects, academic institutions, charities, industry consortia, government departments or commercial organisations) may wish to be associated with the project. Managing expectations, understanding the benefits (or dangers) and relative value can be a complex and time-consuming task, and the community manager is likely to be the first person involved.
  • documentation management – while documentation is only one part of a project, it can often be overlooked by the core code contributors. It is, however, a vital resource when considering many of the tasks associated with the points above. Managing strategy, working with partners, creating press releases: all of these need good documentation, and while it’s unlikely that the community manager will need to write it (well, hopefully not all of it!), making sure that it’s there is likely to be their responsibility.
  • developer enablement – this is providing resources (including, but not restricted to, documentation) to help developers (particularly those new to the project) get involved. It is often considered a good idea to separate this set of tasks out into a role distinct from that of the community manager, partly because it may require a deeper technical focus than is required for many of the other responsibilities associated with the role. This is probably sensible, but the community manager is likely to want to ensure that developer enablement is well-managed, as without new developers, almost any project will eventually calcify and die.
  • cat herding – programmers (who make up the core of any project) are notoriously difficult to manage. Working with them – particularly encouraging them to work to a specific set of goals – has been likened to herding cats. If you can’t herd cats, you’re likely to struggle as a community manager!

Nobody (well almost nobody) is going to be an expert in all of these sets of tasks, and many projects won’t need all of them at the same time. Two of the attributes of a well-established community manager are an awareness of the gaps in their expertise and a network of contacts who they can call on for advice or services to fill out those gaps.

I am not – and could never be – a community manager. I don’t have the skills (or the patience), and one of the joys of gaining experience and expertise in the world is realising when others do have skills that you lack, and being able to recognise and celebrate what they can bring to your world that you can’t. So thank you, community managers!


1 – as always, I welcome comments and suggestions for how to improve or extend this list.

7 tips for kicking off an open source project

It’s not really about project mechanics at all


I’m currently involved – heavily involved – in Enarx, an open source (of course!) project to allow you to run sensitive workloads on untrusted hosts.  I’ve had involvement in various open source projects over the years, but this is the first for which I’m one of the founders.  We’re at the stage now where we’ve got a fair amount of code, quite a lot of documentation, a logo and (important!) stickers.  The project should hopefully be included in a Linux Foundation group – the Confidential Computing Consortium – so things are going very well indeed.  We’re at the stage where I thought it might be useful to reflect on some of the things we did to get things going.  To be clear, Enarx is a particular type of project: one that we believe has commercial and enterprise applications.  It’s also not mature yet, and we’ll have hurdles and challenges along the way.  What’s more, the route we’ve taken won’t be right for all projects, but hopefully there’s enough here to give a few pointers to other projects, or to people considering starting one up.

The first thing I’d say is that there’s lots of help to be had out there.  I’d start with Opensource.com, where you’ll find lots of guidance.  I’d then follow up by saying that however much of it you follow, you’ll still get things wrong.  Anyway, here’s my list of things to consider.

1. Aim for critical mass

I’m very lucky to work at the amazing Red Hat, where everything we do is open source, and where we take open source and community very seriously.  I’ve heard it called a “critical mass” company: in order to get something taken seriously, you need to get enough people interested in it that it’s difficult to ignore. The two co-founders – Nathaniel McCallum and I – are both very enthusiastic about the project, and have spent a lot of time gaining sponsors within the organisation (you know who you are, and we thank you – we also know we haven’t done a good enough job with you on all occasions!), and “selling” it to engineers to get them interested enough that it was difficult to stop.  Some projects just bobble along with one or two contributors, but if you want to attract people and attention, getting a good set of people together who can get momentum going is a must.

2. Create a demo

If you want to get people involved, then a demo is great.  It doesn’t necessarily need to be polished, but it does need to show that what you’re doing is possible, and that you know what you’re doing.  For early demos, you may be limited to command line output: that’s fine, if what you’re providing isn’t a UI product.  Being able to talk about what you’re doing, and to convey both your passion and the importance of the project, is a great boon.  People like to be able to see or experience something, and it’s much easier to communicate your enthusiasm if they have something real that expresses it.

3. Choose a licence

Once you have code, and it’s open source, you want other people to be able to contribute.  This may seem like an unimportant step, but selecting an appropriate open source licence[1] will allow other people to contribute on well-understood and defined terms, making it easier for them, and easier for the organisations for which they work to allow them to be involved.

4. Get documentation

You might think that developer documentation is the most important to get out there – otherwise, how will other people get involved and start coding?  I’d disagree, at least to start with.  For a small project, you can probably scale to a few more people just by explaining what the code does, what it should do, and what’s missing.  However, if there’s no documentation available to explain what it’s for, and how it’s going to help people, then why would anyone bother even looking at it?  This doesn’t need to be polished marketing copy, and it doesn’t need to be serious, but it does need to convey to people why they should care.  It’s also going to help you with the first point I mentioned, attaining critical mass, as being able to point to documentation, use cases and the rest will help convince people that you’ve thought through the point of your project.  We’ve used a GitHub wiki as our main documentation hub, and we try to update it with new information as we generate it.  This is an area, to be clear, where we could do better.  But at least we know that.

5. Be visible

People aren’t going to find out about you unless you’re visible.  We were incredibly lucky in that just as we were beginning to reach a level of critical mass, the Confidential Computing Consortium was formed, and we immediately had a platform to increase our exposure.  We have a Twitter account, I publish articles on my blog and at Opensource.com, we’ve been lucky enough to have the chance to publish on Red Hat’s now + Next blog, I’ve done interviews with the press, and we speak at conferences wherever and whenever we can.  We’re very lucky to have these opportunities, and it’s clear that not all of these approaches are appropriate for all projects, but make use of what you can: the more people who know about you, the more people can contribute.

6. Be welcoming

Let’s assume that people have found out about you: what next?  Well, they’re hopefully going to want to get involved.  If they don’t feel welcome, then any involvement they have will soon taper off.  Yes, you need documentation (and, after a while, technical documentation, no matter what I said above), but you also need ways for them to talk to you, and for them to feel that they are valued.  We have Gitter channels (https://gitter.im/enarx/), and our daily stand-ups are open to anyone who wants to join.  Recently, someone opened an issue on our issues database, and during the conversation on that thread, it transpired that our daily stand-up time doesn’t work for them given their timezone, so we’re going to ensure that at least one a week does, and we’ve assured them that we’ll accommodate them.

7. Work with people you like

I really, really enjoy meeting and working with the members of the Enarx project team.  We get on well, we joke, we laugh and we share a common aim: to make Enarx successful.  I’m a firm believer in doing things you enjoy, where possible.  Particularly in the early stages of a project, you need people who are enthusiastic and enjoy working closely together – even if they’re separated by thousands of kilometres[2] geographically.  If they don’t get on, there’s a decent chance that your and their enthusiasm for the project will falter, that momentum will be lost, and that the project will end up failing.  You won’t always get the chance to choose those with whom you work, but if you can, then choose people you like and get on with.

Conclusion – “people”

I didn’t realise it when I started writing this article, but it’s not really about project mechanics at all: it’s about people.  If you read back, you’ll find the importance of people visible in every tip, even including the one about choosing a licence.  Open source projects aren’t really about code: they’re about people, how they share, how they work together, and how they interact.

I’m certain that your experience of open source projects will vary, and I’d be very surprised if everyone agrees about the top seven things you should do for project success.  Arguably, Enarx isn’t a success yet, and I shouldn’t be giving advice at this stage of our maturity.  But when I think back to all of the open source projects that I can think of which are successful, people feature strongly, and I don’t think that’s a surprise at all.


1 – or “license”, if you’re from the US.

2 – or, in fact, miles.

 


What is DoH, and why should I care?

Firefox is beginning to roll out DoH

DoH is DNS-over-HTTPS.  Let’s break that down.

DNS is the Domain Name System, and it’s what allows you to type in a server name (e.g. aliceevebob.com or http://www.redhat.com), which typically makes up the key part of a URL, and then get back the set of numbers which your computer needs in order to contact the machine you want it to talk to.  This is because computers don’t actually use the names, they use the numbers, and the mapping between the two can change, for all sorts of reasons (a server might move to another machine, it might be behind a firewall, it might be behind a load-balancer – those sorts of reasons).  These numbers are called “IP addresses”, and are typically[1] what are called “dotted quads”.  An example would be 127.0.0.1 – in fact, this is a special example, because it maps back to your own machine, so if you ask for “localhost”, then the answer that DNS gives you is “127.0.0.1”.  All IP[1] addresses must be of the form a.b.c.d, where a, b, c and d are numbers between 0 and 255 (there are some special rules beyond that, but we won’t go into them here).
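
You can watch this name-to-number mapping happen from almost any programming language; here’s a quick sketch in Python (the host names are just examples):

    import socket

    # Ask the system resolver (which may consult a hosts file as well as
    # DNS) for the numbers behind a name. "localhost" gives the special
    # loopback address described above.
    print(socket.gethostbyname("localhost"))       # 127.0.0.1
    print(socket.gethostbyname("www.redhat.com"))  # a routable IPv4 address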

Now, your computer doesn’t maintain a list of the millions upon millions of server names and their mappings to specific IP address – that would take too much memory, and ages to download.  Instead, if it needs to find a server (to get email, talk to Facebook, download a webpage, etc.), it will go to a “DNS server”.  Most Internet providers will provide their own DNS servers, and there are a number of special DNS servers to which all others connect from time to time to update their records.  It’s a well-established and generally well-run system across the entire Internet.  Your computer will keep a cache of some of the most recently used mappings, but it’s never going to know all of them across the Internet.

What worries some people about the DNS look-up process, however, is that when you do this look-up, anyone who has access to your network traffic can see where you want to go.  “But isn’t secure browsing supposed to stop that?” you might think.  Well, yes and no.  What secure browsing (websites that start “https://”) means is that nobody with access to your network traffic can see what you download from and transmit to the website itself.  But the initial DNS look-up to find out what server your browser should contact is not encrypted.  This might generally be fine if you’re just checking the BBC news website from the UK, but there are certainly occasions when you don’t want it to be the case.  It turns out that although DoH doesn’t completely fix the problem of others being able to see where you’re visiting, many organisations (think companies, ISPs, those under the control of countries…) try to block where you can even get to by messing with the responses you get to look-ups.  If your computer can’t even work out where the BBC news server is, then how can it visit it?

DoH – DNS-over-HTTPS – aims to fix this problem.  Rather than your browser asking your computer to do a DNS look-up and give it back the IP address, DoH has the browser itself do the look-up, and do it over a secure connection.  That’s what the HTTPS stands for – “HyperText Transfer Protocol Secure” – it’s what your browser uses for all of that other secure traffic (look for the green padlock).  All someone monitoring your network traffic would see is a connection to a DNS server, but not what you’re asking the DNS server itself.  This is a nice fix, and the system (DoH) is already implemented by the well-known Tor browser.
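
To give a feel for what a DoH look-up involves, here’s a short sketch in Python against Cloudflare’s public DoH resolver, using its JSON API (the endpoint, parameters and response fields are as publicly documented, though they may of course change; the queried name is just an example):

    import requests  # third-party HTTP library

    # A DNS look-up performed over HTTPS: an eavesdropper sees only an
    # encrypted connection to the resolver, not the name being looked up.
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "aliceevebob.com", "type": "A"},
        headers={"Accept": "application/dns-json"},
    )
    resp.raise_for_status()
    for answer in resp.json().get("Answer", []):
        print(answer["name"], answer["data"])  # the name and its IP address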

The reason that I’m writing about it now is that Firefox – a very popular open source browser, used by millions of people across the world – is beginning to roll out DoH by default in a trial of a small percentage of users.  If the trial goes well, it will be available to people worldwide.  This is likely to cause problems in some oppressive regimes, where using this functionality will probably be considered grounds for suspicion on its own, but I generally welcome any move which improves the security of everyday users, and this is definitely an example of one of those.


1 – for IPv4.  I’m not going to start on IPv6: maybe another time.


Learn to hack online – h4x0rz and pros

Removing these videos hinders defenders much more significantly than it impairs the attackers.

Over the past week, there has been a minor furore[1] over YouTube’s decision to block certain “hacking” videos.  According to The Register, the policy first appeared on the 5th April 2019:

“Instructional hacking and phishing: Showing users how to bypass secure computer systems or steal user credentials and personal data.”

Now, I can see why they’ve done this: it’s basic backside-covering.  YouTube – and many of the other social media outlets – come under lots of pressure from governments and other groups for failing to regulate certain content.  The sort of content to which such groups most typically object is fake news, certain pornography or child abuse material: and quite rightly.  I sympathise, sometimes, with the social media giants as they try to regulate a tidal wave of this sort of material – and I have great respect for those employees who have to view some of it.  Having written policies to ban this sort of thing may not deter many people from posting it, but it does mean that the social media companies have a cast-iron excuse for excising it when they come across it.

Having a similar policy to ban these types of video feels, at first blush, like the same sort of thing: you can point to your policy when groups complain that you’re hosting material that they don’t like – “those dangerous hacking videos”.

Hacking[3] videos are different, though.  The most important point is that they have a  legitimate didactic function: in other words, they’re useful for teaching.  Nor do I think that there’s a public outcry from groups wanting them banned.  In fact, they’re vital for teaching and learning about IT security, and most IT security professionals and organisations get that.  Many cybersecurity techniques are difficult to understand properly when presented as theoretical attacks and, more importantly, they are difficult to defend against without detailed explanation and knowledge.  This is the point: these instructional videos are indispensable tools to allow people not just to understand, but to invent, apply and maintain defences and mitigations against known attacks.  IT security is hard, and we need access to knowledge to help us defeat the Bad Folks[tm] who we know are out there.

“But these same Bad Folks[tm] will see these videos online and use them against us!” certain people will protest.  Well, yes, some will.  But if we ban and wipe them from widely available social media platforms, where they are available for legitimate users to study, they will be pushed underground, and although fewer people may find them, the nature of our digital infrastructure means that the reach of those few people is still enormous.

And there is an imbalance between attackers and defenders: this move exacerbates it.  Most defenders look after small numbers of systems, but most serious attackers have the ability to go after many, many systems.  By pushing these videos away from places where many defenders can learn from them, we have removed the opportunity for those who most need access to this information, whilst, at most, raising the bar for those we are trying to protect against.

I’m sure there are numbers of “script-kiddy” type attackers who may be deterred or have their access to these videos denied, but they are significantly less of a worry than motivated, resourced attackers: the ones that haunt many IT security folks’ worst dreams.  We shouldn’t let a mitigation against (relatively) low-risk attackers remove our ability to defend against higher-risk ones.

We know that sharing of information can be risky, but this is one of those cases in which the risks can be understood and measured against others, and it seems like a pretty simple calculation this time round.  To be clear: the (good) many need access to these videos to protect against the (malicious) few.  Removing these videos hinders the good much more significantly than it impairs the malicious, and we, as a community, should push back against this trend.


1 – it’s pronounced “few-ROAR-ray”.  And “NEEsh”.  And “CLEEK”[2].

2 – yes, I should probably calm down.

3 – I’d much rather refer to these as “cracking” videos, but I feel that we probably lost that particular battle about 20 years ago now.

What’s an HSM?

HSMs are not right for every project, but form an important part of our armoury.


Another week, another TLA[1].  This time round, it’s Hardware Security Module: an HSM.  What, then, is an HSM, what is it used for, and why should I care?  Before we go there, let’s think a bit about keys: specifically, cryptographic keys.

The way that most cryptography works these days is that the algorithms to implement a particular primitive[3] are public, and it’s generally accepted that it doesn’t matter whether you know what the algorithm is, or how it works, as it’s the security of the keys that matters.  To give an example: I plan to encrypt a piece of data under the AES algorithm[4], which allows for a particular type of (symmetric) encryption.  There are two pieces of data which are fed into the algorithm:

  1. the data you want to encrypt (the cleartext);
  2. a key that you’ve chosen to encrypt it.

Out comes one piece of data:

  1. the encrypted text (the ciphertext).

In order to decrypt the ciphertext, you feed that and the key into the AES algorithm, and the original cleartext comes out.  Everything’s great – until somebody gets hold of the key.
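
As a concrete sketch of that round trip (in Python, using the third-party cryptography library; AES-CTR is used here simply because some mode of operation must be chosen, and the key and data are illustrative):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)    # the secret key: 256 bits
    nonce = os.urandom(16)  # per-message value; not secret, but never reuse it
    cleartext = b"attack at dawn"

    # Two inputs (cleartext, key) go in; one output (ciphertext) comes out.
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    ciphertext = enc.update(cleartext) + enc.finalize()

    # Feed the ciphertext and the same key back in to recover the cleartext.
    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    assert dec.update(ciphertext) + dec.finalize() == cleartext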

This is where HSMs come in.  Keys are vital, and they are vulnerable:

  • at creation time – if I can trick you into creating a key some of whose bits I can guess, I increase my chances of being able to decrypt your ciphertext;
  • during use – while you’re doing the encryption or decryption of your data, your key will be in memory, which means that if I can snoop into that memory, I can get it (see also below for information on “side channel attacks”);
  • while stored – unless you protect your key while it’s “at rest”, and waiting to be used, I may have opportunities to get it;
  • while being transferred – if you store your keys somewhere different to the place in which you’re using it, I may have an opportunity to intercept it as it moves to the place it will be used.

HSMs can help in one way or another with all of these pieces, but why do we need them?  The key reason is that there are times when you can’t be certain that the system(s) you are using for creating, using, storing and transferring keys are as secure as you’d like.  If the keys we’re talking about are for encrypting a few emails between you and your spouse, well, you might find it embarrassing if they were compromised, but if these keys are ones from which, say, you derive all of the credit card chip keys for an entire bank, then you have a rather larger problem.  When it comes down to it, somebody with sufficient privilege on a standard computing system can look at any part of memory – unless there’s a TEE[5] (Oh, how I love my TEE (or do I?)) – and if they can look at the memory, they can see the key.

Worse than this, there are occasions when even if you can’t see into memory, you might be able to derive enough information about a key – or the ciphertext or cleartext – to be able to mount an attack on it.  Attacks of this type are generally called “side channel attacks”, and you can think of them as a little akin to being able to work out the number of cylinders and valves a car[6] engine has by listening to it through the bonnet[7].  The engine leaks information about itself, even though it’s not designed with that in mind.  HSMs are (generally) good at preventing both types of attacks: it’s what they’re designed to do.

Here, then, is a definition:

An HSM is a piece of hardware with protected storage, attached to a system via a network connection or another connection such as PCI, which can perform cryptographic operations and which has physical protection from various attacks, from side channel attacks to somebody physically levering open the case and attaching wires to important components so that they can read the electrical signals.

Many HSMs undergo testing to get certification against certain standards such as “FIPS 140” to show their ability to withstand various types of attack.

Here are the main uses for HSMs.

Key creation

Creation of keys is, as alluded to above, a very important operation, and one where side attacks have proved very effective in the past.  HSMs can provide safe(r) key generation, and ensure appropriate levels of randomness (entropy) for the required strength of key.
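
For comparison, here’s the non-HSM baseline, a minimal sketch of software key generation:

    import secrets

    # Without an HSM, the best a typical application can do is draw key
    # material from the operating system's cryptographically secure random
    # source; if that source is weak or tampered with, so is the key.
    # An HSM generates the key inside tamper-resistant hardware instead,
    # usually with its own dedicated entropy source.
    aes_key = secrets.token_bytes(32)  # 256 bits of key material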

Key storage

HSMs are typically designed so that if somebody tries to break into them, they will delete any keys which are stored within them, so they’re a good place to store your keys.

Cryptographic operations

Rather than putting your keys at risk by transferring them to another system, and away from the safety of the HSM, why not move the cleartext to the HSM (encrypted under a transport key, preferably), get the HSM to do the encryption with the keys that it already holds, and then send the ciphertext back (encrypted under a transport key[8])?  This reduces opportunities for attacks during transport and during use, and is a key use for HSMs.
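
Here’s a purely illustrative sketch of that flow: the “HSM side” is ordinary Python standing in for the device, and AES-GCM stands in for whatever wrapping mechanism a real HSM would mandate:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def wrap(key, data):
        # Encrypt 'data' under 'key', prefixing the nonce for transport.
        nonce = os.urandom(12)
        return nonce + AESGCM(key).encrypt(nonce, data, None)

    def unwrap(key, blob):
        return AESGCM(key).decrypt(blob[:12], blob[12:], None)

    transport_key = AESGCM.generate_key(bit_length=256)  # shared with the HSM
    storage_key = AESGCM.generate_key(bit_length=256)    # never leaves the "HSM"

    # Client side: send the cleartext, wrapped under the transport key.
    to_hsm = wrap(transport_key, b"secret payroll data")

    # "HSM" side: unwrap, encrypt under the stored key, wrap the result.
    ciphertext = wrap(storage_key, unwrap(transport_key, to_hsm))
    from_hsm = wrap(transport_key, ciphertext)

    # Client side: remove the transport layer; the inner ciphertext can
    # only ever be decrypted by the key held inside the HSM.
    stored_ciphertext = unwrap(transport_key, from_hsm)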

General computing operations

Not all HSMs support this use (almost all will support the others), but if you have sensitive operations involving lots of keys and algorithms – algorithms which, in the case of AI/ML, for instance, may themselves be sensitive (unlike the cryptographic primitives we were talking about before) – then it is possible to write applications specifically to run on an HSM.  This is not a simple undertaking, however, as the execution environment provided is likely to be constrained.  It is difficult to do “right”, and easy to make mistakes which may leave you with a significantly less secure environment than you had thought.

Conclusion – should I use HSMs?

HSMs are excellent as roots of trust for PKI [9] projects and similar.  Using them can be difficult, but most these days should provide a PKCS#11 interface which simplifies the most common operations.  If you have sensitive key or cryptographic requirements, designing HSM use into your system can be a sensible step, but knowing how best to use them must be part of the architecture and design stages, well before implementation.  You should also take into account that operation of HSMs must be managed very carefully, from provisioning through everyday use to de-provisioning.  Use of an HSM in the cloud may make sense, but they are expensive and do not scale particularly well.
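
By way of illustration, here’s a sketch along the lines of the python-pkcs11 library’s documented quickstart; the module path, token label and PIN are assumptions you would replace for your own device:

    import pkcs11

    # Load the vendor's PKCS#11 module. The path shown is where SoftHSM2
    # (a software HSM, useful for experimentation) lives on many Linux
    # systems - an assumption, like the token label and PIN below.
    lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")
    token = lib.get_token(token_label="demo")

    with token.open(user_pin="1234") as session:
        # Generate an AES key inside the token: it need never leave.
        key = session.generate_key(pkcs11.KeyType.AES, 256)
        iv = session.generate_random(128)  # random bits from the HSM
        ciphertext = key.encrypt(b"sensitive data", mechanism_param=iv)
        assert key.decrypt(ciphertext, mechanism_param=iv) == b"sensitive data"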

HSMs, then, are suited to very particular use cases of highly sensitive data and operations – it is no surprise that their deployment is most common within military, government and financial settings. HSMs are not right for every project, by any means, but form an important part of our armoury for the design and operation of sensitive systems.


1 – Three Letter Acronym[2]

2 – keep up, or we’ll be here for some time.

3 – cryptographic building block.

4 – let’s pretend there’s only one type of AES for the purposes of this example.  In fact, there are a number of nuances around this example which I’m going to gloss over, but which shouldn’t be important for the point I’m making.

5 – Trusted Execution Environment.

6 – automobile, for our North American friends.

7 – hood.  Really, do we have to do this every time?

8 – why do you need to encrypt something that’s already encrypted?  Because you shouldn’t use the same key for two different operations.

9 – Public Key Infrastructure.

Why Chatham House Rulez for security

Security sometimes requires sharing – but not attribution

In June 1927, someone had a brilliant idea.  Or, at least, that’s when the idea was first codified, at a meeting of the Royal Institute of International Affairs at Chatham House in London.  The idea was this: all attendees of the meeting could quote comments made at the meeting, but they weren’t allowed to say who had made the comment.

This became known as the Chatham House Rule, and the most recent incarnation is defined thus:

When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.

This is brilliantly clever.  It allows at least two things:

  1. for the sharing of information which might be sensitive to a particular entity when associated with that entity, but which is still useful when applied without that attribution;
  2. for the sharing of views or opinions which, when associated with a particular person or organisation, might cause wider issues or problems.

The upshot of this is that if somebody (say, Person A) values the expertise, opinion and experience of another person (say, Person B), then they can share that other person’s views with people who may not know Person B, or whose views of Person B may be biased by their background or associations.  This is a form of transitive trust, and situations where transitive trust is made explicit are, in my opinion, to be lauded (such trust relationships are too often implicit, rather than explicit).

The Chatham House Rule and open source

What has this got to do with open source, though?  My answer is: a lot.

Security is one of those areas which can have an interesting relationship with open source.  I’m passionately devoted to the principle that open-ness is vital to security, but there are times when this is difficult.  The first is to do with data, and the second is to do with perceived expertise.

Why data is difficult

While we all (hopefully) want to ensure that all our security-related code is open source, the same cannot be said for data.  There is absolutely a place for open data – citizen-related data is the most obvious, e.g. bus timetables, town planning information – and there’s data that we’d like to be more open, but not if it can be traced to particular entities – aggregated health information is great, but people aren’t happy about their personal health records being exposed.  The same goes for financial data – aggregated information about people’s spending and saving habits is extremely useful, but I, for one, don’t want my bank records revealed to all and sundry.

Moving specifically to security, what about data such as the number of cyber-attacks – successful and unsuccessful – against companies?  The types that were most successful?  The techniques that were used to mitigate?  All of these are vastly useful to the wider community, and there’s a need to share them more widely.  We’re seeing some initiatives to allow this already, and aggregation of this data is really important.

There comes a time, however, when particular examples are needed.  And as soon as you have somebody stand up and say “This is what happened to us”, then they’re likely to be in trouble from a number of directions, which may include: their own organisation, their lawyers, their board, their customers and future attackers, who can use that information to their advantage.  This is where the Chatham House Rule can help: it allows experts to give their view and be listened to without so much danger from the parties listed above.

It also allows other people to say “we hadn’t thought of that”, or “we’re not ready for that”, or similar, without putting their organisations – or their reputations – on the line.  Open source needs this, and there are times when those involved in open source security, in particular, need to be able to share the information they know in a way which doesn’t put their organisations in danger.

Why expertise is difficult

Another area of difficulty is expertise, or more specifically, trust in expertise.  Most organisations aim for a meritocratic approach – or say they do – at least within that organisation.  But the world is full of bias, particularly between organisations.  I may be biased against views held or expressed by a particular organisation, just because of their past history and my interactions with that company, but it is quite possible that there are views held and expressed by individuals from that company which, if separated from their attribution, I might take seriously.  I may be biased against a particular person, based on my previous interactions with him/her, or just on my underlying prejudices.  It only needs one person who does not hold my biases to represent those views, as long as they personally trust the organisation, or even just the person, expressing them, to allow me to process and value those views myself, gaining valuable insight from them.  The Chatham House Rule can allow that to happen.

In fact, the same goes for intra-organisation biases: maybe product management isn’t interested in the views of marketing, but what if there are important things to learn from within that department, that product management can’t hear because of that bias?  The Chatham House Rule allows an opportunity to get past that.

To return to open source, many contributors are employed by a particular organisation, and it can be very difficult for them to express opinions around open source when that organisation may not hold the same views, however carefully they try to separate themselves from the official line.  Even more important, in terms of security, it may very well be that they can bring insights which are relevant to a particular security issue which their company is not happy about being publicly known, but which could benefit one or more open source projects.  To be clear: I’m not talking, here, about exposing information which is specifically confidential, but about sharing information with the permission of the organisation, but within specific constraints.

More on open source

There are all sorts of biases within society, and open source is, alas, not without its own.  When a group of people gets to know each other well, however, it is often the case that members of that group can forge a respect for each other which goes beyond gender, age, academic expertise, sexuality, race or the like.  This is a perfect opportunity for meetings under the Chatham House Rule: it gives this group the chance to discuss and form opinions which can be represented to their peers – or the rest of the world – without having to worry so much about any prejudices or biases that might be aimed at particular members.

Finally – a note of caution

The Chatham House Rule provides a great opportunity to share expertise and knowledge, but there is also a danger that it can allow undue weight to be given to anecdotes.  Stories are a great way of imparting information, but without data to back them up, they are not as trustworthy as they might be.  The fact that the Chatham House Rule inhibits external attribution does not mean that due diligence should not be applied within such a meeting to ensure that information is backed up by data.