Open source and – well, bad people

For most people writing open source, it – open source software – seems like an unalloyed good.  You write code, and other nice people, like you, get to add to it, test it, document it and use it.  Look what good it can do to the world!  Even the Organisation-Formerly-Known-As-The-Evil-Empire has embraced open source software, and is becoming a happy and loving place, supporting the community and both espousing and proselytising the Good Thing[tm] that is open source.  Many open source licences are written in such a way that it’s well-nigh impossible for an organisation to make changes to open source and profit from it without releasing the code they’ve changed.  The very soul of open source – the licence – is doing our work for us: improving the world.

And on the whole, I’d agree.  But when I see uncritical assumptions being peddled – about anything, frankly – I start to wonder.  Because I know, and you know, when you think about it, that not everybody who uses open source is a good person.  Crackers (that’s “bad hackers”) use open source.  Drug dealers use open source.  People traffickers use open source.  Terrorists use open source.  Maybe some of them contribute patches and testing and documentation – I suppose it’s even quite likely that a few actually do – but they are, by pretty much anyone’s yardstick, not good people.  These are the sorts of people you probably shrug your shoulders about and say, “well, there’s only a few of them compared to all the others, and I can live with that”.  You’re happy to continue contributing to open source because many more people are helped by it than harmed by it.  The alternative – not contributing to open source – would fail to help as many people, and so the first option is the lesser of two evils and should be embraced. This is, basically, a utilitarian argument – the one popularised by John Stuart Mill: “classical utilitarianism”[1].  This is sometimes described as:

“Actions are right in proportion as they tend to promote overall human happiness.”

I certainly hope that open source does tend to promote overall human happiness.  The problem is that criminals are not the only people who will be using open source – your open source – code.  There will be businesses whose practices are shady, governments that oppress their detractors, and police forces that spy on the citizens they watch.  This is your code, being used to do bad things.

But what even are bad things?  This is one of the standard complaints about utilitarian philosophies – it’s difficult to define objectively what is good, and, by extension, what is bad.  We (by which I mean law-abiding citizens in most countries) may be able to agree that people trafficking is bad, but there are many areas that we could call grey[2]:

  • tobacco manufacturers;
  • petrochemical and fracking companies;
  • plastics manufacturers;
  • organisations who don’t support LGBTQ+ people;
  • gun manufacturers.

There’s quite a range here, and that’s intentional.  The last example is also carefully chosen.  One of the early movers in what would become the open source movement is Eric Raymond (known to one and all by his initials “ESR”), who is a long-standing supporter of gun rights[3].  He has, as he has put it, “taken some public flak in the hacker community for vocally supporting firearms rights”.  For ESR, “it’s all about freedom”.  I disagree, although I don’t feel the need to attack him for it.  But it’s clear that his view of what constitutes good is different to mine.  I take a very liberal view of LGBTQ+ rights, but I know people in the open source community who wouldn’t take the same view.  Although we tend to characterise the open source community as liberal, this has never been a good generalisation.  According to the Jargon File (later published as “The New Hacker’s Dictionary”), the politics of the average hacker are:

Vaguely liberal-moderate, except for the strong libertarian contingent which rejects conventional left-right politics entirely. The only safe generalization is that hackers tend to be rather anti-authoritarian; thus, both conventional conservatism and ‘hard’ leftism are rare. Hackers are far more likely than most non-hackers to either (a) be aggressively apolitical or (b) entertain peculiar or idiosyncratic political ideas and actually try to live by them day-to-day.

This may be somewhat out of date, but it still feels as though this description would resonate with many of those who self-consciously consider themselves part of the open source community.  Still, it’s clear that we, as a community, are never going to be able to agree on what counts as a “good use” of open source code by a “good” organisation.  Even if we could, the chances of anybody being able to create a set of licences that would stop the people who might be considered bad are fairly slim.

Still, though, I’m not too worried.  I think that we can extend the utilitarian argument to say that the majority of uses of open source software would be considered good by most open source contributors, or at least that the balance of “good” over “bad” would generally be considered to lean towards the good side.  So – please keep contributing: we’re doing good things (whatever they might be).


1 – I am really not an ethicist or a philosopher, so apologies if I’m being a little rough round the edges here.

2 – you should be used to this by now: UK spelling throughout.

3 – “Yes, I cheerfully refer to myself as a gun nut.” – Eric’s Gun Nut Page

First aid – are you ready?

Your using the defibrillator is the best chance that the patient has of surviving.

Disclaimer: I am not a doctor, nor a medical professional. I will attempt not to give specific medical or legal advice in this article: please check with your local medical and legal professionals before embarking on any course of action about which you are unsure.

This is, generally, a blog about security – that is, information security or cybersecurity – but I sometimes blog about other things. This is one of those articles. It’s still about security, if you will – the security and safety of those around you. Here’s how it came about: I recently saw a video on LinkedIn about a restaurant manager performing Abdominal Thrusts (it’s not called the Heimlich Manoeuvre any more due to trademarking) on a choking customer, quite possibly saving his life.

And I thought: I’ve done that.

And then I thought: I’ve performed CPR, and used a defibrillator, and looked after people who were drunk or concussed, and helped people having a diabetic episode, and encouraged a father to apply an epipen[1] to a confused child suffering from anaphylactic shock, and comforted a schoolchild who had just had an epileptic fit, and attended people in more than one car crash (typically referred to as an “RTC”, or “Road Traffic Collision” in the UK these days[2]).

And then I thought: I should tell people about these stories. Not to boast[3], but because if you travel a lot, or you commute to work, or you have a family, or you work in an office, or you ever go out to a party, or you play sports, or engage in hobby activities, or get on a plane or train or boat or drive anywhere, then there’s a decent chance that you may come across someone who needs your help, and it’s good – very good – if you can offer them some aid. It’s called “First Aid” for a reason: you’re not expected to know everything, or fix everything, but you’re the first person there who can provide aid, and that’s the best the patient can expect until professionals arrive.

Types of training

There are a variety of levels of first aid training that might be appropriate for you. These include:

  • family and children focussed;
  • workplace first aid;
  • hobby, sports and event first aid;
  • ambulance and local health service support and volunteering.

There’s an overlap between all of these, of course, and what you’re interested in, and what’s available to you, will vary based on your circumstances and location. There may be other constraints such as age and physical ability or criminal background checks: these will definitely be dependent on your location and individual context.

I’m what’s called, in the UK, a Community First Responder (CFR). We’re given some specific training to help provide emergency first aid in our communities. What exactly you do depends on your local ambulance trust – I’m with the East of England Ambulance Service Trust, and I have a kit with items to allow basic diagnosis and treatment which includes:

  • a defibrillator (AED) and associated pads, razors[4], shears[5], etc.
  • a tank of oxygen and various masks
  • some airway management equipment whose name I can never remember
  • glucogel for diabetic treatment
  • a pulse oximeter for heart rate and blood oxygen saturation measurement
  • gloves
  • bandages, plasters[6]
  • lots of forms to fill in
  • some other bits and pieces.

I also have a phone and a radio (not all CFRs get a radio, but our area is rural and has particularly bad mobile phone reception).

I’m on duty as I type this – I work from home, and my employer (the lovely Red Hat) is cool with my attending emergency calls in certain circumstances – and could be called out at any moment to an emergency within about a 10 mile/15km radius. Among the call-outs I’ve attended are cardiac arrests (“heart attacks”), fits, anaphylaxis (extreme allergic reactions), strokes, falls, diabetics with problems, drunks with problems, major bleeding, patients with difficulty breathing or chest pains, sepsis, and lots of stuff which is less serious (and which has maybe been misreported). The plan is that if the condition is considered serious, and it looks like I can get there before an ambulance, or if the crew is likely to need more hands to help (for treating a full cardiac arrest, a good number of people can really help), then I get dispatched. I drive my own car, I’m not allowed sirens or lights, I’m not allowed to break the speed limit or go through red lights, and I don’t attend road traffic collisions. I volunteer whatever hours fit around my job and broader life, I don’t get paid, and I provide my own fuel and vehicle insurance. I get anywhere from zero to four calls a day (but most often zero or one).

There are volunteers in other fields who attend events, provide sports or hobby first aid (I did some scuba diving training a while ago), and there are all sorts of types of training for workplace first aid. Most workplaces will have designated first aiders who can be called on if there’s a problem.

The minimum to know

The people I’ve just noted above – the trained ones – won’t always be available. Sometimes, you – with no training – will be the first on scene. In most jurisdictions, if you attempt first aid, the law will look kindly on you, even if you don’t get it all perfect[7]. In some jurisdictions, there’s actually an expectation that you’ll step in. What should you know? What should you do?

Here’s my view. It’s not the view of a professional, and it doesn’t take into account everybody’s circumstances. Again, it’s my view, and it’s that you should consider enough training to be able to cope with two of the most common – and serious – medical emergencies.

  1. Everybody should know how to deal with a choking patient.
  2. Everybody should know how to do CPR (cardiopulmonary resuscitation) – chest compressions, at minimum, but with artificial respiration if you feel confident.

In the first of these cases, if someone is choking, and they continue to fail to breathe, they will die.

In the second of these cases, if someone’s heart has stopped beating, they are dead. Doing nothing means that they stay that way. Doing something gives them a chance.

There are videos and training available on the Internet, or provided by many organisations.

The minimum to try

If you come across somebody who is in cardiac arrest, call the emergency services. Dispatch someone (if you’re not alone) to try to find a defibrillator (AED) – the emergency services call centre will often help with this, or there’s an app called “GoodSam” which will locate one for you.

Use the defibrillator.

They are designed for untrained people. You open one up, and it will talk to you. Do what it says.

Even if you don’t feel confident giving CPR, use a defibrillator.

I have used a defibrillator. They are easy to use.

Use that defibrillator.

The defibrillator is not the best chance that the patient has of surviving: your using the defibrillator is the best chance that the patient has of surviving.

Conclusion

Providing first aid for someone in a serious situation doesn’t always work. Sometimes people die. In fact, in the case of a cardiac arrest (heart attack), the percentage of times that CPR is successful is low – even in a hospital setting, with professionals on hand. If you have tried, you’ve given them a chance. It is not your fault if the outcome isn’t perfect. But if you hadn’t tried, there was no chance.

Please respect and support professionals, as well. They are often busy and concerned, and may not have the time to thank you, but your help is appreciated. We are lucky, in our area, that the huge majority of EEAST ambulance personnel are very supportive of CFRs and others who help out in an emergency.

If this article has been interesting to you, and you are considering taking some training, then get to the end of the post, share it via social media(!), and then search online for something appropriate to you. There are many organisations who will provide training – some for free – and many opportunities for volunteering. You know that if a member of your family needed help, you would hope that somebody was capable and willing to provide it.

Final note – if you have been affected by anything in this article, please find some help, whether professional or just with friends. Many of the medical issues I’ve discussed are distressing, and self care is important (it’s one of the things that EEAST takes seriously for all its members, including its CFRs).


1 – a special adrenaline-administering device (don’t use somebody else’s – they’re calibrated pretty carefully to an individual).

2 – calling it an “accident” suggests it was no-one’s fault, when often, it really was.

3 – well, maybe a little bit.

4 – to shave hairy chests – no, really.

5 – to cut through clothing. And nipple chains, if required. Again, no, really.

6 – “Bandaids” for our US cousins.

7 – please check your local jurisdiction’s rules on this.

Immutability: my favourite superpower

As a security guy, I approve of defence in depth.

I’m a recent but dedicated convert to Silverblue, which I run on my main home laptop and which I’ll be putting onto my work laptop when I’m due a hardware upgrade in a few months’ time.  I wrote an article about Silverblue over at Enable Sysadmin, and over the weekend, I moved the laptop that one of my kids has over to it as well.  You can learn more about Silverblue over at the main Silverblue site, but in terms of usability, look and feel, it’s basically a version of Fedora.  There’s one key difference, however, which is that the operating system is mounted read-only, meaning that it’s immutable.

What does “immutable” mean?  It means that it can’t be changed.  To be more accurate, in a software context, it generally means that something can’t be changed during run-time.

Important digression – constant immutability

I realised as I wrote that final sentence that it might be a little misleading.  Many programming languages have the concept of “constants”.  A constant is a variable (or set, or data structure) which is constant – that is, not variable.  You can assign a value to a constant, and generally expect it not to change.  But – and this depends on the language you are using – it may be that the constant is not immutable.  This seems to go against common sense[1], but that’s just the way that some languages are designed.  The bottom line is this: if you have a variable that you intend to be immutable, check the syntax of the programming language you’re using, and take any specific steps needed to maintain that immutability.
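
Here’s a quick sketch in Python (my own example – the details differ from language to language) of how a “constant” can fail to be immutable unless you take such steps:

    # A Python "constant" is purely a naming convention: nothing stops code
    # from rebinding it or, worse, mutating the structure it points to.
    ALLOWED_PORTS = [22, 80, 443]   # intended to be constant...
    ALLOWED_PORTS.append(8080)      # ...but no error: the "constant" just changed

    # Choosing an immutable type gives you actual run-time protection:
    FIXED_PORTS = (22, 80, 443)     # a tuple cannot be modified in place
    try:
        FIXED_PORTS.append(8080)
    except AttributeError:
        print("tuples have no append(): mutation refused")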

Operating System immutability

In Silverblue’s case, it’s the operating system that’s immutable.  You install applications in containers (of which more later), using Flatpak, rather than onto the root filesystem.  This means not only that the installation of applications is isolated from the core filesystem, but also that the ability for malicious applications to compromise your system is significantly reduced.  It’s not impossible[2], but the risk is significantly lower.

How do you update your system, then?  Well, what you do is create a new boot image which includes any updated packages that are needed, and when you’re ready, you boot into that.  Silverblue provides simple tools to do this: it’s arguably less hassle than the standard way of upgrading your system.  This approach also makes it very easy to maintain different versions of an operating system, or installations with different sets of packages.  If you need to test an application in a particular environment, you boot into the image that reflects that environment, and do the testing.  Another environment?  Another image.
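
Concretely – and this is a sketch of mine rather than official documentation, so check the current Silverblue docs, as the tooling evolves – a typical update cycle looks something like this:

    $ rpm-ostree status     # list the boot images ("deployments") on the system
    $ rpm-ostree upgrade    # compose a new boot image with updated packages
    $ systemctl reboot      # boot into the new image
    $ rpm-ostree rollback   # unhappy?  boot straight back into the old image

Applications, meanwhile, come and go via Flatpak (for example, flatpak install flathub org.mozilla.firefox), leaving the immutable base untouched.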

We’re more interested in the security properties that this offers us, however.  Not only is it very difficult to compromise the core operating system as a standard user[3], but you are always operating in a known environment, and knowability is very much a desirable property for security, as you can test, monitor and perform forensic analysis from a known configuration.  From a security point of view (let alone what other benefits it delivers), immutability is definitely an asset in an operating system.

Container immutability

This isn’t the place to describe containers (also known as “Linux containers” or, less frequently or accurately these days, “Docker containers”) in detail, but they are basically collections of software that you create as images and then run as workloads on a host server (sometimes known as a “pod”).  One of the great things about containers is that they’re generally very fast to spin up (provision and execute) from an image, and another is that the format of that image – the packaging format – is well-defined, so it’s easy to create the images themselves.

From our point of view, however, what’s great about containers is that you can choose to use them immutably.  In fact, that’s the way they’re generally used: using mutable containers is generally considered an anti-pattern.  The standard (and “correct”) way to use containers is to bundle each application component and required dependencies into a well-defined (and hopefully small) container, and deploy that as required.  The way that containers are designed doesn’t mean that you can’t change any of the software within the running container, but the way that they run discourages you from doing that, which is good, as you definitely shouldn’t.  Remember: immutable software gives better knowability, and improves your resistance to run-time compromise.  Instead, given how lightweight containers are, you should design your application in such a way that if you need to, you can just kill the container instance and replace it with an instance from an updated image.

This brings us to two of the reasons that you should never run containers with root privilege (there’s a short sketch of both points after this list):

  • there’s a temptation for legitimate users to use that privilege to update software in a running container, reducing knowability, and possibly introducing unexpected behaviour;
  • there are many more opportunities for compromise if a malicious actor – human or automated – can change the underlying software in the container.
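
Container runtimes will enforce both properties for you if you ask. Here’s a minimal sketch using podman – the image name is a placeholder of mine, and similar flags exist for other runtimes:

    # Run a (hypothetical) image with a read-only filesystem, as an
    # unprivileged user, throwing the instance away on exit - updates
    # happen by deploying a new image tag, not by patching in place:
    $ podman run --rm --read-only --user 1000:1000 registry.example.com/mywebapp:1.2.3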

Double immutability with Silverblue

I mentioned above that Silverblue runs applications in containers.  This means that you have two levels of security provided by default when you run applications on a Silverblue system:

  1. the operating system immutability;
  2. the container immutability.

As a security guy, I approve of defence in depth, and this is a classic example of that property.  I also like the fact that I can control what I’m running – and what versions – with a great deal more ease than if I were on a standard operating system.


1 – though, to be fair, the phrases “programming language” and “common sense” are rarely used positively in the same sentence in my experience.

2 – we generally try to avoid the word “impossible” when describing attacks or vulnerabilities in security.

3 – as with many security issues, once you have sudo or root access, the situation is significantly degraded.

What’s an HSM?

HSMs are important for improving security, but they’re not the right fit for every project.

Another three-letter acronym this week. (Translator’s note: I haven’t managed to translate every week’s post yet – I’m working on it!)

This time it’s the HSM: the Hardware Security Module.

What is an HSM? What is it used for? And why should you consider one?

Before we get to that, let’s think about keys – cryptographic keys in particular.

In most modern cryptography, the algorithms in use (block ciphers, for example) are public, well-known and generally accepted. Knowing the algorithm, or how it works, isn’t the problem: what matters is the security of the key.

 

As an example, let’s say that you want to encrypt some data using the AES algorithm, which gives you a particular type of (symmetric) encryption. (For this example we’ll treat AES as a single algorithm; in reality there are a number of subtle variants, which I’ll skip over here, but the point stands.)

You provide the algorithm with two pieces of data:

  1. the plaintext data that you want to encrypt;
  2. the key to encrypt it with.

You get one piece of data out:

  1. the encrypted data.

To decrypt the encrypted data, you feed it, along with the key, back into the AES algorithm, and out comes the original plaintext data. This scheme works very well – so long as the key isn’t stolen.
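
To make this concrete, here’s a minimal sketch (mine, not the original article’s) using the Python cryptography library – note that everything hinges on who holds key:

    # pip install cryptography
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # the secret everything depends on
    nonce = os.urandom(12)                     # not secret, but must never be reused

    ciphertext = AESGCM(key).encrypt(nonce, b"attack at dawn", None)

    # Anyone holding the key (and nonce) can recover the plaintext:
    assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"attack at dawn"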

 

This is where HSMs come in. Keys are vitally important, and they are particularly vulnerable at the following points in their lifecycle:

  • Key generation: if an attacker can introduce even a few bits of bias while a key is being generated, the data it encrypts becomes much easier to decrypt maliciously.
  • Key use: while data is being encrypted or decrypted, the key sits in memory – which means that anyone who can snoop on that memory can steal it (see “side-channel attacks”, below).
  • Key storage: unless a key is strongly protected when it is stored, it is liable to be stolen.
  • Key transfer: if a key is stored somewhere different from where it is used, it can be intercepted when it is transferred.

HSMs help in all of these cases. They are needed because there are times when you cannot be certain how secure the systems generating, using, storing and transferring your keys really are.

 

If the key is used to encrypt email and it is compromised, you end up in a deeply embarrassing situation; if it relates to the chips in all the credit cards you hold, things are very much worse.

If somebody has sufficient privilege on the computing system where you use a key, they can inspect its memory and obtain the key – unless, that is, you are running in a TEE (Trusted Execution Environment). Worse still, even an attacker who cannot see your memory may be able to derive information about the key (or the encrypted or plaintext data) and use it to mount an attack. Attacks of this type are generally called “side-channel attacks”.

 

It’s rather like a car engine: the cylinders and valves were never designed to give information away, but by listening to them through the bonnet you can still learn a great deal about what the engine is doing.

HSMs are built to defend against exactly this sort of attack.

 

So, let’s have a definition of an HSM.

An HSM is a piece of hardware with protected storage, attached to a system over a network or via a bus such as PCI, which can carry out cryptographic operations. It provides physical protection against a variety of attacks: side-channel attacks, attempts to prise the device open, and attempts to attach physical probes to its components in order to read the electrical signals.

 

Many HSMs are tested and certified against standards such as FIPS 140 in order to demonstrate that they can withstand a variety of types of attack.

 

Here are the main ways in which HSMs are used.

 

Key generation

Key generation is, as noted above, a vitally important operation – and one against which side-channel attacks can be particularly effective. HSMs allow (relatively) safe generation of keys, with the degree of randomness that good keys require.

 

Key storage

HSMs are well suited to storing keys, as they are designed to destroy the keys they hold if anybody attempts to tamper with them.

 

Cryptographic operations

Rather than transferring a key out of the safety of the HSM to another system – and exposing it on the way – why not move the plaintext to the HSM instead (ideally encrypted under a transfer key), have the HSM encrypt it with the key it already holds, and send the encrypted result back (again, protected in transit by a transfer key)? This reduces the opportunities for attack during both transfer and use, and it is the usual way in which the keys held by an HSM are used.

 

General computing

Not all HSMs support this use (though most support the operations above), but if you have sensitive work to do which involves lots of keys and algorithms, you can write applications to run on the HSM itself. This is for the genuinely sensitive cases – not the sort of mainstream workloads, AI or ML for example, that I have written about before.

 

Such assurances don’t come easily, though: the execution environment is often extremely constrained, doing things “right” is difficult, and making mistakes is easy – and mistakes can leave you far less secure than you thought you were.

 

Conclusion: should you use an HSM?

HSMs make excellent roots of trust for projects such as PKI (Public Key Infrastructure).

They can be difficult to use, but they should provide a PKCS#11 interface (Public Key Cryptography Standard #11), which simplifies the most common operations. If you have requirements around sensitive keys and cryptographic operations, using an HSM in your system is a sensible choice – but how you deploy and use it is a question for the architecture and design stages, and needs to be settled well before implementation starts.
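
For a flavour of that interface, here’s a rough sketch using the python-pkcs11 library against SoftHSM2, a software HSM emulator that is handy for development (the library path, token label and PIN are all placeholder assumptions of mine):

    # pip install python-pkcs11 -- assumes a SoftHSM2 token has been initialised
    import pkcs11

    lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")  # path varies by distro
    token = lib.get_token(token_label="demo-token")      # placeholder label

    with token.open(user_pin="1234") as session:         # placeholder PIN
        # The AES key is generated *inside* the token and need never leave it:
        key = session.generate_key(pkcs11.KeyType.AES, 256)

        iv = session.generate_random(128)                # 128-bit IV
        ciphertext = key.encrypt(b"very sensitive data", mechanism_param=iv)
        assert key.decrypt(ciphertext, mechanism_param=iv) == b"very sensitive data"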

 

Bear in mind, too, that working with HSMs – from day-to-day provisioning right through to de-provisioning – needs to be done with great care. Using an HSM can make a great deal of sense, but HSMs are expensive, and they often do not scale well.

 

HSMs are particularly well suited to use cases involving extremely sensitive data and operations, which is why they are most often found in the military, government and finance sectors.

An HSM will not fit every project, but it is an important weapon in the armoury for designing and operating sensitive systems.

 

Original article: https://aliceevebob.com/2019/06/11/whats-an-hsm/

11th June 2019 – Mike Bursell

 

Tags: security


Learn to hack online – h4x0rz and pros

Removing these videos hinders defenders much more significantly than it impairs the attackers.

Over the past week, there has been a minor furore[1] over YouTube’s decision to block certain “hacking” videos.  According to The Register, the policy first appeared on the 5th April 2019:

“Instructional hacking and phishing: Showing users how to bypass secure computer systems or steal user credentials and personal data.”

Now, I can see why they’ve done this: it’s basic backside-covering.  YouTube – and many of the other social media outlets – come under lots of pressure from governments and other groups for failing to regulate certain content.  The sort of content to which such groups most typically object is fake news, certain pornography or child abuse material: and quite rightly.  I sympathise, sometimes, with the social media giants as they try to regulate a tidal wave of this sort of material – and I have great respect for those employees who have to view some of it.  Having written policies to ban this sort of thing may not deter many people from posting it, but it does mean that the social media companies have a cast-iron excuse for excising it when they come across it.

Having a similar policy to ban these types of video feels, at first blush, like the same sort of thing: you can point to your policy when groups complain that you’re hosting material that they don’t like – “those dangerous hacking videos”.

Hacking[3] videos are different, though.  The most important point is that they have a legitimate didactic function: in other words, they’re useful for teaching.  Nor do I think that there’s a public outcry from groups wanting them banned.  In fact, they’re vital for teaching and learning about IT security, and most IT security professionals and organisations get that.  Many cybersecurity techniques are difficult to understand properly when presented as theoretical attacks and, more importantly, they are difficult to defend against without detailed explanation and knowledge.  This is the point: these instructional videos are indispensable tools to allow people not just to understand, but to invent, apply and maintain defences and mitigations against known attacks.  IT security is hard, and we need access to knowledge to help us defeat the Bad Folks[tm] who we know are out there.

“But these same Bad Folks[tm] will see these videos online and use them against us!” certain people will protest.  Well, yes, some will.  But if we ban and wipe them from widely available social media platforms, where they are available for legitimate users to study, they will be pushed underground, and although fewer people may find them, the nature of our digital infrastructure means that the reach of those few people is still enormous.

And there is an imbalance between attackers and defenders: this move exacerbates it.  Most defenders look after small numbers of systems, but most serious attackers have the ability to go after many, many systems.  By pushing these videos away from the places where many defenders could learn from them, we have removed the opportunity from those who most need access to this information, whilst, at most, slightly raising the bar for those against whom we are trying to protect ourselves.

I’m sure there are a number of “script-kiddy” type attackers who may be deterred or have their access to these videos denied, but they are significantly less of a worry than motivated, resourced attackers: the ones that haunt many IT security folks’ worst dreams.  We shouldn’t let a mitigation against (relatively) low-risk attackers remove our ability to defend against higher-risk attackers.

We know that sharing of information can be risky, but this is one of those cases in which the risks can be understood and measured against others, and it seems like a pretty simple calculation this time round.  To be clear: the (good) many need access to these videos to protect against the (malicious) few.  Removing these videos hinders the good much more significantly than it impairs the malicious, and we, as a community, should push back against this trend.


1 – it’s pronounced “few-ROAR-ray”.  And “NEEsh”.  And “CLEEK”[2].

2 – yes, I should probably calm down.

3 – I’d much rather refer to these as “cracking” videos, but I feel that we probably lost that particular battle about 20 years ago now.

Turtles – and chains of trust

There’s a story about turtles that I want to tell.

One of the things that confuses Brits is that many Americans[1] don’t know the difference between tortoises and turtles[2], whereas we (who have no species of either type native to our shores) seem to have no problem differentiating them[3].  This is the week when Americans[1] like to bash us Brits over the little revolution they had a couple of centuries ago, so I don’t feel too bad about giving them a little hassle about this.

As it happens, there’s a story about turtles that I want to tell. It’s important to security folks, to the extent that you may hear a security person just say “turtles” to a colleague in criticism of a particular scheme, which will just elicit a nod of agreement: they don’t like it. There are multiple versions of this story[4]: here’s the one I tell:

A learned gentleman[5] is giving a public lecture.  He has talked about the main tenets of modern science, such as the atomic model, evolution and cosmology.  At the end of the lecture, an elderly lady comes up to him.

“Young man,” she says.

“Yes,” says he.

“That was a very interesting lecture,” she continues.

“I’m glad you enjoyed it,” he replies.

“You are, however, completely wrong.”

“Really?” he says, somewhat taken aback.

“Yes.  All that rubbish about the Earth hovering in space, circling the sun.  Everybody knows that the Earth sits on the back of a turtle.”

The lecturer, spotting a hole in her logic, replies, “But madam, what does the turtle sit on?”

The elderly lady looks at him with a look of disdain.  “What a ridiculous question!  It’s turtles all the way down, of course!”

The problem with the elderly lady’s assertion, of course, is one of infinite regression: there has to be something at the bottom.  The reason that this is interesting to security folks is that they know that systems need to have a “bottom turtle” at some point.  If you are to trust a system, it needs to sit on something: this is typically called the “TCB”, or Trusted Computing Base, and, in most cases, it needs to be rooted in hardware.  Even saying “rooted in hardware” is not enough: exactly what hardware you trust, and where, depends on a number of factors, including what you know about your hardware supply chain; what you feel about motherboards; what your security posture is; how realistic it is that State Actors might try to attack you; how deeply you want to delve into the hardware stack; and, ultimately, just how paranoid you are.

Principles for chains of trust

When you are building a system which you need to have some trust in, you will typically talk about the chain of trust, from the bottom up.  This idea of a chain of trust is very important, and very pervasive, within security.  It allows for some important principles:

  • there has to be a root of trust somewhere (the “bottom turtle”);
  • the chain is only as strong as its weakest link (and attackers will find it);
  • be explicit about each of the links in the chain;
  • realise that some of the links in the chain may change (e.g. if software is updated);
  • be aware that once you have lost trust in a chain, you need to rebuild it from at least the layer below the one in which you have lost trust;
  • simple chains (with no “joins” with other chains of trust) are much, much simpler to validate and monitor than more complex ones.

Software/hardware systems are not the only place in which you will encounter chains of trust: in fact, you come across them every time you make a TLS[6] connection to a web site (you know: that green padlock icon in the address bar).  In this case, there’s a chain (sometimes short, sometimes long) of certificates from a “root CA” (a trusted party that your browser knows about) down to the organisation (or department or sub-organisation) running the web site to which you’re connecting.  Assuming that each link in the chain trusts the next link to be who they say they are, the chain of signatures (turned into a certificate) can be checked, to give an indication, at least, that the site you’re visiting isn’t a spoof one by somebody pretending to be, for example, your bank.  In this case, the bottom turtle is the root CA[7], and its manifestation in the chain of trust is its root certificate.
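
You can watch this chain checking happen with a few lines of Python (a sketch of my own, using only the standard library): the handshake below succeeds only if a chain can be built from the site’s certificate up to a root certificate that the system already trusts.

    import socket
    import ssl

    ctx = ssl.create_default_context()  # loads the system's trusted root CAs

    with socket.create_connection(("example.com", 443)) as sock:
        # The TLS handshake includes validating the certificate chain that
        # the server presents, link by link, up to one of those roots:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print("issuer:", tls.getpeercert()["issuer"])  # the next link up

    # If no chain to a trusted root can be built, wrap_socket() raises
    # ssl.SSLCertVerificationError, and no connection is made.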

And chains of trust aren’t restricted to the world of IT, either: supply chains care a lot about chains of trust.  Can you be sure that the diamond in the ring you bought from your local jewellery store, who got it from an artisan goldsmith, who got it from a national diamond chain, did not originally come from a “blood diamond” nation?  Can you be sure that the replacement part for your car, which you got from your local independent dealership, is an original part, and can the manufacturer be sure of the quality of the materials they used?  Blockchains are offering some interesting ways to help track these sorts of supply chains, and can even be applied to supply chains in software.

Chains of trust are everywhere we look.  Some are short, and some are long.  In most cases, there will be a need to employ transitive trust – I need to believe that whoever created my browser checked the root CA, just as you need to believe that your local dealership verified that the replacement part came from the right place – because the number of links that we can verify ourselves is typically low.  This may be due to a variety of factors, including time, expertise and visibility.  But the more we are aware of the fact that there is a chain of trust in any particular situation, the more we can make conscious decisions about the amount of trust we should put in it, rather than making assumptions about the safety, security or validation of something we are buying or using.


1 – citizens of the US of A.

2 – have a look on a stock photography site like Pixabay if you don’t believe me.

3 – tortoises are land-based, turtles are aquatic, I believe.

4 – Wikipedia has a good article explaining both the concept and the story’s etymology.

5 – the genders of the protagonists are typically as I tell, which tells you a lot about the historical context, I’m afraid.

6 – this used to be “SSL”, but if you’re still using SSL, you’re in trouble: it’s got lots of holes in it!

7 – or is it?  You could argue that the HSM that (hopefully) houses the root CA, or the processes that protect it, could be considered the bottom turtle.  For the purposes of this discussion, however, the extent of the “system” is the certificate chain and its signers.