A cybersecurity tip from Hazzard County

Don’t place that bet in Boss Hogg’s betting saloon: you know he’s up to no good!

It’s a slightly guilty secret, but I used to love watching The Dukes of Hazzard in the early ’80s (the first series started in 1979, but I suspect that it didn’t make it to the UK until the next year at the earliest).  It all seemed very glamorous, and there were lots of fast car chases.  And a basset hound, which was an extra win.  To say this was early days for cybersecurity would be an understatement, and though there are references in the Wikipedia plot summaries to computers, I can’t honestly say that I remember any of those particular episodes.

One episode has stuck with me, however, for reasons that I can’t fathom.  It’s called “Hazzard Hustle” and (*SPOILER ALERT*) in it, Boss Hogg sets up a crooked betting saloon.  The swindle (if I remember it correctly) is that he controls and delays the supposedly live feeds to the TVs in the saloon, which means that he has access to results before they come in.  Needless to say, the Duke boys (probably aided and abetted by Daisy Duke) get the better of him in the end, and everything turns out OK (for them, not Boss Hogg).

“What can this have to do with cybersecurity?” you have every right to ask.  Well, the answer is reporting and monitoring channels.  Monitoring is important because without it, there is no way for us to check that what we believe should be happening actually is.  The opportunities for direct sensory monitoring of actions in computer-based systems are limited: if I request via a web browser that a banking application transfer funds from one account to another, the only visible effect that I am likely to see is an acknowledgement on the screen. Until I actually try to spend or withdraw that money, I realistically have no way to be assured that the transaction has taken place.

Let’s take an example from the human realm.  Suppose I have a trust relationship with somebody around the corner of a street, out of view: I trust that she will raise a red flag at the stroke of noon.  I also have a friend, standing on the corner, who will watch her and tell me when and if she raises the flag. I may be happy with this arrangement, but only because I have a trust relationship to the friend: that friend is acting as a trusted channel for information.

The word “friend” was chosen carefully above, because there is a trust relationship already implicit in the term. The same is not true for the word “somebody”, which I used to describe the person who was to raise the flag. Described this way, the situation leads us to presume that the trust relationship I have to the friend is sufficient to assure me that he will pass on the information correctly. But what if my friend is actually a business partner of the flag-waver? Given our human understanding of the trust relationships typically involved in business partnerships, we may immediately begin to suspect that my friend’s motivations with respect to correct reporting are not neutral.

The channels for reporting on actions – monitoring them – are vitally important within cybersecurity, and it is both easy and dangerous to fall into the trap of assuming that they are neutral, and that the only important one is between me and the acting party. In reality, the trust relationship that I have to a set of channels is crucial to maintaining the trust relationship that I have to the entity that they monitor. In trust relationships involving computer systems, there are often multiple entities or components involved in actions, and these form a “chain of trust”, where each link depends on the others, and the chain is typically only as strong as the weakest of its links.  Don’t forget that.  Oh, and don’t place that bet in Boss Hogg’s betting saloon: you know he’s up to no good!

Should I back up to iCloud?

Don’t walk into this with your eyes closed.

This is a fairly easy one to answer. If the response to either of the following questions is “yes”, then you probably shouldn’t.

1. Do you have any sensitive data that you would be embarrassed to be seen by any agent of the US Government?
2. Are you a non-US citizen?

This may seem somewhat inflammatory, so let’s look into what’s going on here.

It was widely reported last week that Apple has decided not to implement end-to-end encryption for back-ups from devices to Apple’s iCloud.  Apparently, the decision was made after Apple came under pressure from the FBI, who are concerned that their ability to access data from suspects will be reduced.  This article is not intended to make any judgments about either Apple or any law enforcement agencies, but I had a request from a friend (you know who you are!) for my thoughts on this.

The main problem is that Apple (I understand – I’m not an Apple device user) do a great job of integrating all the services that they offer, and making them easy to use across all of your Apple products.  iCloud is one of these services, and it’s very easy to use.  Apple users have got used to this simplicity, and are likely to use the service by default.  I understand this, but there’s a classic three-way tug of war for pretty much all applications or services, and it goes like this: of the following three properties of a system, application or service, you get to choose two – but only two.

  1. security
  2. ease of use
  3. cost

Apple make things easy to use, and quite often pretty secure, and you pay for this.  In this situation, though, the specific cost (in inconvenience, legal fees, political pressure, etc.) of making iCloud more secure seems to have outweighed the security benefit, and led them to decide not to enable end-to-end encryption.

So, it won’t be as secure as you might like.  Do you care?  Well, if you have anything you’d be embarrassed for US government agents to know about – and beyond embarrassed, if you have anything which isn’t necessarily entirely, shall we say, legal – then you should know that it’s not going to be difficult for US government agents such as the FBI to access it.  This is all very well, but there’s a catch for those who aren’t in such a position.

The catch is that the protections offered to protect the privacy of individuals, though fairly robust within the US, are aimed almost exclusively at US citizens.  I am in no sense a lawyer, but as a non-US citizen, I would have zero confidence that it would be particularly difficult for any US government agent to access any information that I had stored on any iCloud account that I held.  Think about that for a moment.  The US has different standards from some other countries on, for instance, drug use, alcohol use, sexual practices and a variety of other issues.  Even if those things are legal in your country, who’s to say that they won’t be used, now or in the future, to decide whether you should be granted a visa to the US, or even allowed entry at all?  There’s already talk of immigration officials checking your social media for questionable material – extending this to unencrypted data held in the US is far from out of the question.

So this is another of those issues where you need to make a considered decision.  But you do need to make a decision: don’t walk into this with your eyes closed, because once Apple has the data, there’s realistically no taking it back from the US government.
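If you decide the convenience isn’t worth it, one option is to encrypt sensitive data yourself before it goes anywhere near a cloud back-up, so that the provider only ever holds ciphertext.  Here’s a minimal sketch in Python – the file names are purely illustrative, it assumes the third-party cryptography package is installed, and it’s no substitute for a proper key-management plan – just to show the principle of client-side encryption:

```python
# Minimal sketch: encrypt a backup locally before it ever leaves your device,
# so that whoever stores it (iCloud or any other provider) only sees ciphertext.
# Assumes the third-party 'cryptography' package is installed
# (pip install cryptography); file names are purely illustrative.
from cryptography.fernet import Fernet

# Generate a key ONCE and keep it somewhere the cloud provider can't reach
# (e.g. a password manager or hardware token) - lose it and the backup is gone.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("backup.tar", "rb") as f:          # the plaintext backup archive
    ciphertext = fernet.encrypt(f.read())

with open("backup.tar.enc", "wb") as f:      # this is the file you would upload
    f.write(ciphertext)

# Restoring later, with the same key:
with open("backup.tar.enc", "rb") as f:
    plaintext = fernet.decrypt(f.read())
```

The point is that the key never leaves your control: whoever holds the uploaded file cannot read it without it.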


Coming to you in Japanese

We are now multi-lingual.

I have an exciting announcement, which is that starting this week, some of the articles on this blog will also be in Japanese.  My very talented Red Hat colleague Yuki Kubota showed an interest in translating some that she thought might be of interest to Japanese readers, and I jumped at the chance.  I’m thrilled and humbled.

We’re still ironing out the process, but hopefully (if you already read Japanese), you’ll be able to read the following articles.

We’ll try to add the tag “Japanese” to each of these, as well.

So, a huge thank you to Yuki: we’d love comments – in English or Japanese!

Are you positive?

What do pregnancy tests and the Ukrainian aircraft missile strike have in common?

Not everything in life is nicely binary, much as we[1] might like it to be. There are shades of grey[2] in many aspects of life, and though humans can often cope with uncertainty, computer systems are less good at it: they generally want a “yes” or “no” answer. This means that decisions sometimes need to be made on incomplete evidence, and, well, that means that the answers aren’t always correct. There’s a whole area of computer science related to this: fuzzy logic.

Let’s look at what the options are, assuming that there are two: “yes” (a positive) and “no” (a negative). That means that there are two ways in which the answer can be incorrect:

  1. a “yes” answer was incorrectly chosen (false positive);
  2. a “no” answer was incorrectly chosen (false negative).

An example to allow us to explore this is pregnancy. It’s generally agreed that you can’t be a little bit pregnant: if you take a test, any result it gives you needs to be either positive or negative. If you are pregnant, and a test result comes back negative, then that’s a false negative. If you are not pregnant, and a test comes back positive, that’s a false positive. The implications of a false positive or a false negative can both be pretty major – as anybody who has received one will tell you. I spent a little time online trying to find expected false positive and false negative rates for pregnancy tests, but it turns out that the rates are so dependent on a variety of factors that it was difficult to find a sensible answer[3].

A tragic recent example of a false positive took place on Wednesday, 8th January 2020, when a Ukraine International Airlines flight was shot down by an Iranian missile, killing all 176 people on board. It appears that an air defence radar system misidentified the aircraft as a cruise missile. As the radar system was looking for a positive identification of a threat, this can be counted as a false positive.

What might have been the alternative in this case? If the aircraft actually had been a cruise missile, but was identified as a civilian aircraft, this would have been a false negative, and the impact might well have been significant damage to an Iranian military installation.

Which is the most damaging? Well, in the case of the aircraft, it would seem pretty clear to most observers that the false positive would be worse, but from a military point of view, that might not be the case. The impact of a missile strike on a major military installation might be considered worse than the civilian loss of life in the other case. In this case, as in many others, a decision needs to be made as to which it is more important to reduce: the chance of a false negative or the chance of a false positive. In a perfect world, of course, there would be no false results, negative or positive. The problem with many systems that take analogue[4] inputs and turn them into digital outputs in this way is that avoiding false results is very costly, and sometimes impossible. Even worse news is that reducing the probability of one of the two types of false result tends to increase the probability of the other.
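To make that trade-off concrete, here’s a small, self-contained Python sketch (the “threat scores” and thresholds are invented purely for illustration): a detector turns a continuous score into a yes/no decision by comparing it with a threshold, and moving that threshold trades false negatives against false positives.

```python
# Toy illustration of the false positive / false negative trade-off.
# The "scores" are made up: higher means "looks more like a threat".
threats     = [0.9, 0.8, 0.6, 0.4]   # objects that really are threats
non_threats = [0.7, 0.5, 0.3, 0.1]   # objects that are not

def rates(threshold):
    """Classify everything with score >= threshold as a threat."""
    false_negatives = sum(score < threshold for score in threats)
    false_positives = sum(score >= threshold for score in non_threats)
    return false_positives / len(non_threats), false_negatives / len(threats)

for threshold in (0.2, 0.5, 0.8):
    fp_rate, fn_rate = rates(threshold)
    print(f"threshold={threshold}: false positive rate={fp_rate:.2f}, "
          f"false negative rate={fn_rate:.2f}")

# The output shows the tension: a low threshold gives no false negatives but
# many false positives; a high threshold does the opposite. You can't drive
# both to zero just by moving the threshold.
```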

A classic example of this is in the use of biometrics for user identification. Fingerprints, facial recognition, iris scanning and similar techniques have to balance the likelihood of a false positive against that of a false negative. Which is worse: the chance that the CEO will not be able to update the payroll details, or that a rogue employee will update her details to improve her salary package?[5]

One good piece of news is that AI/ML (Artificial Intelligence/Machine Learning) is improving the performance of biometric systems and, in fact, other areas of computing where “fuzzy logic” is required. In most cases, humans are still better at reducing messy sets of information to yes/no results, but that is changing, and where multiple automated decisions need to be made, then AI/ML is worth considering.

Whenever you are dealing with “messy” data[6] which needs to be reduced to a “yes/no” or “positive/negative” binary result, you need to consider the likelihood of false positives or negatives. Not only do you need to consider the likelihood of each, but also the impact of each. Once you have understood these, you can then decide which you want to try to minimise, and what techniques you should use to do so.
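One way to make that decision explicit is to weight each kind of error by its estimated impact and choose the operating point with the lowest expected cost. The sketch below does exactly that; the error rates and costs are invented for illustration, and in practice both are hard to estimate.

```python
# Sketch: choose an operating point by expected cost, not by error rate alone.
# Probabilities and costs below are invented purely for illustration.
operating_points = {
    # name: (false positive rate, false negative rate)
    "lenient":  (0.20, 0.01),
    "balanced": (0.05, 0.05),
    "strict":   (0.01, 0.20),
}

COST_FALSE_POSITIVE = 1     # e.g. a legitimate user locked out
COST_FALSE_NEGATIVE = 50    # e.g. a rogue user let in

def expected_cost(fp_rate, fn_rate):
    return fp_rate * COST_FALSE_POSITIVE + fn_rate * COST_FALSE_NEGATIVE

best = min(operating_points, key=lambda name: expected_cost(*operating_points[name]))
for name, (fp, fn) in operating_points.items():
    print(f"{name}: expected cost per decision = {expected_cost(fp, fn):.2f}")
print(f"Lowest expected cost: {best}")
```

With a false negative costed at fifty times a false positive, the “lenient” operating point wins even though it generates far more false positives – change the costs and the answer changes with them.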

We may be stuck with false results, but we need to understand what our choices are, and how we can get the best outcomes available from messy data.


1 – in talking security, but I’m sure this goes for lots of other people, too.

2 – “gray” for our non-Commonwealth readers.

3 – good advice seems to be to test several times over several days.

4 – “analog”, I suppose – see [2].

5 – this is one of the reasons that authentication systems generally use two factors from the three “something you are”, “something you know”, “something you have”.

6 – most real-world data, to be honest.

5 resolutions for travellers in 2020

Enjoy the time when you’re not travelling

I’m not a big one for New Year[1] resolutions.  To give you an example, my resolution for 2019 was “not to be mocked by my wife or daughters”.  Given that one of them (my daughters, that is) is a teenager, and the other nearly so, this went about as well as you might expect.  At the beginning of 2018, I wrote a blog post with the top 5 resolutions for security folks.  However, if I re-use the same ones this time round, somebody’s bound to notice[2], so I’m going to come up with some different ones[3].  I do quite a lot of travel, so I thought I’d provide my top 5 resolutions for this year, which I hope will be useful not only for me, but also others.

(I’ve written another article that covers in more depth some of the self-care aspects of this topic which you may find helpful: Of headphones, caffeine and self-care.)

1. Travel lighter

For business trips, I’ve tended to pack a big, heavy laptop, with a big, heavy power “brick” and cable, and then lots of other charging-type cables of different sizes and lengths, and a number of different plugs to fit everything into.  Honestly, there’s just no need for much of it, so this year, I suggest that we all first take stock, and go through all of those cables and see which ones we actually need.  Maybe take one spare for each USB type, but no more.  And we only need the one plug – that nice multi-socket one with a couple of USB sockets will do fine.  And if we lose it or forget it, the hotel will probably have one we can borrow, or we can get one as we go through the next airport.

And the laptop?  Well, I’ve just got a little Chromebook.  There are a variety of these: I managed to pick up a Pixelbook second-hand, with warranty, for about 40% off, and I love it.  I’m pretty sure that I can use it for all the day-to-day tasks I need to perform while travelling, and, as a bonus, the power connection is smaller and lighter than the one for my laptop.  I’ve picked up a port extender (2 x USB C, 1 x USB A, 1 x Ethernet, 1 x HDMI), and I think I’m sorted.  I’m going to try leaving the big laptop at home, and see what happens.

2. Take time

I’m not just talking about leaving early to get to the airport – though that is my standard practice – but also about just, well, taking more time about things.  It’s easy to rush here and there, and work yourself into a state, or feel that you need to fill every second of every day with something work-related, when you wouldn’t do that if you were at home.  It may be stepping aside to let other people off the plane, and strolling to the ground transportation exit, rather than hurrying there, or maybe stopping for a few minutes to look at some street art or enjoy the local architecture – whatever it is, give yourself permission not to hurry and not to rush, but just breathe and let the rest of the world slip by, even if it’s just for a few seconds.

3. Look after yourself

Headphones are a key tool to help me look after myself – and one of the things I won’t be discarding as part of my “travel lighter” resolution.  Sometimes I need to take myself away from the hubbub and chill.  But they are just a tool: I still need to remember to stop, put them on, and listen to some music.  It’s really easy to get caught up in the day, and the self-importance of being the Business Traveller, and forget that I’m not superhuman (and that my colleagues don’t expect me to be).  Taking time is the starting point – and sometimes all you have time for – but at some point you need to stop completely and do something for yourself.

4. Remember you’re tired

Most of us get grumpy when we’re tired[4].  And travelling is tiring, so when you’re at the end of a long trip, or just at the beginning of one, after a long day in cars and airports and planes, remember that you’re tired, and try to act accordingly.  Smile.  Don’t be rude.  Realise that the hotel receptionist is doing their best to sort your room out, or that the person in front of you in the queue for a taxi is just as frustrated with their four children as you are (well, maybe not quite as much).  When you get home, your partner or spouse has probably been picking up the slack of all the things that you’d normally do at home, so don’t snap at them: be nice, show you care.  Whatever you’re doing, expect things to take longer: you’re not at the top of your game.  Oh, and restrict alcohol intake, and go to bed early instead.  Booze may feel like it’s going to help, but it’s really, really not.

5. Enjoy not travelling

My final resolution was going to be “take exercise”, and this still matters, but I decided that even more important is the advice to enjoy the time when you’re not travelling.  Without “down-time”, travelling becomes – for most of us at least – a heavier and heavier burden.  It’s so easy, on returning from a work trip, to head straight back into the world of emails and documents and meetings, maybe catching up over the weekend on those items that you didn’t get done because you were away.  Don’t do this – or do it very sparingly, and if you can, claw back the time over the next few days, maybe taking a little longer over a cup of tea or coffee, or stopping yourself from checking work emails one evening.  Spend time with the family[5], hang out with some friends, run a 5k, go to see a film/movie, play some video games, complete that model railway set-up you’ve been working on[7].  Whatever it is that you’re doing, let your mind and your body know that you’re not “on-the-go”, and that it’s time to recover some of that energy and be ready when the next trip starts.  And you know it will, so be refreshed, and be ready.


1 – I’m using the Western (Gregorian) calendar, so this is timely.  If you’re using a different calendar, feel free to adjust.

2 – the list is literally right there if you follow the link.

3 – I considered reversing the order, but the middle one would just stay the same.

4 – I wondered if this is just me, but then remembered the stressed faces of those on aircraft, in airports and checking into hotels, and thought, “no, it’s not”.  And I am informed (frequently) by my family that this is definitely the case for me.

5 – if you have one[6].

6 – and if that’s actually a relaxing activity…

7 – don’t mock: it takes all kinds.

2019 was the year of Enarx

We have plans for demos and much more in 2020!

For me, 2019 was almost entirely about the Enarx project.

There was other work I had to do, of course – customer meetings, work with IBM (which acquired Red Hat, my employer, in July), Kubernetes security, collaboration with partner companies and various other important things – but Enarx was the highlight of 2019.

At the start of the year we were convinced that there was something we could achieve, and our internal leadership team challenged us to prove that it was possible.

In answer to that challenge, we gave a demo on AMD’s SEV chipset at Red Hat Summit in Boston in May, and announced the project on this blog.

We followed up with Intel’s SGX chipset at Open Source Summit in Lyon in October.  I think these were some of the most important things for Enarx’s development in 2019.

Team

Enarx is, of course, not mine alone.  I’m very proud to be one of the project’s co-founders alongside Nathaniel McCallum, but we got this far thanks to many team members, and, as an open source project, thanks to everyone who contributes to it and uses it.  You’ll find many of the members on the contributors page, though not everyone is named there yet.  The advice, support and sponsorship that the project has received from a number of people both inside and outside Red Hat have also been invaluable.  I haven’t been given permission to name them, so I’ll play it safe and not mention them here.  We’re extremely grateful for your support and your time.

Use cases and partners

One of the important things we achieved in 2019 was to work out how people might want to use Enarx “in the wild”, and to write up some fairly detailed analysis of it.

Not all of this has been published yet (that one’s on me), but it has been essential in finding partners who actually want to use Enarx.  I can’t announce them yet, but we have some very interesting use cases from several global companies you will have heard of, and from start-ups you are likely to hear more about in the future.  This kind of interest is vital to getting the project into real use, and shows that Enarx isn’t just a project that sprang from the enthusiasm of a few engineers.

Looking outside

The major event of 2019 was the announcement of the Confidential Computing Consortium at the Linux Foundation’s Open Source Summit.  We at Red Hat felt that Enarx was a great fit for this new group, and we were delighted to become a premier member at its official launch in October.  As of 31st December 2019, when I’m writing this, there are 21 members, and it has become clear that the consortium has captured the concern and interest of a wide range of industries.  That is a real endorsement of Enarx’s principles and aims.

Joining the consortium isn’t the only thing we accomplished in 2019.  We spoke at conferences, published articles on this blog, on Next.redhat.com and on Opensource.com, gave press interviews, recorded webcasts and more.  Most important of all, perhaps, we made hexagonal stickers!  (If you’d like one, please get in touch.)

Last, but not least, we’ve taken the project external.  From being an in-house project, we’re now working to encourage participation from outside Red Hat.  See the blog post of 17th December for details.

Architecture and code

What else is there?  Oh yes – code.  And an increasingly mature set of architectures for the various components.

We naturally intend to make all of this externally visible, but we haven’t managed it yet: there is simply so much to do.  We’re committed to getting the code out there for people to use, and we have big plans for demos and more in 2020.

Finally

There is, of course, one other important thing: I’m writing a book on trust for Wiley, and it is closely related to Enarx.  Fundamentally, although the technology is very “cool”, the Enarx project meets an existing need, so Nathaniel and I see this as a real opportunity to change how workloads are managed in the cloud, in IoT, at the Edge and everywhere else that sensitive data and algorithms are deployed.

This blog is about security, but I think trust is a very important part of that, and Enarx fits squarely into it.  So expect more posts about trust and about Enarx – and please keep an eye on Enarx.io for the latest information.

Original article: https://aliceevebob.com/2019/12/31/2019-a-year-of-enarx/

31st December 2019, Mike Bursell

Tags: security, Enarx, open source, cloud

2019: a year of Enarx

We have big plans for demos and more in 2020

(This article is also available in Japanese: 2019年はEnarxの年でした.)

This year has, for me, been pretty much all about the Enarx project.  I’ve had other work that I’ve been doing, including meeting with customers, participating in work with IBM (who acquired the company I work for, Red Hat, in July), looking at Kubernetes security, interacting with partners and a variety of other important pieces, but it’s been Enarx that has defined 2019 for me from a work point of view.

We started off the year with a belief that we could do something, and a challenge from our internal leadership to prove that it was possible.  We did that with a demo on AMD’s SEV chipset at Red Hat Summit in Boston, MA in May, and an announcement of the project on this blog.  We followed up with a demo on Intel’s SGX chipset at Open Source Summit Europe in Lyon in October.  I thought I would mention some of the most important components for the development (in the broadest sense) of Enarx this year.

Team

Enarx is not mine: far from it.  I’m proud to be counted as one of the co-founders of the project with Nathaniel McCallum, but we wouldn’t be where we are without a broader team, and as an open source project, it belongs to everyone who contributes and to everyone who uses it.  You’ll find many of the members on the contributors page, but not everybody is up there yet, and there have been some very important people whose contribution has been advice, support and sponsorship of the project both within Red Hat and outside it.  I don’t have permission to mention everybody’s name, so I’m going to play it safe and mention none of them.  You know who you are, and we really appreciate your time.

Use cases – and partners

One of the most important things that we’ve done this year is to work out how people might want to use Enarx “in the wild”, as it were, and to perform some fairly detailed analysis and write-ups.  Not enough of these are externally available yet, which is down to me, but the fact that we had done the work was vital in finding partners who are actually interested in using Enarx for real.  I can’t talk about any of these in public yet, but we have some really interesting use cases from a number of multi-national organisations of whom you will definitely have heard, as well as some smaller start-ups about whom you may well be hearing more in the future.  Having this kind of interest was vital to get buy-in to the project and showed that Enarx wasn’t just a flight of fancy by a bunch of enthusiastic engineers.

Looking outside

The most significant event in the project’s year was the announcement of the Confidential Computing Consortium at the Linux Foundation’s Open Source Summit this year.  We at Red Hat realised that Enarx was a great match for this new group, and we were very pleased to become a premier member at the official launch in October.  At the time of writing, there are 21 members, and it’s becoming clear that the consortium has identified an area of concern and interest for the wider industry: this is another great endorsement of the aims and principles of Enarx.

Joining the Consortium hasn’t been the only activity in which we’ve been involved this year.  We’ve spoken at conferences, had articles published (on Alice, Eve and Bob, on now + Next and on Opensource.com), spoken to press, recorded webcasts and more.  Most important (arguably), we have hex stickers (if you’re interested, get in touch!).

Last, but not least, we’ve gone external.  From being an internal project (though we always had our code as open source), we’ve taken a number of measures to try to encourage and simplify involvement by non-Red Hat contributors – see 7 tips for kicking off an open source project for a little more information.

Architecture and code

What else?  Oh, there’s code, and an increasingly mature set of architectures for the various components.  We absolutely plan to make all of this externally visible, and the only reason we haven’t yet is that we’re just running to stand still at the moment: there’s just so much to do.  Our focus is on getting code out there for people to use and contribute to themselves and, without giving anything away, we have some pretty big plans for demos and more in 2020.

Finally

There’s one other thing that’s been important, of course, and that’s the fact that I’m writing a book for Wiley on trust, but I actually see that as very much related to Enarx.  Fundamentally, although the technology is cool, and we think that the Enarx project meets an existing need, both Nathaniel and I believe that there’s a real opportunity for it to change how people manage trust for workloads in the cloud, in IoT, at the Edge and wherever else sensitive data and algorithms need to be executed.

This blog is supposed to be about security, and I’m strongly of the opinion that trust is a very important part of that.  Enarx fits into that, so don’t be surprised to see more posts around trust and about Enarx over the coming year.  Please keep an eye out here and at https://enarx.io for the latest information.