2019: a year of Enarx

We have all sorts of plans for 2020, including demos and more!

 

For me, 2019 was almost entirely about the Enarx project.

I had other work to do as well: customer meetings, work with IBM (which acquired Red Hat, my employer, in July), Kubernetes security, collaboration with partner companies and various other important things. But Enarx was the highlight of my 2019.

We started the year believing there was something we could achieve, and our internal leadership challenged us to prove that it was possible.

We met that challenge with a demo on AMD’s SEV chipset at Red Hat Summit in Boston in May, and announced the project on this blog.

We followed up with a demo on Intel’s SGX chipset at Open Source Summit in Lyon in October. I’d like to mention some of the things that I think were most important to Enarx’s development (in the broadest sense) in 2019.

 

Team

Enarx is not mine alone, of course: far from it. I’m very proud to be one of the co-founders of the project, together with Nathaniel McCallum, but we have got this far thanks to many team members and, as an open source project, thanks to everyone who contributes to it and uses it. You’ll find many of the members on the contributors page, but not everyone is listed there yet. The advice, support and sponsorship that a number of people, both inside and outside Red Hat, have given the project have also been invaluable. I don’t have permission to name them, so I’ll play it safe and not name them here. Thank you for your support and your time: we really appreciate it.

 

Use cases and partners

One of the most important things we achieved in 2019 was working out how people might want to use Enarx “in the wild”, and carrying out and writing up some fairly detailed analysis of it.

Not all of that has been published yet (that one’s on me), but it has been essential in finding partners who actually want to use Enarx for real. I can’t name them yet, but we have some really interesting use cases from several global companies you will certainly have heard of, as well as from start-ups you may well hear more about in future. Interest of this kind is essential to getting the project adopted in practice, and it shows that Enarx isn’t just a project that sprang from the enthusiasm of a few engineers.

 

Looking outside

The most significant event of the project’s 2019 was the announcement of the Confidential Computing Consortium at the Linux Foundation’s Open Source Summit. We at Red Hat felt that Enarx was a great fit for this new group, and we were delighted to become a premier member at its official launch in October. As I write this, on 31st December 2019, there are 21 members, and it has become clear that the consortium has identified an area of concern and interest across a wide range of industries. That is a real endorsement of Enarx’s aims and principles.

Joining the consortium wasn’t the only thing we accomplished in 2019. We spoke at conferences, published articles on this blog, on Next.redhat.com and on Opensource.com, talked to the press, recorded webcasts and more. Most important of all, arguably, we made hexagonal stickers! (Get in touch if you’d like one.)

Last, but not least, we have taken the project external. We have been working to move from an internal project to one that encourages participation from outside Red Hat. See the blog post of 17th December for more details.

 

Architecture and code

What else? Oh yes, there’s the code, and an increasingly mature set of architectures for the various components.

We absolutely intend to make all of this externally visible, but we aren’t there yet: there is simply so much to do. We are focused on getting the code out there for people to use, and we have some big plans for demos and more in 2020.

 

Finally

There is, of course, one other important thing: I am writing a book on trust for Wiley, which is closely related to Enarx. Fundamentally, the technology is very cool, and the Enarx project meets an existing need, so Nathaniel and I believe there is a real opportunity for it to change how people manage trust for workloads in the cloud, in IoT, at the edge and anywhere else that sensitive data and algorithms need to run.

This blog is about security, and I believe trust is a very important part of that. Enarx fits right in, so expect more posts about trust and about Enarx over the coming year. Keep an eye on Enarx.io for the latest information.

 

Original article: https://aliceevebob.com/2019/12/31/2019-a-year-of-enarx/

31st December 2019, Mike Bursell

Tags: security, Enarx, open source, cloud

 

2019: a year of Enarx

We have big plans for demos and more in 2020


This year has, for me, been pretty much all about the Enarx project.  I’ve had other work that I’ve been doing, including meeting with customers, participating in work with IBM (who acquired the company I work for, Red Hat, in July), looking at Kubernetes security, interacting with partners and a variety of other important pieces, but it’s been Enarx that has defined 2019 for me from a work point of view.

We started off the year with a belief that we could do something, and a challenge from our internal leadership to prove that it was possible.  We did that with a demo on AMD’s SEV chipset at Red Hat Summit in Boston, MA in May, and an announcement of the project on this blog.  We followed up with a demo on Intel’s SGX chipset at Open Source Summit Europe in Lyon in October.  I thought I would mention some of the most important components for the development (in the broadest sense) of Enarx this year.

Team

Enarx is not mine: far from it.  I’m proud to be counted one of the co-founders of the project with Nathaniel McCallum, but we wouldn’t be where we are without a broader team, and as an open source project, it belongs to everyone who contributes and to everyone who uses it.  You’ll find many of the members on the contributors page, but not everybody is up there yet, and there have been some very important people whose contribution has been advice, support and sponsorship of the project both within Red Hat and outside it.  I don’t have permission to mention everybody’s name, so I’m going to play it safe and mention none of them.  You know who you are, and we really appreciate your time.

Use cases – and partners

One of the most important things that we’ve done this year is to work out how people might want to use Enarx “in the wild”, as it were, and to perform some fairly detailed analysis and write-ups.  Not enough of these are externally available yet, which is down to me, but the fact that we had done the work was vital in finding partners who are actually interested in using Enarx for real.  I can’t talk about any of these in public yet, but we have some really interesting use cases from a number of multi-national organisations of whom you will definitely have heard, as well as some smaller start-ups about whom you may well be hearing more in the future.  Having this kind of interest was vital to get buy-in to the project and showed that Enarx wasn’t just a flight of fancy by a bunch of enthusiastic engineers.

Looking outside

The most significant event in the project’s year was the announcement of the Confidential Computing Consortium at the Linux Foundation’s Open Source Summit this year.  We at Red Hat realised that Enarx was a great match for this new group, and were very pleased to be a premier member at the official launch in October.  At time of writing, there are 21 members, and it’s becoming clear that the consortium has identified an area of concern and interest for the wider industry: this is another great endorsement of the aims and principles of Enarx.

Joining the Consortium hasn’t been the only activity in which we’ve been involved this year.  We’ve spoken at conferences, had articles published (on Alice, Eve and Bob, on now + Next and on Opensource.com), spoken to press, recorded webcasts and more.  Most important (arguably), we have hex stickers (if you’re interested, get in touch!).

Last, but not least, we’ve gone external.  From being an internal project (though we always had our code as open source), we’ve taken a number of measures to try to encourage and simplify involvement by non-Red Hat contributors – see 7 tips for kicking off an open source project for a little more information.

Architecture and code

What else?  Oh, there’s code, and an increasingly mature set of architectures for the various components.  We absolutely plan to make all of this externally visible, and the reason we haven’t yet is that we’re just running to stand still at the moment: there’s just so much to do.  Our focus is on getting code out there for people to use and contribute to themselves and, without giving anything away, we have some pretty big plans for demos and more in 2020.

Finally

There’s one other thing that’s been important, of course, and that’s the fact that I’m writing a book for Wiley on trust, but I actually see that as very much related to Enarx.  Fundamentally, although the technology is cool, and we think that the Enarx project meets an existing need, both Nathaniel and I believe that there’s a real opportunity for it to change how people manage trust for workloads in the cloud, in IoT, at the Edge and wherever else sensitive data and algorithms need to be executed.

This blog is supposed to be about security, and I’m strongly of the opinion that trust is a very important part of that.  Enarx fits into that, so don’t be surprised to see more posts around trust and about Enarx over the coming year.  Please keep an eye out here and at https://enarx.io for the latest information.

 

 

Of projects, products and (security) community

Not all open source is created (and maintained) equal.


Open source is a  good thing.  Open source is a particularly good thing for security.  I’ve written about this before (notably in Disbelieving the many eyes hypothesis and The commonwealth of Open Source), and I’m going to keep writing about it.  In this article, however, I want to talk a little more about a feature of open source which is arguably both a possible disadvantage and a benefit: the difference between a project and a product.  I’ll come down firmly on one side (spoiler alert: for organisations, it’s “product”), but I’d like to start with a little disclaimer.  I am employed by Red Hat, and we are a company which makes money from supporting open source.  I believe this is a good thing, and I approve of the model that we use, but I wanted to flag any potential bias early in the article.

The main reason that open source is good for security is that you can see what’s going on when there’s a problem, and you have a chance to fix it.  Or, more realistically, unless you’re a security professional with particular expertise in the open source project in which the problem arises, somebody else has a chance to fix it. We hope that there are sufficient security folks with the required expertise to fix security problems and vulnerabilities in software projects about which we care.

It’s a little more complex than that, however.  As an organisation, there are two main ways to consume open source:

  • as a project: you take the code, choose which version to use, compile it yourself, test it and then manage it.
  • as a product: a vendor takes the project, chooses which version to package, compiles it, tests it, and then sells support for the package, typically including docs, patching and updates.

Now, there’s no denying that consuming a project “raw” gives you more options.  You can track the latest version, compiling and testing as you go, and you can take security patches more quickly than the product version may supply them, selecting those which seem most appropriate for your business and use cases.  On the whole, this seems like a good thing.  There are, however, downsides which are specific to security.  These include:

  1. some security fixes come with an embargo, to which only a small number of organisations (typically the vendors) have access.  Although you may get access to fixes at the same time as the wider ecosystem, you will need to check and test these (unless you blindly apply them – don’t do that), which will already have been performed by the vendors.
  2. the huge temptation to make changes to the code that don’t necessarily – or immediately – make it into the upstream project means that you are likely to be running a fork of the code.  Even if you do manage to get these upstream in time, during the period that you’re running the changes but they’re not upstream, you run a major risk that any security patches will not be immediately applicable to your version (this is, of course, true for non-security patches, but security patches are typically more urgent).  One option, of course, if you believe that your version is likely to be consumed by others, is to make an official fork of the project, and try to encourage a community to grow around that, but in the end, you will still have to decide whether to support the new version internally or externally.
  3. unless you ensure that all instances of the software are running the same version in your deployment, any back-porting of security fixes to older versions will require you to invest in security expertise equal or close to equal to that of the people who created the fix in the first place.  In this case, you are giving up the “commonwealth” benefit of open source, as you need to pay experts who duplicate the skills of the community.

What you are basically doing, by choosing to deploy a project rather than a product, is taking the decision to do internal productisation of the project.  You lose not only the commonwealth benefit of security fixes, but also the significant economies of scale that are intrinsic to the vendor-supported product model.  There may also be economies of scope that you miss: many vendors will have multiple products that they support, and will be able to apply security expertise across those products in ways which may not be possible for an organisation whose core focus is not on product support.

These economies are reflected in another possible benefit to the commonwealth of using a vendor: the very fact that multiple customers are consuming their products means that they have an incentive and a revenue stream to spend on security fixes and general features.  There are other types of fixes and improvements on which they may apply resources, but the relative scarcity of skilled security experts means that the principle of comparative advantage suggests that they should be in the best position to apply them for the benefit of the wider community[1].

What if a vendor you use to provide a productised version of an open source project goes bust, or decides to drop support for that product?  Well, this is a problem in the world of proprietary software as well, of course.  But in the case of proprietary software, there are three likely outcomes:

  • you now have no access to the software source, and therefore no way to make improvements;
  • you are provided access to the software source, but it is not available to the wider world, and therefore you are on your own;
  • everyone is provided with the software source, but no community yet exists to improve it, and it either dies or takes significant time for a community to build around it.

In the case of open source, however, if the vendor you have chosen goes out of business, there is always the option to use another vendor, encourage a new vendor to take it on, productise it yourself (and supply it to other organisations) or, if the worst comes to the worst, take the internal productisation route while you search for a scalable long-term solution.

In the modern open source world, we (the community) have got quite good at managing these options, as the growth of open source consortia[2] shows.  In a consortium, groups of organisations and individuals cluster around a software project or set of related projects to encourage community growth, alignment around feature and functionality additions, general security work and productisation for use cases which may as yet be ill-defined, all the while trying to exploit the economies of scale and scope outlined above.  An example of this would be the Linux Foundation’s Confidential Computing Consortium, to which the Enarx project aims to be contributed.

Choosing to consume open source software as a product instead of as a project involves some trade-offs, but from a security point of view at least, the economics for organisations are fairly clear: unless you are in a position to employ ample security experts yourself, products are most likely to suit your needs.


1 – note: I’m not an economist, but I believe that this holds in this case.  Happy to have comments explaining why I’m wrong (if I am…).

2 – “consortiums” if you really must.

What is confidential computing?

Industry interest has been high, and overwhelmingly positive.

On Wednesday, 21st August, 2019 (just under a week ago, at time of writing), Jim Zemlin of the Linux Foundation announced the intent to form the Confidential Computing Consortium, with members including Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom and Tencent.  I’m particularly proud as Red Hat (my employer) is one of those[1], and I spent the preceding few weeks and days working very hard to ensure that we would be listed as one of the planned founding members.

“Confidential Computing” sounds like a lofty goal, and it is.  We’ve known for ages that you should encrypt sensitive data at rest (in storage) and in transit (on the network), but confidential computing, as defined by the consortium, is about doing the same for sensitive data – and algorithms – in use.  The consortium plans to encourage industry to use hardware technologies generally called Trusted Execution Environments to allow applications and processes to be encrypted as they are running.
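To make the “in use” gap concrete, here is a toy Python sketch (my own illustration, not anything produced by the consortium). It shows that even data which is properly encrypted at rest has to be decrypted into ordinary memory before a conventional process can work on it, which is exactly the state that Trusted Execution Environments aim to protect. It uses the third-party cryptography package, and the sample record is invented.

```python
# Toy illustration of the gap confidential computing addresses: data can be
# encrypted at rest, but a conventional process must hold the plaintext in
# ordinary memory while it works on it.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

# At rest: what sits on disk (or in a database) is ciphertext.
ciphertext = f.encrypt(b"patient-id:12345,diagnosis:...")

# In use: to compute on the data, the process decrypts it into ordinary RAM,
# where the OS, the hypervisor or a compromised host could, in principle, read it.
plaintext = f.decrypt(ciphertext)
record_length = len(plaintext)
print(record_length)

# A Trusted Execution Environment aims to keep that in-use state protected
# from the rest of the host.
```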

This may sound somewhat familiar to those who follow my blog, and it should: Enarx, an open source project launched by Red Hat, was announced as one of the projects that should be part of the initial launch.  I’ve written about Enarx in several places:

Additionally, you’ll find lots of information on the introduction page of the Enarx wiki.

The press release from the Linux Foundation lists the following goals for the Confidential Computing Consortium (my emboldening):

The Confidential Computing Consortium will bring together hardware vendors, cloud providers, developers, open source experts and academics to accelerate the confidential computing market; influence technical and regulatory standards; and build open source tools that provide the right environment for TEE development. The organization will also anchor industry outreach and education initiatives.

Enarx, of course, fits perfectly into this description, as per the text in bold.  Beyond that, however, is the alignment that there is with the other aims of the Enarx project, and the opportunities with which a wider consortium presents us.  The addition of hardware vendors gives us – and the other participants – opportunities to discuss implementations (hardware and software) in an open environment, cloud providers and other users will give us great use cases, and academic involvement broadens the likelihood of quick access to new ideas and research.

We also expect industry and regulatory standards to be forthcoming, and a need for education as more sectors and industries engage with confidential computing: the consortium provides a framework to engage in related activities.

It’s early days for the Confidential Computing Consortium, but I’m really hopeful and optimistic.  Already, the openness displayed between the planned members on both technical and non-technical collaboration has gone far beyond what I would have expected.  The industry interest – as evidenced by press and community activities – has been high, and overwhelmingly positive. Fans of Enarx – and confidential computing generally – should be excited by the prospect of greater visibility and collaboration.  After all, isn’t that what open source is about in the first place?


1 – this seems like a good place to point out that the views in this article and blog are my own, and may not represent those of my employer, of the Confidential Computing Consortium, the Linux Foundation or any other body.

Equality in volunteering and open source

Volunteering favours the socially privileged

Volunteering is “in”. Lots of companies – particularly tech companies, it seems – provide incentives to employees to volunteer for charities, NGOs and other “not-for-profits”. These incentives range from donations matching to paid volunteer days to matching hours worked for a charity with a cash donation.

Then there are other types of voluntary work: helping out at a local sports club, mowing a neighbour’s lawn or fetching their groceries, and, of course, open source, which we’ll be looking at in some detail. There are almost countless thousands of projects which could benefit from your time.

Let’s step back first and look at the benefits of volunteering. The most obvious, of course, is the direct benefit to the organisation, group or individual of your time and/or expertise. Then there are the benefits to the wider community. Having people volunteering their time to help out with various groups – particularly those with whom they would have little contact in other circumstances – helps social cohesion and encourages better understanding of differing points of view as you meet people, and not just opinions.

Then there’s the benefit to you. Helping others feels great, looks good on your CV[1], can give you more skills, and make you friends – quite apart from the benefit I mentioned above about helping you to understand differing points of view. On the issue of open source, it’s something that lots of companies – certainly the sorts of companies with which I’m generally involved – are interested in, or even expect to see on your CV. Your contributions to open source projects are visible – unlike whatever you’ve been doing in most other jobs – they can be looked over, they show a commitment and are also a way of gauging your enthusiasm, expertise and knowledge in particular areas. All this seems to make lots of sense, and until fairly recently, I was concerned when I was confronted with a CV which didn’t have any open source contributions that I could check.

The inequality of volunteering

And then I did some reading by a feminist open sourcer (I’m afraid that I can’t remember who it was[3]), and did a little more digging, and realised that it’s far from that simple. Volunteering is an activity which favours the socially privileged – whether that’s in terms of income, gender, language or any number of other indicators. That’s particularly true for software and open source volunteering.

Let me explain. We’ll start with the gender issue. On average, you’re much less likely to have spare time to be involved in an open source project if you’re a woman, because women, on average, have more responsibilities in the home, and less free time. They are also globally less likely to have access to computing resources with which to contribute, due to wage discrepancies. Even beyond that, they are less likely to be welcomed into communities and have their contributions valued, whilst being more likely to attract abuse.

If you are in a low income bracket, you are less likely to have time to volunteer, and again, to have access to the resources needed to contribute.

If your first language is not English, you are less likely to be able to find an accepting project, and more likely to receive abuse for not explaining what you are doing.

If your name reflects a particular ethnicity, you may not be made to feel welcome in some contexts online.

If you are not neurotypical (e.g. you have Aspergers or are on the autism spectrum, or if you are dyslexic), you may face problems in engaging in the social activities – online and offline – which are important to full participation in many projects.

The list goes on. There are, of course, many welcoming projects and communities that attempt to address all of these issues, and we must encourage that. Some people who are disadvantaged in terms of some of the privilege-types that I’ve noted above may actually find that open source suits them very well, as their lack of privilege can be hidden online in ways in which it could not be in other settings, and that some communities make a special effort to be welcoming and accepting.

However, if we just assume – that’s unconscious bias, folks – that volunteering, and specifically open source volunteering, is a sine qua non for “serious” candidates for roles, or a foundational required expertise for someone we are looking to employ, then we set a dangerous precedent, and run a very real danger of reinforcing privilege, rather than reducing it.

What can we do?

First, we can make our open source projects more welcoming, and be aware of the problems that those from less privileged groups may face. Second, we must be aware, and make our colleagues aware, that when we are interviewing and hiring, lack of evidence of volunteering is not evidence that the person is not talented, enthusiastic or skilled. Third, and always, we should look for more ways to help those who are less privileged than us to overcome the barriers to accessing not only jobs but also volunteering opportunities which will benefit not only them, but our communities as a whole.


1 – Curriculum vitae[2].

2 – Oh, you wanted the Americanism? It’s “resume” or something similar, but with more accents on it.

3 – a friend reminded me that it might have been this: https://www.ashedryden.com/blog/the-ethics-of-unpaid-labor-and-the-oss-community

On conversation and the benefits of boasting

On Monday and Tuesday this week I’m attending DevSecCon in Boston – a city which is much more pleasant when it’s not raining or snowing, which it often seems to be doing while I’m here.  There are a bunch of interesting talks[1] and workshops, and I was asked, at the last minute, to facilitate an “Open Space Discussion” at the end of the first day (as two people hadn’t arrived as expected).  Facilitating discussions is about not talking all the time, but encouraging other people to talk[2]: my approach to this is to tell a story, and then encourage them to share stories.

People enjoy listening to stories, and people enjoy telling stories, and there is a type of story that is particularly useful and important in the world of work: “war-stories”.  Within the IT industry, at least, this refers to stories about experiences – usually bad experiences – from our day-to-day working lives.  They are often used to illustrate a point or lend experiential weight to an opinion being put forward. But they are also great learning experiences.

What I learned yesterday – or re-learned – is the immense value of conversation with our peers in a neutral setting, with no formal bounds or difference in “rank”.  We had at least one participant who was only two years out of college, participants with 25-30 years of experience, a CISO of a major healthcare provider, a CEO, DevOps engineers, customer-facing people, security people, non-security people, people with Humanities[4] degrees, people with Computer Science degrees.  We were about twelve people, and everybody contributed, to greater or lesser degrees.  I hope that we managed to maintain a conversation where age and numbers of years in the industry were unimportant, but the experiences shared were.

And I learned about other people’s opinions, their viewpoints, their experiences, their tips for what works – and doesn’t work – and made, I hope, some new friends.  Certainly some new peers.  What we talked about isn’t vitally important to this article[5]: the important thing was the conversation, and the stories they told that brought their shared wisdom to the table.  I felt, by the end of the session, that we had added something to the commonwealth of knowledge within the industry.

I was looking for a way to close the session as we were moving to the end, and hit upon something which seemed to work: I encouraged everybody to spend 30 seconds or so to tell the group about an incident in their career that they are proud of.  We got some great stories, and not only did we learn from them, but I think it’s really important that we get the chance to express our pride in the things that we’ve done.  We rarely get the chance to boast, or to let people outside our general circle know why we think we should be valued.  There’s nothing wrong with being proud of the things we’ve done, but we’re often – usually – discouraged from doing so.  It was great to have people share their various experiences of personal expertise, and to think about how they would use them to further their career.  I didn’t force everybody to speak – and was thanked by one of the silent participants later – and it’s important to realise that not everybody will be happy doing so.  But I think that the rapport that we’d built as a group meant that more people were happy to contribute something than would have considered it at the beginning of the session.  I left with a respect for all of the participants, and a realisation of the importance of shared experience.

 


1 – I gave a talk based on my blog article Why I love technical debt.  I found it interesting…

2 – based on this definition, it may surprise regular readers – and people who know me IRL[3] – that I’d even consider participating, let alone facilitating.

3 – does anybody use this term anymore?

4 – Liberal Arts/Social Sciences.

5 – but included:

  • the impact of different open source licences
  • how legal teams engage with open source questions
  • how to encourage more conversation between technical and legal folks
  • the importance of systems engineering
  • how to talk to customers and vendors
  • how to build teams through social participation[6]
  • the NIST 800 series and other models to consider security
  • risk: how to talk about it, measure it, discuss it with other functions within the organisation.

6 – the word “beer” came up.  From somebody else, on this occasion.

 

Talking in school

Learning by teaching

A few months ago, I was asked by a teacher at a local school to come in and talk to year 10 and year 11 students (aged 14-16 or so) about my job, what I do, my background, how I got into my job and to give any further thoughts and advice.  Today I got the chance to go in and talk to them.

I very much enjoyed myself[1], and hopefully it was interesting for the pupils as well.  I went over my past – from being “a bit of a geek at school” through to some of the stuff I need to know to do my job now – and also talked about different types of work within IT security.  I was at pains to point out that you don’t need to be a great mathematician or even a great coder to get a career in IT security, and talked a lot about the importance of systems – which absolutely includes people.

What went down best – as is the case with pretty much any crowd – was stories.  “War stories”, as they’re sometimes called, about what situations you’ve come across, how you dealt with them, how other people reacted, and the lessons you’ve learned from them, give an immediacy and relevance that just can’t be beaten.  I was careful not to make them very technical – and one about a member of staff who had lost weight while on holiday and got stuck in a two-door man-trap (which included a weight sensor) went down particularly well[3].

The other thing that was useful – and which isn’t always going to work in a C-level meeting, for instance – was some exercises. Codes and ciphers are always interesting, so I started with a ROT13, then a Caesar cipher, then a simple key, then a basic alphabet substitution.  We talked about letter frequency, repeated words, context and letter groupings, and the older group solved all of the puzzles, which was excellent.
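For anyone who would like to run similar exercises, here is a minimal sketch of the first two ciphers mentioned above: a general Caesar shift, with ROT13 as a special case. It is written in Python, and the function names and the sample message are my own illustrative choices rather than anything used in the actual session.

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` places, preserving case; leave other characters alone."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

def rot13(text: str) -> str:
    """ROT13 is just a Caesar cipher with a shift of 13, so it is its own inverse."""
    return caesar(text, 13)

message = "Meet me by the server room at noon"
print(rot13(message))           # encode
print(rot13(rot13(message)))    # applying ROT13 twice recovers the plaintext
print(caesar(message, 3))       # the classic Caesar shift of three
```

Letter frequency, repeated words and letter groupings – the clues mentioned above – are what let the pupils break these by hand.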

There was time for some questions, too, which included:

  • “how much do you get paid?”  Somewhat cheeky, this one, but I answered by giving them a salary range for a job which someone had contacted me about, but which I’d not followed up on – and gave no indications of the reasons for rejecting it
  • “do you need an IT or computing degree?”  No, though it can be helpful.
  • “do you need a degree at all?”  No, and though it can be difficult to get on without one, there are some very good apprentice schemes out there.

I went into the school to try to help others learn, but it was a very useful experience for me, too.  Although all of the pupils there are taking a computing class by choice, not all of them were obviously engaged.  But that didn’t mean that they weren’t paying attention: one of the pupils with the least “interested” body language was the fastest at answering some of the questions.  Some of the pupils there had similar levels of understanding around IT security to some C-levels who aren’t in IT.  Thinking about pace, about involving members of the audience who weren’t necessarily paying attention – all of these were really useful things for me to reflect on.

So – if you get the chance[4] – consider contacting a local school or college and seeing if they’d like someone to talk to them about what you do.  Make it interesting, be ready to move on where topics aren’t getting the engagement you’d hoped for, and be ready for some questions.  I can pretty much guarantee that you’ll learn something.


1 – one of my daughters, who attends the school, gave me very strict instructions about not talking to her, her friends or anyone she knew[2].

2 – (which I have every intention of ignoring, but sadly, I didn’t see her or any of her friends that I recognised.  Maybe next time.)

3 – though possibly not with the senior manager who had to come out on a Sunday to rescue him and reset the system.

4 – and you’re willing to engage a tough audience.

The commonwealth of Open Source

This commonwealth does not apply to proprietary software: what stays hidden does not enlighten or enrich the world.

“But surely Open Source software is less secure, because everybody can see it, and they can just recompile it and replace it with bad stuff they’ve written?”

Hands up who’s heard this?*  I’ve been talking to customers – yes, they let me talk to customers sometimes – and to folks in the Field**, and this is one that comes up, it turns out, quite frequently.  I talked in a previous post (“Disbelieving the many eyes hypothesis”) about how Open Source software – particularly security software – doesn’t get to be magically more secure than proprietary software, and talked a little bit there about how I’d still go with Open Source over proprietary every time, but the way that I’ve heard the particular question – about OSS being less secure – suggests to me that there are times when we don’t just need to be able to explain why Open Source needs work, but also to be able to engage actively in Apologetics***.  So here goes.  I don’t expect it to be up to Newton’s or Wittgenstein’s levels of logic, but I’ll do what I can, and I’ll summarise at the bottom so that you’ve got a quick list of the points if you want it.

The arguments

First of all, we should accept that no software is perfect******.  Not proprietary software, not Open Source software.  Second, we should accept that there absolutely is good proprietary software out there.  Third, on the other hand, there is some very bad Open Source software.  Fourth, there are some extremely intelligent, gifted and dedicated architects, designers and software engineers who create proprietary software.

But here’s the rub.  Fifth – the pool of people who will work on or otherwise look at that proprietary software is limited.  And you can never hire all the best people.  Even in government and public sector organisations – who often have a larger talent pool available to them, particularly for *cough* security-related *cough* applications – the pool is limited.

Sixth – the pool of people available to look at, test, improve, break, re-improve, and roll out Open Source software is almost unlimited, and does include the best people.  Seventh – and I love this one: the pool also includes many of the people writing the proprietary software.  Eighth – many of the applications being written by public sector and government organisations are open sourced anyway these days.

Ninth – if you’re worried about running Open Source software which is unsupported, or comes from dodgy, un-provenanced sources, then good news: there are a bunch of organisations******* who will check the provenance of that code, support, maintain and patch it.  They’ll do it along the same type of business lines that you’d expect from a proprietary software provider.  You can also ensure that the software you get from them is the right software: the standard technique is for them to sign bundles of software so that you can check that what you’re installing isn’t just from some random bad person who’s taken that code and done Bad Things[tm] with it.
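As a simplified illustration of that signing step, here is a minimal Python sketch that checks a detached Ed25519 signature over a downloaded bundle, using the third-party cryptography package. Real vendors typically use GPG-signed packages and signed repository metadata rather than raw Ed25519 keys, and the file names here are invented, so treat this as a sketch of the idea rather than anyone’s actual tooling.

```python
# Minimal sketch: verify a detached Ed25519 signature over a downloaded bundle.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def verify_bundle(bundle_path: str, signature_path: str, public_key_bytes: bytes) -> bool:
    """Return True only if the signature over the bundle matches the publisher's key."""
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    with open(bundle_path, "rb") as f:
        data = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, data)  # raises InvalidSignature on any mismatch
        return True
    except InvalidSignature:
        return False

# Usage, assuming the publisher has distributed a 32-byte raw Ed25519 public key:
# ok = verify_bundle("bundle.tar.gz", "bundle.tar.gz.sig", publisher_public_key)
```

If the check fails, you know the bundle is not the one the publisher signed, and you don’t install it.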

Tenth – and here’s the point of this post – when you run Open Source software, when you test it, when you provide feedback on issues, when you discover errors and report them, you are tapping into, and adding to, the commonwealth of knowledge and expertise and experience that is Open Source.  And which is only made greater by your doing so.  If you do this yourself, or through one of the businesses who will support that Open Source software********, you are part of this commonwealth.  Things get better with Open Source software, and you can see them getting better.  Nothing is hidden – it’s, well, “open”.  Can things get worse?  Yes, they can, but we can see when that happens, and fix it.

This commonwealth does not apply to proprietary software: what stays hidden does not enlighten or enrich the world.

I know that I need to be careful about the use of the "commonwealth" as a Briton: it has connotations of (faded…) empire which I don’t intend it to hold in this case.  It’s probably not what Cromwell********* had in mind when he talked about the "Commonwealth", either, and anyway, he’s a somewhat … controversial historical figure.  What I’m talking about is a concept in which I think the words deserve concatenation – "common" and "wealth" – to show that we’re talking about something more than just money, but shared wealth available to all of humanity.

I really believe in this.  If you want to take away a religious message from this blog, it should be this**********: the commonwealth is our heritage, our experience, our knowledge, our responsibility.  The commonwealth is available to all of humanity.  We have it in common, and it is an almost inestimable wealth.

 

A handy crib sheet

  1. (Almost) no software is perfect.
  2. There is good proprietary software.
  3. There is bad Open Source software.
  4. There are some very clever, talented and devoted people who create proprietary software.
  5. The pool of people available to write and improve proprietary software is limited, even within the public sector and government realm.
  6. The corresponding pool of people for Open Source is virtually unlimited…
  7. …and includes a goodly number of the talent pool of people writing proprietary software.
  8. Public sector and government organisations often open source their software anyway.
  9. There are businesses who will support Open Source software for you.
  10. Contribution – even usage – adds to the commonwealth.

*OK – you can put your hands down now.

**should this be capitalised?  Is there a particular field, or how does it work?  I’m not sure.

***I have a degree in English Literature and Theology – this probably won’t surprise some of the regular readers of this blog****.

****not, I hope, because I spout too much theology*****, but because it’s often full of long-winded, irrelevant Humanities (US Eng: “liberal arts”) references.

*****Emacs.  Every time.

******not even Emacs.  And yes, I know that there are techniques to prove the correctness of some software.  (I suspect that Emacs doesn’t pass many of them…)

*******hand up here: I’m employed by one of them, Red Hat, Inc..  Go have a look – fun place to work, and we’re usually hiring.

********assuming that they fully abide by the rules of the Open Source licence(s) they’re using, that is.

*********erstwhile “Lord Protector of England, Scotland and Ireland” – that Cromwell.

**********oh, and choose Emacs over vi variants, obviously.

Staying on topic – speaking at conferences

Just to be entirely clear: I hate product pitches.

As I mentioned last week, I’ve recently attended the Open Source Summit and Linux Security Summit.  I’m also currently submitting various speaking sessions to various different upcoming events, and will be travelling to at least one more this year*.  So conferences are on my mind at the moment.  There seem to be four main types of conference:

  1. industry – often combined with large exhibitions, the most obvious of these in the security space would be Black Hat and RSA.  Sometimes, the exhibition is the lead partner: InfoSec has a number of conference sessions, but the main draw for most people seems to be the exhibition.
  2. project/language – often associated with Open Source, examples would be Linux Plumbers Conference or the Openstack Summit.
  3. company – many companies hold their own conferences, inviting customers, partners and employees to speak.  The Red Hat Summit is a classic in this vein, but Palo Alto has Ignite, and companies like Gartner run focussed conferences through the year.  The RSA Conference may have started out like this, but it’s now so generically security that it doesn’t seem to fit**.
  4. academic – mainly a chance for academics to present papers, and some of these overlap with industry events as well.

I’ve not been to many of the academic type, but I get to a smattering of the other types a year, and there’s something that annoys me about them.

Before I continue, though, a little question: why do people go to conferences?  Here are the main reasons*** that I’ve noticed****:

  • they’re a speaker
  • they’re an exhibitor with a conference pass (rare, but it happens, particularly for sponsors)
  • they want to find out more about particular technologies (e.g. containers or VM orchestrators)
  • they want to find out more about particular issues and approaches (e.g. vulnerabilities)
  • they want to get career advice
  • they fancy some travel, managed to convince their manager that this conference was vaguely relevant and got the travel approval in before the budget collapsed*****
  • they want to find out more about specific products
  • the “hallway track”.

A bit more about the last two of these – in reverse order.

The “hallway track”

I’m becoming more and more convinced that this is often the most fruitful reason for attending a conference.  Many conferences have various “tracks” to help attendees decide what’s most relevant to them.  You know the sort of thing: “DevOps”, “Strategy”, “Tropical Fish”, “Poisonous Fungi”.  Well, the hallway track isn’t really a track: it’s just what goes on in hallways: you meet someone – maybe at the coffee stand, maybe at a vendor’s booth, maybe asking questions after a session, maybe waiting in the queue for conference swag – and you start talking.  I used to feel guilty when this sort of conversation led me to miss a session that I’d flagged as “possibly of some vague interest” or “might take some notes for a colleague”, but frankly, if you’re making good technical or business contacts, and increasing your network in a way which is beneficial to your organisation and/or career, then knock yourself out*******: and I know that my boss agrees.

Finding out about specific products

Let’s be clear.  The best place to do this is usually at a type 2 or a type 3 conference.  Type 3 conferences are often designed largely to allow customers and partners to find out the latest and greatest details about products, services and offerings, and I know that these can be very beneficial.  Bootcamp-type days, workshops and hands-on labs are invaluable for people who want to get first-hand, quick and detailed access to product details in a context outside of their normal work pattern, where they can concentrate on just this topic for a day or two.  In the Open Source world, it’s more likely to be a project, rather than a specific vendor’s product, because the Open Source community is generally not overly enamoured by commercial product pitches.  Which leads me to my main point: product pitches.

Product pitches – I hate product pitches

Just to be entirely clear: I hate product pitches.  I really, really do.  Now, as I pointed out in the preceding paragraph, there’s a place for learning about products.  But it’s absolutely not in a type 1 conference.  But that’s what everybody does – even (and this is truly horrible) in keynotes.  Now, I really don’t mind too much if a session title reads something like “Using Gutamaya’s Frobnitz for token ring network termination” – because then I can ignore it if it’s irrelevant to me.  And, frankly, most conference organisers outside type 3 conferences actively discourage that sort of thing, as they know that most people don’t come to those types of conferences to hear them.

So why do people insist on writing session titles like “The problem of token ring network termination – new approaches” and then pitching their product?  They may spend the first 10 minutes (if you’re really, really lucky) talking about token ring network termination, but the problem is that they’re almost certain to spend just one slide on the various approaches out there before launching into a commercial pitch for Frobnitzes********* for the entirety of the remaining time.  Sometimes this is thinly veiled as a discussion of a Proof of Concept or customer deployment, but is a product pitch nevertheless: “we solved this problem by using three flavours of Frobnitzem, and the customer was entirely happy, with a 98.37% reduction in carpet damage due to token ring leakage.”

Now, I realise that vendors need to sell products and/or services.  But I’m convinced that the way to do this is not to pitch products and pretend that you’re not.  Conference attendees aren’t stupid**********: they know what you’re doing.  Don’t be so obvious.  How about actually talking about the various approaches to token ring network termination, with the pluses and minuses, and a slide at the end in which you point out that Gutamaya’s solution, Frobnitz, takes approach y, and has these capabilities?  People will gain useful technical knowledge!  Why not talk about that Proof of Concept, what was difficult and how there were lessons to be learned from your project – and then have a slide explaining how Frobnitz fitted quite well?  People will take away lessons that they can apply to their project, and might even consider Gutamaya’s Frobnitz range for it.  Even better, you could tell people how it wasn’t a perfect fit (nothing ever is, not really), but you’ve learned some useful lessons, and plan to make some improvements in the next release (“come and talk to me after the session if you’d like to know more”).

For me, at least, being able to show that your company has the sort of technical experts who can really explain and delve into issues which are, of course, relevant to your industry space, and for which you have a pretty good product fit is much, much more likely to get real interest in you, your product and your company.  I want to learn: not about your product, but about the industry, the technologies and maybe, if you’re lucky, about why I might consider your product next time I’m looking at a problem.  Thank you.


*Openstack Summit in Sydney.  Already getting quite excited: the last Openstack Summit I attended was interesting, and it’s been a few years since I was in Sydney.  Nice time of the year…

**which is excellent – as I’ll explain.

***and any particular person going to any particular conference may hit more than one of these.

****I’d certainly be interested about what I’ve missed.  I considered adding “they want to collect lots of swag”, but I really hope that’s not one.

*****to be entirely clear: I don’t condone this particular one******.

******particularly as my boss has been known to read this blog.

*******don’t, actually.  I’ve concussed myself before – not at a conference, to be clear – and it’s not to be recommended.  I remember it as feeling like being very, very jetlagged and having to think extra hard about things that normally would come to me immediately********.

********my wife tells me I just become very, very vague.  About everything.

*********I’ve looked it up: apparently the plural should be “Frobnitzem”.  You have my apologies.

**********though if they’ve been concussed, they may be acting that way temporarily.

Disbelieving the many eyes hypothesis

There is a view that because Open Source Software is subject to review by many eyes, all the bugs will be ironed out of it. This is a myth.


Writing code is hard.  Writing secure code is harder: much harder.  And before you get there, you need to think about design and architecture.  When you’re writing code to implement security functionality, it’s often based on architectures and designs which have been pored over and examined in detail.  They may even reflect standards which have gone through worldwide review processes and are generally considered perfect and unbreakable*.

However good those designs and architectures are, though, there’s something about putting things into actual software that’s, well, special.  With the exception of software proven to be mathematically correct**, being able to write software which accurately implements the functionality you’re trying to realise is somewhere between a science and an art.  This is no surprise to anyone who’s actually written any software, tried to debug software or divine software’s correctness by stepping through it.  It’s not the key point of this post either, however.

Nobody*** actually believes that the software that comes out of this process is going to be perfect, but everybody agrees that software should be made as close to perfect and bug-free as possible.  It is for this reason that code review is a core principle of software development.  And luckily – in my view, at least – much of the code that we use these days in our day-to-day lives is Open Source, which means that anybody can look at it, and it’s available for tens or hundreds of thousands of eyes to review.

And herein lies the problem.  There is a view that because Open Source Software is subject to review by many eyes, all the bugs will be ironed out of it.  This is a myth.  A dangerous myth.  The problems with this view are at least twofold.  The first is the “if you build it, they will come” fallacy.  I remember when there was a list of all the websites in the world, and if you added your website to that list, people would visit it****.  In the same way, the number of Open Source projects was (maybe) once so small that there was a good chance that people might look at and review your code.  Those days are past – long past.  Second, for many areas of security functionality – crypto primitives implementation is a good example – the number of suitably qualified eyes is low.

Don’t think that I am in any way suggesting that the problem is any lesser in proprietary code: quite the opposite.  Not only are the designs and architectures in proprietary software often hidden from review, but you have fewer eyes available to look at the code, and the dangers of hierarchical pressure and groupthink are dramatically increased.  “Proprietary code is more secure” is less myth, more fake news.  I completely understand why companies like to keep their security software secret – and I’m afraid that the “it’s to protect our intellectual property” line is too often a platitude they tell themselves, when really, it’s just unsafe to release it.  So for me, it’s Open Source all the way when we’re looking at security software.

So, what can we do?  Well, companies and other organisations that care about security functionality can – and have, I believe a responsibility to – expend resources on checking and reviewing the code that implements that functionality.  That is part of what Red Hat, the organisation for whom I work, is committed to doing.  Alongside that, we, the Open Source community, can – and are – finding ways to support critical projects and improve the amount of review that goes into that code*****.  And we should encourage academic organisations to train students in the black art of security software writing and review, not to mention highlighting the importance of Open Source Software.

We can do better – and we are doing better.  Because what we need to realise is that the reason the “many eyes hypothesis” is a myth is not that many eyes won’t improve code – they will – but that we don’t have enough expert eyes looking.  Yet.


* Yeah, really: “perfect and unbreakable”.  Let’s just pretend that’s true for the purposes of this discussion.

** …and which still relies on the design and architecture actually to do what you want – or think you want – of course, so good luck.

*** nobody who’s actually written more than about 5 lines of code (or more than 6 characters of Perl)

**** I added one.  They came.  It was like some sort of magic.

***** see, for instance, the Linux Foundation‘s Core Infrastructure Initiative