Open source projects, embargoes and NDAs

Vendors may only disclose to project members under an NDA.

This article is a companion piece to one that I wrote soon after the advent of Meltdown and Spectre, Meltdown and Spectre: thinking about embargoes and disclosures, and another which I wrote more recently, Security disclosure or vulnerability management?. I urge you to read those first, as they provide background and context to this article which I don’t plan to reiterate in full. The focus of this article is to encourage open source projects to consider a specific aspect of the security disclosure/vulnerability management process: people.

A note about me and the background for this article. I’m employed by Red Hat – a commercial vendor of open source software – and I should make the standard disclosure that the views expressed in this article don’t necessarily reflect those of Red Hat (though they may!). The genesis of this article was a conversation I had as part of my membership of the Confidential Computing Consortium, of which Enarx (a project of which I’m a co-founder) is a member. During a conversation with other members of the Technical Advisory Board, we were discussing the importance of having a vulnerability management process (or whatever you wish to call it!), and a colleague from another organisation raised a really interesting point: what about NDAs?

What’s an NDA?

Let’s start with the question: “what’s an NDA”? “NDA” stands for Non-Disclosure Agreement, and it’s a legal document signed by one or more individuals or organisations agreeing not to disclose confidential material[1]. NDAs have a bad reputation in the wider world, as they’re sometimes used as “gagging orders” within legal settlements by powerful people or organisations in cases of, for instance, harassment. They are, however, not all bad! In fact, within the technology world, at least, they’re a very important tool for business.

Let’s say that Company A is a software vendor, and creates software that works with hardware (a video card, let’s say) from Company B. Company B wants to ensure that when its new product is released, there is software ready to make use of it, so that consumers will want to use it. In order to do that, Company A will need to know the technical specifications of the video card, but Company B wants to ensure that these aren’t leaked to the press – or competitors – before release. NDAs to the rescue! Company A can sign an NDA with Company B, agreeing not to disclose technical details about Company B’s upcoming products and plans to anyone outside Company A until the plans are public. If it’s a “two-way NDA”, and Company B signs it as well, then Company A can tell Company B about their plans for selling and marketing their new software, for instance, to help provide business partnership opportunities. If either party breaks the terms of the agreement, then it’s time for expensive legal proceedings (which nobody wants) and the likelihood that any trust from one to the other will break down, leading to repercussions later.

NDAs are common in this sort of context, and employees of the companies bound by NDAs are expected to abide by them, and also not to “carry over” confidential information to a new employer if they get a new job.

What about open source?

One of the things about open source is that it’s, well, “open”. So you’d think that NDAs would be a bad thing for open source. I’d argue, however, that they have actually served a very useful purpose in the development and rise of the open source ecosystem. The reason for this is that companies are often reluctant to have individuals sign NDAs with them, and individuals are often reluctant to sign NDAs. From the company’s point of view, NDAs are more difficult to enforce against individuals, and from the individual’s point of view, the legal encumbrance of signing an NDA – or multiple NDAs – may seem daunting.

Companies, however, are used to NDAs, and have legal departments to manage them. Back in the 1990s, I remember there being a long lag time between new hardware becoming available, and the open source support for it emerging. What did emerge was typically buggy, often reverse-engineered, and rarely took advantage of the newest features offered by the hardware. Since around 2000, this has changed significantly: you can expect the latest hardware capabilities to emerge into, say, the Linux kernel, pretty much as soon as the supporting hardware is available. This is to some extent because there are now a large number of companies who employ engineers to work on open source. The hardware vendors have NDAs with these companies, which means that the engineers can find out ahead of time about new features, and have support ready for them when they launch.

There’s a balancing act here, of course, because sometimes providing code implementations and making them open can give away details of hardware features before the hardware vendors are ready to disclose them. This means that some of the engineering needs to take place “behind closed doors” and only be merged into projects once the hardware vendor is ready to announce their specifications to the world. This isn’t as open as one might like. On balance, though, I’d argue that this approach has provided a net benefit to the open source community, allowing hardware vendors to keep competitive information confidential, whilst allowing the open source community access to the latest and greatest hardware features at, or soon after, release.

NDAs and security

So far, we’ve not mentioned security: where does it come in? Well, say that you’re a hardware vendor, and you discover (or someone discloses to you) that there’s a vulnerability in one of your products. What do you do? Well, you don’t want to announce it to the world until a fix is ready (remember the discussion of embargoes in my previous article), but if there are open source projects which need to be involved in getting a fix out, then you need to be able to talk to them.

The key word in that last sentence is “them”. Most – one could say all – successful open source projects have members employed by a variety of different companies and organisations, as well as some who are not employed by any – or work on the project outside of any employer-related time or agreement. The hardware vendor is extremely keen to ensure that any embargo that is created around the work on this vulnerability is adhered to, and the standard way to enforce (or at least legally encourage) such adherence is through NDAs. This concern is understandable, and justified: there have been some high profile examples of information about vulnerabilities being leaked to the press before fixes were fully ready to be rolled out.

The chances, however, of the vendor having an NDA with all of the organisations and companies represented by engineers on a particular project are likely to be low, and there may also be individuals who are not associated with any organisation or company in the context of this project. Setting up NDAs can be slow and costly – and sometimes impossible – and the very fact of setting up an NDA might point to a vulnerability’s existence! Vendors, then, may make the choice only to disclose to open source project members who are governed by an NDA – whether through their employer or an NDA with them as an individual.

Process and NDAs

What does that mean for open source projects? Well, it means that if you’re working with specific hardware vendors – or are even dependent on specific software vendors – then you may need to think very carefully about who is on your security/vulnerability response/management team (let’s refer to this as the “VMT” – “Vulnerability Management Team” – for simplicity). You may need to ensure that you have competent security engineers who have the relevant expertise and who are also under an NDA (corporate or individual) with the hardware vendors with whom you work. Beyond that, you need to ensure that they are on your VMT, or can be tasked to your VMT if required.

This may be a tough ask, particularly for small projects. However, the world of enterprise software is currently going through a process of realising quite how important certain projects are to the underpinnings of its operations, and this is a time to consider whether core open source projects with security implications need more support from large organisations. Encouraging more involvement from organisations should help deal with this issue – that vendors may not talk to individuals on your project who are not under NDA with them. Of course, getting more contributors can have positive and negative effects in terms of governance and velocity, but you really do need to think about what would happen if your project were affected by a hardware (or upstream software) vulnerability, and you were unable to react because you had nobody who was able to be involved in the discussions.

The other thing you need to do is ensure that the process you have in place for the reporting of vulnerabilities is flexible enough to manage this issue. There should be at least one person on the VMT who is under the relevant NDA(s) and to whom reports can be made: remember that for a vendor, even the act of getting in touch with a project may be enough to give away information about possible vulnerabilities. Although having an anonymous VMT may seem attractive – so that you are not giving away too much information about your project’s governance – there can be times when a vendor (or individual) wishes to contact a named individual. If that individual is not part of the VMT, or has no way to trigger the process to deploy an appropriate member of the project to the VMT, then you are in trouble.

I hope that more open source projects will put VMTs and associated processes in place, as I wrote in my previous article. I believe that it’s also important that we consider the commercial and competitive needs of the ecosystem in which we operate, and that means being ready to react within the context of NDAs, embargoes and vulnerabilities. If you are part of an open source project, I urge you to discuss this issue – both internally and also with any vendors whose hardware or software you use.

Note: I would like to thank the members of the Technical Advisory Board of the CCC for prompting the writing of this article, and in particular the member who brought up the issue, which (to my shame) I had not considered in detail before. I hope this treatment of questions arising meets with their approval!


1 – Another standard disclosure: I am not a lawyer! This is my personal description, and should not be used as the basis for any legal advice, etc., etc.

Security disclosure or vulnerability management?

Which do I need for an open source project?

This article is a companion piece to one I wrote soon after the advent of Meltdown and Spectre, Meltdown and Spectre: thinking about embargoes and disclosures, and I urge you to read that first, as it provides background and context to this article which I don’t plan to reiterate in full.

In that previous article, I mentioned that many open source projects have a security disclosure process, and most of the rest of the article was basically a list of decisions and steps that you might find in such a process. There’s another term that you might hear, however, which is a Vulnerability Management Process, or “VMP”. While a security disclosure process can be defined as a type of VMP, there are subtle differences to what these two processes might look like, and what they might mean, or be seen to mean, so I think it’s worth spending a little time examining possible differences before we continue.

I just did a “Google fight”[1] for “security disclosure” vs “vulnerability management”, and the former “won” by a ratio of around 5:3. I suspect that this is largely because security disclosures tend to sound exciting, and make for “sexier” headlines than articles about managing things do – even if those things are vulnerabilities.

I’m torn between the two, because both terms highlight important aspects – or reflect different viewpoints – of the same important basic idea: if a bad thing is discovered that involves your product or project, then you need to fix it. I’m going to come at this, as usual for me, from the point of view of open source, so let me give my personal feelings about each.

Security disclosure process

First off, I like the fact that the word “security” is front and centre here. Saying “security” focusses the mind in ways which “vulnerability” may not, and while a vulnerability may be the thing that we’re addressing, the impact that we’re trying to mitigate is on the security associated with the project or product. The second thing that I like about this phrase is the implication that disclosure is what we are aiming for. Now, this fits well with an open source mindset, but I wonder whether the accent is somewhat different for those who come from a more proprietary background. Where I read “a process to manage telling people about a security problem”, I suspect that others may read “a process to manage the fact that someone has told people about a security problem.” I urge everyone to move to the first point of view, for two reasons:

  1. I believe that security is best done in the open – though we need to find ways to protect people while fixes are being put in place and disseminated;
  2. if we don’t encourage people to come to us – project maintainers, product managers, architects, technical leads – first, in the belief that fixes will be managed as per point 1 above, and also that credit will be given where it’s due, then those who discover vulnerabilities will have little incentive to follow the processes we put in place. If this happens, they are more likely to disclose to the wider world before us, making it much more difficult to provide and propagate fixes in a timely fashion.

Vulnerability management process

This phrase shares the word “process” with the previous one, and, combined with the word “management”, conveys the importance of working through the issue at hand. It also implies, to me at least, that there is other work to be done around the vulnerability rather than just letting everybody know about it (“disclosure”). This may seem like a bad thing (see above), but on the other hand, acknowledging that vulnerabilities do need managing seems to be a worthwhile thing to signal. What worries me, however, is that “managing” can be seen to imply “sweeping under the carpet” – in other words, making the problem go away.

Tied in with this is the fact that the word “vulnerability” in this context holds both negative and positive connotations. The negative is that saying “it’s just another vulnerability” underplays the security aspect of what it represents. The positive is that vulnerabilities are not all of the same severity – some aren’t that serious, compared to others – and it’s important to recognise this, and to have a process which allows you to address all severities of problem, part of which is to rank and probably prioritise them – often known as “triaging”, a term borrowed from the medical world.

A third option?

There’s at least one alternative. Though it scores lower than either “vulnerability management” or “security disclosure” in a Google fight (which, as I mentioned in the footnotes, isn’t exactly a scientific measure), a “vulnerability disclosure” process is another option. Although it doesn’t capture the “security” aspect, it does at least imply disclosure, which I like.

Which do I need?

My main focus for this article – and my passion and background – is open source, so the next question is: “which do I need for an open source project?” The answer, to some degree, is “either” – or “any of them”, if you include the third option. Arguably, as noted above, a “security disclosure process” is a type of vulnerability management process anyway, but I think that in the open source world particularly, the implication of working towards disclosure – towards openness – is important. The open source community is very sensitive to words, and any suggestion of cover-up is unlikely to be welcomed, even if such an implication was entirely unintentional.

One point that I think it’s worth making is that I believe that pretty much any component or library that you are working on may have security implications down the road[2]. This means that there should be some process in place to deal with vulnerabilities or security issues. To be clear: many of these will be discovered by contributors to the project themselves, rather than external researchers or bug-hunters, but that doesn’t mean that such vulnerabilities are a) less important than externally found ones; OR b) less in need of a process for dealing with them.

A good place to look at what questions to start asking is my previous article, but I strongly recommend that every open source project should have some sort of process in place to deal with vulnerabilities or other issues which may have a security impact. I’ve also written a follow-up article with some options about how to deal with different types of disclosure, vulnerability and process, which you can find here: Open source projects, embargoes and NDAs.


1 – try it: Googlefight.com – it’s a fun (if unscientific) method to gauge the relative popularity of two words or terms on the web.

2 – who would ever have thought that font rendering software could lead to a critical security issue?

5 Rust reflections (from Java)

I’m a (budding) Rustacean.

It’s been a long time since I properly learned a new language – computer or human. Maybe 25 years. That language was Java, and although I’ve had to write little bits of C (very, very little) and Javascript in the meantime, the only two languages I’ve written much actual code in have been Perl and Java. As I’ve posted before, I’m co-founder of a project called Enarx (latest details here), which is written almost entirely in Rust. These days I call myself an “architect”, and it’s been quite a long time since I wrote any production code. In the lead up to Christmas last year (2019), I completed the first significant project I’ve written in quite a few years: an implementation of a set of algorithms around a patent application, in Java. It was a good opportunity to get my head back into code, and I was quite pleased with it. I wrote it with half a mind to compile it into WebAssembly as a candidate workload for Enarx, but actually compiling it turned out to be a bit of a struggle, and work got in the way, so I completed a basic implementation, checked it into a private github repository and generally forgot about it.

My involvement with the Enarx project so far has been entirely design and architecture work – plus documentation, marketing, evangelism, community work and the rest, but no coding. I have suggested, on occasion – almost entirely in jest – that I commit some code to the project in Perl, and it’s become a bit of a running joke that this would be the extent of any code I submitted – and possibly of my involvement with the project – as it would be immediately rejected. And then, about a week and a half ago, I decided to learn Rust. And then to rewrite (including, where necessary, refactoring) that Java project I wrote a few months ago. Here are some of my thoughts on Rust, from the point of view of a Java developer with a strong Object-Oriented background.

1. Rust feels familiar

Although many of the tutorials and books you’ll find out there are written with C and C++ in mind, there’s enough similarity with Java to make the general language feel familiar. The two tutorials I’ve been using the most are The Rust Programming Language online and Programming Rust in dead tree format, and the latter makes frequent references to similarities and differences to and from other languages, including not only C, C++ and Java, but also Python, Javascript and others. Things like control structures and types are similar enough to Java that they’re generally simple to understand, and although there are some major differences, you should be able to get your head round the basics of the language pretty simply. Beware, however: one of the biggest initial problems I’ve been having is that Rust sometimes feels too familiar, so I start trying to do things in the wrong way, have to back out, and try to work out a better way: a way which is more idiomatic to Rust. I have a long way to go on this!
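As a tiny illustration (a contrived example of my own, not one from those tutorials), my early attempts tend to look like the first, Java-ish loop below before I find the more idiomatic second form:

    fn main() {
        let scores = vec![3, 7, 2];

        // Java-ish: a mutable accumulator and a for loop – perfectly valid Rust.
        let mut total = 0;
        for s in &scores {
            total += s;
        }

        // More idiomatic Rust: iterator adaptors, with the result as an expression.
        let total_idiomatic: i32 = scores.iter().sum();

        assert_eq!(total, total_idiomatic);
        println!("total = {}", total);
    }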

2. References make sense

In Rust, you end up having to use references. Frankly, referencing and de-referencing variables was something that never made much sense to me when I looked at C or C++, but this time, it feels like I get it. If you’re used to passing Java variables by reference and value, and know when you need to take steps to do so differently in specific situations, then you’re ready to start understanding Rust references. The other thing you need to understand is why Rust needs you to use them: it’s because Rust is very, very careful about memory management, and you don’t have a Garbage Collector to clean up after you wherever you go (as in Java). You can’t just pass Strings (for instance) around willy-nilly: Rust is going to insist that you know the lifetime of a variable, and think about when it’s ready to be “dropped”. This means that you need to think hard about scope, and introduces a complex concept: ownership.
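Here’s a minimal sketch of the distinction as I currently understand it (my own example, with made-up names): borrowing with & versus handing over ownership:

    // Borrow: the caller keeps ownership, and the function just reads the data.
    // (&str would be more idiomatic, but &String makes the borrow explicit.)
    fn greet(name: &String) {
        println!("Hello, {}!", name);
    }

    // Move: the function takes ownership, and the caller gives up the value.
    fn consume(name: String) {
        println!("Goodbye, {}!", name);
    }

    fn main() {
        let name = String::from("Ferris");
        greet(&name);  // pass a reference: `name` is still usable afterwards
        consume(name); // ownership moves into the function...
        // println!("{}", name); // ...so uncommenting this would not compile
    }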

3. Ownership will make sense

Honestly, I’m not there yet. I’ve been learning and coding in Rust for under two weeks, and I’m beginning to get my head around ownership. For me (as, I suspect, for many newcomers), this is the big head-shift around moving to Rust from Java or most other languages: ownership. As I mentioned above, you need to understand when a variable is going to be used, and how long it will live. There’s more to it than that, however, and really getting this is something which feels a little foreign to me as a Java developer: you need to understand about the stack and the heap, a distinction which was sufficiently concealed from me by Java, but something which many C and C++ developers will probably understand much more easily. This isn’t the place to explain the concept (I’ve found the diagrams in Programming Rust particularly helpful), but in order to manage the lifetime of variables in memory, Rust is going to need to know what component owns each one. This gets complicated when you’re used to creating objects and instantiating them with variables from all over the place (as in Java), and requires some significant rethinking. Combining this with explicit marking of lifetimes is the biggest conceptual change that I’m having to perform right now.
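To show the kind of thing I mean, here’s a toy example of my own (not one from the books) where the compiler makes me declare how long a borrowed value must live:

    // A struct that borrows rather than owns its data needs an explicit
    // lifetime: a Label may not outlive the String it points into.
    struct Label<'a> {
        text: &'a str,
    }

    fn main() {
        let owned = String::from("heap-allocated contents"); // contents live on the heap
        let label = Label { text: &owned };                  // Label borrows; `owned` still owns
        println!("{}", label.text);
        // Both go out of scope here: the compiler has checked, via 'a, that
        // `label` cannot outlive `owned`, so the memory can safely be freed.
    }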

4. Cargo is helpful

I honestly don’t use the latest and greatest Java tools properly. When I started to learn Java, it wasn’t even at 1.0, and by the time I finished writing production code on a regular basis, there wasn’t yet any need to pick up the very latest tooling, so it may be that Java is better at this than I remember. Even so, the built-in tools for managing the various pieces of a Rust project – dependencies, libraries, compilation and testing – are a revelation. The cargo binary just does the right thing, and it’s amazing to watch it do its job when it realises that you’ve made a change to your dependencies, for instance. It will run tests, apply optimisations and produce documentation automatically – so many useful tasks, all within one package. Combine this with git repositories, and managing projects becomes saner and easier.
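As a small example of what I mean (my own sketch, not from a real project): put something like this in the src/lib.rs of a cargo-created project, and cargo test and cargo doc will pick it up without any configuration at all:

    /// Doubles a value. `cargo doc` renders this comment as API documentation.
    pub fn double(x: i32) -> i32 {
        x * 2
    }

    #[cfg(test)]
    mod tests {
        use super::double;

        // `cargo test` finds and runs this function automatically.
        #[test]
        fn doubling_works() {
            assert_eq!(double(21), 42);
        }
    }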

5. The compiler is amazing

Last, but very far from least, is the compiler. I love the Rust compiler: it really, really tries to help you. The members of the community[1] who make and maintain it clearly go out of their way to provide helpful guidance to correct you when you make mistakes – and I, for one, have been making many of them. Rather than the oracular pronouncements that may be familiar from other languages’ compilers, you’ll get colour-coded text with warnings and errors, and suggestions as to what you might actually be trying to do. You will even be given output such as For more information about this error, try rustc --explain E0308. When you do try this, you get a (generally!) helpful explanation and code snippets. Sometimes, particularly when you’re still working your way into the language, it’s not always obvious what you’re doing wrong, but wading through the errors can help you get your head round the concepts in a way which feels very different to messages I’m used to getting from javac, for example.
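For instance, here’s a minimal mistake of my own manufacture which triggers E0308 (the exact wording varies between compiler versions):

    fn main() {
        // This line produces error[E0308]: mismatched types – "expected `i32`,
        // found `&str`" – with the offending expression underlined in colour:
        //
        //     let count: i32 = "42";
        //
        // And here's a version the compiler is happy with:
        let count: i32 = "42".parse().expect("not a number");
        println!("count = {}", count);
    }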

Conclusion

I don’t expect ever to be writing lots of production Rust, nor ever truly to achieve guru status – in Rust or any other language, to be honest – but I really think that Rust has a lot to be said for it. Throughout my journey so far, I’ve been nodding my head and thinking “that’s a good way to do that”, or “ah, that makes so much more sense than the way I’m used to”. This isn’t an article about why Rust is such a good language – there are loads of those – nor about the best way to learn Rust – there are lots of those, too – but I can say that I’m enjoying it. It’s challenging, but one thing that the tutorials, books and other learning materials are all strong on is explaining the reasons for the choices that Rust makes, and that’s certainly been helpful to me, both in tackling my frustrations and in trying to internalise some of the differences between Java and Rust.

If I can get my head truly into Rust, I honestly don’t think I’m likely to write any Java ever again. I’m not sure I’ve got another 25 years of coding in me, but I think that I’m with Rust for the long haul now. I’m a (budding) Rustacean.


1 – Rust, of course, is completely open source, and the community support for it seems amazing.

An Enarx milestone: binaries

Demoing the same binary in very different TEEs.

This week is Red Hat Summit, which is being held virtually for the first time because of the Covid-19 crisis. The lock-down has not affected the productivity of the Enarx team, however (at least not negatively), as we have a very exciting demo that we will be showing at Summit. This post should be published at 1100 EDT, 1500 BST, 1400 GMT on Tuesday, 2020-04-28, which is the time that the session which Nathaniel McCallum and I recorded will be released to the world. I hope to be able to link to it here once it’s available. But what will we be showing?

Well, to set the scene, and to discover a little more about the Enarx project, you might want to read these articles first (also available in Japanese – follow the link in each article):

Enarx, as you’ll discover, is about running workloads in TEEs (Trusted Execution Environments), using WebAssembly, in what we call “Keeps”. It’s a mammoth job, particularly as we’re abstracting away the underlying processor architectures (currently two: Intel’s SGX and AMD’s SEV), so that you, the user, don’t need to worry about them: all you need to do is write and compile your application, then request that it be deployed. Enarx, then, has lots of moving parts, and one of the key tasks for us has been to start the work to abstract away the underlying processor architectures so that we can prepare the runtime layers on top. Here’s a general picture of the software layers, and how they sit on top of the hardware platforms:

What we’re announcing – and demoing – today is that we have an initial implementation of code to allow us to abstract away process-based and VM-based types of architecture (with examples for SGX and SEV), so that we can do this:

This seems deceptively simple, but what’s actually going on under the covers is rather more than is exposed in the picture above. The reality is more like this:

This gives more detail: the application that’s running on both architectures (SGX on the left, SEV on the right) is the very same ELF static-PIE binary. To be clear, this is not only the same source code, compiled for different platforms, but exactly the same binary, with the very same hash signature. What’s pretty astounding about this is that in order to make it run on both platforms, the engineering team has had to write two sets of seriously low-level code, including more than a little Assembly language, providing the “plumbing” to allow the binary to run on both.
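Conceptually, the workload itself is no more exotic than the sketch below – my stand-in, not the project’s actual demo code, and the build flags in the comment are my assumption about how such a static-PIE binary might be produced:

    // A stand-in for the demo workload: an ordinary program that prints a
    // message. The milestone is that a single static-PIE build of something
    // like this runs, unmodified, in both an SGX Keep and an SEV Keep.
    //
    // Plausible (assumed) build invocation for a static-PIE binary:
    //   RUSTFLAGS="-C target-feature=+crt-static -C relocation-model=pie" \
    //       cargo build --release --target x86_64-unknown-linux-gnu
    fn main() {
        println!("Hello from inside a Keep!");
    }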

This is a very big deal, because although we’ve only implemented a handful of syscalls on each platform – enough to make our simple binary run and print out a message – we now have a framework on which we know we can build. And what’s next? Well, we need to expand that framework so that we can then build the WebAssembly layers which will allow WebAssembly applications to run on top:

There’s a long way to go, but this milestone shows that we have an initial framework which we can improve, and on which we can build.

What’s next?

What’s exciting about this milestone from our point of view is that we think it puts Enarx at a stage where more people can join and take part. There’s still lots of low-level work to be done, but it’s going to be easier to split up now, and also to start some of the higher level work, too. Enarx is completely open source, and we do all of our design work in the open, along with our daily stand-ups. You’re welcome to browse our documentation, RFCs (mostly in draft at the moment), raise issues, and join our calls. You can find loads more information on the Enarx wiki: we look forward to your involvement in the project.

Last, and not least, I’d like to take a chance to note that we now have testing/CI/CD resources available for the project with both Intel SGX and AMD SEV systems available to us, all courtesy of Packet. This is amazingly generous, and we both thank them and encourage you to visit them and look at their offerings for yourself!

3 open/closed Covid-19 contact tracing questions

All projects are not created equal.

One of the cheering things about the pandemic crisis in which we find ourselves is the vast up-swell of volunteering that we are seeing across the world. We are seeing this equally across the IT sector, and one of the areas where work is being done is in apps to help track Covid-19. Specifically, there is an interest in Covid-19 contact tracing, or tracking, apps for our mobile[0] phones. These aren’t apps which keep an eye on whether you’ve observed lock-down procedures, but apps which attempt to work out who has been in contact with whom and, once we know that one person is infected with Covid-19, what the likely spread of the virus will be.

There are lots of contact tracing initiatives out there, from PEPP-PT in the European Union to Singapore’s TraceTogether, from the University of Washington’s PACT to MIT’s PACT[1]. Google and Apple are – unprecedentedly – working on an app together. There are lots of ways of comparing these apps and projects, but in today’s article, I want to suggest three measures which can help you consider them from the point of view of “openness”. As regular readers of this blog will know, I’m a big fan of open source – not just for software, but for data, management and the rest – and I believe that there’s also a strong correlation here with civil or human rights. These three measures are not too technical, and can help us get a grip on the likelihood that some of the apps (and associated projects) may impinge on privacy and other issues about which we care. I don’t want the data generated from apps that I download onto my phone to be used now or in the future to curtail my, or other people’s, civil or human rights, for blackmail or even for unapproved commercial gain.

1. Open source

Our first question must be: “is the app open source?” If the answer is “no”, then we have no way to know what is being captured, and therefore how it is being used. If the app is closed source, it could be collecting any data from pretty much any measuring device on our phones, including photo, video, audio, Bluetooth, wifi, temperature, GPS or accelerometer. We can try restricting access to these measurements, but such controls have not always been effective, understanding the impact of turning them off is rarely simple, and people frankly rarely bother to check them anyway. Equally bad is the fact that with closed source, you can’t have any idea of how good the security is, nor any chance to criticise and improve it. This is something about which I’ve written many times, including in my articles Disbelieving the many eyes hypothesis and Trust & choosing open source. Luckily, it seems that the majority of contact tracing apps are open source, but please be careful, and reject any which are not.

2. Centralised or distributed

In order to make sense of all the data that these apps collect, there needs to be a centralised[2] store where it can be processed, right? It’s common sense.

Actually, no. Although managing and processing data in one place can be much easier, there are ways to store data in a distributed manner, and allow the sorts of processing needed for contact tracing to take place. It may be more complex, but it also makes it much, much more difficult for governments, corporations or malicious actors to misuse this information. And we should be clear that this will be what happens if the data is made available. Maybe the best governments and the best corporations will be well-behaved by their standards, but a) those are not necessarily the standards that I or others will endorse and b) what about malicious actors and governments and corporations which are not “the best”?

3. Location or proximity tracking

This might seem like another obvious choice: if you want to be finding out who was in contact with whom, then the way to do it is to see who was where, and when. GPS tracking – and associated technologies like wifi access point location tracking – combined with easily available time data, would give the ability to work out who was in a particular place at the same time as other people. This is true, but it also provides enormous opportunities for misuse, particularly when the data is held centrally (see above). An alternative is to use sensors like Bluetooth or NFC[3], to allow phones to collect information about other phones (or devices) with which they have been in contact and when. This is more easily anonymised – or pseudonymised – allowing information to be passed to the owners of those phones, while at the same time being more difficult for governments, corporations and malicious actors to misuse.
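To make that concrete, here’s a rough sketch of my own – only loosely modelled on the published proposals, and the key size, interval scheme and names are my assumptions, not any project’s actual protocol – of deriving rotating pseudonymous IDs for broadcast:

    // Rotating proximity IDs: each phone keeps a secret daily key and
    // broadcasts short-lived IDs derived from it. Observers can log the IDs
    // without learning who broadcast them. Uses the sha2 crate.
    use sha2::{Digest, Sha256};

    fn ephemeral_id(daily_key: &[u8; 32], interval: u32) -> [u8; 16] {
        let mut hasher = Sha256::new();
        hasher.update(daily_key);
        hasher.update(interval.to_le_bytes());
        let digest = hasher.finalize();
        let mut id = [0u8; 16];
        id.copy_from_slice(&digest[..16]);
        id // broadcast over Bluetooth; meaningless without the daily key
    }

    fn main() {
        let daily_key = [7u8; 32]; // in reality: freshly generated random bytes
        for interval in 0..3u32 {
            println!("interval {}: {:02x?}", interval, ephemeral_id(&daily_key, interval));
        }
    }

The point is that the IDs a phone broadcasts can only be linked together by someone who holds the daily key – and that key stays on the phone unless its owner chooses to reveal it.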

There are other issues to consider, one of which is that these sensors were not designed for this type of use, and we may be sacrificing accuracy if we choose this option. On the other hand, many interactions between people occur indoors, where GPS is much less effective anyway, and these types of technologies may help.

You could argue that this measurement is not about “openness” in itself, but it is a key indicator to whether the information collected can be used in ways which are far from open.

Conclusion

There are many other questions we can ask about Covid-19 contact tracing apps, some of which are related to openness, and some of which are not. These include:

  • Coverage
    • not all demographics have – or use – phones as much as the rest of the population, including the poor, the elderly, and certain religious groups. How effective will such projects be if they have reduced access to these groups?
    • older devices may have less accurate sensors, or may not have some of the capabilities required by the apps. What is more, there may be a correlation between use of these older devices and some of the demographics noted above.
    • some people rarely update the apps on their phones, so even if they load an initial version of an app, newer versions, with functionality or security improvements, are likely to be unequally distributed across the set of devices.
  • Removal – how easy will it be to remove the application fully, what are the consequences of not doing so, and how likely are people to do so anyway[4]?
  • Will use of these apps be mandatory or voluntary? If the former, there are serious concerns about civil or human rights, not to mention the problems noted above about coverage.

All of these questions are important, but not directly related to the question of the “openness” of the apps and projects. However, we have, right now, some great opportunities to work with and influence some really important projects for public health and well-being, and I believe that it is important that we consider the questions I’ve raised about openness before endorsing, installing or using any of the apps that are being created.


0 – or “cell”, if you’re in North America.

1 – yes, they chose the same acronym. Yes, it is confusing.

2 – or, I suppose, “centralized”, depending on your geography.

3 – “Near Field Communication” – the same capability used when you do contactless payment with your phone or credit/debit card.

4 – how many apps do you still have on your phone that you’ve not even opened for 3 months? Yup, me too.

Post-Covid, post-open?

We are inventive, we are used to turning technologies to good.

The world of lockdown to which we’re becoming habituated at the moment has produced some amazing upsides. The number of people volunteering, the resurgence of local community initiatives, the selfless dedication of key workers across the world and the recognition of their sacrifice by the general public are among the most visible. As many regular readers of this blog are likely to be aware, there has also been an outpouring of interest and engagement in software- and hardware-related projects to help, from infection-tracking apps to 3D-printing of PPE[0]. Companies have made training and educational materials available for free, and there are attempts around the world to engage and contribute to the public commonwealth.

Sadly, not all of the news is good. There has been a rise in phishing attacks, and the lack of appropriate or sufficient security in commonly-used apps such as Zoom has become frighteningly evident[1]. There’s an article to write here about the balance between security, usability and cost, but I’m going to save that for another day.

Somewhere in the middle, between the obvious positives and obvious negatives, there are some developments which most of us probably accept as necessary, but which aren’t things that we’d normally welcome. Beyond the obvious restrictions on movement and public gatherings, there are a number of actions which governments, in particular, are taking which have generally negative impacts on human rights and civil liberties, as outlined in this piece by The Guardian. The article lists numerous examples of governments imposing, or considering the imposition of, measures which would normally be quickly attacked by human rights groups, and resisted by most citizens. Despite the headline, which suggests that the article will deal with how difficult these measures will be to remove after the end of the crisis, there is actually little discussion, beyond a note that “[w]hether that surveillance is eventually rolled back will depend on public oversight.”

I think that we need to go beyond just “oversight” and start planning now for public action. In the communities in which I live and work, there is a general expectation that the world – software, management, government, data – is becoming more, not less, open. We are in grave danger of losing that openness even once the need for these government measures diminishes. Governments – who will see the wider intelligence-gathering and control opportunities of these changes – will espouse the view that “we need these measures in place in order to be able to react quickly if the same thing happens again”, and, if we’re not careful, public sentiment, bruised and bloodied by the pandemic, will quietly acquiesce, and we will see improvements in human and civil rights rolled back decades, and damaged further by the availability of cheap, mobile, networked technology.

If we believe that openness is a public good, then we need to think how to counter the arguments which we will hear from governments, and be ready to be vocal – not just with counter-arguments, but with counter-proposals. This pandemic is unlike either of the World Wars of the 20th Century, when a clear ending was marked, and there was the opportunity (sadly denied to many citizens of the former USSR) to regain civil liberties and roll back the restrictions of the war years. Nor is it even like the aftermath of 9/11, that event which has impacted the intelligence and security landscape of the past two decades, where there is (was?) at least a set of (posited) human foes to target. In the case of the Covid-19 pandemic, the “enemy” is amorphous and will be around for decades to come. The measures to combat it – and its successors – will only be slowly reduced, and some will not be.

We need to fight against those measures which are unnecessary, and we need to find alternatives – transparent, public alternatives – to measures which may have some positive effects, but whose overall impact on society and human rights is clearly negative. In an era where big data is becoming pervasive, and the tools to mine it tractable, we need to provide international mechanisms to share and use that data in ways which do not benefit any single government, bloc, or section of society. We are inventive, we are used to turning technologies to good. This is the time we need to do it, and do it quickly. We can make a difference by being open, but we need to start now.


0 – Personal Protective Equipment.

1 – although note that the company is reported to be making improvements to at least one area of concern to some – routing of traffic through China.

No security without an architecture

Your diagrams don’t need to be perfect. But they do need to be there.

I attended a virtual demo this week. It didn’t work, but none of us was stressed by that: it was an internal demo, and these things happen. Luckily, the members of the team presenting the demo had lots of information about what it would have shown us, and a particularly good architectural diagram to discuss. We’ve all been in the place where the demo doesn’t work, and all felt for the colleague who was presenting the slidedeck, and on whose screen a message popped up a few slides in, saying “Demo NO GO!” from one of her team members.

After apologies, she asked if we wanted to bail completely, or to discuss the information they had to hand. We opted for the latter – after all, most demos which aren’t foregrounding user experience components don’t show much beyond terminal windows that most of us could fake up in half an hour or so anyway. She answered a couple of questions, and then I piped up with one about security.

This article could have been about the failures in security in a project which was showing an early demo: another example of security being left till late (often too late) in the process, at which point it’s difficult and expensive to integrate. However, it’s not. It was clear that thought had been given to specific aspects of security, both on the network (in transit) and in storage (at rest), and though there was probably room for improvement (and when isn’t there?), a team member messaged me more documentation during the call which allowed me to understand the choices the team had made.

What this article is about is the fact that we were able to have a discussion at all. The slidedeck included an architecture diagram showing all of the main components, with arrows showing the direction of data flows. It was clear, and colour-coded to show the provenance of the different components: which were sourced from external projects, which from internal ones, and which were new to this demo. The people on the call – all technical – were able to see at a glance what was going on, and the team lead, who was providing the description, had a clear explanation for the various flows. Her team members chipped in to answer specific questions or to provide more detail on particular points. This is how technical discussions should work, and there was one thing in particular which pleased me (beyond the fact that the project had thought about security at all!): that there was an architectural diagram to discuss.

There are not enough security experts in the world to go around, which means that not every project will have the opportunity to get every stage of its design pored over by a member of the security community. But when it’s time to share, a diagram is invaluable. I hate to think about the number of times I’ve been asked to look at a project in order to give my thoughts about its security aspects, only to find that all that’s available is a mix of code and component documentation, with no explanation of how it all fits together and, worse, no architecture diagram.

When you’re building a project, you and your team are often so into the nuts and bolts that you know how it all fits together, and can hold it in your head, or describe the key points to a colleague. The problem comes when someone needs to ask questions of a different type, or review the architecture and design from a different slant. A picture – an architectural diagram – is a great way to educate external parties (or new members of the project) in what’s going on at a technical level. It also has a number of extra benefits:

  • it forces you to think about whether everything can be described in this way;
  • it forces you to consider levels of abstraction, and what should be shown at what levels;
  • it can reveal assumptions about dependencies that weren’t previously clear;
  • it is helpful to show data flows between the various components;
  • it allows for simpler conversations with people whose first language is not that of your main documentation.

To be clear, this isn’t just a security problem – the same can go for other non-functional requirements such as high availability, data consistency, performance or resilience – but I’m a security guy, and this is how I experience the issue. I’m also aware that I have a very visual mind, and this is how I like to get my head around something new, but even for those who aren’t visually inclined, a diagram at least offers the opportunity to orient yourself and work out where you need to dive deeper into code or execution. I also believe that it’s next to impossible for anybody to consider all the security implications (or any of the higher-order emergent characteristics and qualities) of a system of any significant complexity without architectural diagrams. And that includes the people who designed the system, because no system exists on its own (or there’s no point to it), so you can’t hold all of those pieces in your head for any length of time.

I’ve written before about the book Building Evolutionary Architectures, which does a great job in helping projects think about managing requirements which can morph or change their priority, and which, unsurprisingly, makes much use of architectural diagrams. Enarx, a project with which I’m closely involved, has always had lots of diagrams, and I’m aware that there’s an overhead involved here, both in updating diagrams as designs change and in considering which abstractions to provide for different consumers of our documentation, but I truly believe that it’s worth it. Whenever we introduce new people to the project or give a demo, we ensure that we include at least one diagram – often more – and when we get questions at the end of a presentation, they are almost always preceded with a phrase such as, “could you please go back to the diagram on slide x?”.

I nearly published this article without adding another point: this is part of being “open”. I’m a strong open source advocate, but source code isn’t enough to make a successful project, or even, I would add, to be a truly open source project: your documentation should not just be available to everybody, but accessible to everyone. If you want to get people involved, then providing a way in is vital. But beyond that, I think we have a responsibility (and opportunity!) towards diversity within open source. Providing diagrams helps address four types of diversity (at least!):

  • people whose first language is not the same as that of your main documentation (noted above);
  • people who have problems reading lots of text (e.g. those with dyslexia);
  • people who think more visually than textually (like me!);
  • people who want to understand your project from different points of view (e.g. security, management, legal).

If you’ve ever visited a project on github (for instance), with the intention of understanding how it fits into a larger system, you’ll recognise the sigh of relief you experience when you find a diagram or two on (or easily reached from) the initial landing page.

And so I urge you to create diagrams, both for your benefit, and also for anyone who’s going to be looking at your project in the future. They will appreciate it (and so should you). Your diagrams don’t need to be perfect. But they do need to be there.


2019: a year of Enarx

We have big plans for demos and more in 2020

This year has, for me, been pretty much all about the Enarx project.  I’ve had other work that I’ve been doing, including meeting with customers, participating in work with IBM (who acquired the company I work for, Red Hat, in July), looking at Kubernetes security, interacting with partners and a variety of other important pieces, but it’s been Enarx that has defined 2019 for me from a work point of view.

We started off the year with a belief that we could do something, and a challenge from our internal leadership to prove that it was possible.  We did that with a demo on AMD’s SEV chipset at Red Hat Summit in Boston, MA in May, and an announcement of the project on this blog.  We followed up with a demo on Intel’s SGX chipset at Open Source Summit Europe in Lyon in October.  I thought I would mention some of the most important components for the development (in the broadest sense) of Enarx this year.

Team

Enarx is not mine: far from it.  I’m proud to be counted one of the co-founders of the project with Nathaniel McCallum, but we wouldn’t be where we are without a broader team, and as an open source project, it belongs to everyone who contributes and to everyone who uses it.  You’ll find many of the members on the contributors page, but not everybody is up there yet, and there have been some very important people whose contribution has been advice, support and sponsorship of the project both within Red Hat and outside it.  I don’t have permission to mention everybody’s name, so I’m going to play it safe and mention none of them.  You know who you are, and we really appreciate your time.

Use cases – and partners

One of the most important things that we’ve done this year is to work out how people might want to use Enarx “in the wild”, as it were, and to perform some fairly detailed analysis and write-ups.  Not enough of these are externally available yet, which is down to me, but the fact that we had done the work was vital in finding partners who are actually interested in using Enarx for real.  I can’t talk about any of these in public yet, but we have some really interesting use cases from a number of multi-national organisations of whom you will definitely have heard, as well as some smaller start-ups about whom you may well be hearing more in the future.  Having this kind of interest was vital to get buy-in to the project and showed that Enarx wasn’t just a flight of fancy by a bunch of enthusiastic engineers.

Looking outside

The most significant event in the project’s year was the announcement of the Confidential Computing Consortium at the Linux Foundation’s Open Source Summit this year.  We at Red Hat realised that Enarx was a great match for this new group, and were very pleased when it became a premier member at the official launch in October.  At time of writing, there are 21 members, and it’s becoming clear that the consortium has identified an area of concern and interest for the wider industry: this is another great endorsement of the aims and principles of Enarx.

Joining the Consortium hasn’t been the only activity in which we’ve been involved this year.  We’ve spoken at conferences, had articles published (on Alice, Eve and Bob, on now + Next and on Opensource.com), spoken to press, recorded webcasts and more.  Most important (arguably), we have hex stickers (if you’re interested, get in touch!).

Last, but not least, we’ve gone external.  From being an internal project (though we always had our code as open source), we’ve taken a number of measures to try to encourage and simplify involvement by non-Red Hat contributors – see 7 tips for kicking off an open source project for a little more information.

Architecture and code

What else?  Oh, there’s code, and an increasingly mature set of architectures for the various components.  We absolutely plan to make all of this externally visible, and the reason that we haven’t yet is that we’re just running to stand still at the moment: there’s just so much to do.  Our focus is on getting code out there for people to use and contribute to themselves and, without giving anything away, we have some pretty big plans for demos and more in 2020.

Finally

There’s one other thing that’s been important, of course, and that’s the fact that I’m writing a book for Wiley on trust, but I actually see that as very much related to Enarx.  Fundamentally, although the technology is cool, and we think that the Enarx project meets an existing need, both Nathaniel and I believe that there’s a real opportunity for it to change how people manage trust for workloads in the cloud, in IoT, at the Edge and wherever else sensitive data and algorithms need to be executed.

This blog is supposed to be about security, and I’m strongly of the opinion that trust is a very important part of that.  Enarx fits into that, so don’t be surprised to see more posts around trust and about Enarx over the coming year.  Please keep an eye out here and at https://enarx.io for the latest information.

 

 

7 tips for kicking off an open source project

It’s not really about project mechanics at all

I’m currently involved – heavily involved – in Enarx, an open source (of course!) project to allow you to run sensitive workloads on untrusted hosts.  I’ve been involved in various open source projects over the years, but this is the first for which I’m one of the founders.  We’re at the stage now where we’ve got a fair amount of code, quite a lot of documentation, a logo and (important!) stickers.  The project should hopefully be included in a Linux Foundation group – the Confidential Computing Consortium – so things are going very well indeed.  We’re at the stage where I thought it might be useful to reflect on some of the things we did to get things going.  To be clear, Enarx is a particular type of project: one that we believe has commercial and enterprise applications.  It’s also not mature yet, and we’ll have hurdles and challenges along the way.  What’s more, the route we’ve taken won’t be right for all projects, but hopefully there’s enough here to give a few pointers to other projects, or to people considering starting one up.

The first thing I’d say is that there’s lots of help to be had out there.  I’d start with Opensource.com, where you’ll find lots of guidance.  I’d then follow up by saying that however much of it you follow, you’ll still get things wrong.  Anyway, here’s my list of things to consider.

1. Aim for critical mass

I’m very lucky to work at the amazing Red Hat, where everything we do is open source, and where we take open source and community very seriously.  I’ve heard it called a “critical mass” company: in order to get something taken seriously, you need to get enough people interested in it that it’s difficult to ignore. The two co-founders – Nathaniel McCallum and I – are both very enthusiastic about the project, and have spent a lot of time gaining sponsors within the organisation (you know who you are, and we thank you – we also know we haven’t done a good enough job with you on all occasions!), and “selling” it to engineers to get them interested enough that it was difficult to stop.  Some projects just bobble along with one or two contributors, but if you want to attract people and attention, getting a good set of people together who can get momentum going is a must.

2. Create a demo

If you want to get people involved, then a demo is great.  It doesn’t necessarily need to be polished, but it does need to show that what you’re doing is possible, and that you know what you’re doing.  For early demos, you may just be showing command line output: that’s fine, if what you’re providing isn’t a UI product.  Being able to talk about what you’re doing, and to convey both your passion and the importance of the project, is a great boon.  People like to be able to see or experience something, and it’s much easier to communicate your enthusiasm if they have something real that expresses it.
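
To make the point concrete, here’s a toy sketch – entirely hypothetical, and nothing to do with the actual Enarx codebase – of the sort of bare-bones command line demo I mean: a few lines of Rust that do nothing more than print the stages of an imaginary workflow as they “run”.

```rust
// A toy command line "demo" of the kind described above: no UI, just
// output that shows the moving parts doing something.  The stage names
// are invented for illustration -- this is not Enarx code.
use std::{thread, time::Duration};

fn main() {
    let steps = [
        "loading workload",
        "checking host capabilities",
        "running workload",
        "collecting results",
    ];
    for step in steps {
        println!("demo: {step}...");
        thread::sleep(Duration::from_millis(300)); // pretend to do some work
    }
    println!("demo: done");
}
```

Even something this crude gives people something to watch while you talk, which is really all an early demo needs to do.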

3. Choose a licence

Once you have code, and it’s open source, you want other people to be able to contribute.  This may seem like an unimportant step, but selecting an appropriate open source licence[1] will allow other people to contribute on well-understood and defined terms, making it easier for them, and easier for the organisations for which they work to allow them to be involved.
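
As a concrete (and purely illustrative) example – Apache-2.0 here is a stand-in for whatever licence you actually settle on – many projects put a machine-readable SPDX identifier at the top of every source file, so that the terms travel with the code:

```rust
// SPDX-License-Identifier: Apache-2.0
//
// A machine-readable SPDX identifier at the top of each source file
// (Apache-2.0 is just an example) makes the licensing terms
// unambiguous, both for human contributors and for automated
// licence-scanning tools.

fn main() {
    println!("Licensing sorted: contributors know where they stand.");
}
```

You’ll still want a LICENSE file at the root of the repository, but the per-file identifier means that contributors and tooling can check the terms of any individual file without hunting around.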

4. Get documentation

You might think that developer documentation is the most important to get out there – otherwise, how will other people get involved and start coding?  I’d disagree, at least to start with.  For a small project, you can probably scale to a few more people just by explaining what the code does, what it should do, and what’s missing.  However, if there’s no documentation available to explain what the project is for, and how it’s going to help people, then why would anyone bother even looking at it?  This doesn’t need to be polished marketing copy, and it doesn’t need to be serious, but it does need to convey to people why they should care.  It’s also going to help you with the first point I mentioned, attaining critical mass, as being able to point to documentation, use cases and the rest will help convince people that you’ve thought through the point of your project.  We’ve used a GitHub wiki as our main documentation hub, and we try to update it with new information as we generate it.  This is an area, to be clear, where we could do better.  But at least we know that.

5. Be visible

People aren’t going to find out about you unless you’re visible.  We were incredibly lucky in that, just as we were beginning to reach a level of critical mass, the Confidential Computing Consortium was formed, and we immediately had a platform to increase our exposure.  We have a Twitter account, I publish articles on my blog and at Opensource.com, we’ve been lucky enough to have the chance to publish on Red Hat’s now + Next blog, I’ve done interviews with the press and we speak at conferences wherever and whenever we can.  We’re very lucky to have these opportunities, and it’s clear that not all of these approaches are appropriate for all projects, but make use of what you can: the more people who know about you, the more people can contribute.

6. Be welcoming

Let’s assume that people have found out about you: what next?  Well, they’re hopefully going to want to get involved.  If they don’t feel welcome, then any involvement they have will soon taper off.  Yes, you need documentation (and, after a while, technical documentation, no matter what I said above), but you also need ways for them to talk to you, and for them to feel that they are valued.  We have Gitter channels (https://gitter.im/enarx/), and our daily stand-ups are open to anyone who wants to join.  Recently, someone opened an issue on our issues database, and during the conversation on that thread it transpired that our daily stand-up time doesn’t work for them, given their timezone, so we’re going to ensure that at least one stand-up a week does, and we’ve assured them that we’ll accommodate them.

7. Work with people you like

I really, really enjoy meeting and working with the members of the Enarx project team.  We get on well, we joke, we laugh and we share a common aim: to make Enarx successful.  I’m a firm believer in doing things you enjoy, where possible.  Particularly in the early stages of a project, you need people who are enthusiastic and who enjoy working closely together – even if they’re separated by thousands of kilometres[2] geographically.  If they don’t get on, there’s a decent chance that your and their enthusiasm for the project will falter, that the momentum will be lost, and that the project will end up failing.  You won’t always get the chance to choose those with whom you work, but if you can, then choose people you like and get on with.

Conclusion – “people”

I didn’t realise it when I started writing this article, but it’s not really about project mechanics at all: it’s about people.  If you read back, you’ll find the importance of people visible in every tip, even including the one about choosing a licence.  Open source projects aren’t really about code: they’re about people, how they share, how they work together, and how they interact.

I’m certain that your experience of open source projects will vary, and I’d be very surprised if everyone agrees about the top seven things you should do for project success.  Arguably, Enarx isn’t a success yet, and I shouldn’t be giving advice at this stage of our maturity.  But when I think back to all of the open source projects that I can think of which are successful, people feature strongly, and I don’t think that’s a surprise at all.


1 – or “license”, if you’re from the US.

2 – or, in fact, miles.