Open source projects, embargoes and NDAs

Vendors may only disclose to project members under an NDA.

This article is a companion piece to one that I wrote soon after the advent of Meltdown and Spectre, Meltdown and Spectre: thinking about embargoes and disclosures, and another which I wrote more recently, Security disclosure or vulnerability management?. I urge you to read those first, as they provide background and context to this article which I don’t plan to reiterate in full. The focus of this article is to encourage open source projects to consider a specific aspect of the security disclosure/vulnerability management process: people.

A note about me and the background for this article. I’m employed by Red Hat – a commercial vendor of open source software – and I should make the standard disclosure that the views expressed in this article don’t necessarily reflect those of Red Hat (though they may!). The genesis of this article was a conversation I had as part of my membership of the Confidential Computing Consortium, of which Enarx (a project of which I’m a co-founder) is a member. During a conversation with other members of the Technical Advisory Board, we were discussing the importance of having a vulnerability management process (or whatever you wish to call it!), and a colleague from another organisation raised a really interesting point: what about NDAs?

What’s an NDA?

Let’s start with the question: “what’s an NDA”? “NDA” stands for Non-Disclosure Agreement, and it’s a legal document signed by one or more individuals or organisations agreeing not to disclose confidential material[1]. NDAs have a bad reputation in the wider world, as they’re sometimes used as “gagging orders” within legal settlements by powerful people or organisations in cases of, for instance, harassment. They are, however, not all bad! In fact, within the technology world, at least, they’re a very important tool for business.

Let’s say that Company A is a software vendor, and creates software that works with hardware (a video card, let’s say) from Company B. Company B wants to ensure that when its new product is released, there is software ready to make use of it, so that consumers will want to use it. In order to do that, Company A will need to know the technical specifications of the video card, but Company B wants to ensure that these aren’t leaked to the press – or competitors – before release. NDAs to the rescue! Company A can sign an NDA with Company B, agreeing not to disclose technical details about Company B’s upcoming products and plans to anyone outside Company A until the plans are public. If it’s a “two-way NDA”, and Company B signs it as well, then Company A can tell Company B about their plans for selling and marketing their new software, for instance, to help provide business partnership opportunities. If either party breaks the terms of the agreement, then it’s time for expensive legal proceedings (which nobody wants) and the likelihood that any trust between the two will break down, leading to repercussions later.

NDAs are common in this sort of context, and employees of the companies bound by NDAs are expected to abide by them, and also not to “carry over” confidential information to a new employer if they get a new job.

What about open source?

One of the things about open source is that it’s, well, “open”. So you’d think that NDAs would be a bad thing for open source. I’d argue, however, that they have actually served a very useful purpose in the development and rise of the open source ecosystem. The reason for this is that companies are often reluctant to have individuals sign NDAs with them, and individuals are often reluctant to sign NDAs. From the company’s point of view, NDAs are more difficult to enforce against individuals, and from the individual’s point of view, the legal encumbrance of signing an NDA – or multiple NDAs – may seem daunting.

Companies, however, are used to NDAs, and have legal departments to manage them. Back in the 1990s, I remember there being a long lag time between new hardware becoming available, and the open source support for it emerging. What did emerge was typically buggy, often reverse-engineered, and rarely took advantage of the newest features offered by the hardware. From around 2000, this has changed significantly: you can expect the latest hardware capabilities to emerge into, say, the Linux kernel, pretty much as soon as the supporting hardware is available. This is to some extent because there are now a large number of companies who employ engineers to work on open source. The hardware vendors have NDAs with these companies, which means that the engineers can find out ahead of time about new features, and have support ready for them when they launch.

There’s a balancing act here, of course, because sometimes providing code implementations and making them open can give away details of hardware features before the hardware vendors are ready to disclose them. This means that some of the engineering needs to take place “behind closed doors” and only be merged into projects once the hardware vendor is ready to announce their specifications to the world. This isn’t as open as one might like. On balance, though, I’d argue that this approach has provided a net benefit to the open source community, allowing hardware vendors to keep competitive information confidential, whilst allowing the open source community access to the latest and greatest hardware features at, or soon after, release.

NDAs and security

So far, we’ve not mentioned security: where does it come in? Well, say that you’re a hardware vendor, and you discover (or someone discloses to you) that there’s a vulnerability in one of your products. What do you do? Well, you don’t want to announce it to the world until a fix is ready (remember the discussion of embargoes in my previous article), but if there are open source projects which need to be involved in getting a fix out, then you need to be able to talk to them.

The key word in that last sentence is “them”. Most – one could say all – successful open source projects have members employed by a variety of different companies and organisations, as well as some who are not employed by any – or work on the project outside of any employer-related time or agreement. The hardware vendor is extremely keen to ensure that any embargo that is created around the work on this vulnerability is adhered to, and the standard way to enforce (or at least legally encourage) such adherence is through NDAs. This concern is understandable, and justified: there have been some high profile examples of information about vulnerabilities being leaked to the press before fixes were fully ready to be rolled out.

The chances, however, of the vendor having an NDA with all of the organisations and companies represented by engineers on a particular project is likely to be low, and there may also be individuals who are not associated with any organisation or company in the context of this project. Setting up NDAs can be slow and costly – and sometimes impossible – and the very fact of setting up an NDA might point to a vulnerability’s existence! Vendors, then, may make the choice only to disclose to open source project members who are governed by an NDA – whether through their employer or an NDA with them as an individual.

Process and NDAs

What does this mean for open source projects? Well, it means that if you’re working with specific hardware vendors – or are even dependent on specific software vendors – then you may need to think very carefully about who is on your security/vulnerability response/management team (let’s refer to this as the “VMT” – “Vulnerability Management Team” – for simplicity). You may need to ensure that you have competent security engineers with the relevant expertise who are also under an NDA (corporate or individual) with the hardware vendors with whom you work. Beyond that, you need to ensure that they are on your VMT, or can be tasked to your VMT if required.

This may be a tough ask, particularly for small projects. However, the world of enterprise software is currently going through a process of realising quite how important certain projects are to the underpinnings of their operation, and this is a good time to consider whether core open source projects with security implications need more support from large organisations. Encouraging more involvement from organisations should help address this issue – that vendors may not talk to individuals on your project who are not under NDA with them. Of course, getting more contributors can have both positive and negative effects in terms of governance and velocity, but you really do need to think about what would happen if your project were affected by a hardware (or upstream software) vulnerability and were unable to react because nobody on the project could be involved in the discussions.

The other thing you need to do is ensure that the process you have in place for reporting vulnerabilities is flexible enough to manage this issue. There should be at least one person on the VMT to whom reports can be made who is under the relevant NDA(s): remember that for a vendor, even the act of getting in touch with a project may be enough to give away information about possible vulnerabilities. Although having an anonymous VMT may seem attractive – so that you are not giving away too much information about your project’s governance – there can be times when a vendor (or individual) wishes to contact a named individual. If that individual is not part of the VMT, or has no way to trigger the process to deploy an appropriate member of the project to the VMT, then you are in trouble.

I hope that more open source projects will put VMTs and associated processes in place, as I wrote in my previous article. I believe that it’s also important that we consider the commercial and competitive needs of the ecosystem in which we operate, and that means being ready to react within the context of NDAs, embargoes and vulnerabilities. If you are part of an open source project, I urge you to discuss this issue – both internally and also with any vendors whose hardware or software you use.

Note: I would like to thank the members of the Technical Advisory Board of the CCC for prompting the writing of this article, and in particular the member who brought up the issue, which (to my shame) I had not considered in detail before. I hope this treatment of questions arising meets with their approval!


1 – Another standard disclosure: I am not a lawyer! This is my personal description, and should not be used as the basis for any legal advice, etc., etc..

Security disclosure or vulnerability management?

Which do I need for an open source project?

This article is a companion piece to one I wrote soon after the advent of Meltdown and Spectre, Meltdown and Spectre: thinking about embargoes and disclosures, and I urge you to read that first, as it provides background and context to this article which I don’t plan to reiterate in full.

In that previous article, I mentioned that many open source projects have a security disclosure process, and most of the rest of the article was basically a list of decisions and steps that you might find in such a process. There’s another term that you might hear, however, which is a Vulnerability Management Process, or “VMP”. While a security disclosure process can be defined as a type of VMP, there are subtle differences to what these two processes might look like, and what they might mean, or be seen to mean, so I think it’s worth spending a little time examining possible differences before we continue.

I just did a “Google fight”[1] for “security disclosure” vs “vulnerability management”, and the former “won” by a ratio of around 5:3. I suspect that this is largely because security disclosures tend to sound exciting and are more “sexy” for headlines than are articles about managing things – even if those things are vulnerabilities.

I’m torn between the two, because both terms highlight important aspects – or reflect different viewpoints – of the same important basic idea: if a bad thing is discovered that involves your product or project, then you need to fix it. I’m going to come at this, as usual for me, from the point of view of open source, so let me give my personal feelings about each.

Security disclosure process

First off, I like the fact that the word “security” is front and centre here. Saying security focusses the mind in ways which “vulnerability” may not, and while a vulnerability may be the thing that we’re addressing, the impact that we’re trying to mitigate is on the security associated with the project or product. The second thing that I like about this phrase is the implication that disclosure is what we are aiming for. Now, this fits well with an open source mindset, but I wonder whether the accent is somewhat different for those who come from a more proprietary background. Where I read “a process to manage telling people about a security problem”, I suspect that others may read “a process to manage the fact that someone has told people about a security problem.” I urge everyone to move to the first point of view, for two reasons:

  1. I believe that security is best done in the open – though we need to find ways to protect people while fixes are being put in place and disseminated;
  2. if we don’t encourage people to come to us – project maintainers, product managers, architects, technical leads – first, in the belief that fixes will be managed as per point 1 above, and also that credit will be given where it’s due, then those who discover vulnerabilities will have little incentive to follow the processes we put in place. If this happens, they are more likely to disclose to the wider world before us, making providing and propagating fixes in a timely fashion much more difficult.

Vulnerability management process

This phrase shares the word “process” with the previous one, and, combined with the word “management”, conveys the importance of working through the issue at hand. It also implies to me, at least, that there is other work to be done around the vulnerability rather than just letting everybody know about it (“disclosure”). This may seem like a bad thing (see above), but on the other hand, acknowledging that vulnerabilities do need managing seems to be a worthwhile thing to signal. What worries me, however, is that managing can be seen to imply “sweeping under the carpet” – in other words, making the problem go away.

Tied with this is something about the word “vulnerability” in this context which holds both negative and positive connotations. The negative is that one might say “it’s just another vulnerability”, underplaying the security aspect of what it represents. The positive is that vulnerabilities are not all of the same severity – some aren’t that serious, compared to others – and it’s important to recognise this, and to have a process which allows you to address problems of all severities; part of that process is to rank and probably prioritise them – often known as “triaging”, a term borrowed from the medical world.

A third option?

There’s at least one alternative. Though it scores lower than either “vulnerability management” or “security disclosure” in a Google fight (which, as I mentioned in the footnotes, isn’t exactly a scientific measure), a “vulnerability disclosure” process is another option. Although it doesn’t capture the “security” aspect, it does at least imply disclosure, which I like.

Which do I need?

My main focus for this article – and my passion and background – is open source, so the next question is: “which do I need for an open source project?” The answer, to some degree, is “either” – or “any of them”, if you include the third option. Arguably, as noted above, a “security disclosure process” is a type of vulnerability management process anyway, but I think that in the open source world particularly, the implication of working towards disclosure – towards openness – is important. The open source community is very sensitive to words, and any suggestion of cover-up is unlikely to be welcomed, even if such an implication was entirely unintentional.

One point that I think it’s worth making is that I believe that pretty much any component or library that you are working on may have security implications down the road[2]. This means that there should be some process in place to deal with vulnerabilities or security issues. To be clear: many of these will be discovered by contributors to the project themselves, rather than external researchers or bug-hunters, but that doesn’t mean that such vulnerabilities are a) less important than externally found ones; OR b) less in need of a process for dealing with them.

A good place to look at what questions to start asking is my previous article, but I strongly recommend that every open source project should have some sort of process in place to deal with vulnerabilities or other issues which may have a security impact. I’ve also written a follow-up article with some options about how to deal with different types of disclosure, vulnerability and process, which you can find here: Open source projects, embargoes and NDAs.


1 – try it: Googlefight.com – it’s a fun (if unscientific) method to gauge the relative popularity of two words or terms on the web.

2 – who would ever have thought that font rendering software could lead to a critical security issue?

More Rusty thoughts

I do feel that I’m now actually programming in Rust. And I like it.

I wrote an article a couple of weeks ago – 5 Rust reflections (from Java) – about learning Rust, and specifically, about moving to Rust from Java. I talked about five particular points:

  1. Rust feels familiar
  2. References make sense
  3. Ownership will make sense
  4. Cargo is helpful
  5. The compiler is amazing

I absolutely stand by all of these, but I’ve got a little more to say, because I now feel like a Rustacean[1], in that:

  • I don’t feel like programming in anything else ever again;
  • I’ve moved away from simple incantations.

What do I mean by these two statements? Well, the first is pretty simple: Rust feels like the place to be. It’s well-structured, it’s expressive, it helps you do the right thing[2], it’s got great documentation and tools, and there’s a fantastic community. And, of course, it’s all open source, which is something that I care about deeply.

And the second thing? Well, I decided that in order to learn Rust properly, I should take an existing project that I had originally written in Java and reimplement it in hopefully fairly idiomatic Rust. Sometime in the middle of last week, I started fixing mistakes – and making mistakes – around implementation, rather than around syntax. And I wasn’t just copying text from tutorials or making minor, seemingly random changes to my code based on the compiler output. In other words, I was getting things to compile, understanding why they compiled, and then just making programming mistakes[3].

This is a big step forward. When you start learning a language, it’s easy just to copy and paste text that you’ve seen elsewhere, or fiddle with unfamiliar constructs until they – sort of – work. Using code – or producing code – that you don’t really understand, but which seems to work, is sometimes referred to as “using incantations” (from the idea that most magicians in fiction, film and gaming recite collections of magic words which “just work”, without really understanding what they’re doing or what the combination of words actually means). Some languages[4] are particularly prone to this sort of approach, but many – most? – people learning a new language will be prone to doing this when they start out, just because they want things to work.

And last night, I was up till 1am implementing a new feature – accepting command-line input – which I really couldn’t get my head round. I’d spent quite a lot of time on it (including looking for, and failing to find, some appropriate incantations), and then asked for some help on a rust-lang channel inhabited by some people I know. A number of people had made some suggestions about what had been going wrong, and one person in particular was enormously helpful in picking apart some of the suggestions so that I understood them better. He explained quite a lot, but finished with “I don’t know the return type of the hash function you’re calling – I think this is a good spot for you to figure this piece out on your own.”

This was just what I needed – and what any learner of anything, including programming languages, needs. So when I had to go downstairs at midnight to let the dog out, I decided to stay down and see if I could work things out for myself. And I did. I took the suggestions that people had made, understood what they were doing, tried to divine what they should be doing, worked out how they should be doing it, and then found the right way of making it happen.
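
For what it’s worth, and this is emphatically not the code I ended up with, just a minimal sketch of the general shape (your hash function, and its return type, may well differ), reading a command-line argument and hashing it with the standard library looks something like this:

    use std::collections::hash_map::DefaultHasher;
    use std::env;
    use std::hash::{Hash, Hasher};

    fn main() {
        // Take the first command-line argument, or complain if there isn't one.
        let input = env::args().nth(1).expect("please supply an argument");

        // Hash it. DefaultHasher::finish() returns a u64, and knowing that
        // return type is half the battle when wiring the pieces together.
        let mut hasher = DefaultHasher::new();
        input.hash(&mut hasher);
        let digest: u64 = hasher.finish();

        println!("hash of {:?} is {}", input, digest);
    }

Run it with cargo run -- some-text and you get a number back; change the input, and the number changes.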

I’ve still got lots to learn, and I’ll make lots of mistakes still, but I now feel that I’m in a place to find my way through those mistakes (with a little help along the way, probably – thanks to everyone who’s already pointed me in the right direction). But I do feel that I’m now actually programming in Rust. And I like it.


1 – this is what Rust programmers call themselves.

2 – it’s almost impossible to stop people doing the wrong thing entirely, but encouraging people to do the right thing is great. In fact, Rust goes further, and actually makes it difficult to do the wrong thing in many situations. You really have to try quite hard to do bad things in Rust.

3 – I found a particularly egregious off-by-one error in my code, for instance, which had nothing to do with Rust, and everything to do with my not paying enough attention to the program flow.

4 – *cough* Perl *cough*

Thunderspy – should I care?

Thunderspy is a nasty attack, but easily prevented.

There’s a new attack out there which is getting quite a lot of attention this week. It’s called Thunderspy, and it uses the Thunderbolt port which is on many modern laptops and other computers to suck data from your machine. I thought that it might be a good issue to cover this week, as although it’s a nasty attack, there are easy ways to defend yourself, some of which I’ve already covered in previous articles, as they’re generally good security practice to follow.

What is Thunderspy?

Thunderspy is an attack on your computer which allows an attacker with moderate resources to get at your data under certain circumstances. The attacker needs:

  • physical access to your machine – not for long (maybe five minutes), but they do need it. This type of attack is sometimes called an “evil maid” attack, as it can be carried out by hotel staff with access to your room;
  • the ability to take your computer apart (a bit) – all we’re talking here is a screwdriver;
  • a little bit of hardware – around $400 worth, according to one source;
  • access to some freely available software;
  • access to another computer at the same time.

There’s one more thing that the attacker needs, and that’s for you to leave your computer on, or in suspend mode. I’ve discussed different power modes before (in 3 laptop power mode options), and mentioned, as well, that leaving your machine in suspend mode is generally a bad idea (in 7 security tips for travelling with your laptop). It turns out I was right.

What’s the bad news?

Well, there’s quite a lot of bad news:

  • lots of machines have Thunderbolt ports (you can find pictures of both the port and connectors on Wikipedia’s Thunderbolt page, in case you’re not sure whether your machine is affected);
  • machines are vulnerable even if you have full disk encryption;
  • Windows machines are vulnerable;
  • Linux machines are vulnerable;
  • Macintosh machines are vulnerable;
  • most machines with a Thunderbolt port from 2011 onwards are vulnerable;
  • although protection is available on some newer machines (from around 2019)
    • the extent of its efficacy is unclear;
    • lots of manufacturers don’t implement it;
  • some protections that you can turn on break USB and other functionality;
  • one variant of the attack breaks Thunderbolt security permanently, meaning that the attacker won’t need to take your computer apart at all for subsequent attacks: they just need physical access to the port whilst your machine is turned on (or in suspend mode).

The worst thing to note is that full disk encryption does not help you if your computer is turned on or in suspend mode.

Note – I’ve been unable to find out whether any Chromebooks have Thunderbolt support. Please check your model’s specifications or datasheet to be certain.

What’s the good news?

The good news is short and sweet: if you turn your computer completely off, or ensure that it’s in Hibernate mode, then it’s not vulnerable. Thunderspy is a nasty attack, but it’s easily prevented.

What should I do?

  1. Turn your computer off when you leave it unattended, even for short amounts of time.

That was easy, wasn’t it? This is best practice anyway, and it turns out that hibernate mode is also OK. What the attacker is looking for is a powered-up, logged-on computer with Thunderbolt. If you can stop them finding a computer that meets those criteria, then you’re fine.

5 Rust reflections (from Java)

I’m a (budding) Rustacean.

It’s been a long time since I properly learned a new language – computer or human. Maybe 25 years. That language was Java, and although I’ve had to write little bits of C (very, very little) and Javascript in the meantime, the only two languages I’ve written much actual code in have been Perl and Java. As I’ve posted before, I’m co-founder of a project called Enarx (latest details here), which is written almost entirely in Rust. These days I call myself an “architect”, and it’s been quite a long time since I wrote any production code. In the lead up to Christmas last year (2019), I completed the first significant project I’ve written in quite a few years: an implementation of a set of algorithms around a patent application, in Java. It was a good opportunity to get my head back into code, and I was quite pleased with it. I wrote it with half a mind to compile it into WebAssembly as a candidate workload for Enarx, but actually compiling it turned out to be a bit of a struggle, and work got in the way, so I completed a basic implementation, checked it into a private GitHub repository and generally forgot about it.

My involvement with the Enarx project so far has been entirely design and architecture work – plus documentation, marketing, evangelism, community work and the rest, but no coding. I have suggested, on occasion – almost entirely in jest – that I commit some code to the project in Perl, and it’s become a bit of a running joke that this would be the extent of any code I submitted – and possibly of my involvement with the project – as it would be immediately rejected. And then, about a week and a half ago, I decided to learn Rust. And then to rewrite (including, where necessary, refactoring) that Java project I wrote a few months ago. Here are some of my thoughts on Rust, from the point of view of a Java developer with a strong Object-Oriented background.

1. Rust feels familiar

Although many of the tutorials and books you’ll find out there are written with C and C++ in mind, there’s enough similarity with Java to make the general language feel familiar. The two tutorials I’ve been using the most are The Rust Programming Language online and Programming Rust in dead tree format, and the latter makes frequent references to similarities and differences to and from other languages, including not only C, C++ and Java, but also Python, Javascript and others. Things like control structures and types are similar enough to Java that they’re generally simple to understand, and although there are some major differences, you should be able to get your head round the basics of the language pretty simply. Beware, however: one of the biggest initial problems I’ve been having is that Rust sometimes feels too familiar, so I start trying to do things in the wrong way, have to back out, and try to work out a better way: a way which is more idiomatic to Rust. I have a long way to go on this!

2. References make sense

In Rust, you end up having to use references. Frankly, referencing and de-referencing variables was something that never made much sense to me when I looked at C or C++, but this time, it feels like I get it. If you’re used to passing Java variables by reference and value, and know when you need to take steps to do so differently in specific situations, then you’re ready to start understanding Rust references. The other thing you need to understand is why Rust needs you to use them: it’s because Rust is very, very careful about memory management, and you don’t have a Garbage Collector to clean up after you wherever you go (as in Java). You can’t just pass Strings (for instance) around willy-nilly: Rust is going to insist that you know the lifetime of a variable, and think about when it’s ready to be “dropped”. This means that you need to think hard about scope, and introduces a complex concept: ownership.
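
To make this a little more concrete, here’s a tiny, entirely artificial sketch (nothing to do with my real project) of what borrowing looks like in practice: the function only borrows the string, so the caller keeps ownership and can carry on using it afterwards.

    fn print_length(s: &str) {
        // We only borrow the string here: the caller keeps ownership.
        println!("{:?} is {} characters long", s, s.len());
    }

    fn main() {
        let greeting = String::from("hello");
        print_length(&greeting); // lend out a reference rather than moving the String
        // Because we only lent it out, greeting is still usable here.
        println!("still have: {}", greeting);
    }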

3. Ownership will make sense

Honestly, I’m not there yet. I’ve been learning and coding in Rust for under two weeks, and I’m beginning to get my head around ownership. For me (as, I suspect, for many newcomers), this is the big head-shift around moving to Rust from Java or most other languages: ownership. As I mentioned above, you need to understand when a variable is going to be used, and how long it will live. There’s more to it than that, however, and really getting this is something which feels a little foreign to me as a Java developer: you need to understand about the stack and the heap, a distinction which was sufficiently concealed from me by Java, but something which many C and C++ developers will probably understand much more easily. This isn’t the place to explain the concept (I’ve found the diagrams in Programming Rust particularly helpful), but in order to manage the lifetime of variables in memory, Rust is going to need to know what component owns each one. This gets complicated when you’re used to creating objects and instantiating them with variables from all over the place (as in Java), and requires some significant rethinking. Combining this with explicit marking of lifetimes is the biggest conceptual change that I’m having to perform right now.
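
Again, purely for illustration (and nothing like the real code I’m writing), here’s the sort of thing the compiler is checking when it reasons about ownership: once a value has been moved, the original binding can no longer be used.

    fn take_ownership(s: String) {
        // s is moved into this function, which now owns it; it will be
        // dropped (its memory freed) when the function returns.
        println!("taken: {}", s);
    }

    fn main() {
        let message = String::from("hello");
        take_ownership(message); // ownership of message moves into the function
        // Uncommenting the next line would not compile: message has been moved.
        // println!("{}", message);

        // If we do want to keep using a value, one option is to clone it,
        // so that only the clone is moved.
        let other = String::from("world");
        take_ownership(other.clone());
        println!("still own: {}", other);
    }

The point is that all of this is checked at compile time: there is no garbage collector tidying up behind you at runtime.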

4. Cargo is helpful

I honestly don’t use the latest and greatest Java tools properly. When I started to learn Java, it wasn’t even at 1.0, and by the time I finished writing production code on a regular basis, there wasn’t yet any need to pick up the very latest tooling, so it may be that Java is better at this than I remember. Either way, the in-built tools for managing the various pieces of a Rust project, including dependencies, libraries, compilation and testing, are a revelation. The cargo binary just does the right thing, and it’s amazing to watch it do its job when it realises that you’ve made a change to your dependencies, for instance. It will perform automatic tests, optimise automatically, produce documentation – so many useful tasks, all within one package. Combine this with git repositories, and managing projects becomes saner and easier.

5. The compiler is amazing

Last, but very far from least, is the compiler. I love the Rust compiler: it really, really tries to help you. The members of the community[1] that makes and maintains it clearly go out of their way to provide helpful guidance to correct you when you make mistakes – and I, for one, have been making many of them. Rather than the oracular pronouncements that may be familiar from other languages’ compilers, you’ll get colour-coded text with warnings and errors, and suggestions as to what you might actually be trying to do. You will even be given output such as For more information about this error, try rustc --explain E0308. When you do try this, you get a (generally!) helpful explanation and code snippets. Sometimes, particularly when you’re still working your way into the language, it’s not always obvious what you’re doing wrong, but wading through the errors can help you get your head round the concepts in a way which feels very different to the messages I’m used to getting from javac, for example.
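
By way of a trivial example (not from my project, just the simplest thing I could think of): E0308 is the “mismatched types” error, which you would hit if you tried to assign a string literal to a variable annotated as a u32. The compiler tells you what it expected, what it found and where, and the snippet below shows one way of resolving it.

    fn main() {
        // Writing `let answer: u32 = "42";` is rejected with
        // error[E0308]: mismatched types (expected `u32`, found `&str`),
        // along with a pointer to `rustc --explain E0308`.
        // One way to fix it is to parse the string instead:
        let answer: u32 = "42".parse().expect("not a number");
        println!("{}", answer);
    }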

Conclusion

I don’t expect ever to be writing lots of production Rust, nor ever truly to achieve guru status – in Rust or any other language, to be honest – but I really think that Rust has a lot to be said for it. Throughout my journey so far, I’ve been nodding my head and thinking “that’s a good way to do that”, or “ah, that makes so much more sense than the way I’m used to”. This isn’t an article about why Rust is such a good language – there are loads of those – nor about the best way to learn Rust – there are lots of those, too – but I can say that I’m enjoying it. It’s challenging, but one thing that the tutorials, books and other learning materials are all strong on is explaining the reasons for the choices that Rust makes, and that’s certainly been helpful to me, both in tackling my frustrations and in trying to internalise some of the differences between Java and Rust.

If I can get my head truly into Rust, I honestly don’t think I’m likely to write any Java ever again. I’m not sure I’ve got another 25 years of coding in me, but I think that I’m with Rust for the long haul now. I’m a (budding) Rustacean.


1 – Rust, of course, is completely open source, and the community support for it seems amazing.

An Enarx milestone: binaries

Demoing the same binary in very different TEEs.

This week is Red Hat Summit, which is being held virtually for the first time because of the Covid-19 crisis. The lock-down has not affected the productivity of the Enarx team, however (at least not negatively), as we have a very exciting demo that we will be showing at Summit. This post should be published at 1100 EDT, 1500 BST, 1400 GMT on Tuesday, 2020-04-28, which is the time that the session which Nathaniel McCallum and I recorded will be released to the world. I hope to be able to link to it here once it’s available. But what will we be showing?

Well, to set the scene, and to discover a little more about the Enarx project, you might want to read these articles first (also available in Japanese – visit each article for a link):

Enarx, as you’ll discover, is about running workloads in TEEs (Trusted Execution Environments), using WebAssembly, in what we call “Keeps”. It’s a mammoth job, particularly as we’re abstracting away the underlying processor architectures (currently two: Intel’s SGX and AMD’s SEV), so that you, the user, don’t need to worry about them: all you need to do is write and compile your application, then request that it be deployed. Enarx, then, has lots of moving parts, and one of the key tasks for us has been to start the work of abstracting away the underlying processor architectures so that we can prepare the runtime layers on top. Here’s a general picture of the software layers, and how they sit on top of the hardware platforms:

What we’re announcing – and demoing – today is that we have an initial implementation of code to allow us to abstract away process-based and VM-based types of architecture (with examples for SGX and SEV), so that we can do this:

This seems deceptively simple, but what’s actually going on under the covers is rather more than is exposed in the picture above. The reality is more like this:

This gives more detail: the application that’s running on both architectures (SGX on the left, SEV on the right) is the very same ELF static-PIE binary. To be clear, this is not only the same source code, compiled for different platforms, but exactly the same binary, with the very same hash signature. What’s pretty astounding about this is that in order to make it run on both platforms, the engineering team has had to write two sets of seriously low-level code, including more than a little Assembly language, providing the “plumbing” to allow the binary to run on both.

This is a very big deal, because although we’ve only implemented a handful of syscalls on each platform – enough to make our simple binary run and print out a message – we now have a framework on which we know we can build. And what’s next? Well, we need to expand that framework so that we can then build the WebAssembly layers which will allow WebAssembly applications to run on top:

There’s a long way to go, but this milestone shows that we have an initial framework which we can improve, and on which we can build.

What’s next?

What’s exciting about this milestone from our point of view is that we think it puts Enarx at a stage where more people can join and take part. There’s still lots of low-level work to be done, but it’s going to be easier to split up now, and also to start some of the higher level work, too. Enarx is completely open source, and we do all of our design work in the open, along with our daily stand-ups. You’re welcome to browse our documentation, RFCs (mostly in draft at the moment), raise issues, and join our calls. You can find loads more information on the Enarx wiki: we look forward to your involvement in the project.

Last, and not least, I’d like to take a chance to note that we now have testing/CI/CD resources available for the project with both Intel SGX and AMD SEV systems available to us, all courtesy of Packet. This is amazingly generous, and we both thank them and encourage you to visit them and look at their offerings for yourself!

3 open/closed Covid-19 contact tracing questions

All projects are not created equal.

One of the cheering things about the pandemic crisis in which we find ourselves is the vast up-swell of volunteering that we are seeing across the world. We are seeing this equally across the IT sector, and one of the areas where work is being done is in apps to help track Covid-19. Specifically, there is an interest in Covid-19 contact tracing, or tracking, apps for our mobile[0] phones. These aren’t apps which keep an eye on whether you’ve observed lock-down procedures, but apps which attempt to work out who has been in contact with whom and, once we know that one person is infected with Covid-19, what the likely spread of the virus will be.

There are lots of contact tracing initiatives out there, from PEPP-PT in the European Union to Singapore’s TraceTogether, from the University of Washington’s PACT to MIT’s PACT[1]. Google and Apple are – unprecedentedly – working on an app together. There are lots of ways of comparing these apps and projects, but in today’s article, I want to suggest three measures which can help you consider them from the point of view of “openness”. As regular readers of this blog will know, I’m a big fan of open source – not just for software, but for data, management and the rest – and I believe that there’s also a strong correlation here with civil or human rights. These three measures are not too technical, and can help us get a grip on the likelihood that some of the apps (and associated projects) may impinge on privacy and other issues about which we care. I don’t want the data generated from apps that I download onto my phone to be used now or in the future to curtail my, or other people’s, civil or human rights, for blackmail or even for unapproved commercial gain.

1. Open source

Our first question must be: “is the app open source?” If the answer is “no”, then we have no way to know what is being captured, and therefore how it is being used. If the app is closed source, it could be collecting any data from pretty much any measuring device on our phones, including photo, video, audio, Bluetooth, wifi, temperature, GPS or accelerometer. We can try restricting access to these measurements, but such controls have not always been effective, understanding the impact of turning them off is rarely simple, and people frankly rarely bother to check them anyway. Equally bad is the fact that with closed source, you can’t have any idea of how good the security is, nor any chance to criticise and improve it. This is something about which I’ve written many times, including in my articles Disbelieving the many eyes hypothesis and Trust & choosing open source. Luckily, it seems that the majority of contact tracing apps are open source, but please be careful, and reject any which are not.

2 Centralised or distributed

In order to make sense of all the data that these apps collect, there needs to be a centralised[2] store where it can be processed, right? It’s common sense.

Actually, no. Although managing and processing data in one place can be much easier, there are ways to store data in a distributed manner, and allow the sorts of processing needed for contact tracing to take place. It may be more complex, but it also makes it much, much more difficult for governments, corporations or malicious actors to misuse this information. And we should be clear that this will be what happens if the data is made available. Maybe the best governments and the best corporations will be well-behaved by their standards, but a) those are not necessarily the standards that I or others will endorse and b) what about malicious actors and governments and corporations which are not “the best”?

3 Location or proximity tracking

This might seem like another obvious choice: if you want to be finding out who was in contact with whom, then the way to do it is see who was where, and when. GPS tracking – and associated technologies like wifi access point location tracking – combined with easily available time data, would give the ability to work out who was in a particular place at the same time as other people. This is true, but it also provides enormous opportunities for misuse, particularly when the data is held centrally (see above). An alternative is to use sensors like Bluetooth or NFC[3], to allow phones to collect information about other phones (or devices) with which they have been in contact and when. This is more easily anonymised – or pseudonymised – allowing information to be passed to the owners of those phones, but at the same time more difficult to misuse by governments, corporations and malicious actors.

There are other issues to consider, one of which is that these sensors were not designed for this type of use, and we may be sacrificing accuracy if we choose this option. On the other hand, many interactions between people occur indoors, where GPS is much less effective anyway, and these types of technologies may help.

You could argue that this measurement is not about “openness” in itself, but it is a key indicator to whether the information collected can be used in ways which are far from open.

Conclusion

There are many other questions we can ask about Covid-19 contact tracing apps, some of which are related to openness, and some of which are not. These include:

  • Coverage
    • not all demographics have – or use – phones as much as the rest of the population, including the poor, the elderly, and certain religious groups. How effective will such projects be if they have reduced access to these groups?
    • older devices may have less accurate sensors, or not have some of the capabilities required by the apps. What is more, there may be a correlation between use of these older devices with some of the demographics noted above.
    • some people rarely update the apps on their phones, so even if they load an initial version of an app, newer versions, with functionality or security improvements, are likely to be unequally distributed across the set of devices.
  • Removal – how easy will it be to remove the application fully, what are the consequences of not doing so, and how likely are people to do so anyway[4]?
  • Will use of these apps be mandatory or voluntary? If the former, there are serious concerns about civil or human rights, not to mention the problems noted above about coverage.

All of these questions are important, but not directly related to the question of the “openness” of the apps and projects. However, we have, right now, some great opportunities to work with and influence some really important projects for public health and well-being, and I believe that it is important that we consider the questions I’ve raised about openness before endorsing, installing or using any of the apps that are being created.


0 – or “cell”, if you’re in North America.

1 – yes, they chose the same acronym. Yes, it is confusing.

2 – or, I suppose, “centralized”, depending on your geography.

3 – “Near Field Communication” – the same capability used when you do contactless payment with your phone or credit/debit card.

4 – how many apps do you still have on your phone that you’ve not even opened for 3 months? Yup, me too.

Post-Covid, post-open?

We are inventive, we are used to turning technologies to good.

The world of lockdown to which we’re becoming habituated at the moment has produced some amazing upsides. The number of people volunteering, the resurgence of local community initiatives, the selfless dedication of key workers across the world and the recognition of their sacrifice by the general public are among the most visible. As many regular readers of this blog are likely to be aware, there has also been an outpouring of interest and engagement in software- and hardware-related projects to help, from infection-tracking apps to 3D-printing of PPE[0]. Companies have made training and educational materials available for free, and there are attempts around the world to engage and contribute to the public commonwealth.

Sadly, not all of the news is good. There has been a rise in phishing attacks, and the lack of appropriate or sufficient security in commonly-used apps such as Zoom has become frighteningly evident[1]. There’s an article to write here about the balance between security, usability and cost, but I’m going to save that for another day.

Somewhere in the middle, between the obvious positives and obvious negatives, there are some developments which most of us probably accept as necessary, but which aren’t things that we’d normally welcome. Beyond the obvious restrictions on movement and public gatherings, there are a number of actions which governments, in particular, are taking which have generally negative impacts on human rights and civil liberties, as outlined in this piece by The Guardian. The article lists numerous examples of governments imposing, or considering the imposition of, measures which would normally be quickly attacked by human rights groups, and resisted by most citizens. Despite the headline, which suggests that the article will deal with how difficult these measures will be to remove after the end of the crisis, there is actually little discussion, beyond a note that “[w]hether that surveillance is eventually rolled back will depend on public oversight.”

I think that we need to go beyond just “oversight” and start planning now for public action. In the communities in which I live and work, there is a general expectation that the world – software, management, government, data – is becoming more, not less open. We are in grave danger of losing that openness even once the need for these government measures diminishes. Governments – who will see the wider intelligence-gathering and control opportunities of these changes – will espouse the view that “we need these measures in place in order to be able to react quickly if the same thing happens again”, and, if we’re not careful, public sentiment, bruised and bloodied by the pandemic, will quietly acquiesce, and we will see improvements in human and civil rights rolled back decades, and damaged further by the availability of cheap, mobile, networked technology.

If we believe that openness is a public good, then we need to think how to counter the arguments which we will hear from governments, and be ready to be vocal – not just with counter-arguments, but with counter-proposals. This pandemic is unlike either of the World Wars of the 20th Century, when a clear ending was marked, and there was the opportunity (sadly denied to many citizens of the former USSR) to regain civil liberties and roll back the restrictions of the war years. Nor is it even like the aftermath of 9/11, that event which has impacted the intelligence and security landscape of the past two decades, where there is (was?) at least a set of (posited) human foes to target. In the case of the Covid-19 pandemic, the “enemy” is amorphous and will be around for decades to come. The measures to combat it – and its successors – will only be slowly reduced, and some will not be.

We need to fight against those measures which are unnecessary, and we need to find alternatives – transparent, public alternatives – to measures which may have some positive effects, but whose overall impact on society and human rights is clearly negative. In an era where big data is becoming pervasive, and the tools to mine it tractable, we need to provide international mechanisms to share and use that data in ways which do not benefit any single government, bloc, or section of society. We are inventive, we are used to turning technologies to good. This is the time we need to do it, and do it quickly. We can make a difference by being open, but we need to start now.


0 – Personal Protection Equipment.

1 – although note that the company is reported to be making improvements to at least one area of concern to some – routing of traffic through China.

No security without an architecture

Your diagrams don’t need to be perfect. But they do need to be there.

I attended a virtual demo this week. It didn’t work, but none of us was stressed by that: it was an internal demo, and these things happen. Luckily, the members of the team presenting the demo had lots of information about what it would have shown us, and a particularly good architectural diagram to discuss. We’ve all been in the place where the demo doesn’t work, and all felt for the colleague who was presenting the slidedeck, and on whose screen a message popped up a few slides in, saying “Demo NO GO!” from one of her team members.

After apologies, she asked if we wanted to bail completely, or to discuss the information they had to hand. We opted for the latter – after all, most demos which aren’t foregrounding user experience components don’t show much beyond terminal windows that most of us could fake up in half an hour or so anyway. She answered a couple of questions, and then I piped up with one about security.

This article could have been about the failures in security in a project which was showing an early demo: another example of security being left till late (often too late) in the process, at which point it’s difficult and expensive to integrate. However, it’s not. It was clear that thought had been given to specific aspects of security, both on the network (in transit) and in storage (at rest), and though there was probably room for improvement (and when isn’t there?), a team member messaged me more documentation during the call which allowed me to understand the choices the team had made.

What this article is about is the fact that we were able to have a discussion at all. The slidedeck included an architecture diagram showing all of the main components, with arrows showing the direction of data flows. It was clear, colour-coded to show the provenance of the different components, which were sourced from external projects, which from internal, and which were new to this demo. The people on the call – all technical – were able to see at a glance what was going on, and the team lead, who was providing the description, had a clear explanation for the various flows. Her team members chipped in to answer specific questions or to provide more detail on particular points. This is how technical discussions should work, and there was one thing in particular which pleased me (beyond the fact that the project had thought about security at all!): that there was an architectural diagram to discuss.

There are not enough security experts in the world to go around, which means that not every project will have the opportunity to get every stage of their design pored over by a member of the security community. But when it’s time to share, a diagram is invaluable. I hate to think about the number of times I’ve been asked to look at a project in order to give my thoughts about security aspects, only to find that all that’s available is a mix of code and component documentation, with no explanation of how it all fits together and, worse, no architecture diagram.

When you’re building a project, you and your team are often so into the nuts and bolts that you know how it all fits together, and can hold it in your head, or describe the key points to a colleague. The problem comes when someone needs to ask questions of a different type, or review the architecture and design from a different slant. A picture – an architectural diagram – is a great way to educate external parties (or new members of the project) in what’s going on at a technical level. It also has a number of extra benefits:

  • it forces you to think about whether everything can be described in this way;
  • it forces you to consider levels of abstraction, and what should be shown at what levels;
  • it can reveal assumptions about dependencies that weren’t previously clear;
  • it is helpful to show data flows between the various components;
  • it allows for simpler conversations with people whose first language is not that of your main documentation.

To be clear, this isn’t just a security problem – the same can go for other non-functional requirements such as high-availability, data consistency, performance or resilience – but I’m a security guy, and this is how I experience the issue. I’m also aware that I have a very visual mind, and this is how I like to get my head around something new, but even for those who aren’t visually inclined, a diagram at least offers the opportunity to orient yourself and work out where you need to dive deeper into code or execution. I also believe that it’s next to impossible for anybody to consider all the security implications (or any of the higher-order emergent characteristics and qualities) of a system of any significant complexity without architectural diagrams. And that includes the people who designed the system, because no system exists on its own (or there’s no point to it), so you can’t hold all of those pieces in your head for any length of time.

I’ve written before about the book Building Evolutionary Architectures, which does a great job in helping projects think about managing requirements which can morph or change their priority, and which, unsurprisingly, makes much use of architectural diagrams. Enarx, a project with which I’m closely involved, has always had lots of diagrams, and I’m aware that there’s an overhead involved here, both in updating diagrams as designs change and in considering which abstractions to provide for different consumers of our documentation, but I truly believe that it’s worth it. Whenever we introduce new people to the project or give a demo, we ensure that we include at least one diagram – often more – and when we get questions at the end of a presentation, they are almost always preceded with a phrase such as, “could you please go back to the diagram on slide x?”.

I nearly published this article without adding another point: this is part of being “open”. I’m a strong open source advocate, but source code isn’t enough to make a successful project, or even, I would add, to be a truly open source project: your documentation should not just be available to everybody, but accessible to everyone. If you want to get people involved, then providing a way in is vital. But beyond that, I think we have a responsibility (and opportunity!) towards diversity within open source. Providing diagrams helps address four types of diversity (at least!):

  • people whose first language is not the same as that of your main documentation (noted above);
  • people who have problems reading lots of text (e.g. those with dyslexia);
  • people who think more visually than textually (like me!);
  • people who want to understand your project from different points of view (e.g. security, management, legal).

If you’ve ever visited a project on github (for instance), with the intention of understanding how it fits into a larger system, you’ll recognise the sigh of relief you experience when you find a diagram or two on (or easily reached from) the initial landing page.

And so I urge you to create diagrams, both for your benefit, and also for anyone who’s going to be looking at your project in the future. They will appreciate it (and so should you). Your diagrams don’t need to be perfect. But they do need to be there.

7 tips for managers of new home workers

You will make mistakes. You are subject to the same stresses and strains.

Many organisations and companies are coming to terms with the changes forced on them by Covid-19 (“the coronavirus”), and working out what it means to them, their employees and their work patterns. For many people who were previously in offices, it means working from home.  I wrote an article a few weeks ago called 9 tips for new home workers, and then realised that it wouldn’t just be new home workers who might be struggling, but also their managers.  If you’re reading this, then you’re probably a manager, working with people who don’t normally work from home – which may include you – so here are some tips for you, too.

1 – Communicate

Does that meeting need to be at 9am? Do you need to have the meeting today – could it be tomorrow? As managers, we’re used to being (or at least pretending to be) the most important person in our team’s lives during the working day. For many, that will have changed, and we become a distant second, third or fourth. Family and friends may need help and support, kids may need setting up with schoolwork, or a million other issues may come up which mean that expecting their attention whenever we want it is just not realistic. Investigate the best medium (or media) for communicating with each member of your team, whether that’s synchronous or asynchronous IM, email, phone, or a daily open video conference call, where anybody can turn up and just be present. Be aware of your team’s needs – which you just can’t do without communicating with them – and also be aware that those needs may change over the coming weeks.

2 – Flex deadlines

Whether we like it or not, there are things more important than work deadlines at the moment, and although you may find that some people produce work as normal, others will be managing at best only “bursty” periods of work, at abnormal times (for some, the weekend may work best, for others the evenings after the kids have gone to bed).  Be flexible about deadlines, and ask your team what they think they can manage.  This may go up and down over time, and may even increase as people get used to new styles of working.  But adhering to hard deadlines isn’t going to help anybody in the long run – and we need to be ready for the long run.

3 – Gossip

This may seem like an odd one, but gossip is good for human relationships. When you start a call, set aside some time to chat about what’s going on where the other participants are, in their homes and beyond. This will help your team feel that you care, and also allow you to become aware of issues before they become serious. A word of caution, however: there may be times when it becomes clear in your discussions that a team member is struggling. In this case, you have two options. If the issue seems to be urgent, you may well choose to abandon the call (be sensitive about how you do this if it’s a multi-person call) and to spend time working with the person who is struggling, or to signpost them directly to some other help. If the issue doesn’t seem to be urgent, but threatens to take over the call, then ask the person whether they would be happy to follow up later. In the latter case, you must absolutely do that: once you have recognised an issue, you have a responsibility to help, whether that help comes directly from you or with support from somebody else.

4 – Accommodate

Frankly, this builds on our other points: you need to be able to accommodate your team’s needs, and to recognise that they may change over time, but will also almost certainly be different from yours.  Whether it’s the setting for meetings, pets and children[1], poor bandwidth, strange work patterns, sudden unavailability or other changes, accommodating your team’s needs will make them more likely to commit to the work they are expected to do, not to mention make them feel valued, and consider you as more of a support than a hindrance to their (often drastically altered) new working lives.

5 – Forgive

Sometimes, members of your team may do things which feel as if they’ve crossed a line – the line that would have applied in “normal” times. They may fail to deliver to a previously agreed deadline, turn up for an important meeting appearing dishevelled, or speak out of turn, maybe. This probably isn’t their normal behaviour (if it is, then you have different challenges), and it’s almost certainly caused by their abnormal circumstances. You may find that you are more stressed, and more likely to react negatively to failings (or perceived failings). Take a step back. Breathe. Finish the call early, if you have to, but try to understand why the behaviour upset you, and then forgive it. That doesn’t mean that there won’t need to be some quiet discussion later on to address it, but if you go into interactions with the expectation of openness, kindness and forgiveness, then that is likely to be reciprocated: and we all need that.

6 – Forgive yourself

You will make mistakes. You are subject to the same stresses and strains as your team, with the added burden of supporting them. You need to find space for yourself, and to forgive yourself when you do make a mistake. That doesn’t mean abdicating responsibility for things you have done wrong, and neither is it an excuse not to apologise for inappropriate behaviour, but constantly berating yourself will add to your stresses and strains, and is likely to exacerbate the problem rather than relieve it. You have a responsibility to look after yourself so that you can look after your team: not beating yourself up about every little thing needs to be part of that.

7 – Prepare

Nobody knows how long we’ll be doing this, but what are you going to do when things start going back to normal?  One thing that will come up is the ability of at least some of your team to continue working from home or remotely.  If they have managed to do so given all the complications and stresses of lockdown, kids and family members under their feet, they will start asking “well, how about doing this the rest of the time?” – and you should be asking exactly the same question.  Some people will want to return to the office, and some will need to – at least for some of the time. But increased flexibility will become a hallmark of the organisations that don’t just survive this crisis, but actually thrive after it. You, as a leader, need to consider what comes next, and how your team can benefit from the lessons that you – collectively – have learned. 

1 – or partners/spouses: I caused something of a stir on a video conference that my wife was on today when I came into her office to light her wood-burning stove!