Security disclosure or vulnerability management?

Which do I need for an open source project?

This article is a companion piece to one I wrote soon after the advent of Meltdown and Spectre, Meltdown and Spectre: thinking about embargoes and disclosures, and I urge you to read that first, as it provides background and context to this article which I don’t plan to reiterate in full.

In that previous article, I mentioned that many open source projects have a security disclosure process, and most of the rest of the article was basically a list of decisions and steps that you might find in such a process. There’s another term that you might hear, however, which is a Vulnerability Management Process, or “VMP”. While a security disclosure process can be defined as a type of VMP, there are subtle differences to what these two processes might look like, and what they might mean, or be seen to mean, so I think it’s worth spending a little time examining possible differences before we continue.

I just did a “Google fight”[1] for “security disclosure” vs “vulnerability management”, and the former “won” by a ratio of around 5:3. I suspect that this is largely because security disclosures tend to sound exciting and are more “sexy” for headlines than are articles about managing things – even if those things are vulnerabilities.

I’m torn between the two, because both terms highlight important aspects – or reflect different viewpoints – of the same important basic idea: if a bad thing is discovered that involves your product or project, then you need to fix it. I’m going to come at this, as usual for me, from the point of view of open source, so let me give my personal feelings about each.

Security disclosure process

First off, I like the fact that the word “security” is front and centre here. Saying “security” focusses the mind in ways which “vulnerability” may not, and while a vulnerability may be the thing that we’re addressing, the impact that we’re trying to mitigate is on the security associated with the project or product. The second thing that I like about this phrase is the implication that disclosure is what we are aiming for. Now, this fits well with an open source mindset, but I wonder whether the accent is somewhat different for those who come from a more proprietary background. Where I read “a process to manage telling people about a security problem”, I suspect that others may read “a process to manage the fact that someone has told people about a security problem.” I urge everyone to move to the first point of view, for two reasons:

  1. I believe that security is best done in the open – though we need to find ways to protect people while fixes are being put in place and disseminated;
  2. if we don’t encourage people to come to us – project maintainers, product managers, architects, technical leads – first, in the belief that fixes will be managed as per point 1 above, and that credit will be given where it’s due, then those who discover vulnerabilities will have little incentive to follow the processes we put in place. If this happens, they are more likely to disclose to the wider world before us, making providing and propagating fixes in a timely fashion much more difficult.

Vulnerability management process

This phrase shares the word “process” with the previous one, and, combined with the word “management”, conveys the importance of working through the issue at hand. It also implies to me, at least, that there is other work to be done around the vulnerability rather than just letting everybody know about it (“disclosure”). This may seem like a bad thing (see above), but on the other hand, acknowledging that vulnerabilities do need managing seems to be a worthwhile thing to signal. What worries me, however, is that managing can be seen to imply “sweeping under the carpet” – in other words, making the problem go away.

Tied to this is something about the word “vulnerability” in this context which holds both negative and positive connotations. The negative is that saying “it’s just another vulnerability” underplays the security aspect of what it represents. The positive is that vulnerabilities are not all of the same severity – some aren’t that serious, compared to others – and it’s important to recognise this, and to have a process which allows you to address all severities of problem. Part of that process is to rank and probably prioritise them – often known as “triaging”, a term borrowed from the medical world.

A third option?

There’s at least one alternative. Though it scores lower than either “vulnerability management” or “security disclosure” in a Google fight (which, as I mentioned in the footnotes, isn’t exactly a scientific measure), a “vulnerability disclosure” process is another option. Although it doesn’t capture the “security” aspect, it does at least imply disclosure, which I like.

Which do I need?

My main focus for this article – and my passion and background – is open source, so the next question is: “which do I need for an open source project?” The answer, to some degree, is “either” – or “any of them”, if you include the third option. Arguably, as noted above, a “security disclosure process” is a type of vulnerability management process anyway, but I think that in the open source world particularly, the implication of working towards disclosure – towards openness – is important. The open source community is very sensitive to words, and any suggestion of cover-up is unlikely to be welcomed, even if such an implication were entirely unintentional.

One point that I think is worth making is that pretty much any component or library that you are working on may have security implications down the road[2]. This means that there should be some process in place to deal with vulnerabilities or security issues. To be clear: many of these will be discovered by contributors to the project themselves, rather than by external researchers or bug-hunters, but that doesn’t mean that such vulnerabilities are a) less important than externally found ones; or b) less in need of a process for dealing with them.

A good place to look for the questions to start asking is my previous article, but I strongly recommend that every open source project have some sort of process in place to deal with vulnerabilities or other issues which may have a security impact. I plan to write a follow-up article soon, with some options for how to deal with different types of disclosure, vulnerability and process.


1 – try it: Googlefight.com – it’s a fun (if unscientific) method to gauge the relative popularity of two words or terms on the web.

2 – who would ever have thought that font rendering software could lead to a critical security issue?

More Rusty thoughts

I do feel that I’m now actually programming in Rust. And I like it.

I wrote an article a couple of weeks ago – 5 Rust reflections (from Java) – about learning Rust, and specifically, about moving to Rust from Java. I talked about five particular points:

  1. Rust feels familiar
  2. References make sense
  3. Ownership will make sense
  4. Cargo is helpful
  5. The compiler is amazing

I absolutely stand by all of these, but I’ve got a little more to say, because I now feel like a Rustacean[1], in that:

  • I don’t feel like programming in anything else ever again;
  • I’ve moved away from simple incantations.

What do I mean by these two statements? Well, the first is pretty simple: Rust feels like the place to be: it’s well-structured, it’s expressive, it helps you do the right thing[2], it’s got great documentation and tools, and there’s a fantastic community. And, of course, it’s all open source, which is something that I care about deeply.

And the second thing? Well, I decided that in order to learn Rust properly, I should take an existing project that I had originally written in Java and reimplement it in hopefully fairly idiomatic Rust. Sometime in the middle of last week, I started fixing mistakes – and making mistakes – around implementation, rather than around syntax. And I wasn’t just copying text from tutorials or making minor, seemingly random changes to my code based on the compiler output. In other words, I was getting things to compile, understanding why they compiled, and then just making programming mistakes[3].

This is a big step forward. When you start learning a language, it’s easy just to copy and paste text that you’ve seen elsewhere, or fiddle with unfamiliar constructs until they – sort of – work. Using code – or producing code – that you don’t really understand, but which seems to work, is sometimes referred to as “using incantations” (from the idea that most magicians in fiction, film and gaming say collections of magic words which “just work”, without really understanding what they’re doing or what the combination of words actually means). Some languages[4] are particularly prone to this sort of approach, but many – most? – people learning a new language will be prone to doing this when they start out, just because they want things to work.

And last night, I was up till 1am implementing a new feature – accepting command-line input – which I really couldn’t get my head round. I’d spent quite a lot of time on it (including looking for, and failing to find, some appropriate incantations), and then asked for some help on a rust-lang channel inhabited by some people I know. A number of people made suggestions about what was going wrong, and one person in particular was enormously helpful in picking apart some of the suggestions so that I understood them better. He explained quite a lot, but finished with “I don’t know the return type of the hash function you’re calling – I think this is a good spot for you to figure this piece out on your own.”

This was just what I needed – and what any learner of anything, including programming languages, needs. So when I had to go downstairs at midnight to let the dog out, I decided to stay down and see if I could work things out for myself. And I did. I took the suggestions that people had made, understood what they were doing, tried to divine what they should be doing, worked out how they should be doing it, and then found the right way of making it happen.

I’ve still got lots to learn, and I’ll make lots of mistakes still, but I now feel that I’m in a place to find my way through those mistakes (with a little help along the way, probably – thanks to everyone who’s already pointed me in the right direction). But I do feel that I’m now actually programming in Rust. And I like it.


1 – this is what Rust programmers call themselves.

2 – it’s almost impossible to stop people doing the wrong thing entirely, but encouraging people do to the right thing is great. In fact, Rust goes further, and actually makes it difficult to do the wrong thing in many situations. You really have to try quite hard to do bad things in Rust.

3 – I found a particularly egregious off-by-one error in my code, for instance, which had nothing to do with Rust, and everything to do with my not paying enough attention to the program flow.

4 – *cough* Perl *cough*

Thunderspy – should I care?

Thunderspy is a nasty attack, but easily prevented.

There’s a new attack out there which is getting quite a lot of attention this week. It’s called Thunderspy, and it uses the Thunderbolt port which is on many modern laptops and other computers to suck data from your machine. I thought that it might be a good issue to cover this week, as although it’s a nasty attack, there are easy ways to defend yourself, some of which I’ve already covered in previous articles, as they’re generally good security practice to follow.

What is Thunderspy?

Thunderspy is an attack on your computer which allows an attacker with moderate resources to get at your data under certain circumstances. The attacker needs:

  • physical access to your machine – not for long (maybe five minutes), but they do need it. This type of attack is sometimes called an “evil maid” attack, as it can be carried out by hotel staff with access to your room;
  • the ability to take your computer apart (a bit) – all we’re talking here is a screwdriver;
  • a little bit of hardware – around $400 worth, according to one source;
  • access to some freely available software;
  • access to another computer at the same time.

There’s one more thing that the attacker needs, and that’s for you to leave your computer on, or in suspend mode. I’ve discussed different power modes before (in 3 laptop power mode options), and mentioned, as well, that leaving your machine in suspend mode is generally a bad idea (in 7 security tips for travelling with your laptop). It turns out I was right.

What’s the bad news?

Well, there’s quite a lot of bad news:

  • lots of machines have Thunderbolt ports (you can find pictures of both the port and connectors on Wikipedia’s Thunderbolt page, in case you’re not sure whether your machine is affected);
  • machines are vulnerable even if you have full disk encryption;
  • Windows machines are vulnerable;
  • Linux machines are vulnerable;
  • Macintosh machines are vulnerable;
  • most machines with a Thunderbolt port from 2011 onwards are vulnerable;
  • although protection is available on some newer machines (from around 2019)
    • the extent of its efficacy is unclear;
    • lots of manufacturers don’t implement it;
  • some protections that you can turn on break USB and other functionality;
  • one variant of the attack breaks Thunderbolt security permanently, meaning that the attacker won’t need to take your computer apart at all for subsequent attacks: they just need physical access to the port whilst your machine is turned on (or in suspend mode).

The worst thing to note is that full disk encryption does not help you if your computer is turned on or in suspend mode.

Note – I’ve been unable to find out whether any Chromebooks have Thunderbolt support. Please check your model’s specifications or datasheet to be certain.

What’s the good news?

The good news is short and sweet: if you turn your computer completely off, or ensure that it’s in Hibernate mode, then it’s not vulnerable. Thunderspy is a nasty attack, but it’s easily prevented.

What should I do?

  1. Turn your computer off when you leave it unattended, even for short amounts of time.

That was easy, wasn’t it? This is best practice anyway, and it turns out that hibernate mode is also OK. What the attacker is looking for is a powered-up, logged-on computer with Thunderbolt. If you can stop them finding a computer that meets those criteria, then you’re fine.

5 Rust reflections (from Java)

I’m a (budding) Rustacean.

It’s been a long time since I properly learned a new language – computer or human. Maybe 25 years. That language was Java, and although I’ve had to write little bits of C (very, very little) and Javascript in the meantime, the only two languages I’ve written much actual code in have been Perl and Java. As I’ve posted before, I’m co-founder of a project called Enarx (latest details here), which is written almost entirely in Rust. These days I call myself an “architect”, and it’s been quite a long time since I wrote any production code. In the lead up to Christmas last year (2019), I completed the first significant project I’ve written in quite a few years: an implementation of a set of algorithms around a patent application, in Java. It was a good opportunity to get my head back into code, and I was quite pleased with it. I wrote it with half a mind to compile it into WebAssembly as a candidate workload for Enarx, but actually compiling it turned out to be a bit of a struggle, and work got in the way, so completed a basic implementation, checked it into a private github repository and generally forgot about it.

My involvement with the Enarx project so far has been entirely design and architecture work – plus documentation, marketing, evangelism, community work and the rest, but no coding. I have suggested, on occasion – almost entirely in jest – that I commit some code to the project in Perl, and it’s become a bit of a running joke that this would be the extent of any code I submitted – and possibly of my involvement with the project – as it would be immediately rejected. And then, about a week and a half ago, I decided to learn Rust. And then to rewrite (including, where necessary, refactoring) that Java project I wrote a few months ago. Here are some of my thoughts on Rust, from the point of view of a Java developer with a strong Object-Oriented background.

1. Rust feels familiar

Although many of the tutorials and books you’ll find out there are written with C and C++ in mind, there’s enough similarity with Java to make the general language feel familiar. The two tutorials I’ve been using the most are The Rust Programming Language online and Programming Rust in dead tree format, and the latter makes frequent references to similarities and differences to and from other languages, including not only C, C++ and Java, but also Python, Javascript and others. Things like control structures and types are similar enough to Java that they’re generally simple to understand, and although there are some major differences, you should be able to get your head round the basics of the language pretty simply. Beware, however: one of the biggest initial problems I’ve been having is that Rust sometimes feels too familiar, so I start trying to do things in the wrong way, have to back out, and try to work out a better way: a way which is more idiomatic to Rust. I have a long way to go on this!
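As a tiny illustration of that familiarity – a sketch of my own, not taken from either tutorial – here’s a Rust function that most Java developers could read on day one:

fn describe(numbers: &[i32]) {
    // A for loop and an if/else which should look entirely familiar
    // to anyone coming from Java (or C, or C++).
    for n in numbers {
        if n % 2 == 0 {
            println!("{} is even", n);
        } else {
            println!("{} is odd", n);
        }
    }
}

fn main() {
    describe(&[1, 2, 3]);
}

The fn keyword and the type annotations are different, but the shape of the code is exactly what you’d expect.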

2. References make sense

In Rust, you end up having to use references. Frankly, referencing and de-referencing variables was something that never made much sense to me when I looked at C or C++, but this time, it feels like I get it. If you’re used to passing Java variables by reference and value, and know when you need to take steps to do so differently in specific situations, then you’re ready to start understanding Rust references. The other thing you need to understand is why Rust needs you to use them: it’s because Rust is very, very careful about memory management, and you don’t have a Garbage Collector to clean up after you wherever you go (as in Java). You can’t just pass Strings (for instance) around willy-nilly: Rust is going to insist that you know the lifetime of a variable, and think about when it’s ready to be “dropped”. This means that you need to think hard about scope, and introduces a complex concept: ownership.
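Here’s a minimal sketch (my own, purely for illustration) of what that looks like in practice: you lend a function a reference to a value, rather than handing the value over:

fn shout(message: &str) -> String {
    // We only borrow `message` – the caller still owns the String.
    message.to_uppercase()
}

fn main() {
    let greeting = String::from("hello");
    let loud = shout(&greeting);          // lend a reference with &
    println!("{} -> {}", greeting, loud); // greeting is still usable here
}

Because shout only borrows greeting, the caller keeps ownership, and the String is dropped – its memory freed – exactly once, when greeting goes out of scope at the end of main.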

3. Ownership will make sense

Honestly, I’m not there yet. I’ve been learning and coding in Rust for under two weeks, and I’m beginning to get my head around ownership. For me (as, I suspect, for many newcomers), this is the big head-shift around moving to Rust from Java or most other languages: ownership. As I mentioned above, you need to understand when a variable is going to be used, and how long it will live. There’s more to it than that, however, and really getting this is something which feels a little foreign to me as a Java developer: you need to understand about the stack and the heap, a distinction which was sufficiently concealed from me by Java, but something which many C and C++ developers will probably understand much more easily. This isn’t the place to explain the concept (I’ve found the diagrams in Programming Rust particularly helpful), but in order to manage the lifetime of variables in memory, Rust is going to need to know what component owns each one. This gets complicated when you’re used to creating objects and instantiating them with variables from all over the place (as in Java), and requires some significant rethinking. Combining this with explicit marking of lifetimes is the biggest conceptual change that I’m having to perform right now.

4. Cargo is helpful

I honestly don’t use the latest and greatest Java tools properly. When I started to learn Java, it wasn’t even in 1.0, and by the time I finished writing production code on a regular basis, there wasn’t yet any need to pick up the very latest tooling, so it may be that Java is better at this than I remember, but the in-built tools for managing the various pieces of a Rust project, including dependencies, libraries, compilation and testing, are a revelation. The cargo binary just does the right thing, and it’s amazing to watch it do its job when it realises that you’ve made a change to your dependencies, for instance. It will perform automatic tests, optimise automatically, produce documentation – so many useful tasks, all within one package. Combine this with git repositories, and managing projects becomes saner and easier.

5. The compiler is amazing

Last, but very far from least, is the compiler. I love the Rust compiler: it really, really tries to help you. The members of the community[1] that makes and maintains it clearly go out of their way to provide helpful guidance to correct you when you make mistakes – and I, for one, have been making many of them. Rather than the oracular pronouncements that may be familiar from other languages’ compilers, you’ll get colour-coded text with warnings and errors, and suggestions as to what you might actually be trying to do. You will even be given output such as For more information about this error, try rustc --explain E0308. When you do try this, you get (generally!) helpful explanations and code snippets. Sometimes, particularly when you’re still working your way into the language, it’s not always obvious what you’re doing wrong, but wading through the errors can help you get your head round the concepts in a way which feels very different to the messages I’m used to getting from javac, for example.
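To give a flavour of this, here’s a deliberately broken two-liner of my own:

fn main() {
    let count: i32 = "42"; // error[E0308]: mismatched types
    println!("{}", count);
}

The compiler points at the exact expression, explains that it expected an i32 but found a &str, and offers the rustc --explain E0308 route to a fuller explanation with code snippets.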

Conclusion

I don’t expect ever to be writing lots of production Rust, nor ever truly to achieve guru status – in Rust or any other language, to be honest – but I really think that Rust has a lot to be said for it. Throughout my journey so far, I’ve been nodding my head and thinking “that’s a good way to do that”, or “ah, that makes so much more sense than the way I’m used to”. This isn’t an article about why Rust is such a good language – there are loads of those – nor about the best way to learn Rust – there are lots of those, too – but I can say that I’m enjoying it. It’s challenging, but one thing that the tutorials, books and other learning materials are all strong on is explaining the reasons for the choices that Rust makes, and that’s certainly been helpful to me, both in tackling my frustrations, but also in trying to internalise some of the differences between Java and Rust.

If I can get my head truly into Rust, I honestly don’t think I’m likely to write any Java ever again. I’m not sure I’ve got another 25 years of coding in me, but I think that I’m with Rust for the long haul now. I’m a (budding) Rustacean.


1 – Rust, of course, is completely open source, and the community support for it seems amazing.

An Enarx milestone: binaries

Demoing the same binary in very different TEEs.

This week is Red Hat Summit, which is being held virtually for the first time because of the Covid-19 crisis. The lock-down has not affected the productivity of the Enarx team, however (at least not negatively), as we have a very exciting demo that we will be showing at Summit. This post should be published at 1100 EDT, 1500 BST, 1400 GMT on Tuesday, 2020-04-28, which is the time that the session which Nathaniel McCallum and I recorded will be released to the world. I hope to be able to link to the recording once it’s available. But what will we be showing?

Well, to set the scene, and to discover a little more about the Enarx project, you might want to read these articles first (also available in Japanese, via a link in each article):

Enarx, as you’ll discover, is about running workloads in TEEs (Trusted Execution environments), using WebAssembly, in what we call “Keeps”. It’s a mammoth job, particularly as we’re abstracting away the underlying processor architectures (currently two: Intel’s SGX and AMD’s SEV), so that you, the user, don’t need to worry about them: all you need to do is write and compile your application, then request that it be deployed. Enarx, then, has lots of moving parts, and one of the key tasks for us has been to start the work to abstract away the underlying processor architectures so that we can prepare the runtime layers on top. Here’s a general picture of the software layers, and how they sit on top of the hardware platforms:

What we’re announcing – and demoing – today is that we have an initial implementation of code to allow us to abstract away process-based and VM-based types of architecture (with examples for SGX and SEV), so that we can do this:

This seems deceptively simple, but what’s actually going on under the covers is rather more than is exposed in the picture above. The reality is more like this:

This gives more detail: the application that’s running on both architectures (SGX on the left, SEV on the right) is the very same ELF static-PIE binary. To be clear, this is not only the same source code, compiled for different platforms, but exactly the same binary, with the very same hash signature. What’s pretty astounding about this is that in order to make it run on both platforms, the engineering team has had to write two sets of seriously low-level code, including more than a little Assembly language, providing the “plumbing” to allow the binary to run on both.
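To give a flavour of what that abstraction means – and, to be clear, the following is a hypothetical sketch of my own, not the Enarx codebase, whose actual interfaces live in the project repositories – the general idea is a single interface with one backend per platform type:

// Hypothetical sketch only: illustrates abstracting process-based (SGX)
// and VM-based (SEV) TEEs behind one interface, so that the layers
// above don't need to care which type of Keep they're running in.
trait Backend {
    // Load the (identical) static-PIE ELF binary into a Keep.
    fn load(&self, elf_bytes: &[u8]) -> Result<Keep, Error>;
}

struct Keep;
struct Error;

struct Sgx; // process-based TEE
struct Sev; // VM-based TEE

impl Backend for Sgx {
    fn load(&self, _elf_bytes: &[u8]) -> Result<Keep, Error> {
        // Set up enclave pages, then proxy the binary's syscalls
        // out of the enclave.
        unimplemented!()
    }
}

impl Backend for Sev {
    fn load(&self, _elf_bytes: &[u8]) -> Result<Keep, Error> {
        // Boot a minimal VM, then proxy the binary's syscalls
        // to the host.
        unimplemented!()
    }
}

fn deploy(backend: &dyn Backend, elf_bytes: &[u8]) -> Result<Keep, Error> {
    // The very same binary bytes can be handed to either backend.
    backend.load(elf_bytes)
}

The “plumbing” mentioned above – the Assembly language and seriously low-level code – is the real-world equivalent of everything these stubbed-out load implementations gloss over.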

This is a very big deal, because although we’ve only implemented a handful of syscalls on each platform – enough to make our simple binary run and print out a message – we now have a framework on which we know we can build. And what’s next? Well, we need to expand that framework so that we can then build the WebAssembly layers which will allow WebAssembly applications to run on top:

There’s a long way to go, but this milestone shows that we have an initial framework which we can improve, and on which we can build.

What’s next?

What’s exciting about this milestone from our point of view is that we think it puts Enarx at a stage where more people can join and take part. There’s still lots of low-level work to be done, but it’s going to be easier to split up now, and also to start some of the higher level work, too. Enarx is completely open source, and we do all of our design work in the open, along with our daily stand-ups. You’re welcome to browse our documentation, RFCs (mostly in draft at the moment), raise issues, and join our calls. You can find loads more information on the Enarx wiki: we look forward to your involvement in the project.

Last, but not least, I’d like to take the opportunity to note that we now have testing/CI/CD resources available for the project, with both Intel SGX and AMD SEV systems available to us, all courtesy of Packet. This is amazingly generous, and we both thank them and encourage you to visit them and look at their offerings for yourself!

3 open/closed Covid-19 contact tracing questions

All projects are not created equal.

One of the cheering things about the pandemic crisis in which we find ourselves is the vast upswell of volunteering that we are seeing across the world. We are seeing this equally across the IT sector, and one of the areas where work is being done is in apps to help track Covid-19. Specifically, there is an interest in Covid-19 contact tracing, or tracking, apps for our mobile[0] phones. These aren’t apps which keep an eye on whether you’ve observed lock-down procedures; rather, they attempt to work out who has been in contact with whom, so that once we know that one person is infected with Covid-19, we can work out what the likely spread of the virus will be.

There are lots of contact tracing initiatives out there, from the European Union’s PEPP-PT to Singapore’s TraceTogether, from the University of Washington’s PACT to MIT’s PACT[1]. Google and Apple are – unprecedentedly – working on an app together. There are lots of ways of comparing these apps and projects, but in today’s article, I want to suggest three measures which can help you consider them from the point of view of “openness”. As regular readers of this blog will know, I’m a big fan of open source – not just for software, but for data, management and the rest – and I believe that there’s also a strong correlation here with civil or human rights. These three measures are not too technical, and can help us get a grip on the likelihood that some of the apps (and associated projects) may impinge on privacy and other issues about which we care. I don’t want the data generated from apps that I download onto my phone to be used now or in the future to curtail my, or other people’s, civil or human rights, for blackmail or even for unapproved commercial gain.

1. Open source

Our first question must be: “is the app open source?” If the answer is “no”, then we have no way to know what is being captured, and therefore how it is being used. If the app is closed source, it could be collecting any data from pretty much any measuring device on our phones, including photo, video, audio, Bluetooth, wifi, temperature, GPS or accelerometer. We can try restricting access to these measurements, but such controls have not always been effective, understanding the impact of turning them off is rarely simple, and people frankly rarely bother to check them anyway. Equally bad is the fact that with closed source, you can’t have any idea of how good the security is, nor any chance to criticise and improve it. This is something about which I’ve written many times, including in my articles Disbelieving the many eyes hypothesis and Trust & choosing open source. Luckily, it seems that the majority of contact tracing apps are open source, but please be careful, and reject any which are not.

2. Centralised or distributed

In order to make sense of all the data that these apps collect, there needs to be a centralised[2] store where it can be processed, right? It’s common sense.

Actually, no. Although managing and processing data in one place can be much easier, there are ways to store data in a distributed manner, and allow the sorts of processing needed for contact tracing to take place. It may be more complex, but it also makes it much, much more difficult for governments, corporations or malicious actors to misuse this information. And we should be clear that this will be what happens if the data is made available. Maybe the best governments and the best corporations will be well-behaved by their standards, but a) those are not necessarily the standards that I or others will endorse and b) what about malicious actors and governments and corporations which are not “the best”?

3. Location or proximity tracking

This might seem like another obvious choice: if you want to be finding out who was in contact with whom, then the way to do it is see who was where, and when. GPS tracking – and associated technologies like wifi access point location tracking – combined with easily available time data, would give the ability to work out who was in a particular place at the same time as other people. This is true, but it also provides enormous opportunities for misuse, particularly when the data is held centrally (see above). An alternative is to use sensors like Bluetooth or NFC[3], to allow phones to collect information about other phones (or devices) with which they have been in contact and when. This is more easily anonymised – or pseudonymised – allowing information to be passed to the owners of those phones, but at the same time more difficult to misuse by governments, corporations and malicious actors.
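To make both this point and the previous one more concrete, here’s a toy sketch of my own – emphatically not the Apple/Google protocol, nor any of the real projects mentioned above, and assuming the rand crate for random tokens – of how proximity contact tracing can work with pseudonymous data and no central store of contacts:

// Toy sketch only. Each phone broadcasts short-lived random tokens
// over Bluetooth and remembers the tokens it hears: no location or
// real identity is ever exchanged.
use rand::RngCore;
use std::collections::HashSet;

struct Phone {
    broadcast: Vec<[u8; 16]>,  // tokens this phone has broadcast
    heard: HashSet<[u8; 16]>,  // tokens heard from nearby phones
}

impl Phone {
    fn new() -> Self {
        Phone { broadcast: Vec::new(), heard: HashSet::new() }
    }

    // Generate a fresh random token to broadcast; rotating tokens often
    // stops observers linking sightings of the same phone over time.
    fn next_token(&mut self) -> [u8; 16] {
        let mut token = [0u8; 16];
        rand::thread_rng().fill_bytes(&mut token);
        self.broadcast.push(token);
        token
    }

    // Record a token received from a nearby phone.
    fn hear(&mut self, token: [u8; 16]) {
        self.heard.insert(token);
    }

    // When someone tests positive, they publish only their broadcast
    // tokens; everyone else checks locally against what they heard.
    fn was_in_contact(&self, published: &[[u8; 16]]) -> bool {
        published.iter().any(|t| self.heard.contains(t))
    }
}

fn main() {
    let mut alice = Phone::new();
    let mut bob = Phone::new();

    // Alice and Bob pass in the street: Bob's phone hears Alice's token.
    let token = alice.next_token();
    bob.hear(token);

    // Alice later tests positive and publishes her tokens.
    assert!(bob.was_in_contact(&alice.broadcast));
    println!("Bob is notified; nobody else learns who met whom.");
}

The matching happens on each phone, not on a central server, which is what makes the distributed, proximity-based approach so much harder to misuse.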

There are other issues to consider, one of which is that these sensors were not designed for this type of use, and we may be sacrificing accuracy if we choose this option. On the other hand, many interactions between people occur indoors, where GPS is much less effective anyway, and these types of technologies may help.

You could argue that this measurement is not about “openness” in itself, but it is a key indicator to whether the information collected can be used in ways which are far from open.

Conclusion

There are many other questions we can ask about Covid-19 contact tracing apps, some of which are related to openness, and some of which are not. These include:

  • Coverage
    • not all demographics have – or use – phones as much as the rest of the population, including the poor, the elderly, and certain religious groups. How effective will such projects be if they have reduced access to these groups?
    • older devices may have less accurate sensors, or not have some of the capabilities required by the apps. What is more, there may be a correlation between use of these older devices and some of the demographics noted above.
    • some people rarely update the apps on their phones, so even if they load an initial version of an app, newer versions, with functionality or security improvements, are likely to be unequally distributed across the set of devices.
  • Removal – how easy will it be to remove the application fully, what are the consequences of not doing so, and how likely are people to do so anyway[4]?
  • Will use of these apps be mandatory or voluntary? If the former, there are serious concerns about civil or human rights, not to mention the problems noted above about coverage.

All of these questions are important, but not directly related to the question of the “openness” of the apps and projects. However, we have, right now, some great opportunities to work with and influence some really important projects for public health and well-being, and I believe that it is important that we consider the questions I’ve raised about openness before endorsing, installing or using any of the apps that are being created.


0 – or “cell”, if you’re in North America.

1 – yes, they chose the same acronym. Yes, it is confusing.

2 – or, I suppose, “centralized”, depending on your geography.

3 – “Near Field Communication” – the same capability used when you do contactless payment with your phone or credit/debit card.

4 – how many apps do you still have on your phone that you’ve not even opened for 3 months? Yup, me too.

Post-Covid, post-open?

We are inventive, we are used to turning technologies to good.

The world of lockdown to which we’re becoming habituated at the moment has produced some amazing upsides. The number of people volunteering, the resurgence of local community initiatives, the selfless dedication of key workers across the world and the recognition of their sacrifice by the general public are among the most visible. As many regular readers of this blog are likely to be aware, there has also been an outpouring of interest and engagement in software- and hardware-related projects to help, from infection-tracking apps to 3D-printing of PPE[0]. Companies have made training and educational materials available for free, and there are attempts around the world to engage with and contribute to the public commonwealth.

Sadly, not all of the news is good. There has been a rise in phishing attacks, and the lack of appropriate or sufficient security in commonly-used apps such as Zoom has become frighteningly evident[1]. There’s an article to write here about the balance between security, usability and cost, but I’m going to save that for another day.

Somewhere in the middle, between the obvious positives and obvious negatives, there are some developments which most of us probably accept as necessary, but which aren’t things that we’d normally welcome. Beyond the obvious restrictions on movement and public gatherings, there are a number of actions which governments, in particular, are taking which have generally negative impacts on human rights and civil liberties, as outlined in this piece by The Guardian. The article lists numerous examples of governments imposing, or considering the imposition of, measures which would normally be quickly attacked by human rights groups, and resisted by most citizens. Despite the headline, which suggests that the article will deal with how difficult these measures will be to remove after the end of the crisis, there is actually little discussion of that, beyond a note that “[w]hether that surveillance is eventually rolled back will depend on public oversight.”

I think that we need to go beyond just “oversight” and start planning now for public action. In the communities in which I live and work, there is a general expectation that the world – software, management, government, data – is becoming more, not less, open. We are in grave danger of losing that openness even once the need for these government measures diminishes. Governments – who will see the wider intelligence-gathering and control opportunities of these changes – will espouse the view that “we need these measures in place in order to be able to react quickly if the same thing happens again”, and, if we’re not careful, public sentiment, bruised and bloodied by the pandemic, will quietly acquiesce, and we will see improvements in human and civil rights rolled back decades, and damaged further by the availability of cheap, mobile, networked technology.

If we believe that openness is a public good, then we need to think about how to counter the arguments which we will hear from governments, and be ready to be vocal – not just with counter-arguments, but with counter-proposals. This pandemic is unlike either of the World Wars of the 20th Century, when a clear ending was marked, and there was the opportunity (sadly denied to many citizens of the former USSR) to regain civil liberties and roll back the restrictions of the war years. Nor is it even like the aftermath of 9/11, the event which has impacted the intelligence and security landscape of the past two decades, where there is (was?) at least a set of (posited) human foes to target. In the case of the Covid-19 pandemic, the “enemy” is amorphous, and will be around for decades to come. The measures to combat it – and its successors – will only be slowly reduced, and some will not be.

We need to fight against those measures which are unnecessary, and we need to find alternatives – transparent, public alternatives – to measures which may have some positive effects, but whose overall impact on society and human rights is clearly negative. In an era where big data is becoming pervasive, and the tools to mine it tractable, we need to provide international mechanisms to share and use that data in ways which do not benefit any single government, bloc, or section of society. We are inventive, we are used to turning technologies to good. This is the time we need to do it, and do it quickly. We can make a difference by being open, but we need to start now.


0 – Personal Protective Equipment.

1 – although note that the company is reported to be making improvements to at least one area of concern to some – routing of traffic through China.