8 tips on how to do open source (badly)

Generally, a “build successful” or “build failed” message should be sufficient.

A while ago, I published my wildly popular[1] article How not to make a cup of tea. Casting around for something to write this week, it occurred to me that I might write about something that I believe is almost as important as world peace, the forward march of progress and brotherly/sisterly love: open source projects. There are so many guides out there around how to create an open source project that it’s become almost too easy to start a new, successful, community-supported one. I think it’s time to redress the balance, and give you some clues about how not to do it.

Throw it over the wall

You know how it is: you’re a large corporation, you’ve had a team of developers working on a project for several years now, and you’re very happy with it. It’s quite expensive to maintain and improve the code, but luckily, it’s occurred to you that other people might want to use it – or bits of it. And recently, some of your customers have complained that it’s difficult to get improvements and new features, and partners complain that your APIs are obscure, ill-defined and subject to undocumented change.

Then you hit on a brilliant idea: why not open source it and tell them their worries are over? All you need to do is take the existing code, create a GitHub or GitLab[2] project (preferably under a Game of Thrones-themed username that happens to belong to one of your developers), make a public repository, upload all of the code, and put out a press release announcing: a) the availability of your project; and b) what a great open source citizen you are.

People will be falling over themselves to contribute to your project, and you’ll suddenly have hundreds of developers basically working for you for free, providing new features and bug fixes!

Keep a tight grip on the project

There’s a danger, however, when you make your project open source, that other people will think that they have a right to make changes to it. The way it’s supposed to work is that your product manager comes up with a bunch of new features that need implementing, and posts them as issues on the repository. Your lucky new contributors then get to write code to satisfy the new features, you get to test them, and then, if they’re OK, you can accept them into the project! Free development! Sometimes, if you’re lucky, customers or partners (the only two parties of any importance in this process, apart from you) will raise issues on the project repository, which, when appropriately subjected to your standard waterfall development process and vetted by the appropriate product managers, can be accepted as “approved” issues and earmarked for inclusion into the project.

There’s a danger that, as you’ve made the project code open source, some people might see this as an excuse to write irrelevant features and fixes for bugs that none of your customers have noticed (and therefore, you can safely assume, don’t care about). Clearly, in this case, you should close any issues related to such features or fixes, and reject (or just ignore) any related patches.

Worse yet, you’ll sometimes find developers[3] complaining about how you run the project. They may “fork” the project, making their own version. If they do this, beware of setting your legal department on them. There’s a possibility that, as your project is open source, they might be able to argue that they have a right to create a new version. Much better is to set your PR department on them, rubbishing the new project, launching ad hominem[4] attacks on them and showing everybody that you hold the moral high ground.

Embrace diversity (in licensing)

There may be some in your organisation who say that an open source project doesn’t need a licence[5]. Do not listen to their siren song: they are wrong. What your open source project needs is lots of licences! Let your developers choose their favourite for each file they touch, or, even better, let the project manager choose. The OSI maintains a useful list, but consider this just a starting point: why not liberally sprinkle different licences through your project? Diversity, we keep hearing, is good, so why not apply it to your open source project code?

Avoid documentation

Some people suggest that documentation can be useful for open source projects, and they are right. What few of them seem to understand is that their expectations for the type of documentation are likely to be skewed by their previous experience. You, on the other hand, have a wealth of internal project documentation and external product marketing material that you accrued before deciding to make your project open source: this is great news. All you need to do is to create a “docs” folder and copy all of the PDF files there. Don’t forget to update them whenever you do a new product version!

Avoid tooling

All you need is code (and docs – see above). Your internal developers have carefully constructed and maintained build environments, and they should therefore have no problems building and testing any parts of the project. Much of this tooling, being internal, could be considered proprietary, and the details must therefore be kept confidential. Any truly useful contributors will be able to work out everything they need for themselves, and shouldn’t need any help, so providing any information about how actually to build the code in the repository is basically redundant: don’t bother.

An alternative, for more “expert” organisations, is to provide build environments which allow contributors to submit batch builds to see if they compile or not (whilst avoiding giving them access to the tooling itself, obviously). While this can work, beware providing too much in the way of output for the developer/tester, as this might expose confidential information. Generally, a “build successful” or “build failed” message should be sufficient.
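If you’re feeling generous, you can even automate this. Here’s a hypothetical sketch – assuming, purely for illustration, a cargo-based project – of a build gate which runs the real build whilst withholding anything a contributor might actually learn from:

  use std::process::{Command, Stdio};

  fn main() {
      // Run the real build, discarding all of that dangerously
      // informative compiler output.
      let status = Command::new("cargo")
          .arg("build")
          .stdout(Stdio::null()) // no confidential warnings for you
          .stderr(Stdio::null()) // and certainly no error details
          .status()
          .expect("failed to launch build");

      // Tell the contributor precisely as much as they need to know.
      if status.success() {
          println!("build successful");
      } else {
          println!("build failed");
      }
  }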

Avoid diagrams

Despite what some people think, diagrams are dangerous. They can give away too much information about your underlying assumptions for the project (and, therefore, the product you’re selling which is based on it), and serious developers should be able to divine all they need from the 1,500 source files that you’ve deposited in the repository anyway.

A few “marketiture” diagrams from earlier iterations of the project may be acceptable, but only if they are somewhat outdated and don’t provide any real insight into the existing structure of the code.

Keep quiet

Sometimes, contributors – or those hoping to become contributors, should you smile upon their requests – will ask questions. In the old days, these questions tended to be sent to email lists[6], where they could be safely ignored (unless they were from an important customer). More recently, there are other channels that developers expect to use to contact members of the project team, such as issues or chat.

It’s important that you remember that you have no responsibility or duty to these external contributors: they are supposed to be helping you. What’s more, your internal developers will be too busy writing code to answer the sort of uninformed queries that are likely to be raised (and as for so-called “vulnerability disclosures”, you can just fix those in your internal version of the product, or at least reassure your customers that you have). Given that most open source projects will come with an issue database, and possibly even a chat channel, what should you do?

The answer is simple: fall back to email. Insist that the only channel which is guaranteed attention[7] is email. Don’t make the mistake of failing to create an email address to which people can send queries: contributors are much more likely to forget that they’re expecting an answer to an email if they get a generic auto-response (“Thanks for your email: a member of the team should get back to you shortly”) than if they receive a bounce message. Oh, and close any issues that people create without your permission for “failing to follow project process”[8].

Post huge commits

Nobody[9] wants to have to keep track of lots of tiny changes to code (or, worse, documentation – see above), or have contributors picking holes in it. There’s a useful way to avoid much of this, however, which is to train your developers only to post large commits to the open source project. You need to ensure that your internal developers understand that code should only be posted to the external repository when the project team (or, more specifically, the product team) deems it ready. Don’t be tempted to use the open source repository as your version control system: you should have perfectly good processes internally, and, with a bit of automation, you can set them up to copy batches of updates to the external repository on a regular basis[10].
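With a little imagination, even the copying can be automated. Here’s a hypothetical sketch of such a sync job (the remote name, branch and commit message are all invented), to be run every month or two:

  use std::process::Command;

  // Run a git command, failing loudly if it doesn't succeed.
  fn git(args: &[&str]) {
      let status = Command::new("git")
          .args(args)
          .status()
          .expect("git not found");
      assert!(status.success(), "git {:?} failed", args);
  }

  fn main() {
      // Sweep a month or two of internal changes into one enormous,
      // conveniently review-proof commit on the public repository.
      git(&["add", "-A"]);
      git(&["commit", "-m", "Sync internal changes"]);
      git(&["push", "public", "main"]);
  }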


1 – well, lots of you read it, so I’m assuming you like it.

2 – other public repositories may be available, but you won’t have heard of them, so why should you care?

3 – the canonical term for such people is “whingers”: they are invariably “experts”. According to them (and their 20 years of security experience, etc., etc.).

4 – or ad mulierem – please don’t be sexist in your attacks.

5 – or license, depending on your spelling choice.

6 – where they existed – a wise organisation could carefully avoid creating them.

7 – “attention” can include a “delete all” filter.

8 – you don’t actually need to define what the process is anywhere, obviously.

9 – in your product organisation, at least.

10 – note that “regular” does not equate to “frequent”. Aim for a cadence of once every month or two.

Open source projects, embargoes and NDAs

Vendors may only disclose to project members under an NDA.

This article is a companion piece to one that I wrote soon after the advent of Meltdown and Spectre, Meltdown and Spectre: thinking about embargoes and disclosures, and another which I wrote more recently, Security disclosure or vulnerability management?. I urge you to read those first, as they provide background and context to this article which I don’t plan to reiterate in full. The focus of this article is to encourage open source projects to consider a specific aspect of the security disclosure/vulnerability management process: people.

A note about me and the background for this article. I’m employed by Red Hat – a commercial vendor of open source software – and I should make the standard disclosure that the views expressed in this article don’t necessarily reflect those of Red Hat (though they may!). The genesis of this article was a conversation I had as part of my membership of the Confidential Computing Consortium, of which Enarx (a project of which I’m a co-founder) is a member. During a conversation with other members of the Technical Advisory Board, we were discussing the importance of having a vulnerability management process (or whatever you wish to call it!), and a colleague from another organisation raised a really interesting point: what about NDAs?

What’s an NDA?

Let’s start with the question: “what’s an NDA”? “NDA” stands for Non-Disclosure Agreement, and it’s a legal document signed by one or more individuals or organisations agreeing not to disclose confidential material[1]. NDAs have a bad reputation in the wider world, as they’re sometimes used as “gagging orders” within legal settlements by powerful people or organisations in cases of, for instance, harassment. They are, however, not all bad! In fact, within the technology world, at least, they’re a very important tool for business.

Let’s say that company A is a software vendor, and creates software that works with hardware (a video card, let’s say) from company B. Company B wants to ensure that when its new product is released, there is software ready to make use of it, so that consumers will want to use it. In order to do that, Company A will need to know the technical specifications of the video card, but Company B wants to ensure that these aren’t leaked to the press – or competitors – before release. NDAs to the rescue! Company A can sign an NDA with Company B, agreeing not to disclose technical details about Company B’s upcoming products and plans to anyone outside Company A until the plans are public. If it’s a “two-way NDA”, and Company B signs it as well, then Company A can tell Company B about their plans for selling and marketing their new software, for instance, to help provide business partnership opportunities. If either of the parties breaks the terms of the agreement, then it’s time for expensive legal proceedings (which nobody wants) and the likelihood that any trust from one to the other will break down, leading to repercussions later.

NDAs are common in this sort of context, and employees of the companies bound by NDAs are expected to abide by them, and also not to “carry over” confidential information to a new employer if they get a new job.

What about open source?

One of the things about open source is that it’s, well, “open”. So you’d think that NDAs would be a bad thing for open source. I’d argue, however, that they have actually served a very useful purpose in the development and rise of the open source ecosystem. The reason for this is that companies are often reluctant to have individuals sign NDAs with them, and individuals are often reluctant to sign NDAs. From the company’s point of view, NDAs are more difficult to enforce against individuals, and from the individual’s point of view, the legal encumbrance of signing an NDA – or multiple NDAs – may seem daunting.

Companies, however, are used to NDAs, and have legal departments to manage them. Back in the 1990s, I remember there being a long lag time between new hardware becoming available, and the open source support for it emerging. What did emerge was typically buggy, often reverse-engineered, and rarely took advantage of the newest features offered by the hardware. From around 2000, this has changed significantly: you can expect the latest hardware capabilities to emerge into, say, the Linux kernel, pretty much as soon as the supporting hardware is available. This is to some extent because there are now a large number of companies who employ engineers to work on open source. The hardware vendors have NDAs with these companies, which means that the engineers can find out ahead of time about new features, and have support ready for them when they launch.

There’s a balancing act here, of course, because sometimes providing code implementations and making them open can give away details of hardware features before the hardware vendors are ready to disclose them. This means that some of the engineering needs to take place “behind closed doors” and only be merged into projects once the hardware vendor is ready to announce their specifications to the world. This isn’t as open as one might like. On balance, though, I’d argue that this approach has provided a net benefit to the open source community, allowing hardware vendors to keep competitive information confidential, whilst allowing the open source community access to the latest and greatest hardware features at, or soon after, release.

NDAs and security

So far, we’ve not mentioned security: where does it come in? Well, say that you’re a hardware vendor, and you discover (or someone discloses to you) that there’s a vulnerability in one of your products. What do you do? Well, you don’t want to announce it to the world until a fix is ready (remember the discussion of embargoes in my previous article), but if there are open source projects which need to be involved in getting a fix out, then you need to be able to talk to them.

The key word in that last sentence is “them”. Most – one could say all – successful open source projects have members employed by a variety of different companies and organisations, as well as some who are not employed by any – or work on the project outside of any employer-related time or agreement. The hardware vendor is extremely keen to ensure that any embargo that is created around the work on this vulnerability is adhered to, and the standard way to enforce (or at least legally encourage) such adherence is through NDAs. This concern is understandable, and justified: there have been some high profile examples of information about vulnerabilities being leaked to the press before fixes were fully ready to be rolled out.

The chances, however, of the vendor having an NDA with all of the organisations and companies represented by engineers on a particular project are likely to be low, and there may also be individuals who are not associated with any organisation or company in the context of this project. Setting up NDAs can be slow and costly – and sometimes impossible – and the very fact of setting up an NDA might point to a vulnerability’s existence! Vendors, then, may make the choice only to disclose to open source project members who are governed by an NDA – whether through their employer or an NDA with them as an individual.

Process and NDAs

What does that mean to open source projects? Well, it means that if you’re working with specific hardware vendors – or even dependent on specific software vendors – then you may need to think very carefully about who is on your security/vulnerability response/management team (let’s refer to this as the “VMT” – “Vulnerability Management Team” – for simplicity). You may need to ensure that you have competent security engineers who both have the relevant expertise and are under an NDA (corporate or individual) with the hardware vendors with whom you work. Beyond that, you need to ensure that they are on your VMT, or can be tasked to your VMT if required.

This may be a tough ask, particularly for small projects. However, the world of enterprise software is currently going through a process of realising quite how important certain projects are to the underpinnings of their operation, and this is a time to consider whether core open source projects with security implications need more support from large organisations. Encouraging more involvement from organisations should help deal with this issue – that vendors may not talk to individuals on your project who are not under NDA with them. Of course, getting more contributors can have positive and negative effects in terms of governance and velocity, but you really do need to think about what would happen if your project were affected by a hardware (or upstream software) vulnerability, and were unable to react as you had nobody able to be involved in discussions.

The other thing you need to do is ensure that the process you have in place for reporting of vulnerabilities is flexible enough to manage this issue. There should be at least one person on the VMT to whom reports can be made who is under the relevant NDA(s): remember that for a vendor, even the act of getting in touch with a project may be enough to give away information about possible vulnerabilities. Although having an anonymous VMT may seem attractive – so that you are not giving away too much information about your project’s governance – there can be times when a vendor (or individual) wishes to contact a named individual. If that individual is not part of the VMT, or has no way to trigger the process to deploy an appropriate member of the project to the VMT, then you are in trouble.

I hope that more open source projects will put VMTs and associated processes in place, as I wrote in my previous article. I believe that it’s also important that we consider the commercial and competitive needs of the ecosystem in which we operate, and that means being ready to react within the context of NDAs, embargoes and vulnerabilities. If you are part of an open source project, I urge you to discuss this issue – both internally and also with any vendors whose hardware or software you use.

Note: I would like to thank the members of the Technical Advisory Board of the CCC for prompting the writing of this article, and in particular the member who brought up the issue, which (to my shame) I had not considered in detail before. I hope this treatment of questions arising meets with their approval!


1 – Another standard disclosure: I am not a lawyer! This is my personal description, and should not be used as the basis for any legal advice, etc., etc.

Security disclosure or vulnerability management?

Which do I need for an open source project?

This article is a companion piece to one I wrote soon after the advent of Meltdown and Spectre, Meltdown and Spectre: thinking about embargoes and disclosures, and I urge you to read that first, as it provides background and context to this article which I don’t plan to reiterate in full.

In that previous article, I mentioned that many open source projects have a security disclosure process, and most of the rest of the article was basically a list of decisions and steps that you might find in such a process. There’s another term that you might hear, however, which is a Vulnerability Management Process, or “VMP”. While a security disclosure process can be defined as a type of VMP, there are subtle differences to what these two processes might look like, and what they might mean, or be seen to mean, so I think it’s worth spending a little time examining possible differences before we continue.

I just did a “Google fight”[1] for “security disclosure” vs “vulnerability management”, and the former “won” by a ratio of around 5:3. I suspect that this is largely because security disclosures tend to sound exciting and are more “sexy” for headlines than are articles about managing things – even if those things are vulnerabilities.

I’m torn between the two, because both terms highlight important aspects – or reflect different viewpoints – of the same important basic idea: if a bad thing is discovered that involves your product or project, then you need to fix it. I’m going to come at this, as usual for me, from the point of view of open source, so let me give my personal feelings about each.

Security disclosure process

First off, I like the fact that the word “security” is front and centre here. Saying security focusses the mind in ways which “vulnerability” may not, and while a vulnerability may be the thing that we’re addressing, the impact that we’re trying to mitigate is on the security associated with the project or product. The second thing that I like about this phrase is the implication that disclosure is what we are aiming for. Now, this fits well with an open source mindset, but I wonder whether the accent is somewhat different for those who come from a more proprietary background. Where I read “a process to manage telling people about a security problem”, I suspect that others may read “a process to manage the fact that someone has told people about a security problem.” I urge everyone to move to the first point of view, for two reasons:

  1. I believe that security is best done in the open – though we need to find ways to protect people while fixes are being put in place and disseminated;
  2. if we don’t encourage people to come to us – project maintainers, product managers, architects, technical leads – first, in the belief that fixes will be managed as per point 1 above, and also that credit will be given where it’s due, then those who discover vulnerabilities will have little incentive to follow the processes we put in place. If this happens, they are more likely to disclose to the wider world before us, making providing and propagating fixes in a timely fashion much more difficult.

Vulnerability management process

This phrase shares the word “process” with the previous one, and, combined with the word “management”, conveys the importance of working through the issue at hand. It also implies to me, at least, that there is other work to be done around the vulnerability rather than just letting everybody know about it (“disclosure”). This may seem like a bad thing (see above), but on the other hand, acknowledging that vulnerabilities do need managing seems to be a worthwhile thing to signal. What worries me, however, is that managing can be seen to imply “sweeping under the carpet” – in other words, making the problem go away.

Tied with this is something about the word “vulnerability” in this context which holds both negative and positive connotations. The negative is that one would say “it’s just another vulnerability”, underplaying the security aspect of what it represents. The positive is that sometimes, vulnerabilities are not all of the same severity – some aren’t that serious, compared to others – and it’s important to recognise this, and to have a process which allows you to address all severities of problem, part of which process is to rank and probably prioritise them – often known as “triaging”, a term borrowed from the medical world.

A third option?

There’s at least one alternative. Though it scores lower than either “vulnerability management” or “security disclosure” in a Google fight (which, as I mentioned in the footnotes, isn’t exactly a scientific measure), a “vulnerability disclosure” process is another option. Although it doesn’t capture the “security” aspect, it does at least imply disclosure, which I like.

Which do I need?

My main focus for this article – and my passion and background – is open source, so the next question is: “which do I need for an open source project?” The answer, to some degree, is “either” – or “any of them”, if you include the third option. Arguably, as noted above, a “security disclosure process” is a type of vulnerability management process anyway, but I think that in the open source world particularly, the implication of working towards disclosure – towards openness – is important. The open source community is very sensitive to words, and any suggestion of cover-up is unlikely to be welcomed, even if such an implication was entirely unintentional.

One point that I think it’s worth making is that I believe that pretty much any component or library that you are working on may have security implications down the road[2]. This means that there should be some process in place to deal with vulnerabilities or security issues. To be clear: many of these will be discovered by contributors to the project themselves, rather than external researchers or bug-hunters, but that doesn’t mean that such vulnerabilities are a) less important than externally found ones; OR b) less in need of a process for dealing with them.

A good place to look at what questions to start asking is my previous article, but I strongly recommend that every open source project should have some sort of process in place to deal with vulnerabilities or other issues which may have a security impact. I’ve also written a follow-up article with some options about how to deal with different types of disclosure, vulnerability and process, which you can find here: Open source projects, embargoes and NDAs.


1 – try it: Googlefight.com – it’s a fun (if unscientific) method to gauge the relative popularity of two words or terms on the web.

2 – who would ever have thought that font rendering software could lead to a critical security issue?

More Rusty thoughts

I do feel that I’m now actually programming in Rust. And I like it.

I wrote an article a couple of weeks ago – 5 Rust reflections (from Java) – about learning Rust, and specifically, about moving to Rust from Java. I talked about five particular points:

  1. Rust feels familiar
  2. References make sense
  3. Ownership will make sense
  4. Cargo is helpful
  5. The compiler is amazing

I absolutely stand by all of these, but I’ve got a little more to say, because I now feel like a Rustacean[1], in that:

  • I don’t feel like programming in anything else ever again;
  • I’ve moved away from simple incantations.

What do I mean by these two statements? Well, the first is pretty simple: Rust feels like the place to be. It’s well-structured, it’s expressive, it helps you do the right thing[2], it’s got great documentation and tools, and there’s a fantastic community. And, of course, it’s all open source, which is something that I care about deeply.

And the second thing? Well, I decided that in order to learn Rust properly, I should take an existing project that I had originally written in Java and reimplement it in hopefully fairly idiomatic Rust. Sometime in the middle of last week, I started fixing mistakes – and making mistakes – around implementation, rather than around syntax. And I wasn’t just copying text from tutorials or making minor, seemingly random changes to my code based on the compiler output. In other words, I was getting things to compile, understanding why they compiled, and then just making programming mistakes[3].

This is a big step forward. When you start learning a language, it’s easy just to copy and paste text that you’ve seen elsewhere, or fiddle with unfamiliar constructs until they – sort of – work. Using code – or producing code – that you don’t really understand, but seems to work, is sometimes referred to as “using incantations” (from the idea that most magicians in fiction, film and gaming recite collections of magic words which “just work” without really understanding what they’re doing or what the combination of words actually means). Some languages[4] are particularly prone to this sort of approach, but many – most? – people learning a new language will be prone to doing this when they start out, just because they want things to work.

And last night, I was up till 1am implementing a new feature – accepting command-line input – which I really couldn’t get my head round. I’d spent quite a lot of time on it (including looking for, and failing to find, some appropriate incantations), and then asked for some help on a rust-lang channel inhabited by some people I know. A number of people had made some suggestions about what had been going wrong, and one person in particular was enormously helpful in picking apart some of the suggestions so that I understood them better. He explained quite a lot, but finished with “I don’t know the return type of the hash function you’re calling – I think this is a good spot for you to figure this piece out on your own.”

This was just what I needed – and any learner of anything, including programming languages, needs. So when I had to go downstairs at midnight to let the dog out, I decided to stay down and see if I could work things out for myself. And I did. I took the suggestions that people had made, understood what they were doing, tried to divine what they should be doing, worked out how they should be doing it, and then found the right way of making it happen.
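I won’t reproduce my actual code here, but here’s a minimal, hypothetical sketch of the general shape – taking input from the command line and hashing it, using the standard library’s DefaultHasher (the hash function in question may well differ in your case). The answer to the return type question, for this hasher at least, is that finish() gives you a u64:

  use std::collections::hash_map::DefaultHasher;
  use std::env;
  use std::hash::{Hash, Hasher};

  fn main() {
      // Take a value from the command line (or fall back to a default).
      let input = env::args().nth(1).unwrap_or_else(|| "hello".to_string());

      // Hash it with the standard library's default hasher.
      let mut hasher = DefaultHasher::new();
      input.hash(&mut hasher);
      let digest: u64 = hasher.finish(); // finish() returns a u64

      println!("{:?} hashes to {:#018x}", input, digest);
  }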

I’ve still got lots to learn, and I’ll make lots of mistakes still, but I now feel that I’m in a place to find my way through those mistakes (with a little help along the way, probably – thanks to everyone who’s already pointed me in the right direction). But I do feel that I’m now actually programming in Rust. And I like it.


1 – this is what Rust programmers call themselves.

2 – it’s almost impossible to stop people doing the wrong thing entirely, but encouraging people to do the right thing is great. In fact, Rust goes further, and actually makes it difficult to do the wrong thing in many situations. You really have to try quite hard to do bad things in Rust.

3 – I found a particularly egregious off-by-one error in my code, for instance, which had nothing to do with Rust, and everything to do with my not paying enough attention to the program flow.

4 – *cough* Perl *cough*

3 open/closed Covid-19 contact tracing questions

All projects are not created equal.

One of the cheering things about the pandemic crisis in which we find ourselves is the vast up-swell of volunteering that we are seeing across the world. We are seeing this equally across the IT sector, and one of the areas where work is being done is in apps to help track Covid-19. Specifically, there is an interest in Covid-19 contact tracing, or tracking, apps for our mobile[0] phones. These aren’t apps which keep an eye on whether you’ve observed lock-down procedures, but which attempt to work out who has been in contact with whom, and work out from that, once we know that one person is infected with Covid-19, what the likely spread of the virus will be.

There are lots of contact tracing initiatives out there, from PEPP-PT from the European Union to Singapore’s TraceTogether, from the University of Washington’s PACT to MIT’s PACT[1]. Google and Apple are – unprecedentedly – working on an app together. There are lots of ways of comparing these apps and projects, but in today’s article, I want to suggest three measures which can help you consider them from the point of view of “openness”. As regular readers of this blog will know, I’m a big fan of open source – not just for software, but for data, management and the rest – and I believe that there’s also a strong correlation here with civil or human rights. These three measures are not too technical, and can help us get a grip on the likelihood that some of the apps (and associated projects) may impinge on privacy and other issues about which we care. I don’t want the data generated from apps that I download onto my phone to be used now or in the future to curtail my, or other people’s, civil or human rights, for blackmail or even for unapproved commercial gain.

1. Open source

Our first question must be: “is the app open source?” If the answer is “no”, then we have no way to know what is being captured, and therefore how it is being used. If the app is closed source, it could be collecting any data from pretty much any measuring device on our phones, including photo, video, audio, Bluetooth, wifi, temperature, GPS or accelerometer. We can try restricting access to these measurements, but such controls have not always been effective, understanding the impact of turning them off is rarely simple, and people frankly rarely bother to check them anyway. Equally bad is the fact that with closed source, you can’t have any idea of how good the security is, nor any chance to criticise and improve it. This is something about which I’ve written many times, including in my articles Disbelieving the many eyes hypothesis and Trust & choosing open source. Luckily, it seems that the majority of contact tracing apps are open source, but please be careful, and reject any which are not.

2. Centralised or distributed

In order to make sense of all the data that these apps collect, there needs to be a centralised[2] store where it can be processed, right? It’s common sense.

Actually, no. Although managing and processing data in one place can be much easier, there are ways to store data in a distributed manner, and allow the sorts of processing needed for contact tracing to take place. It may be more complex, but it also makes it much, much more difficult for governments, corporations or malicious actors to misuse this information. And we should be clear that this will be what happens if the data is made available. Maybe the best governments and the best corporations will be well-behaved by their standards, but a) those are not necessarily the standards that I or others will endorse and b) what about malicious actors and governments and corporations which are not “the best”?
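To make the distributed option concrete, here’s a minimal sketch – the types and identifiers are invented, and real protocols are considerably more sophisticated – of matching performed on the device itself, so that no central database of everyone’s contacts need ever exist:

  use std::collections::HashSet;

  // Decide locally whether this device has been exposed: compare the
  // identifiers it has observed against a published list of identifiers
  // belonging to users who have tested positive.
  fn exposed(observed: &HashSet<u64>, infected_ids: &[u64]) -> bool {
      infected_ids.iter().any(|id| observed.contains(id))
  }

  fn main() {
      // Identifiers this phone has seen over Bluetooth (invented values).
      let observed: HashSet<u64> = [0x1a2b, 0x3c4d].into_iter().collect();
      // Identifiers published by the health authority.
      let infected = vec![0x3c4d, 0x5e6f];
      println!("exposure detected: {}", exposed(&observed, &infected));
  }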

3. Location or proximity tracking

This might seem like another obvious choice: if you want to be finding out who was in contact with whom, then the way to do it is see who was where, and when. GPS tracking – and associated technologies like wifi access point location tracking – combined with easily available time data, would give the ability to work out who was in a particular place at the same time as other people. This is true, but it also provides enormous opportunities for misuse, particularly when the data is held centrally (see above). An alternative is to use sensors like Bluetooth or NFC[3], to allow phones to collect information about other phones (or devices) with which they have been in contact and when. This is more easily anonymised – or pseudonymised – allowing information to be passed to the owners of those phones, but at the same time more difficult to misuse by governments, corporations and malicious actors.
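As a simple illustration of why proximity identifiers are easier to pseudonymise, consider a sketch like the following, in which a phone broadcasts a value derived from a device secret and a coarse time window rather than anything stable. This is illustrative only: deployed protocols use proper keyed cryptography, not a general-purpose hash.

  use std::collections::hash_map::DefaultHasher;
  use std::hash::{Hash, Hasher};

  // Derive a rotating pseudonymous identifier from a device secret and
  // the current 15-minute window, so that observers can't link beacons
  // back to a stable device identity.
  fn rolling_id(device_secret: u64, epoch_minutes: u64) -> u64 {
      let window = epoch_minutes / 15;
      let mut h = DefaultHasher::new();
      (device_secret, window).hash(&mut h);
      h.finish()
  }

  fn main() {
      let secret = 0xDEAD_BEEF;
      println!("broadcast id this window: {:x}", rolling_id(secret, 27_000_000));
  }

The point is that nothing stable is ever broadcast, so an observer can’t build up a long-term picture of any one device.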

There are other issues to consider, one of which is that these sensors were not designed for this type of use, and we may be sacrificing accuracy if we choose this option. On the other hand, many interactions between people occur indoors, where GPS is much less effective anyway, and these types of technologies may help.

You could argue that this measurement is not about “openness” in itself, but it is a key indicator to whether the information collected can be used in ways which are far from open.

Conclusion

There are many other questions we can ask about Covid-19 contact tracing apps, some of which are related to openness, and some of which are not. These include:

  • Coverage
    • not all demographics have – or use – phones as much as the rest of the population, including the poor, the elderly, and certain religious groups. How effective will such projects be if they have reduced access to these groups?
    • older devices may have less accurate sensors, or not have some of the capabilities required by the apps. What is more, there may be a correlation between use of these older devices with some of the demographics noted above.
    • some people rarely update the apps on their phones, so even if they load an initial version of an app, newer versions, with functionality or security improvements, are likely to be unequally distributed across the set of devices.
  • Removal – how easy will it be to remove the application fully, what are the consequences of not doing so, and how likely are people to do so anyway[4]?
  • Will use of these apps be mandatory or voluntary? If the former, there are serious concerns about civil or human rights, not to mention the problems noted above about coverage.

All of these questions are important, but not directly related to the question of the “openness” of the apps and projects. However, we have, right now, some great opportunities to work with and influence some really important projects for public health and well-being, and I believe that it is important that we consider the questions I’ve raised about openness before endorsing, installing or using any of the apps that are being created.


0 – or “cell”, if you’re in North America.

1 – yes, they chose the same acronym. Yes, it is confusing.

2 – or, I suppose, “centralized”, depending on your geography.

3 – “Near Field Communication” – the same capability used when you do contactless payment with your phone or credit/debit card.

4 – how many apps do you still have on your phone that you’ve not even opened for 3 months? Yup, me too.

Post-Covid, post-open?

We are inventive, we are used to turning technologies to good.

The world of lockdown to which we’re becoming habituated at the moment has produced some amazing upsides. The number of people volunteering, the resurgence of local community initiatives, the selfless dedication of key workers across the world and the recognition of their sacrifice by the general public are among the most visible. As many regular readers of this blog are likely to be aware, there has also been an outpouring of interest and engagement in software- and hardware-related projects to help, from infection-tracking apps to 3D-printing of PPE[0]. Companies have made training and educational materials available for free, and there are attempts around the world to engage and contribute to the public commonwealth.

Sadly, not all of the news is good. There has been a rise in phishing attacks, and the lack of appropriate or sufficient security in commonly-used apps such as Zoom has become frighteningly evident[1]. There’s an article to write here about the balance between security, usability and cost, but I’m going to save that for another day.

Somewhere in the middle, between the obvious positives and obvious negatives, there are some developments which most of us probably accept as necessary, but which aren’t things that we’d normally welcome. Beyond the obvious restrictions on movement and public gatherings, there are a number of actions which governments, in particular, are taking which have generally negative impacts on human rights and civil liberties, as outlined in this piece by The Guardian. The article lists numerous examples of governments imposing, or considering the imposition of, measures which would normally be quickly attacked by human rights groups, and resisted by most citizens. Despite the headline, which suggests that the article will deal with how difficult these measures will be to remove after the end of the crisis, there is actually little discussion, beyond a note that “[w]hether that surveillance is eventually rolled back will depend on public oversight.”

I think that we need to go beyond just “oversight” and start planning now for public action. In the communities in which I live and work, there is a general expectation that the world – software, management, government, data – is becoming more, not less open. We are in grave danger of losing that openness even once the need for these government measures diminishes. Governments – who will see the wider intelligence-gathering and control opportunities of these changes – will espouse the view that “we need these measures in place in order to be able to react quickly if the same thing happens again”, and, if we’re not careful, public sentiment, bruised and bloodied by the pandemic, will quietly acquiesce, and we will see improvements in human and civil rights rolled back decades, and damaged further by the availability of cheap, mobile, networked technology.

If we believe that openness is a public good, then we need to think how to counter the arguments which we will hear from governments, and be ready to be vocal – not just with counter-arguments, but with counter-proposals. This pandemic is unlike either of the World Wars of the 20th Century, when a clear ending was marked, and there was the opportunity (sadly denied to many citizens of the former USSR) to regain civil liberties and roll back the restrictions of the war years. Nor is it even like the aftermath of 9/11, that event which has impacted the intelligence and security landscape of the past two decades, where there is (was?) at least a set of (posited) human foes to target. In the case of the Covid-19 pandemic, the “enemy” is amorphous and will be around for decades to come. The measures to combat it – and its successors – will only be slowly reduced, and some will not be.

We need to fight against those measures which are unnecessary, and we need to find alternatives – transparent, public alternatives – to measures which may have some positive effects, but whose overall impact on society and human rights is clearly negative. In an era where big data is becoming pervasive, and the tools to mine it tractable, we need to provide international mechanisms to share and use that data in ways which do not benefit any single government, bloc, or section of society. We are inventive, we are used to turning technologies to good. This is the time we need to do it, and do it quickly. We can make a difference by being open, but we need to start now.


0 – Personal Protection Equipment.

1 – although note that the company is reported to be making improvements to at least one area of concern to some – routing of traffic through China.

7 tips for kicking off an open source project

It’s not really about project mechanics at all


I’m currently involved – heavily involved – in Enarx, an open source (of course!) project to allow you to run sensitive workloads on untrusted hosts.  I’ve had involvement in various open source projects over the years, but this is the first for which I’m one of the founders.  We’re at the stage now where we’ve got a fair amount of code, quite a lot of documentation, a logo and (important!) stickers.  The project should hopefully be included in a Linux Foundation group – the Confidential Computing Consortium – so things are going very well indeed.  We’re also at the stage where I thought it might be useful to reflect on some of the things we did to get things going.  To be clear, Enarx is a particular type of project: one that we believe has commercial and enterprise applications.  It’s also not mature yet, and we’ll have hurdles and challenges along the way.  What’s more, the route we’ve taken won’t be right for all projects, but hopefully there’s enough here to give a few pointers to other projects, or people considering starting one up.

The first thing I’d say is that there’s lots of help to be had out there.  I’d start with Opensource.com, where you’ll find lots of guidance.  I’d then follow up by saying that however much of it you follow, you’ll still get things wrong.  Anyway, here’s my list of things to consider.

1. Aim for critical mass

I’m very lucky to work at the amazing Red Hat, where everything we do is open source, and where we take open source and community very seriously.  I’ve heard it called a “critical mass” company: in order to get something taken seriously, you need to get enough people interested in it that it’s difficult to ignore. The two co-founders – Nathaniel McCallum and I – are both very enthusiastic about the project, and have spent a lot of time gaining sponsors within the organisation (you know who you are, and we thank you – we also know we haven’t done a good enough job with you on all occasions!), and “selling” it to engineers to get them interested enough that it was difficult to stop.  Some projects just bobble along with one or two contributors, but if you want to attract people and attention, getting a good set of people together who can get momentum going is a must.

2. Create a demo

If you want to get people involved, then a demo is great.  It doesn’t necessarily need to be polished, but it does need to show that what you’re doing is possible, and that you know what you’re doing.  For early demos, you may just be showing command line output: that’s fine, if what you’re providing isn’t a UI product.  Being able to talk through what you’re doing, and convey both your passion and the importance of the project, is a great boon.  People like to be able to see or experience something, and it’s much easier to communicate your enthusiasm if they have something that’s real which expresses that.

3. Choose a licence

Once you have code, and it’s open source, you want other people to be able to contribute.  This may seem like an unimportant step, but selecting an appropriate open source licence[1] will allow other people to contribute on well-understood and defined terms, making it easier for them, and easier for the organisations for which they work to allow them to be involved.

4. Get documentation

You might think that developer documentation is the most important to get out there – otherwise, how will other people get involved and start coding?  I’d disagree, at least to start with.  For a small project, you can probably scale to a few more people just by explaining what the code does, what it should do, and what’s missing.  However, if there’s no documentation available to explain what it’s for, and how it’s going to help people, then why would anyone bother even looking at it?  This doesn’t need to be polished marketing copy, and it doesn’t need to be serious, but it does need to convey to people why they should care.  It’s also going to help you with the first point I mentioned, attaining critical mass, as being able to point to documentation, use cases and the rest will help convince people that you’ve thought through the point of your project.  We’ve used a GitHub wiki as our main documentation hub, and we try to update that with new information as we generate it.  This is an area, to be clear, where we could do better.  But at least we know that.

5. Be visible

People aren’t going to find out about you unless you’re visible.  We were incredibly lucky in that just as we were beginning to get to a level of critical mass, the Confidential Computing Consortium was formed, and we immediately had a platform to increase our exposure.  We have a Twitter account, I publish articles on my blog and at Opensource.com, we’ve been lucky enough to have the chance to publish on Red Hat’s now + Next blog, I’ve done interviews with the press and we speak at conferences wherever and whenever we can.  We’re very lucky to have these opportunities, and it’s clear that not all these approaches are appropriate for all projects, but make use of what you can: the more people that know about you, the more people can contribute.

6. Be welcoming

Let’s assume that people have found out about you: what next?  Well, they’re hopefully going to want to get involved.  If they don’t feel welcome, then any involvement they have will taper off soon.  Yes, you need documentation (and, after a while, technical documentation, no matter what I said above), but you need ways for them to talk to you, and for them to feel that they are valued.  We have Gitter channels (https://gitter.im/enarx/), and our daily stand-ups are open to anyone who wants to join.  Recently, someone opened an issue on our issues database, and during the conversation on that thread, it transpired that our daily stand-up time doesn’t work for them given their timezone, so we’re going to ensure that at least one a week does, and we’ve assured them that we’ll accommodate them.

7. Work with people you like

I really, really enjoy meeting and working with the members of the Enarx project team.  We get on well, we joke, we laugh and we share a common aim: to make Enarx successful.  I’m a firm believer in doing things you enjoy, where possible.  Particularly for the early stages of a project, you need people who are enthusiastic and enjoy working closely together – even if they’re separated by thousands of kilometres[2] geographically.  If they don’t get on, there’s a decent chance that your and their enthusiasm for the project will falter, that the momentum will be lost, and that the project will end up failing.  You won’t always get the chance to choose those with whom you work, but if you can, then choose people you like and get on with.

Conclusion – “people”

I didn’t realise it when I started writing this article, but it’s not really about project mechanics at all: it’s about people.  If you read back, you’ll find the importance of people visible in every tip, even including the one about choosing a licence.  Open source projects aren’t really about code: they’re about people, how they share, how they work together, and how they interact.

I’m certain that your experience of open source projects will vary, and I’d be very surprised if everyone agrees about the top seven things you should do for project success.  Arguably, Enarx isn’t a success yet, and I shouldn’t be giving advice at this stage of our maturity.  But when I think back to all of the open source projects that I can think of which are successful, people feature strongly, and I don’t think that’s a surprise at all.


1 – or “license”, if you’re from the US.

2 – or, in fact, miles.

 


Confidential computing – the new HTTPS?

Security by default hasn’t arrived yet.

Over the past few years, it’s become difficult to find a website which is just “http://…”.  This is because the industry has finally realised that security on the web is “a thing”, and also because it has become easy for both servers and clients to set up and use HTTPS connections.  A similar shift may be on its way in computing across cloud, edge, IoT, blockchain, AI/ML and beyond.  We’ve known for a long time that we should encrypt data at rest (in storage) and in transit (on the network), but encrypting it in use (while processing) has been difficult and expensive.  Confidential computing – providing this type of protection for data and algorithms in use, using hardware capabilities such as Trusted Execution Environments (TEEs) – protects data on hosted systems or in vulnerable environments.

I’ve written several times about TEEs and, of course, the Enarx project of which I’m a co-founder with Nathaniel McCallum (see Enarx for everyone (a quest) and Enarx goes multi-platform for examples).  Enarx uses TEEs, and provides a platform- and language-independent deployment platform to allow you safely to deploy sensitive applications or components (such as micro-services) onto hosts that you don’t trust.  Enarx is, of course, completely open source (we’re using the Apache 2.0 licence, for those with an interest).  Being able to run workloads on hosts that you don’t trust is the promise of confidential computing, which extends normal practice for sensitive data at rest and in transit to data in use:

  • storage: you encrypt your data at rest because you don’t fully trust the underlying storage infrastructure;
  • networking: you encrypt your data in transit because you don’t fully trust the underlying network infrastructure;
  • compute: you encrypt your data in use because you don’t fully trust the underlying compute infrastructure.

I’ve got a lot to say about trust, and the word “fully” in the statements above is important (I actually added it on re-reading what I’d written).  In each case, you have to trust the underlying infrastructure to some degree, whether it’s to deliver your packets or store your blocks, for instance.  In the case of the compute infrastructure, you’re going to have to trust the CPU and associated firmware, just because you can’t really do computing without trusting them (there are techniques such as homomorphic encryption which are beginning to offer some opportunities here, but they’re limited, and the technology is still immature).

Questions sometimes come up about whether you should fully trust CPUs, given some of the security problems that have been found with them and also whether they are fully secure against physical attacks on the host in which they reside.

The answer to both questions is “no”, but this is the best technology we currently have available at scale and at a price point to make it generally deployable.  To address the second question, nobody is pretending that this (or any other technology) is fully secure: what we need to do is consider our threat model and decide whether TEEs (in this case) provide sufficient security for our specific requirements.  In terms of the first question, the model that Enarx adopts is to allow decisions to be made at deployment time as to whether you trust a particular set of CPUs.  So, for example, if vendor Q’s generation R chips are found to contain a vulnerability, it will be easy to say “refuse to deploy my workloads to R-type CPUs from Q, but continue to deploy to S-type, T-type and U-type chips from Q and any CPUs from vendors P, M and N.”
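As a sketch of what such a deployment-time policy decision might look like – using the hypothetical vendors above, and noting that Enarx’s real mechanism may well differ – consider:

  struct Cpu<'a> {
      vendor: &'a str,
      generation: &'a str,
  }

  // Decide at deployment time whether we're willing to trust this CPU.
  fn trusted(cpu: &Cpu) -> bool {
      // Vendor Q's generation R chips have a published vulnerability:
      // refuse those, but keep deploying everywhere else we know about.
      !(cpu.vendor == "Q" && cpu.generation == "R")
  }

  fn main() {
      assert!(!trusted(&Cpu { vendor: "Q", generation: "R" }));
      assert!(trusted(&Cpu { vendor: "Q", generation: "S" }));
      assert!(trusted(&Cpu { vendor: "P", generation: "R" }));
      println!("policy checks passed");
  }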