GET/SET methods for open source projects

Or – how to connect with open source

I’m aware that many of the folks who read my blog already know lots about open source, but I’m also aware that there are many who know little if anything about it. I’m a big, big proponent of open source software (and beyond, such as open hardware), and there are lots of great resources you can find to learn more about it: a very good starting point is Opensource.com. It’s run by a bunch of brilliant people at my current employer, Red Hat, for the broader community (I should add a disclaimer that I’m not only employed by Red Hat, but also a “Correspondent” at Opensource.com – a kind of frequent contributor/Elder Thing), and has articles on pretty much every aspect of open source that you can imagine.

I was thinking about APIs today (they’re in the news this week after a US Supreme Court judgment on an argument between Google and Oracle), and it occurred to me that if I were interested in understanding how to interact with open source at the project level, but didn’t know much about it, then a quick guide might be useful. The same goes if I were involved in an open source project (such as Enarx) which was interested in attracting contributors (particularly techie contributors) who aren’t already knowledgeable about open source. Given that most programmers will understand what GET and SET methods do (one reads data, the other writes data), I thought this might be a useful way to consider engagement[1]. I’ll start with GET, as that’s how you’re likely to be starting off, as well – finding out more about the project – and then move to SET. This is far from an exhaustive list, but I hope that I’ve hit most of the key ways you’re most likely to start getting involved/encourage others to get involved. The order I’ve chosen reflects what I suspect is a fairly typical approach to finding out more about a project, particularly for those who aren’t open source savvy already, but, as they say, YMMV[3].
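For anyone who hasn’t met the metaphor, here’s a minimal sketch in Rust (my language of choice, as regular readers will know) of what GET and SET methods look like – all names invented purely for illustration:

```rust
// GET/SET in miniature: one method reads state, the other writes it.
struct Project {
    contributors: u32,
}

impl Project {
    // GET: read data out of the object.
    fn contributors(&self) -> u32 {
        self.contributors
    }

    // SET: write data back into the object.
    fn set_contributors(&mut self, count: u32) {
        self.contributors = count;
    }
}

fn main() {
    let mut project = Project { contributors: 1 };
    project.set_contributors(project.contributors() + 1);
    println!("contributors: {}", project.contributors());
}
```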

I’ve managed to stop myself using Enarx as the sole source of examples, but have tried to find a variety of projects to give you a taster. Disclaimer: their inclusion here does not mean that I am a user of, or contributor to, the project, nor is it any guarantee of their open source credentials, code quality, up-to-date-ness, project maturity or community health[4].

GET methods

  • Landing page – the first encounter that you may have with a project will probably be its landing page. Some projects go for something basic, others apply more design, but you should be able to use this as the starting point for your adventures around the project. You’d generally hope to find links to various of the other resources listed below from this page. Sigstore
  • Wiki – in many cases, the project will have a wiki. This could be simple, it could be complex. It may allow editing by anyone, or only by a select band of contributors to the project, and its relevance as source-of-truth may be impacted by how up to date it is, but the wiki is usually an excellent place to start. Fedora Project
  • Videos – some projects maintain a set of videos about their project. These may include introductions to the concepts, talking head interviews with team members, conference sessions, demos, HOW-TOs and more. It’s also worth looking for videos put up by contributors to the project, but which aren’t necessarily officially owned by the project. Rust Language
  • Code of Conduct – many projects insist that their project members follow a code of conduct, to reduce harassment, reduce friction and generally make the project a friendlier, more inclusive and more diverse place to be. Linux kernel
  • Binary downloads – as projects get more mature, they may choose to provide pre-compiled binary downloads for users. More technically-inclined users may choose to compile their own from the code base (see below) even before this, but binary downloads can be a quick way to try out a project and see whether it does what you want. Chocolate Doom (a Doom port)
  • Design documentation – without design documentation, it can be very difficult to get really into a project (I’ve written about the importance of architecture diagrams on this blog before). This documentation is likely to include everything from an API definition up to complex use cases and threat models. Kubernetes
  • Code base – you’ve found out all you need to get going: it’s time to look at the code! This may vary from a few lines to many thousands, may include documentation in comments, may include test cases: but if it’s not there, then the project can’t legitimately call itself open source. Rocket Rust web framework[5]
  • Email/chat – most projects like to have a way for contributors to discuss matters asynchronously. The preferred medium varies between projects, but most will choose an email list, a chat server or both. Here’s where to go to get to know other users and contributors, ask questions, celebrate successful compiles, and just hang out. Enarx chat
  • Meet-ups, video conferences, calls, etc. – though physical meetings are tricky for many at the moment (I’m writing as Covid-19 still reduces travel opportunities for many), having ways for community members and contributors to get together synchronously can be really helpful for everybody. Sometimes these are scheduled on a daily, weekly or monthly basis, sometimes they coincide with other, larger meet-ups, sometimes a project gets big enough to have its own meet-ups, and sometimes so big that there are meet-ups of sub-projects or internal interest groups. Linux Security Summit Europe

SET methods

  • Bug reports – for many of us, the first time we contribute anything substantive back to an open source project is when we file a bug report. These types of bug reports – from new users – can be really helpful for projects, as they not only expose bugs which may not already be known to the project, but they also give clues as to how actual users of the project are trying to use the code. If the project already publishes binary downloads (see above), then you don’t even need to have compiled the code to try it and submit a bug report, but bug reports related to compilation and build can also be extremely useful to the project. Sometimes, the mechanism for bug reporting also provides a way to ask more general questions about the project, or to ask for new features. exa (replacement for the ls command)
  • Tests – once you’ve started using the project, another way to get involved (particularly once you start contributing code) can be to design and submit tests for how the project ought to work (there’s a short example of what such a test might look like after this list). This can be a great way to unearth not only your assumptions (and lack of knowledge!) about the project, but also the project’s design assumptions (some of which may well be flawed). Tests are often part of the code repository, but not always. Gnome Shell
  • Wiki – the wiki can be a great way to contribute to the project whether you’re coding or not. Many projects don’t have as much information available as they should do, and that information may often not be aimed at people coming to the project “fresh”. If this is what you’ve done, then you’re in a great position to write material which will help other “newbs” to get into the project faster, as you’ll know what would have helped you if it had been there. Wine (a compatibility layer for running Windows applications on Linux)
  • Code – last, but not least, you can write code. You may take hours, months or years to get to this stage – or may never reach it – but open source software is nothing without its code. If you’ve paid enough attention to the other steps, got involved in the community, understood what the project aims to do, and have the technical expertise (which you may well develop as you go!), then writing code may be the way you want to go. Enarx (again)
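To make the Tests point above a little more concrete, here’s the sort of minimal unit test you might contribute to a Rust project – the function under test is invented for the example:

```rust
// A hypothetical function a project might expose.
fn parse_port(s: &str) -> Option<u16> {
    s.trim().parse().ok()
}

fn main() {
    println!("{:?}", parse_port("8080"));
}

#[cfg(test)]
mod tests {
    use super::*;

    // A new user's assumptions, written down as tests: exactly the
    // sort of contribution that unearths flawed assumptions, whether
    // yours or the project's.
    #[test]
    fn accepts_plain_number() {
        assert_eq!(parse_port("8080"), Some(8080));
    }

    #[test]
    fn rejects_out_of_range() {
        // u16 tops out at 65535, so this should fail to parse.
        assert_eq!(parse_port("70000"), None);
    }
}
```

Run with cargo test, and you’ve made your first code-adjacent contribution without touching the project’s core logic.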

1 – I did consider standard RESTful verbs – GET, PUT, POST and DELETE, but that felt rather contrived[2].

2 – And I don’t like the idea of DELETE in this context!

3 – “Your Mileage May Vary”, meaning, basically, that your experience may be different, and that’s to be expected.

4 – that said, I do use lots of them!

5 – I included this one because I’ve spent far too much of my time looking at this over the past few months…

3 vital traits for an open source leader

The world is not all about you.

I’ve written a few articles on how to do something badly, or how not to do things, as I think they’re a great way of adding a little humour to the process of presenting something. Some examples include:

The last, in particular, was very popular, and ended up causing so much furore on a mailing list that the mailing list had to be deleted. Not my fault, arguably (I wasn’t the one who posted it to the list), but some measure of fame (infamy?) anyway. I considered writing this article in a similar vein, but decided that although humour can work as a great mechanism to get people engaged, it can also sometimes confuse the message, or muddy the waters of what I’m trying to say. I don’t want that: this is too important.

I’m writing this in the midst of a continuing storm around the re-appointment to the board of the Free Software Foundation (FSF) of Richard Stallman. We’re also only a couple of years on from Linus Torvalds deciding to make some changes to his leadership style, and apologising for his past behaviour. Beyond noting those events, I’m not going to relate them to any specific points in this article, though I will note that they have both informed parts of it.

The first thing I should say about these tips is that they’re not specific to open source, but are maybe particularly important to it. Open source, though much more “professional” than it used to be, with more people paid to work on it, is still about voluntary involvement. If people don’t like how a project is being run, they can leave, fork it, or the organisation for which they work may decide to withdraw funding (and/or its employees’ time). This is different to most other modes of engagement in projects. Many open source projects also require maintenance, and lots of it: you don’t just finish it, and then hand it over. In order for it to continue to grow, or even to continue to be safe and relevant to use, it needs to keep running, preferably with a core group of long-term contributors and maintainers. This isn’t unique to open source, but it is key to the model.

What all of the above means is that for an open source project to thrive in the long-term, it needs a community. The broader open source world (community in the larger sense) is moving to models of engagement and representation which more closely model broader society, acknowledging the importance of women, neuro-diverse members, older, younger, disabled members and other under-represented groups (in particular some ethnic groups). It has become clear to most, I believe, that individual projects need to embrace this shift if they are to thrive. What, then, does it mean to be a leader in this environment?

1. Empathise

The world is not all about you. The project (however much it’s “your baby”) isn’t all about you. If you want your project to succeed, you need other people, and if you want them to contribute, and to continue to contribute to your project, you need to think about why they might want to do so, and what actions might cause them to stop. When you do something, say something or write something, think not just about how you feel about it, but about how other people may feel about it.

This is hard. Putting yourself in other people’s shoes can be really, really difficult, the more so when you don’t have much in common with them, or feel that your differences (ethnicity, gender, political outlook, sexuality, nationality, etc.) define your relationship more than your commonalities. Remember, however, that you do share something, in fact, the most important thing in this context, which is a desire to work on this project. Ask others for their viewpoints on tricky problems – no, strike that – just ask others for their viewpoints, as often as possible, particularly when you assume that there’s no need to do so. If you can see things at least slightly from other people’s point of view, you can empathise with them, and even the attempt to do so shows that you’re making an effort, and that helps other people make an effort to empathise, too, and you’re already partway to meeting in the middle.

2. Apologise

You will get things wrong. Others will get things wrong. Apologise. Even if you’re not quite sure what you’ve done wrong. Even if you think you’re in the right. If you’ve hurt someone, whether you meant to or not, apologise. This can be even harder than empathising, because once two (or more) parties have entrenched themselves in positions on a particular topic, if they’re upset or angry, then the impulse to empathise will be significantly reduced. If you can empathise, it will become easier to apologise, because you will be able to see others’ points of view. But even if you can’t see their point of view, at least realise that they have another point of view, even if you don’t agree with it, or think it’s rational. Apologising for upsetting someone is only a start, but it’s an important one.

3. Don’t rely on technical brilliance and vision

You may be the acknowledged expert in the field. You may have written the core of the project. It may be that no-one will ever understand what you have done, and its brilliance, quite like you. Your vision may be a guiding star, bringing onlookers from near and far to gaze on your project.

Tough.

That’s not enough. People may come to your project to bask in the glory of your technical brilliance, or to wrap themselves in the vision you have outlined. Some may even stay. But if you can’t empathise, if you can’t apologise when you upset them, those people will represent only a fraction of the possible community that you could have had. The people who stay may be brilliant and visionary, too, but your project is the weaker for not encouraging (not to mention possibly actually discouraging) broader, more inclusive involvement of those who are not like you, in that they don’t value brilliance and genius sufficiently to overlook the deficits in your leadership. It’s not just that you won’t get people who aren’t like you: you will even lose people who are like you, but are unwilling to accept a leadership style which excludes and alienates.

Conclusion

It’s important, I think, to note that the first two points above require active work. Fostering a friendly environment, encouraging involvement, removing barriers: these are all important, but they’re also (at least in theory) fairly simple, and don’t require the hard choices and emotional investment that empathising and apologising do. Arguably, the third point also requires work, in that, for many, there is an assumption that if your project is technically exciting enough (and, by extension, so is your leadership), then that’s enough: casting away this fallacy can be difficult to do.

Also, I’m aware that there’s something of an irony that I, a white, fairly neuro-typical, educated, middle-aged, Anglo adult male in a long-term heterosexual relationship, am writing about this, because many – too many! – of the leaders in this (as with many other spaces) are very much like me in many of their attributes. And I need to do a better job of following my own advice above. But I can try to model it, and I can shout about how important it is, and I can be an ally to those who want to change, and to those worst affected when that change does not come. I cannot pretend that inertia, a lack of change and a resistance to it, affect me as much as they do others, due to my position of privilege within society (and the communities about which I’ve been writing), but I can (and must) stand up when I can.

There are also times to be quiet and leave space for other voices (despite the fact that even the ability to grant that space is another example of privilege). I invite others to point me at other voices, and if I get enough feedback to do so, I’ll compile an article in the next few weeks designed to point at them from this blog.

In the meantime, one final piece of advice for leaders: be kind.

3 types of application security

There are security applications and there are applications which have security.

I’m indebted to my friend and colleague, Matt Smith, for putting me on the road to this article: he came up with a couple of the underlying principles that led me to what you see below. Thanks, Matt: we’ll have a beer (and maybe some very expensive wine) – one day.

There are security applications and there are applications which have security: specifically, security features or functionality. I’m not saying that there’s not a cross-over, and you’d hope that security applications have security features, but they’re used for different things. I’m going to use open source examples here (though many of them may have commercial implementations), so let’s say: OpenVPN is a security application, whose reason for existence is to provide a security solution, whereas Kubernetes is not a security application, though it does include security features. That gives us two types of application security, but I want to add another distinction. It may sound a little arbitrary, at least from the point of view of the person designing or implementing the application, but I think it’s really important for those who are consuming – that is, buying, deploying or otherwise using – an application. Here are the three:

  1. security application – an application whose main purpose is to solve a security-related problem;
  2. compliance-centric security – security within an application which aims to meet certain defined security requirements;
  3. risk-centric security – security within an application which aims to allow management or mitigation of particular risks.

Types 2 and 3 are subsets of the non-security application (though security applications may well include them!), and may not seem that different, until you look at them from an application user’s point of view. Those differences are what I want to concentrate on in this article.

Compliance-centric

You need a webserver – what do you buy (or deploy)? Well, let’s assume that you work in a regulated industry, or even just that you’ve spent a decent amount of time working on your risk profile, and you decide that you need a webserver which supports TLS 1.3 encryption. This is basically a “tick box” (or “check box” for North American readers): when you consider different products, any which do not meet this requirement are not acceptable options for your needs. They must be compliant – not necessarily to a regulatory regime, but to your specific requirements. There may also be more complex requirements such as FIPS compliance, which can be tested and certified by a third party – this is a good example of a compliance feature which has moved from a regulatory requirement in certain industries and sectors to a well-regarded standard which is accepted in others.

I think of compliance-centric security features as “no” features. If you don’t have them, the response to a sales call is “no”.

Risk-centric

You’re still trying to decide which webserver to buy, and you’ve come down to a few options, all of which meet your compliance requirements: which to choose? Assuming that security is going to be the deciding factor, what security features or functionality do they provide which differentiate them? Well, security is generally about managing risk (I’ve written a lot about this before, see Don’t talk security: talk risk, for example), so you look at features which allow you to manage risks that are relevant to you: this is the list of security-related capabilities which aren’t just compliance. Maybe one product provides HSM integration for cryptographic keys, another One Time Password (OTP) integration, another integrity-protected logging. Each of these allows you to address different types of risk:

  • HSM integration – protect against compromise of private keys
  • OTP integration – protect against compromise of user passwords, re-use of user passwords across sites
  • integrity-protected logging – protect against compromise of logs.

The importance of these features to you will depend on your view of these different risks, and the possible mitigations that you have in place, but they are ways that you can differentiate between the various options. Also, popular risk-centric security features are likely to morph into compliance-centric features as they are commoditised and more products support them: TLS is a good example of this, as is password shadowing in Operating Systems. In a similar way, regulatory regimes (in, for instance, the telecommunications, government, healthcare or banking sectors) help establish minimum risk profiles for applications where customers or consumers of services are rarely in a position to choose their provider based on security capabilities (typically because they’re invisible to them, or the consumers do not have the expertise, or both).

I think of risk-centric security features as “help me” features: if you have them, the response to a sales call is “how will they help me?”.
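One way to picture the difference is as a two-stage selection: compliance-centric features act as a boolean filter, risk-centric features as a weighted comparison. Here’s a sketch in Rust – the products, features and weights are all invented for illustration, and real procurement is, of course, messier:

```rust
// Illustrative only: compliance features are "no" features (a hard
// filter), risk-centric features are "help me" features (a ranking).
struct Product {
    name: &'static str,
    supports_tls13: bool, // compliance-centric: must-have
    hsm_integration: bool, // risk-centric differentiators follow
    otp_integration: bool,
    integrity_protected_logs: bool,
}

// Weights reflect *your* view of the relevant risks and mitigations.
fn risk_score(p: &Product) -> u32 {
    (p.hsm_integration as u32) * 3
        + (p.otp_integration as u32) * 2
        + (p.integrity_protected_logs as u32)
}

fn choose(candidates: &[Product]) -> Option<&Product> {
    candidates
        .iter()
        .filter(|p| p.supports_tls13) // "no" if this box isn't ticked
        .max_by_key(|p| risk_score(p)) // then: "how will it help me?"
}

fn main() {
    let products = [
        Product { name: "webserver-a", supports_tls13: true,
                  hsm_integration: true, otp_integration: false,
                  integrity_protected_logs: false },
        Product { name: "webserver-b", supports_tls13: false,
                  hsm_integration: true, otp_integration: true,
                  integrity_protected_logs: true },
    ];
    if let Some(p) = choose(&products) {
        // webserver-a wins: b scores higher but fails compliance.
        println!("selected: {}", p.name);
    }
}
```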

Why is this important to me?

If you are already a buyer of services, and care about security – you’re a CISO, or part of a procurement department, for instance – you probably consider these differences already. You buy security products to address security problems or meet specific risk profiles (well, assuming they work as advertised…), but you have other applications/products for which you have compliance-related security checks. Other security features are part of how you decide which product to buy.

If you are a developer, architect, product manager or service owner, though, think: what am I providing here? Am I providing a security application, or an application with security features? If the latter, how do I balance where I put my effort: into providing and advertising compliance-centric features, or risk-centric ones? In order to get acceptance in multiple markets, I am going to need to address all of their compliance requirements, but that may not leave me enough resources to provide differentiating features against other products (which may be specific to that industry). On the other hand, if I focus too much on differentiation, I may miss important compliance features, and not get in the door at all. If you want to get to the senior decision makers in a company or organisation and to be seen as a supplier of a key product – one which is not commoditised, but is differentiated from your competitors and really helps organisations manage risk – then you need to think about moving to risk-centric security features. But you really, really need to know what compliance-centric features are expected, as otherwise you’re just not going to get in the door.

A User Advisory Council for the CCC

The CCC is currently working to create a User Advisory Council (UAC)

Disclaimer: the views expressed in this article (and this blog) do not necessarily reflect those of any of the organisations or companies mentioned, including my employer (Red Hat) or the Confidential Computing Consortium.

The Confidential Computing Consortium was officially formed in October 2019, nearly a year and a half ago now. Despite not setting out to be a high membership organisation, nor going out of its way to recruit members, there are, at time of writing, 9 Premier members (of which Red Hat, my employer, is one), 22 General members, and 3 Associate members. You can find a list of each here, and a brief analysis I did of their business interests a few weeks ago in this article: Review of CCC members by business interests.

The CCC has two major committees (beyond the Governing Board):

  • Technical Advisory Council (TAC) – this coordinates all technical areas in which the CCC is involved. It recommends whether software projects should be accepted into the CCC (no hardware projects have been introduced so far, though it’s possible they might be), coordinates activities like special interest groups (we expect one on Attestation to start very soon), encourages work across projects, manages conversations with other technical bodies, and produces material such as the technical white paper listed here.
  • Outreach Committee – when we started the CCC, we decided against going with the title “Marketing Committee”, as we didn’t think it represented the work we hoped this committee would be doing, and this was a good decision. Though there are activities which might fall under this heading, the work of the Outreach Committee is much wider, including analyst and press relations, creation of other materials, community outreach, cross-project discussions, encouraging community discussions, event planning, webinar series and beyond.

These two committees have served the CCC well, but now that it’s fairly well established, and has a fairly broad industry membership of hardware manufacturers, CSPs, service providers and ISVs (see my other article), we decided that there was one set of interested parties who were not well-represented, and which the current organisational structure did not do a sufficient job of encouraging to get involved: end-users.

It’s all very well the industry doing amazing innovation, coming up with astonishingly well-designed, easy to integrate, security-optimised hardware-software systems for confidential computing if nobody wants to use them. Don’t get me wrong: we know from many conversations with organisations across multiple sectors that users absolutely want to be able to make use of TEEs and confidential computing. That is not the same, however, as understanding their use cases in detail and ensuring that we – the members of the CCC, who are focussed mainly on creating services and software – actually provide what users need. These users are across many sectors – finance, government, healthcare, pharmaceutical, Edge, to name but a few – and their use cases and requirements are going to be different.

This is why the CCC is currently working to create a User Advisory Council (UAC). The details are being worked out at the moment, but the idea is that potential and existing users of confidential computing technologies should have a forum in which they can connect with the leaders in the space (which hopefully describes the CCC members), share their use cases, find out more about the projects which are part of the CCC, and even take a close look at those projects most relevant to them and their needs. This sort of engagement isn’t likely, on the whole, to require attendance at lots of meetings, or to have frequent input into the sorts of discussions which the TAC and the Outreach Committee typically consider, and the general feeling is that as we (the CCC) are aiming to serve these users, we shouldn’t be asking them to pay for the privilege (!) of talking to us. The intention, then, is to set a low bar for involvement in the UAC, with no membership fee required. That’s not to stop UAC members from joining the CCC as members if they wish – it would be a great outcome if some felt that they were so keen to become more involved that membership was appropriate – but there should be no expectation of that level of commitment.

I should be clear that the plans for the UAC are not complete yet, and some of the above may change. Nor should you consider this a formal announcement – I’m writing this article because I think it’s interesting, and because I believe that this is a vital next step in how those involved with confidential computing engage with the broader world, not because I represent the CCC in this context. But there’s always a danger that “cool” new technologies develop into something which fits only the fundamentally imaginary needs of technologists (and I’ll put my hand up and say that I’m one of those), rather than the actual needs of businesses and organisations which are struggling to operate around difficult issues in the real world. The User Advisory Council, if it works as we hope, should allow the techies (me, again) to hear from people and organisations about what they want our technologies to do, and to allow the CCC to steer its efforts in these directions.

Managing by exception

What I want, as a human, is interesting opportunities to apply my expertise

I’ve visited a family member this week (see why in last week’s article), in a way which is allowed under the UK Covid-19 lockdown rules as an “exceptional circumstance”. When governments and civil authorities are managing what their citizens are allowed to do, many jurisdictions (including the UK, where I live) follow the general principle of “Everything which is not forbidden is allowed”. This becomes complicated when you’re putting in (hopefully short-term) restrictions on civil liberties such as disallowing general movement to visit family and friends, but in the general case, it makes a lot of sense. It allows for the principle of “management by exception”: rather than taking the approach that you check that every journey is allowed, you look out for disallowed journeys (taking an unnecessary trip to a castle in the north of England, for instance) and (hopefully) punish those who have undertaken them.

What astonishes me about the world of IT – and security in particular – is how often we take the opposite approach. We record every single log result and transfer it across the network just in case there’s a problem. We have humans in the chain for every software build, checking that the correct versions of containers have been used. What we should be doing is involving humans – the expensive parts of the chain – only when they’re needed, and only sending lots of results across the network – the expensive part of that chain – when the system which is generating the logs is under attack, or has registered a fault.

That’s not to say that we shouldn’t be recording information, but that we should be intelligent about how we use it: which means that we should be automating. Automation allows us to manage – that is, apply the expensive operations in a chain – only when it is relevant. Having a list of allowed container images, and then asking the developer why she has chosen a non-standard option, is so, so much cheaper for the organisation, not to mention more interesting for the container expert, than monitoring every single build. Allowing the system generating logs to increase the amount of information sent when it realises it’s under attack – or sending it a command to up what it sends when a possible problem is noticed remotely – is more efficient than the alternative.
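To make the container example concrete, here’s a sketch of management by exception in Rust – the allow-list and registry names are invented, and a real system would hook into your CI pipeline rather than printing to the console:

```rust
use std::collections::HashSet;

// Management by exception: allowed images flow through with no human
// involvement; only the exceptions reach the expensive expert.
fn review_build(image: &str, allow_list: &HashSet<&str>) {
    if allow_list.contains(image) {
        return; // the common case: nobody is bothered
    }
    // The exception: now it's worth a human's time to ask why.
    println!("EXCEPTION: {} is non-standard - ask the developer why", image);
}

fn main() {
    let allow_list: HashSet<&str> = [
        "registry.example.com/base:1.2",
        "registry.example.com/rust:1.51",
    ]
    .iter()
    .copied()
    .collect();

    review_build("registry.example.com/base:1.2", &allow_list); // silent
    review_build("docker.io/random/image:latest", &allow_list); // escalated
}
```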

The other thing I’m not saying is that we should just ignore information that’s generated in normal cases, where operation is “nominal”. The growing opportunities to apply AI/ML techniques here – allowing us to recognise what is outside normal operation, and to become more sensitive to when those expensive components need to be applied – make a lot of sense. Sometimes, statistical sampling is required, where we can’t expect all of the data to be provided to those systems (in the remote logging case, for instance), or distributed systems with remote agents need to be designed.

What I want, as a human, is interesting opportunities to apply my expertise, where I can make a difference, rather than routine problems (if you have routine problems, you have broader, more concerning issues) which don’t test me, and which don’t make a broader difference to how the systems and processes I’m involved with run. That won’t happen unless I can be part of an organisation where management by exception is the norm.

One final thing that I should be clear about is that I’m also not talking about an approach where “everything which isn’t explicitly allowed is disallowed” – that doesn’t sound like a great approach for security (I may not be a huge fan of the term zero-trust, but I’m not that opposed to it). It’s the results of the decisions that we care about, on the whole, and given the amount of information that’s becoming available, we just have to automate where we can. Even worse than not managing by exception is doing nothing with the data at all!

It doesn’t happen often, but let’s realise that, on this occasion, we have something to learn from our governments, and manage by exception.

Saving one life

Scratching the surface of the technologies which led to the saving of a life

When a loved one calls you from the bathroom at 3.30 in the morning, and you find them collapsed, unconscious on the floor, what does technology do for you? I’ve had the opportunity to consider this over the past few days after a family member was rushed to hospital for an emergency operation which, I’m very pleased to say, seems to have been completely successful. Without it, or if it had failed (the success rate is around 50%), they would, quite simply, be dead now.

We are eternally grateful to all those directly involved in my family member’s care, and to the NHS, which means that there are no bills to pay, just continued National Insurance contributions taken as tax from our monthly pay packets, which we begrudge not one jot. But I thought it might be worth spending a few minutes just scratching the surface of the sets of technologies which led to the saving of a life, from the obvious to the less obvious. I have missed out many: our lives are so complex and interconnected that it is impossible to list everything, and it is only when they are missing that we realise how it all fits together. But I want to say a huge – a HUGE – thank you to anyone who has ever been involved in any of the systems or technologies, and to ask you to remind yourself that even if you are seldom thanked, your work saves lives every day.

The obvious

  • The combined ECG and blood pressure unit attached to the patient which allows the ambulance crew to react quickly enough to save the patient’s life
  • The satellite navigation systems which guided the crew to the patient’s door
  • The landline which allowed the call to the emergency systems
  • The triage and dispatch system which prioritised the sending of the crew
  • The mobile phone system which allowed a remote member of the family to talk to the crew before they transported the patient

The visible (and audible)

  • The anaesthesiology and monitoring equipment which kept the patient alive during the operation
  • The various scanning equipment at the hospital which allowed a diagnosis to be reached in time
  • The sirens and flashing lights on the ambulances
  • The technology behind the training (increasingly delivered at least partly online) for all of those involved in the patient’s care

The invisible

  • The drugs and medicines used in the patient’s care
  • Equipment: batteries for ambulances, scalpels for operating theatres, paper for charts, keyboards, CPUs and motherboards for computers, soles for shoes, soap for hand-washing, paint for hospital corridors, pillows and pillow cases for beds and everything else that allows the healthcare system to keep running
  • The infrastructure to get fuel to the ambulances and into the cars, trains and buses which transported the medical staff to hospital
  • The maintenance schedules and processes for the ambulances
  • The processes behind the ordering of PPE for all involved
  • The supply chains which allowed those involved to access the tea, coffee, milk, sugar and other (hopefully legal) stimulants to keep staff going through the day and night
  • Staff timetabling software for everyone from cleaners to theatre managers, maintenance people to on-call surgeons
  • The music, art, videos, TV shows and other entertainment that kept everyone involved sufficiently energised to function

The infrastructure

  • Clean water
  • Roads
  • Electricity
  • Internet access and routing
  • Safety processes and culture in healthcare
  • … and everything else I’ve neglected to mention.

A final note

I hope it’s clear that I’m aware that the technology is all interconnected, and too complex to allow every piece to be noted: I’m sorry if I missed your piece out. The same, however, goes for the people. I come from a family containing some medical professionals and volunteers, and I’m aware of the sacrifices made not only by them, but also by the people around them who they know and love, and who see less of them than they might like, who have to work around difficult shift patterns, or who see them come back home after a long shift, worn out or traumatised by what they’ve seen and experienced. The same goes for ancillary workers and those who work in other, supporting industries.

I thank you all, both those involved directly and those involved in any of the technologies which save lives, those I’ve noted and those I’ve missed. In a few days, I hope to see a member of my family who, without your involvement, I would not ever be seeing again in this life. That is down to you.

Review of CCC members by business interests

Reflections on the different types of member in the Confidential Computing Consortium

This is a brief post looking at the Confidential Computing Consortium (the “CCC”), a Linux Foundation project “to accelerate the adoption of Trusted Execution Environment (TEE) technologies and standards.” First, a triple disclaimer: I’m a co-founder of the Enarx project (a member project of the CCC), an employee of Red Hat (which donated Enarx to the CCC and is a member) and an officer (treasurer) and voting member of two parts of the CCC (the Governing Board and Technical Advisory Council), and this article represents my personal views, not (necessarily) the views of any of the august organisations with which I am associated.

The CCC was founded in October 2019, and is made up of three different membership types: Premier, General and Associate members. Premier members have a representative who gets a vote on various committees, and General members are represented by elected representatives on the Governing Board (with a representative elected for every 10 General members). Premier members pay a higher subscription than General members. Associate membership is for government entities, academic and nonprofit organisations. All members are welcome to all meetings, with the exception of “closed” meetings (which are few and far between, and are intended to deal with issues such as hiring or disciplinary matters). At the time of writing, there are 9 Premier members, 20 General members and 3 Associate members. There’s work underway to create an “End-User Council” to allow interested organisations to discuss their requirements, use cases, etc. with members and influence the work of the consortium “from the outside” to some degree.

The rules of the consortium allow only one organisation from a “group of related companies” to appoint a representative (where they are Premier), with similar controls for General members. This means, for instance, that although Red Hat and IBM are both active within the Consortium, only one (Red Hat) has a representative on the Governing Board. If Nvidia’s acquisition of Arm goes ahead, the CCC will need to decide how to manage similar issues there.

What I really wanted to do in this article, however, was to reflect on the different types of member, not by membership type, but by their business(es). I think it’s interesting to look at various types of business, and to reflect on why the CCC and confidential computing in general are likely to be of interest to them. You’ll notice a number of companies – most notably Huawei and IBM (who I’ve added in addition to Red Hat, as they represent a wide range of business interests between them) – appearing in several of the categories. Another couple of disclaimers: I may be misrepresenting both the businesses of the companies represented and their interests! This is particularly likely for some of the smaller start-up members with whom I’m less familiar. These are my thoughts, and I apologise for errors: please feel free to contact me with suggestions for corrections.

Cloud Service Providers (CSPs)

Cloud Service Providers are presented with two great opportunities by confidential computing: the ability to provide their customers with greater isolation from other customers’ workloads, and the chance for customers to avoid having to trust the CSP itself. The first is the easiest to implement, and the one on which the CSPs have so far concentrated, but I hope we’re going to see more of the latter in the future, as regulators (and customers’ CFOs/auditors) realise that deploying to the cloud does not require a complex trust relationship with the operators of the hosts running the workload.

  • Google
  • IBM
  • Microsoft

The most notable missing player in this list is Amazon, whose AWS offering would seem to make them a good fit for the CCC, but who have not joined up to this point.

Silicon vendors

Silicon vendors produce their own chips (or license their designs to other vendors). They are the ones who are providing the hardware technology to allow TEE-based confidential computing. All of the major silicon vendors are represented in the CCC, though not all of them have existing products in the market. It would be great to see more open source hardware (RISC-V is not represented in the CCC) to increase the trust that users can have in confidential computing, but the move to open source hardware has been slow so far.

  • AMD
  • Arm
  • Huawei
  • IBM
  • Intel
  • Nvidia

Hardware manufacturers

Hardware manufacturers are those who will be putting TEE-enabled silicon in their equipment and providing services based on it. It is not surprising that we have no “commodity” hardware manufacturers represented, but interesting that there are a number of companies who create dedicated or specialist hardware.

  • Cisco
  • Google
  • Huawei
  • IBM
  • Nvidia
  • Western Digital
  • Xilinx

Service companies

In this category I have added companies which provide services of various kinds, rather than acting as ISVs or pure CSPs. We can expect a growing number of service companies to realise the potential of confidential computing as a way of differentiating their products and providing services with interesting new trust models for their customers.

  • Accenture
  • Ant Group
  • Bytedance
  • Facebook
  • Google
  • Huawei
  • IBM
  • Microsoft
  • Red Hat
  • Swisscom

ISVs

There are a number of ISVs (Independent Software Vendors) who are members of the CCC, and this heading is in some ways a “catch-all” for members who don’t necessarily fit cleanly under any of the other headings. There is a distinct subset, however, of blockchain-related companies which I’ve separated out below.

What is particularly interesting about the ISVs represented here is that although the CCC is dedicated to providing open source access to TEE-based confidential computing, most of the companies in this category do not provide open source code, or if they do, do so only for a small part of the offering. Membership of the CCC does not in any way require organisations to open source all of their related software, however, so their membership is not problematic, at least from the point of view of the charter. As a dedicated open source fan, however, I’d love to see more commitment to open source from all members.

  • Anjuna
  • Anqlave
  • Bytedance
  • Cosmian
  • Cysec
  • Decentriq
  • Edgeless Systems
  • Fortanix
  • Google
  • Huawei
  • IBM
  • r3
  • Red Hat
  • VMware

Blockchain

As permissioned blockchains gain traction for enterprise use, it is becoming clear that there are some aspects and components of their operation which require strong security and isolation to allow trust to be built into the operating model. Confidential computing provides ways to provide many of the capabilities required in these contexts, which is why it is unsurprising to see so many blockchain-related companies represented in the CCC.

  • Appliedblockchain
  • Google
  • IBM
  • iExec
  • Microsoft
  • Phala network
  • r3

5 signs that you may be a Rust programmer

I’m an enthusiastic evangelist. I’m also not a very good Rustacean.

I’m a fairly recent convert to Rust, which I started to learn around the end of April 2020 (when we assumed there would only be the one lockdown, and that Covid-19 would be “over by Christmas” – oh, the youthful folly). But, like many converts, I’m an enthusiastic evangelist. I’m also not a very good Rustacean, truth be told, in that my coding style isn’t great, and I don’t write particularly idiomatic Rust. This is partly, I suspect, because I never really finished learning Rust before diving in and writing quite a lot of code (some of which is coming back to haunt me) and partly because I’m just not that good a programmer.

But I love Rust, and so should you. It’s friendly – well, more friendly than C or C++ – it’s ready for low-level systems tasks – more so than Python – it’s well-structured – more than Perl – and, best of all, it’s completely open source, from the design level up – much more than Java, for instance. Despite my lack of expertise, I noticed a few things which I suspect are common to many Rust enthusiasts and programmers, the first of which was sparked by some exciting recent news.

The word Foundation excites you

For Rust programmers, the word “Foundation” will no longer be associated first and foremost with Isaac Asimov, but with the newly formed Rust Foundation. Microsoft, Huawei, Google, AWS and Mozilla are providing the directors (and presumably most of the initial funding) for the foundation, which will look after all aspects of the language, “heralding Rust’s arrival as an enterprise production-ready technology”, according to the Interim Executive Director, Ashley Williams (on a side note, it’s great to see a woman heading up such a major industry initiative).

The Foundation seems committed to safe-guarding the philosophy of Rust and ensuring that everybody has the opportunity to get involved. Rust is, in many ways, a poster-child example of an open source project. Not that it’s perfect (either the language or the community!), but in that there seem to be sufficient enthusiasts who are dedicated to preserving the high-involvement, low-bar approach to community which I think of as core to much of open source. I strongly welcome the move, which I think can only help to promote Rust adoption and maturity over the coming months and years.

You get frustrated by news feed references to Rust (the game)

There’s another computer-related thing out there which goes by the name “Rust”, and it’s a “multi-player only survival video game”. It’s newer than Rust the language (having been announced only in 2013 and released in 2018), but I was once searching for Rust-related swag, came across the game, and made the mistake of searching for it. The Interwebs being what they are (thanks, Facebook, Google et al.), this means that my news feed is now infected with this alternative Rust beast and I now get random updates from their fandom and PR folks. This is low-key annoying, but I’m pretty sure that I’m not alone in the Rust (language) community. I strongly suggest that if you do want to find out more about this upstart in the computing world, you use a privacy-improving (I refuse to say “privacy-preserving”) search engine such as DuckDuckGo, or even the Tor browser, to do your research.

The word “unsafe” makes you recoil in horror

Rust (the language, again) does a really good job of helping you do the Right Thing[TM]. Certainly in terms of memory safety, which is a major concern within C and C++ (not because it’s impossible, but because it’s really hard to get right consistently). Dave Herman wrote a post in 2016 on why safety is such a positive attribute of the Rust language: Safety is Rust’s Fireflower. Safety (memory, type safety) may not be sexy, but it’s something that you become used to, and grateful for, as you write more Rust – particularly if you’re involved in any systems programming, which is where Rust often excels.

Now, Rust doesn’t stop you from doing the Wrong Thing[TM], but it does make you make a conscious decision when you wish to go outside the bounds of safety, by making you use the unsafe keyword. This is good not only for you, as it will (hopefully) make you think really, really carefully about what you’re putting in any code block which uses it, but also for anyone reading your code: it’s a trigger word which makes any half-sane Rustacean shiver at least slightly, sit upright in their chair and think: “hmm, what’s going on here? I need to pay special attention”. If you’re lucky, the person reading your code may be able to think of ways of rewriting it so that it does make use of Rust’s safety features, or at least reduces the amount of unsafe code that gets committed and released.
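As a tiny illustration of the shiver, and of the kind of rewrite a reviewer might suggest, here’s a contrived example – the unsafe block is quite unnecessary, which is rather the point:

```rust
fn main() {
    let v = vec![10, 20, 30];

    // Trigger word: unsafe skips the bounds check, so we are promising
    // the compiler that the index is valid. Every reader now has to
    // pay special attention to convince themselves that it is.
    let second = unsafe { *v.get_unchecked(1) };

    // The reviewer's suggested rewrite: a checked access, no promises.
    let second_safe = v.get(1).copied().unwrap_or(0);

    assert_eq!(second, second_safe);
    println!("{}", second_safe);
}
```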

You wonder why there’s no emoji for ?; or {:?} or ::<>

Everybody loves (to hate) the turbofish (::<>), but there are other syntactic constructs that you see regularly in Rust code, in particular {:?} (for debug formatting) and ?; (? is a way of propagating errors up the calling stack, and ; ends the line/block, so you often see them together). They’re so common in Rust code that you just learn to parse them as you go, and they’re also so useful that I sometimes wonder why they’ve not made it into normal conversation, at least as emojis. There are probably others, too: what would be your suggestions? (Please, please no answers from Lisp adherents.)
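For non-Rustaceans, here are all three constructs in a handful of lines – the function itself is invented for the example:

```rust
use std::num::ParseIntError;

// The turbofish disambiguates the parse target, ? propagates any
// error up the calling stack (with ; closing the line), and {:?}
// debug-prints the result.
fn double(input: &str) -> Result<i64, ParseIntError> {
    let n = input.trim().parse::<i64>()?; // turbofish, then ?;
    Ok(n * 2)
}

fn main() {
    println!("{:?}", double("21"));   // Ok(42)
    println!("{:?}", double("nope")); // Err(ParseIntError { .. })
}
```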

Clippy is your friend (and not an animated paperclip)

Clippy, the Microsoft animated paperclip, was a “feature” that Office users learned very quickly to hate, and which has become the starting point for many memes. cargo clippy, on the other hand, is one of those amazing cargo commands that should become part of every Rust programmer’s toolkit. Clippy is a language linter and helps improve your code to make it cleaner, tidier, more legible, more idiomatic and generally less embarrassing when you share it with your colleagues or the rest of the world. Cargo has arguably rehabilitated the name “Clippy”, and although it’s not something I’d choose to name one of my kids, I don’t feel a sense of unease whenever I come across the term on the web anymore.
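If you haven’t tried it, the invocation is simply cargo clippy from the root of your project, and this is the kind of thing it catches – a deliberately clumsy snippet:

```rust
// Deliberately un-idiomatic: cargo clippy flags this via lints such
// as clippy::needless_bool, suggesting the direct `n % 2 == 0`.
fn is_even(n: u32) -> bool {
    if n % 2 == 0 {
        return true;
    } else {
        return false;
    }
}

fn main() {
    println!("{}", is_even(4));
}
```

The compiler is perfectly happy with the original; it takes Clippy to tell you that your colleagues won’t be.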

The importance of hardware End of Life

Security considerations are important when considering End of Life.

Linus Torvalds’ announcement this week that Itanium support is “orphaned” in the Linux kernel means that we shouldn’t expect further support for it, and possibly that the remaining support will be dropped altogether. In 2019, floppy disk support was dropped from the Linux kernel. In this article, I want to make the case that security considerations are important when considering End of Life for hardware platforms and components.

Dropping support for hardware which customers aren’t using is understandable if you’re a proprietary company and can decide what platforms and components to concentrate on, but why do so in open source software? Open source enthusiasts are likely to be running old hardware for years – sometimes decades – after anybody has stopped producing it. There’s a vibrant community, in fact, of enthusiasts who enjoy resurrecting old hardware and getting it running (and I mean really old: EDSAC (1947) old), some of whom enjoy getting Linux running on it, and some of whom enjoy running it on Linux – by which I mean emulating the old hardware on systems running Linux. It’s a fascinating set of communities, and if it’s your sort of thing, I encourage you to have a look.

But what about dropping open source software support (which tends to centre around Linux kernel support) for hardware which isn’t ancient, but is no longer manufactured and/or has a small or dwindling user base? One reason you might give would be that the size of the kernel for “normal” users (users of more recent hardware) is impacted by support for old hardware. This would be true if you had to compile the kernel with all options in it, but Linux distributions like Fedora, Ubuntu, Debian and RHEL already pare down the number of supported systems to something which they deem sensible, and it’s not that difficult to compile a kernel which cuts that down even further – my main home system is an AMD box (with AMD graphics card) running a kernel which I’ve compiled without most Intel-specific drivers, for instance.

There are other reasons, though, for dropping support for old hardware, and considering that it has met its End of Life. Here are three of the most important.

Resources

My first point isn’t specifically security related, but is an important consideration: while there are many volunteers (and paid folks!) working on the Linux kernel, we (the community) don’t have an unlimited number of skilled engineers. Many older hardware components and architectures are maintained by teams of dedicated people, and the option exists for communities who rely on older hardware to fund resources to ensure that they keep running, are patched against security holes, etc. Once there ceases to be sufficient funding to keep these types of resources available, however, hardware is likely to become “orphaned”, as in the case of Itanium.

There is also a secondary impact, in that however modularised the kernel is, there is likely to be some requirement for resources and time to coordinate testing, patching, documentation and other tasks associated with kernel modules, which needs to be performed by people who aren’t associated with that particular hardware. The community is generally very generous with its time and understanding around such issues, but once the resources and time required to keep such components “current” reaches a certain level in relation to the amount of use being made of the hardware, it may not make sense to continue.

Security risk to named hardware

People expect the software they run to maintain certain levels of security, and the Linux kernel is no exception. Over the past 5-10 years or so, there’s been a surge in work to improve security for all hardware and platforms which Linux supports. A good example of a feature which is applicable across multiple platforms is Address Space Layout Randomisation (ASLR). The problem here is not only that there may be some such changes which are not applicable to older hardware platforms – meaning that Linux is less secure when running on older hardware – but also that, even when it is possible, the resources required to port the changes, or just to test that they work, may be unavailable. This relates to the point about resources above: even when there’s a core team dedicated to the hardware, it may not include security experts able to port and verify security features.

The problem goes beyond this, however, in that it is not just new security features which are an issue. Over the past week, issues were discovered in the popular sudo tool which ships with most Linux systems, and in libgcrypt, a cryptographic library used by some Linux components. The sudo problem was years old, and the libgcrypt one so new that few distributions had taken the updated version; neither of them is directly related to the Linux kernel, but we know that bugs – security bugs – exist in the Linux kernel for many years before being discovered and patched. The ability to create and test these patches across the range of supported hardware depends, yet again, not just on availability of the hardware to test it on, or enthusiastic volunteers with general expertise in the platform, but on security experts willing, able and with the time to do the work.

Security risks to other hardware – and beyond

There is a final – and possibly surprising – point, which is that there may sometimes be occasions when continuing support for old hardware has a negative impact on security for other hardware, even if resources are available to test and implement changes. In order to be able to make improvements to certain features and functionality in the kernel, sometimes there is a need for significant architectural changes. The best-known example (though not necessarily directly security-related) is the Big Kernel Lock, or BKL, an architectural feature of the Linux kernel until 2.6.39 in 2011, which had been introduced to simplify concurrency management, but ended up having significant negative impacts on performance.

In some cases, older hardware may be unable to accept such changes, or, even worse, maintaining support for older hardware may impose such constraints on architectural changes – or require such baroque and complex work-arounds – that it is in the best interests of the broader security of the kernel to drop support. Luckily, the Linux kernel’s modular design means that such cases should be few and far between, but they do need to be taken into consideration.

Conclusion

Some of the arguments I’ve made above apply not only to hardware, but to software as well: people often keep wanting to run software well past its expected support life. The difference with software is that it is often possible to emulate the hardware or software environment on which it is expected to run, often via virtual machines (VMs). Maintaining these environments is a challenge in itself, but may actually offer a viable alternative to trying to keep old hardware running.

End of Life is an important consideration for hardware and software, and, much as we may enjoy nursing old hardware along, it doesn’t make sense to delay the inevitable – End of Life – beyond a certain point. Where that point lies will depend on many things, but security considerations should be included.