A User Advisory Council for the CCC

The CCC is currently working to create a User Advisory Council (UAC)

Disclaimer: the views expressed in this article (and this blog) do not necessarily reflect those of any of the organisations or companies mentioned, including my employer (Red Hat) or the Confidential Computing Consortium.

The Confidential Computing Consortium was officially formed in October 2019, nearly a year and a half ago now. Despite not setting out to be a high membership organisation, nor going out of its way to recruit members, the CCC has, at the time of writing, 9 Premier members (of which Red Hat, my employer, is one), 22 General members, and 3 Associate members. You can find a list of each here, and a brief analysis I did of their business interests a few weeks ago in this article: Review of CCC members by business interests.

The CCC has two major committees (beyond the Governing Board):

  • Technical Advisory Council (TAC) – this coordinates all technical areas in which the CCC is involved. It recommends whether software projects should be accepted into the CCC (no hardware projects have been introduced so far, though it’s possible they might be), coordinates activities like special interest groups (we expect one on Attestation to start very soon), encourages work across projects, manages conversations with other technical bodies, and produces material such as the technical white paper listed here.
  • Outreach Committee – when we started the CCC, we decided against going with the title “Marketing Committee”, as we didn’t think it represented the work we hoped this committee would be doing, and this was a good decision. Though there are activities which might fall under this heading, the work of the Outreach Committee is much wider, including analyst and press relations, creation of other materials, community outreach, cross-project discussions, encouraging community discussions, event planning, webinar series and beyond.

These two committees have served the CCC well, but now that it’s fairly well established, and has a fairly broad industry membership of hardware manufacturers, CSPs, service providers and ISVs (see my other article), we decided that there was one set of interested parties who were not well-represented, and which the current organisational structure did not do a sufficient job of encouraging to get involved: end-users.

It’s all very well the industry doing amazing innovation, coming up with astonishingly well-designed, easy to integrate, security-optimised hardware-software systems for confidential computing if nobody wants to use them. Don’t get me wrong: we know from many conversations with organisations across multiple sectors that users absolutely want to be able to make use of TEEs and confidential computing. That is not the same, however, as understanding their use cases in detail and ensuring that we – the members of the CCC, who are focussed mainly on creating services and software – actually provide what users need. These users are across many sectors – finance, government, healthcare, pharmaceutical, Edge, to name but a few – and their use cases and requirements are going to be different.

This is why the CCC is currently working to create a User Advisory Council (UAC). The details are being worked out at the moment, but the idea is that potential and existing users of confidential computing technologies should have a forum in which they can connect with the leaders in the space (which hopefully describes the CCC members), share their use cases, find out more about the projects which are part of the CCC, and even take a close look at those projects most relevant to them and their needs. This sort of engagement isn’t likely, on the whole, to require attendance at lots of meetings, or to involve frequent input into the sorts of discussions which the TAC and the Outreach Committee typically consider, and the general feeling is that as we (the CCC) are aiming to serve these users, we shouldn’t be asking them to pay for the privilege (!) of talking to us. The intention, then, is to set a low bar for involvement in the UAC, and for there to be no membership fee required. That’s not to stop UAC members from joining the CCC as members if they wish – it would be a great outcome if some felt that they were so keen to become more involved that membership was appropriate – but there should be no expectation of that level of commitment.

I should be clear that the plans for the UAC are not complete yet, and some of the above may change. Nor should you consider this a formal announcement – I’m writing this article because I think it’s interesting, and because I believe that this is a vital next step in how those involved with confidential computing engage with the broader world, not because I represent the CCC in this context. But there’s always a danger that “cool” new technologies develop into something which fits only the fundamentally imaginary needs of technologists (and I’ll put my hand up and say that I’m one of those), rather than the actual needs of businesses and organisations which are struggling to operate around difficult issues in the real world. The User Advisory Council, if it works as we hope, should allow the techies (me, again) to hear from people and organisations about what they want our technologies to do, and to allow the CCC to steer its efforts in these directions.

Managing by exception

What I want, as a human, is interesting opportunities to apply my expertise

I’ve visited a family member this week (see why in last week’s article), in a way which is allowed under the UK Covid-19 lockdown rules as an “exceptional circumstance”. When governments and civil authorities are managing what their citizens are allowed to do, many jurisdictions (including the UK, where I live) follow the general principle that “everything which is not forbidden is allowed”. This becomes complicated when you’re putting in (hopefully short-term) restrictions on civil liberties such as disallowing general movement to visit family and friends, but in the general case, it makes a lot of sense. It allows for the principle of “management by exception”: rather than taking the approach of checking that every journey is allowed, you look out for disallowed journeys (taking an unnecessary trip to a castle in the north of England, for instance) and (hopefully) punish those who have undertaken them.

What astonishes me about the world of IT – and security in particular – is how often we take the opposite approach. We record every single log result and transfer it across the network just in case there’s a problem. We have humans in the chain for every software build, checking that the correct versions of containers have been used. When what we should be doing is involving humans – the expensive parts of the chain – only when they’re needed, and only sending lots of results across the network – the expensive part of that chain – when the system which is generating the logs is under attack, or has registered a fault.

That’s not to say that we shouldn’t be recording information, but that we should be intelligent about how we use it: which means that we should be automating. Automation allows us to manage – that is, apply the expensive operations in a chain – only when it is relevant. Having a list of allowed container images, and then asking the developer why she has chosen a non-standard option, is so, so much cheaper for the organisation, not to mention more interesting for the container expert, than monitoring every single build. Allowing the system generating logs to increase the amount of information it sends when it realises it’s under attack – or sending it a command to do so when a possible problem is noticed remotely – is more efficient than the alternative.
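
To make this concrete, here’s a minimal sketch in Rust of what such an allow-list check might look like. The image names and the escalation message are invented for illustration – a real system would feed exceptions into a review queue rather than print them – but the shape is the point: allowed cases are handled silently, and only the exceptions consume a human’s attention.

    use std::collections::HashSet;

    /// Returns the subset of images which need human review: everything
    /// on the allow-list passes silently; only exceptions are escalated.
    fn exceptions<'a>(allowed: &HashSet<&'a str>, used: &[&'a str]) -> Vec<&'a str> {
        used.iter()
            .copied()
            .filter(|image| !allowed.contains(image))
            .collect()
    }

    fn main() {
        let allowed: HashSet<&str> =
            ["registry.example.com/base:1.2", "registry.example.com/builder:3.1"]
                .iter()
                .copied()
                .collect();

        let build_manifest = [
            "registry.example.com/base:1.2",
            "docker.io/someone/unvetted:latest", // the non-standard choice
        ];

        // Only the exception reaches a human; everything else is automated away.
        for image in exceptions(&allowed, &build_manifest) {
            println!("ESCALATE: non-standard image {} needs sign-off", image);
        }
    }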

The other thing I’m not saying is that we should just ignore information that’s generated in normal cases, where operation is “nominal”. The growing opportunity to apply AI/ML techniques to this data – allowing us to recognise what is outside normal operation, and to become more sensitive to when we need to apply those expensive components in a system – makes a lot of sense. Sometimes statistical sampling is required, where we can’t expect all of the data to be provided to those systems (in the remote logging case, for instance), or distributed systems with remote agents need to be designed.

What I want, as a human, is interesting opportunities to apply my expertise, where I can make a difference, rather than routine problems (if you have routine problems, you have broader, more concerning issues) which don’t test me, and which don’t make a broader difference to how the systems and processes I’m involved with run. That won’t happen unless I can be part of an organisation where management by exception is the norm.

One final thing that I should be clear about is that I’m also not talking about an approach where “everything which isn’t explicitly allowed is disallowed” – that doesn’t sound like a great approach for security (I may not be a huge fan of the term zero-trust, but I’m not that opposed to it). It’s the results of the decisions that we care about, on the whole, and where we can manage it, we just have to automate, given the amount of information that’s becoming available. Even worse than not managing by exception is doing nothing with the data at all!

It doesn’t happen often, but let’s realise that, on this occasion, we have something to learn from our governments, and manage by exception.

Saving one life

Scratching the surface of the technologies which led to the saving of a life

When a loved one calls you from the bathroom at 3.30 in the morning, and you find them collapsed, unconscious on the floor, what does technology do for you? I’ve had the opportunity to consider this over the past few days after a family member was rushed to hospital for an emergency operation which, I’m very pleased to say, seems to have been completely successful. Without it, or if it had failed (the success rate is around 50%), they would, quite simply, be dead now.

We are eternally grateful to all those directly involved in my family member’s care, and to the NHS, which means that there are no bills to pay, just continued National Insurance taken as tax from our monthly pay packets, and which we begrudge not one jot. But I thought it might be worth spending a few minutes just scratching the surface of the sets of technologies which led to the saving of a life, from the obvious to the less obvious. I have missed out many: our lives are so complex and interconnected that it is impossible to list everything, and it is only when they are missing that we realise how it all fits together. But I want to say a huge – a HUGE – thank you to anyone who has ever been involved in any of the systems or technologies, and to ask you to remind yourself that even if you are seldom thanked, your work saves lives every day.

The obvious

  • The combined ECG and blood pressure unit attached to the patient which allows the ambulance crew to react quickly enough to save the patient’s life
  • The satellite navigation systems which guided the crew to the patient’s door
  • The landline which allowed the call to the emergency systems
  • The triage and dispatch system which prioritised the sending of the crew
  • The mobile phone system which allowed a remote member of the family to talk to the crew before they transported the patient

The visible (and audible)

  • The anaesthesiology and monitoring equipment which kept the patient alive during the operation
  • The various scanning equipment at the hospital which allowed a diagnosis to be reached in time
  • The sirens and flashing lights on the ambulances
  • The technology behind the training (increasingly delivered at least partly online) for all of those involved in the patient’s care

The invisible

  • The drugs and medicines used in the patient’s care
  • Equipment: batteries for ambulances, scalpels for operating theatres, paper for charts, keyboards, CPUs and motherboards for computers, soles for shoes, soap for hand-washing, paint for hospital corridors, pillows and pillow cases for beds and everything else that allows the healthcare system to keep running
  • The infrastructure to get fuel to the ambulances and into the cars, trains and buses which transported the medical staff to hospital
  • The maintenance schedules and processes for the ambulances
  • The processes behind the ordering of PPE for all involved
  • The supply chains which allowed those involved to access the tea, coffee, milk, sugar and other (hopefully legal) stimulants to keep staff going through the day and night
  • Staff timetabling software for everyone from cleaners to theatre managers, maintenance people to on-call surgeons
  • The music, art, videos, TV shows and other entertainment that kept everyone involved sufficiently energised to function

The infrastructure

  • Clean water
  • Roads
  • Electricity
  • Internet access and routing
  • Safety processes and culture in healthcare
  • … and everything else I’ve neglected to mention.

A final note

I hope it’s clear that I’m aware that the technology is all interconnected, and too complex to allow every piece to be noted: I’m sorry if I missed your piece out. The same, however, goes for the people. I come from a family containing some medical professionals and volunteers, and I’m aware of the sacrifices made not only by them, but also by the people around them who they know and love, and who see less of them than they might like, or who have to work around difficult shift patterns, or who see them come back home after a long shift, worn out or traumatised by what they’ve seen and experienced. The same goes for ancillary workers and those in other, supporting industries.

I thank you all, both those involved directly and those involved in any of the technologies which save lives, those I’ve noted and those I’ve missed. In a few days, I hope to see a member of my family who, without your involvement, I would not ever be seeing again in this life. That is down to you.

Review of CCC members by business interests

Reflections on the different types of member in the Confidential Computing Consortium

This is a brief post looking at the Confidential Computing Consortium (the “CCC”), a Linux Foundation project “to accelerate the adoption of Trusted Execution Environment (TEE) technologies and standards.” First, a triple disclaimer: I’m a co-founder of the Enarx project (a member project of the CCC), an employee of Red Hat (which donated Enarx to the CCC and is a member) and an officer (treasurer) and voting member of two parts of the CCC (the Governing Board and Technical Advisory Council), and this article represents my personal views, not (necessarily) the views of any of the august organisations with which I am associated.

The CCC was founded in October 2019, and is made up of three different membership types: Premier, General and Associate members. Premier members have a representative who gets a vote on various committees, and General members are represented by elected representatives on the Governing Board (with a representative elected for every 10 General Members). Premier members pay a higher subscription than General Members. Associate membership is for government entities, academic and nonprofit organisations. All members are welcome to all meetings, with the exception of “closed” meetings (which are few and far between, and are intended to deal with issues such as hiring or disciplinary matters). At the time of writing, there are 9 Premier members, 20 General members and 3 Associate members. There’s work underway to create an “End-User Council” to allow interested organisations to discuss their requirements, use cases, etc. with members and influence the work of the consortium “from the outside” to some degree.

The rules of the consortium allow only one organisation from a “group of related companies” to appoint a representative (where they are Premier), with similar controls for General members. This means, for instance, that although Red Hat and IBM are both active within the Consortium, only one (Red Hat) has a representative on the Governing Board. If Nvidia’s acquisition of Arm goes ahead, the CCC will need to decide how to manage similar issues there.

What I really wanted to do in this article, however, was to reflect on the different types of member, not by membership type, but by their business(es). I think it’s interesting to look at various types of business, and to reflect on why the CCC and confidential computing in general are likely to be of interest to them. You’ll notice a number of companies – most notably Huawei and IBM (who I’ve added in addition to Red Hat, as they represent a wide range of business interests between them) – appearing in several of the categories. Another couple of disclaimers: I may be misrepresenting both the businesses of the companies represented and also their interests! This is particularly likely for some of the smaller start-up members with whom I’m less familiar. These are my thoughts, and I apologise for errors: please feel free to contact me with suggestions for corrections.

Cloud Service Providers (CSPs)

Cloud Service Providers are presented with two great opportunities by confidential computing: the ability to provide their customers with greater isolation from other customers’ workloads, and the chance for customers to avoid having to trust the CSP itself. The first is the easiest to implement, and the one on which the CSPs have so far concentrated, but I hope we’re going to see more of the latter in the future, as regulators (and customers’ CFOs/auditors) realise that deploying to the cloud does not require a complex trust relationship with the operators of the hosts running the workload.

  • Google
  • IBM
  • Microsoft

The most notable missing player in this list is Amazon, whose AWS offering would seem to make them a good fit for the CCC, but who have not joined up to this point.

Silicon vendors

Silicon vendors produce their own chips (or license their designs to other vendors). They are the ones who are providing the hardware technology to allow TEE-based confidential computing. All of the major silicon vendors are represented in the CCC, though not all of them have existing products in the market. It would be great to see more open source hardware (RISC-V is not represented in the CCC) to increase the trust that users can have in confidential computing, but the move to open source hardware has been slow so far.

  • AMD
  • Arm
  • Huawei
  • IBM
  • Intel
  • Nvidia

Hardware manufacturers

Hardware manufacturers are those who will be putting TEE-enabled silicon in their equipment and providing services based on it. It is not surprising that we have no “commodity” hardware manufacturers represented, but interesting that there are a number of companies who create dedicated or specialist hardware.

  • Cisco
  • Google
  • Huawei
  • IBM
  • Nvidia
  • Western Digital
  • Xilinx

Service companies

In this category I have added companies which provide services of various kinds, rather than acting as ISVs or pure CSPs. We can expect a growing number of service companies to realise the potential of confidential computing as a way of differentiating their products and providing services with interesting new trust models for their customers.

  • Accenture
  • Ant Group
  • Bytedance
  • Facebook
  • Google
  • Huawei
  • IBM
  • Microsoft
  • Red Hat
  • Swisscom

ISVs

There are a number of ISVs (Independent Software Vendors) who are members of the CCC, and this heading is in some ways a “catch-all” for members who don’t necessarily fit cleanly under any of the other headings. There is a distinct subset, however, of blockchain-related companies which I’ve separated out below.

What is particularly interesting about the ISVs represented here is that although the CCC is dedicated to providing open source access to TEE-based confidential computing, most of the companies in this category do not provide open source code, or if they do, do so only for a small part of the offering. Membership of the CCC does not in any way require organisations to open source all of their related software, however, so their membership is not problematic, at least from the point of view of the charter. As a dedicated open source fan, however, I’d love to see more commitment to open source from all members.

  • Anjuna
  • Anqlave
  • Bytedance
  • Cosmian
  • Cysec
  • Decentriq
  • Edgeless Systems
  • Fortanix
  • Google
  • Huawei
  • IBM
  • r3
  • Red Hat
  • VMware

Blockchain

As permissioned blockchains gain traction for enterprise use, it is becoming clear that there are some aspects and components of their operation which require strong security and isolation to allow trust to be built into the operating model. Confidential computing can provide many of the capabilities required in these contexts, which is why it is unsurprising to see so many blockchain-related companies represented in the CCC.

  • Appliedblockchain
  • Google
  • IBM
  • iExec
  • Microsoft
  • Phala network
  • r3

5 signs that you may be a Rust programmer

I’m an enthusiastic evangelist. I’m also not a very good Rustacean.

I’m a fairly recent convert to Rust, which I started to learn around the end of April 2020 (when we assumed there would only be the one lockdown, and that Covid-19 would be “over by Christmas” – oh, the youthful folly). But, like many converts, I’m an enthusiastic evangelist. I’m also not a very good Rustacean, truth be told, in that my coding style isn’t great, and I don’t write particularly idiomatic Rust. This is partly, I suspect, because I never really finished learning Rust before diving in and writing quite a lot of code (some of which is coming back to haunt me) and partly because I’m just not that good a programmer.

But I love Rust, and so should you. It’s friendly – well, more friendly than C or C++ – it’s ready for low-level systems tasks – more so than Python – it’s well-structured – more than Perl – and, best of all, it’s completely open source, from the design level up – much more than Java, for instance. Despite my lack of expertise, I noticed a few things which I suspect are common to many Rust enthusiasts and programmers, the first of which was sparked by some exciting recent news.

The word Foundation excites you

For Rust programmers, the word “Foundation” will no longer be associated first and foremost with Isaac Asimov, but with the newly formed Rust Foundation. Microsoft, Huawei, Google, AWS and Mozilla are providing the directors (and presumably most of the initial funding) for the foundation, which will look after all aspects of the language, “heralding Rust’s arrival as an enterprise production-ready technology”, according to the Interim Executive Director, Ashley Williams (on a side note, it’s great to see a woman heading up such a major industry initiative).

The Foundation seems committed to safeguarding the philosophy of Rust and ensuring that everybody has the opportunity to get involved. Rust is, in many ways, a poster-child example of an open source project. Not that it’s perfect (either the language or the community!), but in that there seem to be sufficient enthusiasts who are dedicated to preserving the high-involvement, low-bar approach to community which I think of as core to much of open source. I strongly welcome the move, which I think can only help to promote Rust adoption and maturity over the coming years and months.

You get frustrated by news feed references to Rust (the game)

There’s another computer-related thing out there which goes by the name “Rust”, and it’s a “multi-player only survival video game”. It’s newer than Rust the language (having been announced only in 2013 and released in 2018), but I once came across it while searching for Rust-related swag, and made the mistake of looking it up. The Interwebs being what they are (thanks, Facebook, Google et al.), this means that my news feed is now infected with this alternative Rust beast and I get random updates from their fandom and PR folks. This is low-key annoying, but I’m pretty sure that I’m not alone in the Rust (language) community. I strongly suggest that if you do want to find out more about this upstart in the computing world, you use a privacy-improving (I refuse to say “privacy-preserving”) browser such as DuckDuckGo or even Tor to do your research.

The word “unsafe” makes you recoil in horror

Rust (the language, again) does a really good job of helping you do the Right Thing[TM]. Certainly in terms of memory safety, which is a major concern within C and C++ (not because it’s impossible, but because it’s really hard to get right consistently). Dave Herman wrote a post in 2016 on why safety is such a positive attribute of the Rust language: Safety is Rust’s Fireflower. Safety (memory, type safety) may not be sexy, but it’s something that you become used to, and grateful for, as you write more Rust – particularly if you’re involved in any systems programming, which is where Rust often excels.

Now, Rust doesn’t stop you from doing the Wrong Thing[TM], but it does make you make a conscious decision when you wish to go outside the bounds of safety, by making you use the unsafe keyword. This is good not only for you, as it will (hopefully) make you think really, really carefully about what you’re putting in any code block which uses it, but also for anyone reading your code: it’s a trigger word which makes any half-sane Rustacean shiver at least slightly, sit upright in their chair and think: “hmm, what’s going on here? I need to pay special attention”. If you’re lucky, the person reading your code may be able to think of ways of rewriting it such that it does make use of Rust’s safety features, or at least reduces the amount of unsafe code that gets committed and released.
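
To see what this looks like in practice, here’s the classic illustration (adapted from the split_at_mut example in the Rust documentation) of a small unsafe block wrapped in a safe function: the assertion before the block is what justifies the raw-pointer work inside it, and the SAFETY comment tells the reader why.

    /// Splits a mutable slice in two at `mid`. The caller gets a safe
    /// API; the unsafety is contained and justified inside.
    fn split_at_mut(values: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
        let len = values.len();
        let ptr = values.as_mut_ptr();
        assert!(mid <= len, "mid must not exceed the slice length");

        unsafe {
            // SAFETY: the two ranges are disjoint, and both lie within
            // the original slice, thanks to the assert above.
            (
                std::slice::from_raw_parts_mut(ptr, mid),
                std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
            )
        }
    }

    fn main() {
        let mut v = [1, 2, 3, 4, 5];
        let (front, back) = split_at_mut(&mut v, 2);
        front[0] = 10;
        back[0] = 30;
        println!("{:?}", v); // [10, 2, 30, 4, 5]
    }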

You wonder why there’s no emoji for ?; or {:?} or ::<>

Everybody loves (to hate) the turbofish (::<>), but there are other syntactic constructs that you see regularly in Rust code, in particular {:?} (for string formatting) and ?; (? is a way of propagating errors up the calling stack, and ; ends the line/block, so you often see them together). They’re so common in Rust code that you just learn to parse them as you go, and they’re also so useful that I sometimes wonder why they’ve not made it into normal conversation, at least as emojis. There are probably others, too: what would be your suggestions? (Please, please no answers from Lisp adherents.)
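
For anyone who hasn’t yet developed the reflex, here’s a small, contrived example showing all three constructs in their natural habitat:

    use std::collections::HashMap;
    use std::fs;

    fn main() -> Result<(), std::io::Error> {
        // `?;` – the `?` propagates any error up to main's Result type.
        let contents = fs::read_to_string("Cargo.toml")?;

        // The turbofish (`::<>`) tells collect() what to collect into.
        let words = contents.split_whitespace().collect::<Vec<_>>();

        let mut counts = HashMap::new();
        for word in &words {
            *counts.entry(*word).or_insert(0) += 1;
        }

        // `{:?}` – Debug formatting, for types with no Display form.
        println!("{:?}", counts);
        Ok(())
    }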

Clippy is your friend (and not an animated paperclip)

Clippy, the Microsoft animated paperclip, was a “feature” that Office users learned very quickly to hate, and which has become the starting point for many memes. cargo clippy, on the other hand, is one of those amazing cargo commands that should become part of every Rust programmer’s toolkit. Clippy is a language linter and helps improve your code to make it cleaner, tidier, more legible, more idiomatic and generally less embarrassing when you share it with your colleagues or the rest of the world. Cargo has arguably rehabilitated the name “Clippy”, and although it’s not something I’d choose to name one of my kids, I don’t feel a sense of unease whenever I come across the term on the web anymore.
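
As a quick taste of the sort of thing it picks up, running cargo clippy over something like the following should trigger at least a couple of its standard lints (the exact lint names and messages vary between Clippy versions):

    fn main() {
        let names = vec![String::from("Alice"), String::from("Bob")];
        let verbose = true;

        // Clippy suggests iterating directly (`for name in &names`)
        // rather than indexing by range...
        for i in 0..names.len() {
            // ...and writing `if verbose` rather than comparing to `true`.
            if verbose == true {
                println!("{}", names[i]);
            }
        }
    }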

The importance of hardware End of Life

Security considerations are important when considering End of Life.

Linus Torvalds’ announcement this week that Itanium support is “orphaned” in the Linux kernel means that we shouldn’t expect further support for it, and possibly that support will be dropped entirely in the future. In 2019, floppy disk support was dropped from the Linux kernel. In this article, I want to make the case that security considerations are important when considering End of Life for hardware platforms and components.

Dropping support for hardware which customers aren’t using is understandable if you’re a proprietary company and can decide what platforms and components to concentrate on, but why do so in open source software? Open source enthusiasts are likely to be running old hardware for years – sometimes decades – after anybody has stopped producing it. There’s a vibrant community, in fact, of enthusiasts who enjoy resurrecting old hardware and getting it running (and I mean really old: EDSAC (1947) old), some of whom enjoy getting Linux running on it, and some of whom enjoy running it on Linux – by which I mean emulating the old hardware by running it on Linux hardware. It’s a fascinating set of communities, and if it’s your sort of thing, I encourage you to have a look.

But what about dropping open source software support (which tends to centre around Linux kernel support) for hardware which isn’t ancient, but is no longer manufactured and/or has a small or dwindling user base? One reason you might give would be that the size of the kernel for “normal” users (users of more recent hardware) is impacted by support for old hardware. This would be true if you had to compile the kernel with all options in it, but Linux distributions like Fedora, Ubuntu, Debian and RHEL already pare down the number of supported systems to something which they deem sensible, and it’s not that difficult to compile a kernel which cuts that down even further – my main home system is an AMD box (with AMD graphics card) running a kernel which I’ve compiled without most Intel-specific drivers, for instance.

There are other reasons, though, for dropping support for old hardware, and considering that it has met its End of Life. Here are three of the most important.

Resources

My first point isn’t specifically security related, but is an important consideration: while there are many volunteers (and paid folks!) working on the Linux kernel, we (the community) don’t have an unlimited number of skilled engineers. Many older hardware components and architectures are maintained by teams of dedicated people, and the option exists for communities who rely on older hardware to fund resources to ensure that they keep running, are patched against security holes, etc. Once there ceases to be sufficient funding to keep these types of resources available, however, hardware is likely to become “orphaned”, as in the case of Itanium.

There is also a secondary impact, in that however modularised the kernel is, there is likely to be some requirement for resources and time to coordinate testing, patching, documentation and other tasks associated with kernel modules – work which needs to be performed by people who aren’t associated with that particular hardware. The community is generally very generous with its time and understanding around such issues, but once the resources and time required to keep such components “current” reach a certain level in relation to the amount of use being made of the hardware, it may not make sense to continue.

Security risk to named hardware

People expect the software they run to maintain certain levels of security, and the Linux kernel is no exception. Over the past 5-10 years or so, there’s been a surge in work to improve security for all hardware and platforms which Linux supports. A good example of a feature which is applicable across multiple platforms is Address Space Layout Randomisation (ASLR). The problem here is not only that there may be some such changes which are not applicable to older hardware platforms – meaning that Linux is less secure when running on older hardware – but also that, even when it is possible, the resources required to port the changes, or just to test that they work, may be unavailable. This relates to the point about resources above: even when there’s a core team dedicated to the hardware, they may not include security experts able to port and verify security features.

The problem goes beyond this, however, in that it is not just new security features which are an issue. Over the past week, issues were discovered in the popular sudo tool which ships with most Linux systems, and in libgcrypt, a cryptographic library used by some Linux components. The sudo problem was years old, and the libgcrypt one so new that few distributions had taken the updated version, and neither of them is directly related to the Linux kernel, but we know that bugs – security bugs – exist in the Linux kernel for many years before being discovered and patched. The ability to create and test these patches across the range of supported hardware depends, yet again, not just on availability of the hardware to test it on, or enthusiastic volunteers with general expertise in the platform, but on security experts willing, able and with the time to do the work.

Security risks to other hardware – and beyond

There is a final – and possibly surprising – point: there may sometimes be occasions when continuing support for old hardware has a negative impact on security for other hardware, even if resources are available to test and implement changes. In order to make improvements to certain features and functionality in the kernel, there is sometimes a need for significant architectural changes. The best-known example (though not necessarily directly security-related) is the Big Kernel Lock, or BKL, an architectural feature of the Linux kernel until version 2.6.39 in 2011, which had been introduced to aid concurrency management, but ended up having significant negative impacts on performance.

In some cases, older hardware may be unable to accept such changes, or, even worse, maintaining support for older hardware may impose such constraints on architectural changes – or require such baroque and complex work-arounds – that it is in the best interests of the broader security of the kernel to drop support. Luckily, the Linux kernel’s modular design means that such cases should be few and far between, but they do need to be taken into consideration.

Conclusion

Some of the arguments I’ve made above apply not only to hardware, but to software as well: people often keep wanting to run software well past its expected support life. The difference with software is that it is often possible to emulate the hardware or software environment on which it is expected to run, often via virtual machines (VMs). Maintaining these environments is a challenge in itself, but may actually offer a viable alternative to trying to keep old hardware running.

End of Life is an important consideration for hardware and software, and, much as we may enjoy nursing old hardware along, it doesn’t make sense to delay the inevitable – End of Life – beyond a certain point. When that point is will depend on many things, but security considerations should be included.

Acting (and coding) your age

With seniority comes perks, but it also comes with responsibilities.

I dropped a post on LinkedIn a few days ago:

I’m now 50 years old and writing the most complex code in my career (for Enarx) in a language (Rust) that I only started learning 9 months ago and I’ve just finished the first draft of a book (for Wiley). Not sure what’s going on (and I wouldn’t have believed you if you’d told me this 25 years ago). #codingtips #writing #security #confidentialcomputing #rustlang

I’ve never received such attention. Lots of comments, lots of “likes” and other reactions, lots of people wanting to connect. It was supposed to be a throw-away comment, and I certainly had no intention either to boast or elicit sympathy: I am genuinely surprised by all of the facts mentioned – including my age, given that I feel that I’m somewhere between 23 and 31 (both primes, of course).

I remember in my mid- to late-twenties thinking “this business stuff is pretty simple: why don’t the oldies move aside and let talented youngsters[1] take over, or at least provide them some inspired advice?” Even at the time I realised that this was a little naive, and that there is something to be said for breadth of experience and decades of acquired knowledge, but I’m pretty certain that this set of questions has been asked by pretty much every generation since Ogg looked at the failings in his elders’ flint spear-head knapping technique and later got into a huff when his mum wouldn’t let him lead the mammoth hunt that afternoon.

Why expertise matters

Sadly (for young people), there really are benefits associated with praxis (actually doing things), even if you’ve absorbed all of the theory (and you haven’t, which is one of the things you learn with age). Of course, there’s also the Dunning-Kruger effect, a cognitive bias (Trust you? I can’t trust myself.) which leads the inexperienced to overestimate their own ability and experts to underestimate theirs.

Given this, there are some interesting and bizarre myths around about software/coding being a “young man’s game”. Leaving aside the glaring gender bias in that statement[2], this is rather odd. I know some extremely talented over-40 and over-50 software engineers, and I’m sure that you can think of quite a few if you try. There are probably a few factors at play here:

  • the lionisation of the trope of young (mainly white) “start-up in the garage” coders turning their company into a “unicorn”;
  • the (over-)association of programming with mathematical ability, where a certain set of mathematicians are considered to have done their best work in their twenties;
  • the relative scarcity (particularly in organisations which aren’t tech-specific) of “individual contributor” career tracks, where it’s possible to rise in seniority (and pay) without managing other people;
  • a possible tendency (which I’m positing without much evidence) for a sizeable proportion of senior software folks to take a broader view of the discipline and to move into architectural roles which are required by the industry but are difficult to perform without a good grounding in engineering basics.

In my case, I moved away from writing software maybe 15 years ago, and honestly never thought I’d do any serious coding again, only to discover a gap in the project I’m working on (Enarx) which nobody else had the time to fill, but which I felt merited some attention. That, and a continuous desire to learn new things, which had led me to starting to learn Rust, brought me to some serious programming, which I’ve really enjoyed.

We need old coders: people who have been around the block a few times, have made the mistakes and learned from them. People who can look at competing technologies and make reasoned decisions about which is the best fit for a project, rather than just choosing the newest and “coolest”[3].

Why old people should step aside

Having got all of the above out of my system, I’m now going to put forward an extremely important counter-argument. First, some context. I volunteer for the East of England Ambulance Service Trust as a Community First Responder, a role where I attend patients in (possible) emergency situations and work with ambulance staff, paramedics, etc. I’ve become very interested in some of the theory around patient safety, which, it turns out, is currently being strongly influenced by lessons learned over the past few decades from transport safety, particularly aviation safety[5].

I need to do more study around this topic, as there are some really interesting lessons that can be applied to our sector (in fact, some are already being learned from our sector, particularly in how DevOps/WebOps teams respond to incidents), but there are two points that have really hit home for me this week, and which are relevant to the point at hand. They are specifically discussed with relation to high-intensity, stressful situations, but I think there’s broader applicability.

1. With experience comes expectation

While experience is enormously useful – bringing insights and knowledge that others may not have, or will find difficult to synthesise – it can also lead you down paths which are incorrect. If you’ve seen the same thing 99 times, you’re likely to assume that the 100th will be the same: bringing in other voices, including less experienced ones, allows other views to be considered, giving a better chance that the correct conclusion will be reached. You increase diversity of opinion and allow alternatives to be brought into the mix. The less experienced team members may be wrong, but from time to time you’ll be wrong, and everyone will benefit from this. By allowing other people a voice, you’re also setting an example that speaking up and offering alternative views is not only acceptable, but valued. You and the team get to learn from each other, whether you’re wrong or you’re right, and you get to discuss with others how you came to your conclusions, and welcome their probing and questions around how you got there.

2. Sometimes you need to step aside to apply yourself elsewhere

Perhaps equally important is that sometimes, tempting as it may be to get your hands dirty and apply your expertise to a particular problem (particularly one which is possibly trivial to you), there are times when it’s best to step aside and let someone less experienced than you do it. Not only because they need the experience themselves, but also because your skills may be better applied at a systems level or dealing with other problems in other contexts (such as funding or resource management). The example sometimes given in healthcare is when a senior clinician arrives on scene at an incident: rather than their taking over the treatment of patients (however skilled the senior clinician may be), their role is to see the larger situation, to prioritise patients for treatment, assess risks to staff on scene, manage transport and the rest. Sometimes they may need to knuckle down and apply their clinical skills directly (much as senior techies may end up coding to meet a demo deadline, for instance), but most of the time, they are best deployed in stepping aside.

Conclusion

With seniority comes perks: getting to do the interesting stuff, taking decisions, having junior folks make the tea and bring the doughnuts in[6]. But it also comes with responsibilities: helping other people learn, seeing the bigger picture, giving less experienced team members the chance to make mistakes, removing barriers imposed by organisational hierarchy and getting the first round in at the pub[7]. Look back at what you were thinking at the beginning of your career, and give your successors (because they will be your successors) the chances that you were so keen for back then. Show them respect, and you (and your organisation) will benefit.


1 – I think that the “like me” is pretty implicit here, yes?

2 – which, sadly, reflects another bias in the market.

3 – there’s an important point here: many of us older folks love new shiny things just as much as the youngsters, and are aware of the problems of the old approaches and languages – but we’re also aware that there are risks and pain points associated with the new, which need to be taken into account[4].

4 – that really made me sound old, didn’t it?

5 – in large part influenced by the work of Martin Bromiley, a civil aviation pilot whose wife Elaine died in a “routine” operation in 2005 and who has worked (and is working) to help the health care sector transition to a no-blame, “just” culture around patient safety.

6 – this is a joke: if you ever find yourself in an office or team where this is the norm, and hierarchy shows in this sort of way, either get out or change that culture just as soon as you can. It’s toxic.

7 – I’m writing this in the middle of the UK’s second Covid-19 lockdown, and can barely remember what a “pub” or a “round” even is.

7 steps to a (bad) tech demo

Give the demo, look condescending, go home.

I’m currently involved with putting together a demo to show off the amazing progress we’ve made with Enarx recently. I’ve watched – and given – quite a few demos in my time, and I considered writing a guide to presenting a good demo. But then I thought: why? Demos should be about showing how clever we are, not about the audience – that’s pretty clear from most of the ones I’ve seen – so I decided to write a guide to what many audiences would consider a bad demo, in other words, the type that most of us in IT are most practised at. Follow this guide, and you can be pretty certain that you will join the ranks of know-it-all, disdainful techies who are better than their audiences and have no interest in engaging with lower, less intelligent beings such as colleagues, user groups or potential customers. If you’re in marketing, sales, documentation or another function, you can still learn from this guide, and use it as part of your journey as you strive to achieve the arrogant sanctimoniousness which we techies cultivate and for which we are (in)famous. Luckily, many demos already exist which exhibit these characteristics, so you shouldn’t need to look far to find examples.

1. Assume your audience is you

You only need one demo, because everybody who matters will come from the same background as you: ultra-technical. You will, of course, be using a terminal, which means text commands to drive the demo. Don’t pander to lesser mortals by expanding parameters, for instance: why use df --all --human-readable --print-type /home/mbursell/ when df -ahT ~/ is available and will save you space and time? If you really must provide graphical output (sometimes even hard-core techies have to work with GUIs, to their intense annoyance and disappointment), ensure that you’re using non-standard colours and icons. Why not use a smiley-face emoji for the “irretrievable delete” function or a thumbs-up icon for “cancel”? And you get extra points if you use a browser which still supports the <blink> tag: everybody’s favourite from the mid-90s.

Your audience shouldn’t need any context before the demo, either: if they’ve come to hear you speak, all you’re doing is giving them an update on exactly how far you have come – in other words, how clever you are. Find ways to exhibit that, and they’ll be impressed by your expertise and intelligence, which is what demos are for in the first place.

2. Don’t check it works

Your demo will, of course, work perfectly. Every time. Which means that there’s no need to check it just before delivering it. In fact, you might as well make a few last tweaks just beforehand – preferably without saving the “last good state”. Everyone will be amused if anything goes wrong – and if it does, it absolutely won’t be your fault. Here’s a list of useful things/people to blame if anything does happen:

  • the conference wifi (more difficult if you’re presenting virtually)
  • the VPN (no need to specify what VPN, or even if you’re using one)
  • other developers (who you can accuse of making last minute changes)
  • the Cloud Service Provider (again, no need to specify which one, or even if you’re using one)
  • DNS
  • certificate expiry
  • neutrinos
  • the marketing department.

3. Use small fonts and icons

Assume that everyone watching your demo:

  1. has perfect eyesight
  2. has perfect colour perception
  3. is sufficiently close to the projected image to see it (physical demo)
  4. is viewing on a screen as large as yours (online)
  5. is viewing on a screen at equal resolution to yours (online)
  6. has sufficient bandwidth that everything displays quickly and clearly enough for them to be able to see (online)
  7. can read and decipher unfamiliar text, commands, icons, obscure diagrams and dread sigils at least as quickly as you can display – and then hide – them.

4. Don’t explain

As noted above, anybody who is worthy of viewing the demo that you have put together should be immediately able to understand any context that is relevant, including any assumptions that you have made when creating the demo. “Obviously, I’ve created this WebAssembly binary directly from the Rust source file with the release flag, which means that we get good portability and type safety but decent on-the-wire speed and encryption time” is more than enough detail. You should probably have gone with something shorter like: “here’s an ls -al of the .wasm file. It’s cross-platform, safe and quick. Compiled with cargo +nightly build --release --target wasm32-wasi, obviously.” Who needs a long and boring-to-deliver explanation of why you chose WebAssembly to allow users to run their applications on multiple different types of system, and the design and build decisions you made to reduce loading time over the network? No-one you care about, certainly.

5. Be very quick or very slow

This important point is related to several of the ones before. If you’ve been paying attention, you’ll know that there’s no good reason to worry about your audience and whether they’re following along, because if they’re the right type of audience (basically, they’re you), then they will already understand what’s going on. You can therefore speak as fast as you like, and whether you’re speaking their first language or not, they should pick up enough to appreciate the brilliance of the demo (in other words, your brilliance).

There is an alternative, of course, which is to speak really slowly. This should never be because you’re allowing your audience to pay attention and catch up, however, but instead because you’re going into extreme detail about every single aspect of your demo, from the choice of your compilation options (see above) to the font family you use in your (many) terminals. This isn’t boring: it’s about you and your choices, so it’s interesting (to anyone who matters – see above).

6. Don’t record the demo

Your demo will work first time (unless you’re hit by problems caused by someone or something else – see 2 above), so there’s no need to record it, is there? What is more, anyone who is sufficiently interested in you, your project and your demo will attend in real-time, so there’s no need to record it for late-comers or for people to watch later. You’ll have made lots of changes within the next couple of weeks anyway, so what’s the point?

7. Don’t answer questions

Demos are for you to tell people about your project, and about you. They are not excuses for postulants to ask their questions of you. Questions generally fall into three types, none of which are of any interest to you:

  1. Stupid questions which betray how little an audience member understands about your demo. This is their fault, as they’re not clever enough to get what you’re doing. Ignore.
  2. Annoying questions which would be clear if people paid enough attention. These may be questions which are relevant to the demo, but which should be obvious to anyone who has done sufficient research into your work. Why should you clarify your work for lazy people? Ignore.
  3. Dangerous questions which point out possible mistakes or “improvements” to what you’ve done. You know best – you’re not showing a demo to get suggestions, but in order to expose your expertise. Ignore.

Conclusion

You are brilliant. Your demo is brilliant. Anybody who doesn’t see that is at fault, and it’s not your job to make their lives easier. Give the demo, look condescending, go home.

Enarx end-to-end complete!

We now have a fully working end-to-end proof of concept, with no smoke and mirrors.

I’ve written lots about the Enarx project – a completely open source project around deploying workloads to Trusted Execution Environments – in a number of earlier articles on this blog.

I have some very exciting news to announce.

A team effort

Yesterday was a huge day for the Enarx project, in that we now have a fully working end-to-end proof of concept, with no smoke and mirrors (we don’t believe in those). The engineers on the team have been working really hard on getting all of the low-level pieces in place, with support from other members on CI/CD, infrastructure, documentation, community outreach and beyond. I won’t mention everyone, as I don’t want to miss anyone out, and I also don’t have their permission, but it’s been fantastic working with everyone. We’ve been edging closer and closer to having all the main pieces ready to go, and just before Christmas/New Year we got attested AMD SEV Keeps working, with the ability to access information from that attestation within the Keep. This allowed us to move to the final step, which is creating an end-to-end client-server architecture. It is this that we got running yesterday.

I happened to be the lucky person to be able to complete this part of the puzzle, building on work by the rest of the team. I don’t have the low-level expertise that many of the team have, but my background is in client-server and peer-to-peer distributed systems, and after I started learning Rust around March 2020, I decided to see if I could do something useful for the project code base: this is my contribution to the engineering. To give you an idea of what we’ve implemented, let’s look at a simple architectural diagram of an Enarx deployment.

Simple Enarx architectural diagram

Much of the work that’s been going on has been concentrated in the Enarx runtime component, getting WebAssembly working in SGX and SEV Trusted Execution Environments, working on syscall implementations and attestation. There’s also been quite a lot of work on glue – how we transfer information around the system in a standards-compliant way (we’re using CBOR encoding throughout). The pieces that I’ve been putting together have been the Enarx client agent, the Enarx host agent (or Enarx Keep Manager) and two pieces which aren’t visible in this diagram (but are in the more detailed one below): the Enarx Keep Loader and Enarx Wasm Loader (“App loader” in the detailed view).

Detailed Enarx architectural diagram
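
Before looking at the components in detail, a quick aside on the CBOR encoding mentioned above: CBOR (Concise Binary Object Representation, RFC 8949) is a compact, self-describing binary format which fits neatly into Rust’s serde ecosystem. Here’s a minimal sketch of round-tripping a message using the serde and serde_cbor crates – the Message type is a made-up stand-in for illustration, not Enarx’s actual wire format:

    use serde::{Deserialize, Serialize};

    // A hypothetical message type, purely for illustration.
    #[derive(Serialize, Deserialize, Debug, PartialEq)]
    struct Message {
        command: String,
        payload: Vec<u8>,
    }

    fn main() -> Result<(), serde_cbor::Error> {
        let msg = Message {
            command: String::from("create-keep"),
            payload: vec![0x01, 0x02],
        };

        // Encode to CBOR bytes for the wire...
        let bytes = serde_cbor::to_vec(&msg)?;

        // ...and decode on the other side.
        let decoded: Message = serde_cbor::from_slice(&bytes)?;
        assert_eq!(msg, decoded);
        Ok(())
    }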

The components

Let’s look at what these components do, and then explain exactly what we’ve achieved. The name in bold refers to the diagram, and the name in italics relates to the Rust crate (and, where already merged, the GitHub repository) associated with the component.

  • Enarx Client Agent (client) – responsible for talking to the enarx-keepmgr and requesting a Keep. It checks that the Keep is correctly set up and attested and then sends the workload (a WebAssembly package) to the enarx-wasmldr component, using HTTPS with a one-use certificate derived from the attestation process.
  • Enarx Keep Manager (enarx-keepmgr) – creates enarx-keepldr components at the request of the client, proxying communications to them from the client as required (for certain attestation flows, for instance). It is untrusted by the client.
  • Enarx Keep Loader (enarx-keepldr) – there is an enarx-keepldr per Keep, and it performs the loading of components into the Trusted Execution Environment itself. It sits outside the TEE instance, and is therefore untrusted by the client.
  • Enarx App Loader (enarx-wasmldr) – the enarx-wasmldr component resides within the TEE instance, and therefore has confidentiality and integrity protection from the rest of the host. It receives the WebAssembly (Wasm) workload from the client component and may access secret information provisioned into the Keep during the attestation process.

Here’s the post I made to the Enarx chat #development channel yesterday to announce what we managed to achieve:

  1. client -> keepmgr: “create sev keep”
  2. keepmgr launches sev keep via systemd
  3. client -> keepmgr: “perform attestation, include this private key” (note – private key is encrypted from keepmgr)
  4. keepmgr -> keepldr: “attestation + private key”
  5. keepldr creates keep, passes private key to it
  6. wasmldr creates certificate from private key
  7. wasmldr waits for workload
  8. client sends workload over HTTPS to wasmldr
  9. wasmldr accepts workload over HTTPS
  10. wasmldr executes workload

WE HAVE A FULLY WORKING END-TO-END DEMO! Thank you everyone

What does this mean? Well, everything works! The client requests a Keep using an AMD SEV instance, it’s created, attested, listens for an incoming connection over HTTPS, and the client sends the workload, which then executes. The workload was written in Rust and compiled to WebAssembly – it’s a real application, in other words, and not a hand-crafted piece of WebAssembly for the purposes of testing.
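
If you prefer to see the shape of that flow in code, here’s a deliberately simplified sketch of the client’s side. Every type and function below is a hypothetical stand-in – the real protocol lives in the Enarx repositories – but the ordering matches the numbered steps above: request a Keep, drive attestation, and refuse to deliver the workload unless attestation has succeeded.

    struct Keep {
        attested: bool,
    }

    // Steps 1-2: ask the keepmgr to launch a Keep (an AMD SEV instance,
    // in the demo's case).
    fn request_keep() -> Keep {
        Keep { attested: false }
    }

    // Steps 3-6: drive the attestation flow; the private key supplied by
    // the client is never readable by the untrusted keepmgr.
    fn attest(keep: &mut Keep) -> Result<(), String> {
        keep.attested = true;
        Ok(())
    }

    // Steps 7-10: deliver the Wasm workload over HTTPS, using the
    // certificate derived from the attestation process.
    fn send_workload(keep: &Keep, wasm: &[u8]) -> Result<(), String> {
        if !keep.attested {
            return Err(String::from("refusing to send workload to an unattested Keep"));
        }
        println!("sent {} bytes to the Keep for execution", wasm.len());
        Ok(())
    }

    fn main() -> Result<(), String> {
        let mut keep = request_keep();
        attest(&mut keep)?;
        send_workload(&keep, b"\0asm")?;
        Ok(())
    }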

What’s next?

There’s lots left to do, including:

  • merging all of the code into the main repositories (I was working in a separate set to avoid undue impact on other efforts)
  • tidying it to make it more presentable (both what the demo shows and the quality of the code!)
  • adding SGX support – we hope that we’re closing in on this very soon
  • making the various components production-ready (the keepmgr, for instance, doesn’t manage multiple enarx-keepldr components very well yet)
  • defining the wire protocol fully (somewhere other than in my head)
  • documenting everything!

But most of that’s easy: it’s just engineering. 🙂

We’d love you to become involved. If you’re interested, read some of my articles, visit the project home page and repositories, hang out on our chat server or watch some of our videos on YouTube. We really welcome involvement – and not just from engineers, either. Come and have a play!