Review of CCC members by business interests

Reflections on the different types of member in the Confidential Computing Consortium

This is a brief post looking at the Confidential Computing Consortium (the “CCC”), a Linux Foundation project “to accelerate the adoption of Trusted Execution Environment (TEE) technologies and standards.” First, a triple disclaimer: I’m a co-founder of the Enarx project (a member project of the CCC), an employee of Red Hat (which donated Enarx to the CCC and is a member) and an officer (treasurer) and voting member of two parts of the CCC (the Governing Board and Technical Advisory Committee), and this article represents my personal views, not (necessarily) the views of any of the august organisations with which I am associated.

The CCC was founded in October 2019, and is made up of three different membership types: Premier, General and Associate members. Premier members have a representative who gets a vote on various committees, and General members are represented by elected representatives on the Governing Board (with a representative elected for every 10 General members). Premier members pay a higher subscription than General members. Associate membership is for government entities, academic and nonprofit organisations. All members are welcome to all meetings, with the exception of “closed” meetings (which are few and far between, and are intended to deal with issues such as hiring or disciplinary matters). At the time of writing, there are 9 Premier members, 20 General members and 3 Associate members. There’s work underway to create an “End-User Council” to allow interested organisations to discuss their requirements, use cases, etc. with members and influence the work of the consortium “from the outside” to some degree.

The rules of the consortium allow only one organisation from a “group of related companies” to appoint a representative (where they are Premier), with similar controls for General members. This means, for instance, that although Red Hat and IBM are both active within the Consortium, only one (Red Hat) has a representative on the Governing Board. If Nvidia’s acquisition of Arm goes ahead, the CCC will need to decide how to manage similar issues there.

What I really wanted to do in this article, however, was to reflect on the different types of member, not by membership type, but by their business(es). I think it’s interesting to look at various types of business, and to reflect on why the CCC and confidential computing in general are likely to be of interest to them. You’ll notice a number of companies – most notably Huawei and IBM (who I’ve added in addition to Red Hat, as they represent a wide range of business interests between them) – appearing in several of the categories. Another couple of disclaimers: I may be misrepresenting both the businesses of the companies represented and their interests! This is particularly likely for some of the smaller start-up members with whom I’m less familiar. These are my thoughts, and I apologise for errors: please feel free to contact me with suggestions for corrections.

Cloud Service Providers (CSPs)

Cloud Service Providers are presented with two great opportunities by confidential computing: the ability to provide their customers with greater isolation from other customers’ workloads, and the chance for customers to avoid having to trust the CSPs themselves. The first is the easier to implement, and the one on which the CSPs have so far concentrated, but I hope we’re going to see more of the latter in the future, as regulators (and customers’ CFOs/auditors) realise that deploying to the cloud does not require a complex trust relationship with the operators of the hosts running the workload.

  • Google
  • IBM
  • Microsoft

The most notable missing player in this list is Amazon, whose AWS offering would seem to make them a good fit for the CCC, but who have not joined up to this point.

Silicon vendors

Silicon vendors produce their own chips (or license their designs to other vendors). They are the ones who are providing the hardware technology to allow TEE-based confidential computing. All of the major silicon vendors are represented in the CCC, though not all of them have existing products in the market. It would be great to see more open source hardware (RISC-V is not represented in the CCC) to increase the trust the users can have in confidential computing, but the move to open source hardware has been slow so far.

  • AMD
  • Arm
  • Huawei
  • IBM
  • Intel
  • Nvidia

Hardware manufacturers

Hardware manufacturers are those who will be putting TEE-enabled silicon in their equipment and providing services based on it. It is not surprising that we have no “commodity” hardware manufacturers represented, but interesting that there are a number of companies who create dedicated or specialist hardware.

  • Cisco
  • Google
  • Huawei
  • IBM
  • Nvidia
  • Western Digital
  • Xilinx

Service companies

In this category I have added companies which provide services of various kinds, rather than acting as ISVs or pure CSPs. We can expect a growing number of service companies to realise the potential of confidential computing as a way of differentiating their products and providing services with interesting new trust models for their customers.

  • Accenture
  • Ant Group
  • Bytedance
  • Facebook
  • Google
  • Huawei
  • IBM
  • Microsoft
  • Red Hat
  • Swisscom

ISVs

There are a number of ISVs (Independent Software Vendors) who are members of the CCC, and this heading is in some ways a “catch-all” for members who don’t necessarily fit cleanly under any of the other headings. There is a distinct subset, however, of blockchain-related companies which I’ve separated out below.

What is particularly interesting about the ISVs represented here is that although the CCC is dedicated to providing open source access to TEE-based confidential computing, most of the companies in this category do not provide open source code, or if they do, do so only for a small part of the offering. Membership of the CCC does not in any way require organisations to open source all of their related software, however, so their membership is not problematic, at least from the point of view of the charter. As a dedicated open source fan, however, I’d love to see more commitment to open source from all members.

  • Anjuna
  • Anqlave
  • Bytedance
  • Cosmian
  • Cysec
  • Decentriq
  • Edgeless Systems
  • Fortanix
  • Google
  • Huawei
  • IBM
  • r3
  • Red Hat
  • VMware

Blockchain

As permissioned blockchains gain traction for enterprise use, it is becoming clear that there are some aspects and components of their operation which require strong security and isolation to allow trust to be built into the operating model. Confidential computing provides many of the capabilities required in these contexts, which is why it is unsurprising to see so many blockchain-related companies represented in the CCC.

  • Appliedblockchain
  • Google
  • IBM
  • iExec
  • Microsoft
  • Phala network
  • r3

5 signs that you may be a Rust programmer

I’m an enthusiastic evangelist. I’m also not a very good Rustacean.

I’m a fairly recent convert to Rust, which I started to learn around the end of April 2020 (when we assumed there would only be the one lockdown, and that Covid-19 would be “over by Christmas” – oh, the youthful folly). But, like many converts, I’m an enthusiastic evangelist. I’m also not a very good Rustacean, truth be told, in that my coding style isn’t great, and I don’t write particularly idiomatic Rust. This is partly, I suspect, because I never really finished learning Rust before diving in and writing quite a lot of code (some of which is coming back to haunt me) and partly because I’m just not that good a programmer.

But I love Rust, and so should you. It’s friendly – well, more friendly than C or C++ – it’s ready for low-level systems tasks – more so than Python – it’s well-structured – more than Perl – and, best of all, it’s completely open source, from the design level up – much more than Java, for instance. Despite my lack of expertise, I noticed a few things which I suspect are common to many Rust enthusiasts and programmers, the first of which was sparked by some exciting recent news.

The word Foundation excites you

For Rust programmers, the word “Foundation” will no longer be associated first and foremost with Isaac Asimov, but with the newly formed Rust Foundation. Microsoft, Huawei, Google, AWS and Mozilla are providing the directors (and presumably most of the initial funding) for the foundation, which will look after all aspects of the language, “heralding Rust’s arrival as an enterprise production-ready technology”, according to the Interim Executive Director, Ashley Williams (on a side note, it’s great to see a woman heading up such a major industry initiative).

The Foundation seems committed to safeguarding the philosophy of Rust and ensuring that everybody has the opportunity to get involved. Rust is, in many ways, a poster-child example of an open source project. Not that it’s perfect (either the language or the community!), but in that there seem to be sufficient enthusiasts who are dedicated to preserving the high-involvement, low-bar approach to community which I think of as core to much of open source. I strongly welcome the move, which I think can only help to promote Rust adoption and maturity over the coming months and years.

You get frustrated by news feed references to Rust (the game)

There’s another computer-related thing out there which goes by the name “Rust”, and it’s a “multi-player only survival video game”. It’s newer than Rust the language (having been announced only in 2013 and released in 2018), but I once came across it while searching for Rust-related swag, and made the mistake of looking it up. The Interwebs being what they are (thanks, Facebook, Google et al.), this means that my news feed is now infected with this alternative Rust beast, and I get random updates from their fandom and PR folks. This is low-key annoying, but I’m pretty sure that I’m not alone in the Rust (language) community. I strongly suggest that if you do want to find out more about this upstart in the computing world, you use a privacy-improving (I refuse to say “privacy-preserving”) browser such as DuckDuckGo or even Tor to do your research.

The word “unsafe” makes you recoil in horror

Rust (the language, again) does a really good job of helping you do the Right Thing[TM]. Certainly in terms of memory safety, which is a major concern within C and C++ (not because it’s impossible, but because it’s really hard to get right consistently). Dave Herman wrote a post in 2016 on why safety is such a positive attribute of the Rust language: Safety is Rust’s Fireflower. Safety (memory, type safety) may not be sexy, but it’s something that you become used to, and grateful for, as you write more Rust – particularly if you’re involved in any systems programming, which is where Rust often excels.

Now, Rust doesn’t stop you from doing the Wrong Thing[TM], but it does make you make a conscious decision when you wish to go outside the bounds of safety, by making you use the unsafe keyword. This is good not only for you, as it will (hopefully) make you think really, really carefully about what you’re putting in any code block which uses it, but also for anyone reading your code: it’s a trigger word which makes any half-sane Rustacean shiver at least slightly, sit upright in their chair and think: “hmm, what’s going on here? I need to pay special attention”. If you’re lucky, the person reading your code may be able to think of ways of rewriting it in such a way that it does make use of Rust’s safety features, or at least reduces the amount of unsafe code that gets committed and released.
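
To make this concrete, here’s a minimal sketch (an illustrative snippet of my own, not from any real project) of the sort of opt-in the compiler demands:

fn main() {
    let x = 42u32;
    let p = &x as *const u32; // taking a raw pointer is safe...

    // ...but dereferencing one is not, so the compiler makes us opt in:
    let y = unsafe { *p };
    assert_eq!(y, 42);
}

The unsafe block is small and clearly delimited, which is exactly the point: anyone reviewing the code knows precisely where to focus their attention.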

You wonder why there’s no emoji for ?; or {:?} or ::<>

Everybody loves (to hate) the turbofish (::<>), but there are other syntactic constructs that you see regularly in Rust code, in particular {:?} (for string formatting) and ?; (? is a way of propagating errors up the calling stack, and ; ends the line/block, so you often see them together). They’re so common in Rust code that you just learn to parse them as you go, and they’re also so useful that I sometimes wonder why they’ve not made it into normal conversation, at least as emojis. There are probably others, too: what would be your suggestions? (Please, please no answers from Lisp adherents.)
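
For anyone still learning to parse them, here’s a contrived little example (my own sketch, purely illustrative) showing all three together:

use std::num::ParseIntError;

fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    let port = s.parse::<u16>()?; // turbofish picks the type; ? propagates any error
    Ok(port)
}

fn main() {
    println!("{:?}", parse_port("8080"));       // Ok(8080), printed via the Debug formatter
    println!("{:?}", parse_port("not-a-port")); // Err(...), also via {:?}
}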

Clippy is your friend (and not an animated paperclip)

Clippy, the Microsoft animated paperclip, was a “feature” that Office users learned very quickly to hate, and which has become the starting point for many memes. cargo clippy, on the other hand, is one of those amazing cargo commands that should become part of every Rust programmer’s toolkit. Clippy is a language linter and helps improve your code to make it cleaner, tidier, more legible, more idiomatic and generally less embarrassing when you share it with your colleagues or the rest of the world. Cargo has arguably rehabilitated the name “Clippy”, and although it’s not something I’d choose to name one of my kids, I don’t feel a sense of unease whenever I come across the term on the web anymore.

The importance of hardware End of Life

Security considerations are important when considering End of Life.

Linus Torvalds’ announcement this week that Itanium support is “orphaned” in the Linux kernel means that we shouldn’t expect further development of its support, and that the code may eventually be dropped altogether. In 2019, floppy disk support was similarly orphaned. In this article, I want to make the case that security considerations are important when considering End of Life for hardware platforms and components.

Dropping support for hardware which customers aren’t using is understandable if you’re a proprietary company and can decide what platforms and components to concentrate on, but why do so in open source software? Open source enthusiasts are likely to be running old hardware for years – sometimes decades – after production has ceased. There’s a vibrant community, in fact, of enthusiasts who enjoy resurrecting old hardware and getting it running (and I mean really old: EDSAC (1947) old), some of whom enjoy getting Linux running on it, and some of whom enjoy running it on Linux – by which I mean emulating the old hardware by running it on Linux hardware. It’s a fascinating set of communities, and if it’s your sort of thing, I encourage you to have a look.

But what about dropping open source software support (which tends to centre around Linux kernel support) for hardware which isn’t ancient, but is no longer manufactured and/or has a small or dwindling user base? One reason you might give would be that the size of the kernel for “normal” users (users of more recent hardware) is impacted by support for old hardware. This would be true if you had to compile the kernel with all options in it, but Linux distributions like Fedora, Ubuntu, Debian and RHEL already pare down the number of supported systems to something which they deem sensible, and it’s not that difficult to compile a kernel which cuts that down even further – my main home system is an AMD box (with AMD graphics card) running a kernel which I’ve compiled without most Intel-specific drivers, for instance.

There are other reasons, though, for dropping support for old hardware, and considering that it has met its End of Life. Here are three of the most important.

Resources

My first point isn’t specifically security related, but is an important consideration: while there are many volunteers (and paid folks!) working on the Linux kernel, we (the community) don’t have an unlimited number of skilled engineers. Many older hardware components and architectures are maintained by teams of dedicated people, and the option exists for communities who rely on older hardware to fund resources to ensure that they keep running, are patched against security holes, etc. Once there ceases to be sufficient funding to keep these types of resources available, however, hardware is likely to become “orphaned”, as in the case of Itanium.

There is also a secondary impact, in that however modularised the kernel is, there is likely to be some requirement for resources and time to coordinate testing, patching, documentation and other tasks associated with kernel modules, which needs to be performed by people who aren’t associated with that particular hardware. The community is generally very generous with its time and understanding around such issues, but once the resources and time required to keep such components “current” reaches a certain level in relation to the amount of use being made of the hardware, it may not make sense to continue.

Security risk to named hardware

People expect the software they run to maintain certain levels of security, and the Linux kernel is no exception. Over the past 5-10 years or so, there’s been a surge in work to improve security for all hardware and platforms which Linux supports. A good example of a feature which is applicable across multiple platforms is Address Space Layout Randomisation (ASLR). The problem here is not only that there may be some such changes which are not applicable to older hardware platforms – meaning that Linux is less secure when running on older hardware – but also that, even when it is possible, the resources required to port the changes, or just to test that they work, may be unavailable. This relates to the point about resources above: even when there’s a core team dedicated to the hardware, they may not include security experts able to port and verify security features.

The problem goes beyond this, however, in that it is not just new security features which are an issue. Over the past week, issues were discovered in the popular sudo tool which ships with most Linux systems, and in libgcrypt, a cryptographic library used by some Linux components. The sudo problem was years old, and the libgcrypt one so new that few distributions had taken the updated version; neither is directly related to the Linux kernel, but we know that bugs – security bugs – exist in the Linux kernel for many years before being discovered and patched. The ability to create and test these patches across the range of supported hardware depends, yet again, not just on availability of the hardware to test it on, or enthusiastic volunteers with general expertise in the platform, but on security experts willing, able and with the time to do the work.

Security risks to other hardware – and beyond

There is a final – and possibly surprising – point, which is that there may sometimes be occasions when continuing support for old hardware has a negative impact on security for other hardware, even if resources are available to test and implement changes. In order to be able to make improvements to certain features and functionality in the kernel, sometimes there is a need for significant architectural changes. The best-known example (though not necessarily directly security-related) is the Big Kernel Lock, or BKL, an architectural feature of the Linux kernel until 2.6.39 in 2011, which had been introduced to aid concurrency management, but ended up having significant negative impacts on performance.

In some cases, older hardware may be unable to accept such changes, or, even worse, maintaining support for older hardware may impose such constraints on architectural changes – or require such baroque and complex work-arounds – that it is in the best interests of the broader security of the kernel to drop support. Luckily, the Linux kernel’s modular design means that such cases should be few and far between, but they do need to be taken into consideration.

Conclusion

Some of the arguments I’ve made above apply not only to hardware, but to software as well: people often keep wanting to run software well past its expected support life. The difference with software is that it is often possible to emulate the hardware or software environment on which it is expected to run, often via virtual machines (VMs). Maintaining these environments is a challenge in itself, but may actually offer a viable alternative to trying to keep old hardware running.

End of Life is an important consideration for hardware and software, and, much as we may enjoy nursing old hardware along, it doesn’t make sense to delay the inevitable – End of Life – beyond a certain point. Exactly when that point arrives will depend on many things, but security considerations should be included.

Enarx end-to-end complete!

We now have a fully working end-to-end proof of concept, with no smoke and mirrors.

I’ve written lots about the Enarx project, a completely open source project around deploying workloads to Trusted Execution Environments, and you can find a few of the articles here:

I have some very exciting news to announce.

A team effort

Yesterday was a huge day for the Enarx project, in that we now have a fully working end-to-end proof of concept, with no smoke and mirrors (we don’t believe in those). The engineers on the team have been working really hard on getting all of the low-level pieces in place, with support from other members on CI/CD, infrastructure, documentation, community outreach and beyond. I won’t mention everyone, as I don’t want to miss anyone out, and I also don’t have their permission, but it’s been fantastic working with everyone. We’ve been edging closer and closer to having all the main pieces ready to go, and just before Christmas/New Year we got attested AMD SEV Keeps working, with the ability to access information from that attestation within the Keep. This allowed us to move to the final step, which is creating an end-to-end client-server architecture. It is this that we got running yesterday.

I happened to be the lucky person to be able to complete this part of the puzzle, building on work by the rest of the team. I don’t have the low-level expertise that many of the team have, but my background is in client-server and peer-to-peer distributed systems, and after I started learning Rust around March 2020, I decided to see if I could do something useful for the project code base: this is my contribution to the engineering. To give you an idea of what we’ve implemented, let’s look at a simple architectural diagram of an Enarx deployment.

Simple Enarx architectural diagram

Much of the work that’s been going on has been concentrated in the Enarx runtime component, getting WebAssembly working in SGX and SEV Trusted Execution Environments, working on syscall implementations and attestation. There’s also been quite a lot of work on glue – how we transfer information around the system in a standards-compliant way (we’re using CBOR encoding throughout). The pieces that I’ve been putting together have been the Enarx client agent, the Enarx host agent (or Enarx Keep Manager) and two pieces which aren’t visible in this diagram (but are in the more detailed one below): the Enarx Keep Loader and Enarx Wasm Loader (“App loader” in the detailed view).
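
Since I mentioned the CBOR glue, here’s a flavour of what that encoding looks like from Rust. This is purely an illustrative sketch using the serde (with its derive feature) and serde_cbor crates – the KeepRequest type is hypothetical, and this is emphatically not Enarx’s actual wire protocol:

use serde::{Deserialize, Serialize};

// A purely illustrative message type – not a real Enarx structure.
#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct KeepRequest {
    backend: String,
}

fn main() -> Result<(), serde_cbor::Error> {
    let request = KeepRequest { backend: "sev".to_string() };
    let encoded: Vec<u8> = serde_cbor::to_vec(&request)?; // struct to CBOR bytes
    let decoded: KeepRequest = serde_cbor::from_slice(&encoded)?; // and back again
    assert_eq!(request, decoded);
    Ok(())
}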

Detailed Enarx architectural diagram

The components

Let’s look at what these components do, and then explain exactly what we’ve achieved. The name in bold refers to the diagram, the name in italics relates to the Rust crate (and, where already merged, the github repository) associated with the component.

  • Enarx Client Agent (client) – responsible for talking to the enarx-keepmgr and requesting a Keep. It checks that the Keep is correctly set up and attested and then sends the workload (a WebAssembly package) to the enarx-wasmldr component, using HTTPS with a one-use certificate derived from the attestation process.
  • Enarx Keep Manager (enarx-keepmgr) – creates enarx-keepldr components at the request of the client, proxying communications to them from the client as required (for certain attestation flows, for instance). It is untrusted by the client.
  • Enarx Keep Loader (enarx-keepldr) – there is an enarx-keepldr per Keep, and it performs the loading of components into the Trusted Execution Environment itself. It sits outside the TEE instance, and is therefore untrusted by the client.
  • Enarx App Loader (enarx-wasmldr) – the enarx-wasmldr component resides within the TEE instance, and therefore has confidentiality and integrity protection from the rest of the host. It receives the WebAssembly (Wasm) workload from the client component and may access secret information provisioned into the Keep during the attestation process.

Here’s the post I made to the Enarx chat #development channel yesterday to announce what we managed to achieve:

  1. client -> keepmgr: “create sev keep”
  2. keepmgr launches sev keep via systemd
  3. client -> keepmgr: “perform attestation, include this private key” (note – private key is encrypted from keepmgr)
  4. keepmgr -> keepldr: “attestation + private key”
  5. keepldr creates keep, passes private key to it
  6. wasmldr creates certificate from private key
  7. wasmldr waits for workload
  8. client sends workload over HTTPS to wasmldr
  9. wasmldr accepts workload over HTTPS
  10. wasmldr executes workload

WE HAVE A FULLY WORKING END-TO-END DEMO! Thank you everyone

What does this mean? Well, everything works! The client requests a Keep backed by an AMD SEV instance; the Keep is created and attested, listens for an incoming connection over HTTPS, and the client sends the workload, which then executes. The workload was written in Rust and compiled to WebAssembly – it’s a real application, in other words, and not a hand-crafted piece of WebAssembly for the purposes of testing.
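
If you’re wondering what step 8 might look like in practice, here’s a heavily simplified sketch – hypothetical endpoint and file names, using the reqwest crate’s blocking client, and emphatically not the project’s actual code:

use std::fs;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The workload: a Wasm binary, compiled from Rust.
    let workload = fs::read("workload.wasm")?;

    // Send it to the (hypothetical) wasmldr endpoint over HTTPS.
    let client = reqwest::blocking::Client::new();
    let response = client
        .post("https://keep.example.com:3030/workload") // hypothetical address
        .body(workload)
        .send()?;
    println!("wasmldr responded with status: {}", response.status());
    Ok(())
}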

What’s next?

There’s lots left to do, including:

  • merging all of the code into the main repositories (I was working in a separate set to avoid undue impact on other efforts)
  • tidying it to make it more presentable (both what the demo shows and the quality of the code!)
  • adding SGX support – we hope that we’re closing in on this very soon
  • making the various components production-ready (the keepmgr, for instance, doesn’t manage multiple enarx-keepldr components very well yet)
  • defining the wire protocol fully (somewhere other than in my head)
  • documenting everything!

But most of that’s easy: it’s just engineering. 🙂

We’d love you to become involved. If you’re interested, read some of my articles, visit the project home page and repositories, hang out on our chat server or watch some of our videos on YouTube. We really welcome involvement – and not just from engineers, either. Come and have a play!

Trust in Computing and the Cloud

I wrote a book.

I usually write and post articles first thing in the morning, before starting work, but today is different. For a start, I’m officially on holiday (so definitely not planning to write any code for Enarx, oh no); second, I decided that today would be the day that I should finish my book, if I could.

Towards the end of 2019, I signed a contract with Wiley to write a book (which, to be honest, I’d already started) on trust. There’s lots of literature out there on human trust, organisational trust and how humans trust each other, but despite a growing interest in concepts such as zero trust, precious little on how computer systems establish and manage trust relationships to each other. I decided it was time to write a book on this, and also on how trust works (or maybe doesn’t) in the Cloud. I gave myself a target of 125,000 words, simply by looking at a couple of books at the same sort of level and then doing some simple arithmetic around words per page and number of pages. I found out later that I’d got it a bit wrong, and this will be quite a long book – but it turns out that the book that needed writing (or that I needed to write, which isn’t quite the same thing) was almost exactly this long, as when I finished around 1430 GMT today, I found that I was at 124,939 words. I have been tracking it, but not writing to the target, given that my editor told me that I had some latitude, but I’m quite amused by how close it was.

Anyway, I emailed my editor on completion, who replied (despite being on holiday), and given that I’m 5 months or so ahead of schedule, he seems happy (I think they prefer early to late).

I don’t have many more words in me today, so I’m going to wrap up here, but do encourage you to read articles on this blog labelled with “trust”, several of which are edited excerpts from the book. I promise I’ll keep you informed as I get information about publication dates, etc.

Keep safe and have a Merry Christmas and Happy New Year, or whatever you celebrate.

Uninterrupted power for the (CI)A

I’d really prefer not to have to restart every time we have a minor power cut.

Regular readers of my blog will know that I love the CIA triad – confidentiality, integrity and availability – as applied to security. Much of the time, I tend to talk about the first two, and spend little time on the third, but today I thought I’d redress the balance a little by talking about UPSes – Uninterruptible Power Supplies. For anyone who was hoping for a trolling piece on giving extra powers to US Government agencies, it’s time to move along to another blog. And anyone who thinks I’d stoop to a deliberately misleading article title just to draw people into reading the blog … well, you’re here now, so you might as well read on (and you’re welcome to visit other articles of mine, such as Do I trust this package?, A cybersecurity tip from Hazzard County, and, of course, Defending our homes).

Years ago, when I was young and had more time on my hands, I ran an email server for my own interest (and accounts). This was fairly soon after we moved to ADSL, so I had an “always-on” connection to the Internet for the first time. I kept it on a pretty basic box behind a pretty basic firewall, and it served email pretty well. Except for when it went *thud*. And the reason it went *thud* was usually because of power fluctuations. We live in a village in the East Anglian (UK) countryside where the electricity supply, though usually OK, does go through periods where it just stops from time to time. Usually for under a minute, admittedly, but from the point of view of most computer systems, even a second of interruption is enough to turn them off. Usually, I could reboot the machine and, after thinking to itself for a while, it would come back up – but sometimes it wouldn’t. It was around this time, if I remember correctly, that I started getting into journalling file systems to reduce the chance of unrecoverable file system errors. Today, such file systems are commonplace; in those days, they weren’t.

Even when the box did come back up, if I was out of the house, or in the office for the day, on holiday, or travelling for business, I had a problem, which was that the machine was now down, and somebody needed to go and physically turn it on if I wanted to be able to access my email. What I needed was a way to provide uninterruptible power to the system if the electricity went off, and so I bought a UPS: an Uninterruptible Power Supply. A UPS is basically a box that sits between your power socket and your computer, and has a big battery in it which will keep your system going for a while in the event of a (short) power failure, along with the appropriate electronics to provide AC power out from the battery. Most will also have some sort of way to communicate with your system, such as a USB port, and software which you can install to talk to it, so that your system can decide whether or not to shut itself down – when, for instance, the power has been off for long enough that the battery is about to give out. If you’re running a data centre, you’ll typically have lots of UPS boxes keeping your most important servers up while you wait for your back-up generators to kick in, but for my purposes, knowing that my email server would stay up for long enough that it would ride out short power drops, and be able to shut down gracefully if the power was out for longer, was enough: I have no interest in running my own generator.

That old UPS died probably 15 years ago, and I didn’t replace it, as I’d come to my senses and transferred my email accounts to a commercial provider, but over the weekend I bought a new one. I’m running more systems now, some of which are fairly expensive and really don’t like power fluctuations, and there are some which I’d really prefer not to have to restart every time we have a minor power cut. Here’s what I decided I wanted:

  • a product with good open source software support;
  • something which came in under £150;
  • something with enough “juice” (battery power) to tide 2-3 systems over a sub-minute power cut;
  • something with enough juice to keep one low-powered box running for a little longer than that, to allow it to coordinate shutting down the other boxes first, and then take itself down if required;
  • something with enough ports to keep a couple of network switches up while the previous point happened (I thought ahead!);
  • a Lithium Ion rather than lead-acid battery, if possible.

I ended up buying an APC BX950UI, which meets all of my requirements apart from the last one: it turns out that only high-end UPS systems currently seem to have moved to Lithium Ion battery technology. There are two apparently well-maintained open source software suites that support APC UPS systems: apcupsd and nut, both of which are available for my Linux distribution of choice (Fedora). As it happens, they both also have support for Windows and Mac, so you can mix and match if needs be. I chose nut, which doesn’t explicitly list my model of UPS, but which supports most of the lower-priced product line with its usbhid-ups driver – I knew that I could move to apcupsd if this didn’t work out, but nut worked fine.

Set-up wasn’t difficult, but required a little research (and borrowing a particular cable/lead from a kind techie friend…), and I plan to provide details of the steps I took in a separate article or articles to make things easier for people wishing to replicate something close to my set-up. However, I’m currently only using it on one system, so haven’t set up the coordinated shutdown configuration. I’ll wait till I’ve got it more set up before trying that.
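
As a taster, the heart of the configuration is the UPS entry in nut’s ups.conf, along these lines (a sketch of the sort of entry involved rather than a definitive recipe – check the nut documentation for your own hardware):

[apc]
    driver = usbhid-ups
    port = auto
    desc = "APC BX950UI"

Once the driver and the upsd daemon are running, running upsc apc should report the UPS’s status, battery charge and the like.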

What has this got to do with security? Well, I’m planning to allow VPN access to at least one of the boxes, and I don’t want it suddenly to disappear, leaving a “hole in the network”. I may well move to a central authentication mechanism for this and other home systems (if you’re interested, check out projects such as yubico-pam), and I want the box that provides that service to stay up. I’m also planning some home automation projects where access to systems from outside the network (to view cameras, for instance) will be a pain if things just go down: IoT devices may well just come back up in the event of a power failure, but the machines which coordinate them are less likely to do so.

Why I’m writing a book about trust

Who spends their holiday in their office? Authors.

Last week, I was on holiday. That is, I took 5 days off work in which I could have relaxed, read novels, watched TV, played lots of games, gone for long walks in the countryside and (in happier, less Covid-19 times) spent time away from home, wrapped up warm, enjoying a view of waves crashing onto a beach. Instead, I spent those 5 days – and a fair amount of the 4 weekend days that bracketed them – squirreled away in my office, sitting at my computer. Which is rather similar to what I would have been doing if I hadn’t taken the time off.

The reason I did this is that I’m writing a book at the moment. Last week I managed to write well over 12,500 words of it, taking me to more than 80% of my projected word count (over 82% if you count bibliographic material), so the time felt well-spent. But why am I doing this in the first place?

I’m doing it at one level because I have a contract to do it. Around July-August last year (2019), I sent emails to a few (3-4) publishers pitching the idea for a book. I included a detailed Table of Contents, evidence that I’ve written before (including some links to this blog), and a bit about me, including a link to my LinkedIn profile. One of the replies I had (within 24 hours, to my amazement) was from Jim Minatel, a commissioning editor from Wiley Technology. Over a number of weeks, I talked to him and editors from other publishers, finally signing a contract with Wiley for a book on Trust in Computing and the Cloud, with a planned word count of 125,000 words. I’m not going to provide details of the contract, but I can say that:

  1. I don’t expect it to make me rich (which was never the point anyway);
  2. it has options for another book or books (I must be insane even to be considering the idea, but Jim and I have already had some preliminary conversations);
  3. there are clauses in it about film/movie rights (nobody, but nobody, is ever going to want to make or watch a film about this book: fascinating as it may be, Tom Clancy or John Grisham it is not).

This kind of explains why I spent last week closeted in my office, tapping away on a computer keyboard, but why did I get in touch with these publishers in the first place, pitching the idea of a book that is consuming a fair number of my non-work waking hours?

The basic reason is that I got cross. In fact, I got so annoyed about something that I went to a couple of people – my boss was one, a good friend with a publishing background was another – and announced that I planned to write a book. I was half hoping that they would dissuade me, but they both enthusiastically endorsed the idea, which meant that I had, at least in my head, now committed to doing it. This was in early May 2019, and it took me a couple of months to gather my thoughts, put together some materials and find a few candidate publishers before actually pitching the book to them and ending up with a contract.

But what actually got me to a position where I was cross enough about something to pitch an idea which would take up so much of my time and energy? The answer? I was tipped over the edge by hearing someone speak about trust in a way that made it clear that they had no idea what they were talking about. It was a session at a conference – I can remember the conference, but I can’t remember the session or the speaker – where the subject was security: IT security. This is my field, so that’s good. What wasn’t good was what the speaker said about trust: it didn’t hold together, it wasn’t consistent with how I felt about trust, and I didn’t feel that it was broadly applicable to computing or the Cloud.

This made me cross. And then I thought: why is there no consensus on what we mean by “trust”? Why do people talk about “zero trust” without really knowing what trust actually means? Why is there no literature on this subject that I’ve been thinking about for nearly 20 years? Why do I keep having conversations with people where they agree that trust is really interesting, but we discover that we don’t have a common starting point, as there’s no theoretical underpinning describing exactly what we’re discussing? Why are people deploying important workloads and designing business-critical systems without a good framework around what seems like a fundamental concept that everybody is always eager to include in their architectures? Why are we pushing forward with Confidential Computing when people don’t even understand the impact of the trust relationships which underlie it?

And I realised that I could keep asking these questions, and keep having these increasingly frustrating conversations, whilst waiting for somebody to publish some sort of definitive meisterwerk on the subject, or I could just admit that no-one was going to do it, and that I might get on and write something, even if it was never going to be the perfect treatment of what is, after all, a very complex subject.

And so that’s what I decided to do. I thought about what interests me in the field of trust in the realms of computing and the Cloud, about topics I’ve heard people talk about very badly, and about topics around which I’d had interesting conversations; decided that I might have something to say about all of those; and then put together a book structure. Wiley liked the idea, and asked me to flesh it out and then write something. It turns out that there’s loads of literature around human-to-human and human-to-organisational trust, and also on human-to-computer system trust, but very little on how computers can trust each other. Given how many organisations run much of their business in the Cloud these days, and the complex trust relationships that exist there, I wanted to write something about how to manage and understand these.

These are topics I’ve thought about (and, increasingly, written about) for around 20 years, since I did some research into the possibility of a PhD (which never materialised) in a related topic. They’ve stayed with me since, and I did some theoretical and standards-based work around trust with the ETSI NFV group nearly 10 years ago. I’m not pretending that I’m perfectly qualified to discuss this topic, but then again, I’m not sure that anybody is, and I feel that putting out some sort of book on this topic makes sense, if only to get the conversation started, and to give people an opportunity to converse with a shared language. The book starts with some theoretical underpinnings, looks at some of the technologies, what their implications are, the place of open source, the commercial and organisational impacts, and then suggests some future directions and frameworks. I hope to have the manuscript (well, typescript) completed and with Wiley by mid-spring (Northern Hemisphere) 2021: I don’t know when it’s actually likely to appear in print.

I hope people find it interesting, and that it acts as a catalyst for further discussion. I don’t expect it to be the last word on the subject – in fact I hope it’s not – but I do hope that it forces more people to realise that trust is really important in our world of computers, security and risk, and currently ill-understood. And if you happen to be a successful producer of Hollywood blockbusters, then I’m available to talk. Just as soon as I get these last couple of chapters submitted…

How open source builds distributed trust

Trust in open source is a positive feedback loop

This is an edited excerpt from my forthcoming book on Trust in Computing and the Cloud for Wiley, and leads on from a previous article I wrote called Trust & choosing open source.

In that article, I asked the question: what are we doing when we say “I trust open source software”? In reply, I suggested that what we are doing is making a determination that enough of the people who have written and tested it have similar requirements to mine, and that their expertise, combined, is such that the risk to my using the software is acceptable. I also introduced the idea of distributed trust.

The concept of distributing trust across a community is an application of the theory of the wisdom of the crowd, posited by Aristotle, where the assumption is that the opinions of many typically show more wisdom than the opinion of one, or a few. While demonstrably false in its simplest form in some situations – the most obvious being popular support for totalitarian regimes – this principle can provide a very effective mechanism for establishing certain information.

This distillation of collective experience allows what we refer to as distributed trust, and is collected through numerous mechanisms on the Internet. Some, like TripAdvisor or Glassdoor, record information about organisations or the services they provide, while others, like UrbanSitter or LinkedIn, allow users to add information about specific people (see, for instance, LinkedIn’s “Recommendations” and “Skills & Endorsements” sections in individuals’ profiles). The benefits that can accrue from these examples are significantly increased by the network effect, as the number of possible connections between members increases exponentially as the number of members increases. Other examples of distributed trust include platforms like Twitter, where the number of followers that an account receives can be seen as a measure of its reputation, and even of its trustworthiness – a calculation which we should view with a strong degree of scepticism: indeed, Twitter felt that it had to address the social power of accounts with large numbers of followers and instituted a “verified accounts” mechanism to let people know “that an account of public interest is authentic”. Interestingly, the company had to suspend the service after problems related to users’ expectations of exactly what “verified” meant or implied: a classic case of differing understandings of context between different groups.

Where is the relevance to open source, then? The community aspect of open source is actually a driver towards building distributed trust. This is because once you become a part of the community around an open source project, you assume one or more of the roles that you start trusting once you say that you “trust” an open source project (see my previous article): examples include architect, designer, developer, reviewer, technical writer, tester, deployer, bug reporter or bug fixer. The more involvement you have in a project, the more you become part of the community, which can, in time, become a community of practice. Jean Lave and Etienne Wenger introduced the concept of communities of practice in their book Situated Learning: Legitimate Peripheral Participation, where groups evolve into communities as their members share a passion and participate in shared activities, leading to their improving their skills and knowledge together. The core concept here is that as participants learn around a community of practice, they become members of it at the same time:

“Legitimate peripheral participation refers both to the development of knowledgeably skilled identities in practice and to the reproduction and transformation of communities of practice.”

Wenger further explored the concept of communities of practice – how they form, the requirements for their health and how they encourage learning – in Communities of Practice: Learning, Meaning and Identity. He identified negotiability of meaning (“why are we working together, what are we trying to achieve?”) as core to a community of practice, and noted that without engagement, imagination and alignment by individuals, communities of practice will not be robust.

We can align this with our views of how distributed trust is established and built: when you realise that your impact on open source can be equal to that of others, the distributed trust relationships that you hold to members of a community become less transitive (second- or third-hand or even more remote), and more immediate. You understand that the impact that you can have on the creation, maintenance, requirements and quality of the software which you are running can be the same as all of those other, previously anonymous contributors with whom you are now forming a community of practice, or whose existing community of practice you are joining. Then you yourself become part of a network of trust relationships which are distributed, but at less of a remove than that which you experience when buying and operating proprietary software. The process does not stop there, however: a common property of open source projects is cross-pollination, where developers from one project also work on others. This increases as the network effect of multiple open source projects allows reuse of, and dependencies on, other projects to rise, and greater take-up across the entire set of projects.

It is easy to see why many open source contributors become open source enthusiasts or evangelists not just for a single project, but for open source as a whole. In fact, work by Mark Granovetter suggests that too many strong ties within communities can lead to cliques and stagnation, but weak ties provide for movement of ideas and trends around communities. This awareness of other projects and the communities that exist around them, and the flexibility of ideas across projects, leads to distributed trust being extended (albeit with weaker assurances) beyond the direct or short-chain indirect relationships that contributors experience within projects of which they have immediate experience, and out towards other projects where external observation or peripheral involvement shows that similar relationships exist between contributors. Put simply, the act of being involved in an open source project and building trust relationships through participation leads to stronger distributed trust towards similar open source projects, or just to other projects which are similarly open source.

What does this mean for each of us? It means that the more we get involved in open source, the more trust we can have in open source, as there will be a corresponding growth in the involvement – and therefore trust – of other people in open source. Trust in open source isn’t just a network effect: it’s a positive feedback loop!

My top 7 Cargo (Rust) commands

I heartily recommend that you spend some time investigating Cargo

This is the third article in my mini-series for Rust newbs like me. I’ve been using Rust for a little over six months now, and I’m far from an expert, but I have stumbled across many, many gotchas and learned many, many things along the way: things which I hope will be useful to record for those who are learning what is easily my favourite language. You can find my other excursions into Rust here:

I plan to write more over time, but this one is about Cargo. I’m ashamed to admit that I don’t use Cargo’s power as widely as I should, but researching this article has given me a better view into the command’s capabilities. In fact, I wasn’t even aware of some of the options available until I started looking in more detail. I’m going to assume a basic familiarity with Cargo – you have it installed, and you can create a package using cargo new <package>, for instance – so let’s work from there. I could have provided more (there are many options!), but here’s your “lucky 7”.

  1. cargo help <command> You can always find out more about a command with the --help option. The same goes for cargo itself: cargo --help will give you a quick intro to what’s out there. For more information on a particular command (more like a man page), you can use cargo help: cargo help new will give you extended information about cargo new, for instance. This behaviour is pretty typical for command line tools, particularly in the Linux/Unix world, but it’s particularly well implemented for cargo, and you can gain lots of quick information.
  2. cargo build --bin <target> What happens when you have multiple .rs files in your package, but you only want to build one of them? I have a package called test-and-try which I use for, well, testing and trying functionality, features, commands and crates, and which has around a dozen different files in it. By default, cargo build will try to build all of them, and as they’re often in various states of repair (some of them generating lots of warnings, some of them not even fully compiling), this can be a real pain. Instead, I place a section in my Cargo.toml file for each one thus:

[[bin]]
name = "warp-body"
path = "src/warp-body.rs"

I can then use cargo build --bin warp-body to build just this file (and any dependencies), and run it with a similar command: cargo run --bin warp-body

  3. cargo test I have an admission: I am not as assiduous about creating automatic tests in my Rust code as I ought to be. This is because I’m currently mainly writing PoC (Proof of Concept) rather than production code, and also because I’m lazy. Maybe changing this behaviour should be a New Year’s resolution, but when I do get round to writing tests, cargo is there to help me (as it is for you). All you need to do is add a line before the test code in your .rs file thus:

#[cfg(test)]

When you run cargo test, Cargo will automagically find these tests, run them, and tell you if you have problems. As with many of the commands here, you’ll find much more information online, but it’s particularly worth familiarising yourself with the basics of this capability in the relevant Rust By Example section.
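
A minimal example of such a test module (a sketch, nothing more) looks like this:

#[cfg(test)]
mod tests {
    #[test]
    fn two_plus_two() {
        assert_eq!(2 + 2, 4);
    }
}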

  4. cargo search <query> This is one of the commands that I didn’t even know existed until I started researching this article – and which would have saved me so much time over the past few months if I’d known about it. What it does is search crates.io, Rust’s repository of public (and sometimes maintained) packages, and tells you which ones may be relevant (you can specify a different repository if you want, with the intuitively named --registry option). I’ve recently been doing some work on network protocols for non-String data, so I’ve been working with CBOR – and cargo search cbor gives me a list of candidate crates, with their descriptions.

I can, of course, also combine it with tools like grep to narrow down the search yet further, like so: cargo search cbor --limit 70 | grep serde

  5. cargo tree Spoiler alert: this one may scare you. You’ve probably noticed, when you first build a new package, or when you add a new dependency, or just do a cargo clean and then cargo build, that you’ll see a long list of crates being printed out as Cargo pulls them down from the relevant repository and then compiles them. How can you tell ahead of time, however, what will be pulled down, and what version it will be? More importantly, how can you know what other dependencies a new crate has pulled into your build? The answer is cargo tree. Just to warn you: for any marginally complex project, you can expect to have a lot of dependencies. I tried cargo tree | wc -l to count the number of dependent crates for a smallish project I’m working on, and got an answer of 350! I tried providing an example, but it didn’t display well, so I recommend that you try it yourself: be prepared for lots of output!
  6. cargo clippy If you tried running this and it didn’t work, that’s because I’ve cheated a little with these last two commands; you may have to install them explicitly (depending on your initial set-up). For this one, run cargo install clippy – you’ll be glad you did. Clippy is Rust’s linter – it goes through your code, looking at ways in which you can reduce and declutter it by removing or changing commands. I try to run cargo clippy before every git commit – partly because the git repositories to which I tend to be committing have automatic actions to reject files which need linting, and partly to keep my code generally more tidy. Let’s take a typical example: a variable which is assigned but never used.

Let’s face it: this isn’t a major issue (though clippy will find errors, too, if you run it on non-compiling code), but it’s an easy fix, and so you might as well deal with it – either by removing the code, or by prefixing the variable name with an underscore. As I plan to use this variable later on, but haven’t yet implemented the function to consume it, I plan to perform the latter fix.

  7. cargo readme Not the most earth-shattering of commands, this is another which is very useful (and which you may need to install explicitly, as with cargo clippy, above). If you add the relevant lines to your .rs files, you can output README files from Cargo. For instance, I have crate-level doc comments at the beginning of my main.rs file.
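
They look something along these lines (an illustrative sketch rather than my actual file) – this is what cargo readme picks up:

//! A collection of test programs for trying out Rust crates and features.
//!
//! Run an individual binary with cargo run --bin <name>.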

I’ll leave the output of the command cargo readme as an exercise for the reader, but it’s interesting to me that the Licence (or “License”, if you must) declaration is added. Use this to create simple documentation for your users, and make them happy with minimal effort (always a good approach!).

I’ve just scratched the surface of Cargo’s capabilities in this article: all of the commands I’ve listed above are actually way more powerful than I’ve described. I heartily recommend that you spend some time investigating Cargo and finding out how it can make your life better.

Do I trust this package?

The area of software supply chain management is growing in importance.

This isn’t one of those police dramas where a suspect parcel arrives at the precinct and someone realises just in time that it may be a bomb – what we’re talking about here is open source software packages (though the impact on your application may be similar if you’re not sufficiently suspicious). Open source software is everywhere these days – which is great – but how can you be sure that you should trust the software you’ve downloaded to do what you want? The area of software supply chain management – of which this discussion forms a part – is fairly newly visible in the industry, but is growing in importance. We’re going to consider a particular example.

There’s a huge conversation to be had here about what trust means (see my article “What is trust?” as a starting point, and I have a forthcoming book on Trust in Computing and the Cloud for Wiley), but let’s assume that you have a need for a library which provides some cryptographic protocol implementation. What do you need to know, and what are your choices? We’ll assume, for now, that you’ve already made what is almost certainly the right choice, and gone with an open source implementation (see many of my articles on this blog for why open source is just best for security), and that you don’t want to be building everything from source all the time: you need something stable and maintained. What should be your source of a new package?

Option 1 – use a vendor

There are many vendors out there now who provide open source software through a variety of mechanisms – typically subscription. Red Hat, my employer (see standard disclosure), is one of them. In this case, the vendor will typically stand behind the fitness for use of a particular package, provide patches, etc. This is your easiest and best choice in many cases. There may be times, however, when you want to use a package which is not provided by a vendor, or not packaged by your vendor of choice: what do you do then? Equally, what decisions do vendors need to make about how to trust a package?

Option 2 – delve deeper

This is where things get complex. So complex, in fact, that I’m going to be examining them at some length in my book. For the purposes of this article, though, I’ll try to be brief. We’ll start with the assumption that there is a single maintainer of the package, and multiple contributors. The contributors provide code (and tests and documentation, etc.) to the project, and the maintainer provides builds – binaries/libraries – for you to consume, rather than your taking the source code and compiling it yourself (which is actually what a vendor is likely to do, though they still need to consider most of the points below). This is a library to provide cryptographic capabilities, so it’s fairly safe to assume that we care about its security. There are at least five specific areas which we need to consider in detail, all of them relying on the maintainer to a large degree (I’ve used the example of security here, though very similar considerations exist for almost any package): let’s look at the issues.

  1. build – how is the package that you are consuming created? Is the build process performed on a “clean” (that is, non-compromised) machine, with the appropriate compilers and libraries (there’s a turtles problem here!)? If the binary is created with untrusted tools, how can we trust it at all? What measures does the maintainer take to ensure the “cleanness” of the build environment? It would be great if the build process were documented as a repeatable build, so that those who want to check it can do so.
  2. integrity – this is related to build, in that we want to be sure that the source code inputs to the build process – the code coming, for instance, from a git repository – are what we expect. If, somehow, compromised code is injected into the build process, then we are in a very bad position. We want to know exactly which version of the source code is being used as the basis for the package we are consuming, so that we can track features – and bugs. As above, having a repeatable build is a great bonus here (there’s a small code sketch of this sort of integrity check after this list).
  3. responsiveness – this is a measure of how responsive – or not – the maintainer is to changes. Generally, we want stable features, tied to known versions, but a quick response to bug and (in particular) security patches. If the maintainer doesn’t accept patches in a timely manner, then we need to worry about the security of our package. We should also be asking questions like, “is there a well-defined security disclosure or vulnerability management process?” (see my article Security disclosure or vulnerability management?), and, if so, “is it followed?”
  4. provenance – all code is not created equal, and one of the things of which a maintainer should be keeping track is the provenance of contributors. If a large amount of code in a part of the package which provides particularly sensitive features is suddenly submitted by an unknown contributor with a pseudonymous email address and no history of contributions of security functionality, this should raise alarm bells. On the other hand, if there is a group of contributors employed by a company with a history of open source contributions and well-reviewed code who submit a large patch, this is probably less troublesome. This is a difficult issue to manage, and there are typically no definite “OK” or “no-go” signs, but the maintainer’s awareness and management of contributors and their contributions is an important point to consider.
  5. expertise – this is the most tricky. You may have a maintainer who is excellent at managing all of the points above, but is just not an expert in certain aspects of the functionality of the contributed code. As a consumer of the package, however, I need to be sure that it is fit for purpose, and that may include, in the case of the security-related package we’re considering, being assured that the correct cryptographic primitives are used, that bounds-checking is enforced on byte streams, that proper key lengths are used or that constant-time implementations are provided for particular primitives. This is very, very hard, and the job of maintainer can easily become a full-time one if they are acting as the expert for a large and/or complex project. Indeed, best practice in such cases is to have a team of trusted, experienced experts who work either as co-maintainers or as a senior advisory group for the project. Alternatively, having external people or organisations (such as industry bodies) perform audits of the project at critical junctures – when a major release is due, or when an important vulnerability is patched, for instance – allows the maintainer to share this responsibility. It’s important to note that the project does not become magically “secure” just because it’s open source (see Disbelieving the many eyes hypothesis), but that the community, when it comes together, can significantly improve the assurance that consumers of a project can have in the packages which it produces.

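As promised under integrity above, here’s a minimal sketch of that sort of check – comparing a downloaded artefact against a checksum published by the maintainer. It assumes the sha2 and hex crates, and the file name and expected digest are invented for the example:

```rust
// A minimal sketch: check that a downloaded package matches a
// published SHA-256 checksum before trusting it. The file name
// and expected digest below are illustrative, not real.
use sha2::{Digest, Sha256};
use std::fs;

fn main() -> std::io::Result<()> {
    // The checksum the maintainer published alongside the release.
    let expected = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";

    let bytes = fs::read("cryptolib-1.2.3.tar.gz")?;
    let digest = hex::encode(Sha256::digest(&bytes));

    if digest == expected {
        println!("checksum OK – contents match what was published");
    } else {
        eprintln!("checksum MISMATCH – do not use this package!");
    }
    Ok(())
}
```

Note that a matching checksum only ties the artefact to whatever the maintainer published: it tells you nothing about whether the build itself was clean, which is why the repeatable build point above matters too.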
Once we consider these areas, we then need to work out how we measure and track each of them. Who is in a position to judge the extent to which any particular maintainer is fulfilling each of the areas? How much can we trust them? This is a complex set of issues, about which much more needs to be written, but I am passionate about exposing the importance of explicit trust in computing, particularly in open source. There is work going on around open source supply chain management – for instance, the (at time of writing) new Project Rekor – but there is lots of work still to be done.

Remember, though: when you take a package – whether library or executable – please consider what you’re consuming, what about it you can trust, and on what assurances that trust is founded.