5 signs that you may be a Rust programmer

I’m an enthusiastic evangelist. I’m also not a very good Rustacean.

I’m a fairly recent convert to Rust, which I started to learn around the end of April 2020 (when we assumed there would only be the one lockdown, and that Covid-19 would be “over by Christmas” – oh, the youthful folly). But, like many converts, I’m an enthusiastic evangelist. I’m also not a very good Rustacean, truth be told, in that my coding style isn’t great, and I don’t write particularly idiomatic Rust. This is partly, I suspect, because I never really finished learning Rust before diving in and writing quite a lot of code (some of which is coming back to haunt me) and partly because I’m just not that good a programmer.

But I love Rust, and so should you. It’s friendly – well, more friendly than C or C++ – it’s ready for low-level systems tasks – more so than Python – it’s well-structured – more than Perl – and, best of all, it’s completely open source, from the design level up – much more than Java, for instance. Despite my lack of expertise, I noticed a few things which I suspect are common to many Rust enthusiasts and programmers, the first of which was sparked by some exciting recent news.

The word Foundation excites you

For Rust programmers, the word “Foundation” will no longer be associated first and foremost with Isaac Asimov, but with the newly formed Rust Foundation. Microsoft, Huawei, Google, AWS and Mozilla are providing the directors (and presumably most of the initial funding) for the foundation, which will look after all aspects of the language, “heralding Rust’s arrival as an enterprise production-ready technology”, according to the Interim Executive Director, Ashley Williams (on a side note, it’s great to see a woman heading up such a major industry initiative).

The Foundation seems committed to safeguarding the philosophy of Rust and ensuring that everybody has the opportunity to get involved. Rust is, in many ways, a poster-child example of an open source project. Not that it’s perfect (either the language or the community!), but in that there seem to be sufficient enthusiasts who are dedicated to preserving the high-involvement, low-bar approach to community which I think of as core to much of open source. I strongly welcome the move, which I think can only help to promote Rust adoption and maturity over the coming months and years.

You get frustrated by news feed references to Rust (the game)

There’s another computer-related thing out there which goes by the name “Rust”, and it’s a “multi-player only survival video game”. It’s newer than Rust the language (having been announced only in 2013 and released in 2018), but I was once searching for Rust-related swag, came across the game, and made the mistake of investigating further. The Interwebs being what they are (thanks, Facebook, Google et al.), this means that my news feed is now infected with this alternative Rust beast, and I get random updates from their fandom and PR folks. This is low-key annoying, but I’m pretty sure that I’m not alone in the Rust (language) community. I strongly suggest that if you do want to find out more about this upstart in the computing world, you use a privacy-improving (I refuse to say “privacy-preserving”) browser such as DuckDuckGo or even Tor to do your research.

The word “unsafe” makes you recoil in horror

Rust (the language, again) does a really good job of helping you do the Right Thing[TM], certainly in terms of memory safety, which is a major concern within C and C++ (not because safety is impossible, but because it’s really hard to get right consistently). Dave Herman wrote a post in 2016 on why safety is such a positive attribute of the Rust language: Safety is Rust’s Fireflower. Safety (memory safety, type safety) may not be sexy, but it’s something that you become used to, and grateful for, as you write more Rust – particularly if you’re involved in any systems programming, which is where Rust often excels.

Now, Rust doesn’t stop you from doing the Wrong Thing[TM], but it does force you to make a conscious decision when you wish to go outside the bounds of safety, by making you use the unsafe keyword. This is good not only for you, as it will (hopefully) make you think really, really carefully about what you’re putting in any code block which uses it, but also for anyone reading your code: it’s a trigger word which makes any half-sane Rustacean shiver at least slightly, sit upright in their chair and think: “hmm, what’s going on here? I need to pay special attention”. If you’re lucky, the person reading your code may be able to think of ways of rewriting it so that it does make use of Rust’s safety features, or at least reduces the amount of unsafe code that gets committed and released.
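
To make this concrete, here’s a minimal (and deliberately contrived) sketch of the sort of thing that forces the keyword to appear: safe Rust would just use a reference here, but dereferencing a raw pointer has to happen inside an unsafe block.

```rust
fn main() {
    let x: u32 = 42;
    let p: *const u32 = &x; // creating a raw pointer is safe...

    // ...but dereferencing it is not, so the compiler insists on an
    // unsafe block: the signal to reviewers to sit up and pay attention.
    let y = unsafe { *p };
    assert_eq!(y, 42);
    println!("read {} through a raw pointer", y);
}
```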

You wonder why there’s no emoji for ?; or {:?} or ::<>

Everybody loves (to hate) the turbofish (::<>), but there are other syntactic constructs that you see regularly in Rust code, in particular {:?} (for string formatting) and ?; (? is a way of propagating errors up the calling stack, and ; ends the line/block, so you often see them together). They’re so common in Rust code that you just learn to parse them as you go, and they’re also so useful that I sometimes wonder why they’ve not made it into normal conversation, at least as emojis. There are probably others, too: what would be your suggestions? (Please, please no answers from Lisp adherents.)
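
For anyone who hasn’t yet learned to parse them on sight, here’s a small invented example showing all three in their natural habitat:

```rust
use std::num::ParseIntError;

fn double(input: &str) -> Result<i32, ParseIntError> {
    // The turbofish (::<>) supplies the type parameter to parse(), and
    // ?; propagates any error up to the caller while ending the line.
    let n = input.parse::<i32>()?;
    Ok(n * 2)
}

fn main() {
    // {:?} formats anything implementing Debug, which is indispensable
    // for types (like Result) that don't implement Display.
    println!("{:?}", double("21"));
    println!("{:?}", double("forty-two"));
}
```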

Clippy is your friend (and not an animated paperclip)

Clippy, the Microsoft animated paperclip, was a “feature” that Office users learned very quickly to hate, and which has become the starting point for many memes. cargo clippy, on the other hand, is one of those amazing cargo commands that should become part of every Rust programmer’s toolkit. Clippy is a language linter and helps improve your code to make it cleaner, tidier, more legible, more idiomatic and generally less embarrassing when you share it with your colleagues or the rest of the world. cargo clippy has arguably rehabilitated the name “Clippy”, and although it’s not something I’d choose to name one of my kids, I no longer feel a sense of unease whenever I come across the term on the web.
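
By way of illustration, here’s an invented snippet which compiles without complaint but which cargo clippy will (quite rightly) pick apart. The lint names in the comments, ptr_arg and len_zero, are real Clippy lints, though the code itself is mine:

```rust
// Clippy flags &Vec<String> as clippy::ptr_arg (prefer a slice) and
// the len() == 0 comparison as clippy::len_zero (prefer is_empty()).
fn describe(items: &Vec<String>) {
    if items.len() == 0 {
        println!("nothing here yet");
    }
}

// What Clippy nudges you towards instead:
fn describe_tidied(items: &[String]) {
    if items.is_empty() {
        println!("nothing here yet");
    }
}

fn main() {
    describe(&Vec::new());
    describe_tidied(&[]);
}
```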

The importance of hardware End of Life

Security considerations are important when considering End of Life.

Linus Torvalds’ announcement this week that Itanium support is “orphaned” in the Linux kernel means that we shouldn’t expect further support for it, and that the remaining support may eventually be dropped altogether. In 2019, floppy disk support was dropped from the Linux kernel. In this article, I want to make the case that security considerations are important when considering End of Life for hardware platforms and components.

Dropping support for hardware which customers aren’t using is understandable if you’re a proprietary company and can decide what platforms and components to concentrate on, but why do so in open source software? Open source enthusiasts are likely to be running old hardware for years – sometimes decades – after anyone has stopped producing it. There’s a vibrant community, in fact, of enthusiasts who enjoy resurrecting old hardware and getting it running (and I mean really old: EDSAC (1947) old), some of whom enjoy getting Linux running on it, and some of whom enjoy running it on Linux – by which I mean emulating the old hardware on Linux systems. It’s a fascinating set of communities, and if it’s your sort of thing, I encourage you to have a look.

But what about dropping open source software support (which tends to centre around Linux kernel support) for hardware which isn’t ancient, but is no longer manufactured and/or has a small or dwindling user base? One reason you might give would be that the size of the kernel for “normal” users (users of more recent hardware) is impacted by support for old hardware. This would be true if you had to compile the kernel with all options in it, but Linux distributions like Fedora, Ubuntu, Debian and RHEL already pare down the number of supported systems to something which they deem sensible, and it’s not that difficult to compile a kernel which cuts that down even further – my main home system is an AMD box (with AMD graphics card) running a kernel which I’ve compiled without most Intel-specific drivers, for instance.

There are other reasons, though, for dropping support for old hardware, and considering that it has met its End of Life. Here are three of the most important.

Resources

My first point isn’t specifically security related, but is an important consideration: while there are many volunteers (and paid folks!) working on the Linux kernel, we (the community) don’t have an unlimited number of skilled engineers. Many older hardware components and architectures are maintained by teams of dedicated people, and the option exists for communities who rely on older hardware to fund resources to ensure that they keep running, are patched against security holes, and so on. Once there ceases to be sufficient funding to keep these types of resources available, however, hardware is likely to become “orphaned”, as in the case of Itanium.

There is also a secondary impact, in that however modularised the kernel is, there is likely to be some requirement for resources and time to coordinate testing, patching, documentation and other tasks associated with kernel modules, which needs to be performed by people who aren’t associated with that particular hardware. The community is generally very generous with its time and understanding around such issues, but once the resources and time required to keep such components “current” reaches a certain level in relation to the amount of use being made of the hardware, it may not make sense to continue.

Security risk to named hardware

People expect the software they run to maintain certain levels of security, and the Linux kernel is no exception. Over the past 5-10 years or so, there’s been a surge in work to improve security for all hardware and platforms which Linux supports. A good example of a feature which is applicable across multiple platforms is Address Space Layout Randomisation (ASLR). The problem here is not only that there may be some such changes which are not applicable to older hardware platforms – meaning that Linux is less secure when running on older hardware – but also that, even when it is possible, the resources required to port the changes, or just to test that they work, may be unavailable. This relates to the point about resources above: even when there’s a core team dedicated to the hardware, it may not include security experts able to port and verify security features.
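
ASLR is easy to observe for yourself. Here’s a minimal sketch, assuming a Linux system with ASLR enabled (the default): run it twice and the printed addresses should differ between runs.

```rust
fn main() {
    let on_stack = 0u64;
    let on_heap = Box::new(0u64);

    // With ASLR enabled, these addresses change from run to run, which
    // makes it much harder for an attacker to guess where interesting
    // data or code will live in memory.
    println!("stack: {:p}", &on_stack);
    println!("heap:  {:p}", &*on_heap);
}
```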

The problem goes beyond this, however, in that it is not just new security features which are an issue. Over the past week, issues were discovered in the popular sudo tool which ships with most Linux systems, and in libgcrypt, a cryptographic library used by some Linux components. The sudo problem was years old, and the libgcrypt one so new that few distributions had taken the updated version. Neither is directly related to the Linux kernel, but we know that bugs – security bugs – can exist in the Linux kernel for many years before being discovered and patched. The ability to create and test these patches across the range of supported hardware depends, yet again, not just on availability of the hardware to test them on, or enthusiastic volunteers with general expertise in the platform, but on security experts willing, able and with the time to do the work.

Security risks to other hardware – and beyond

There is a final – and possibly surprising – point, which is that there may sometimes be occasions when continuing support for old hardware has a negative impact on security for other hardware, even when resources are available to test and implement changes. In order to be able to make improvements to certain features and functionality in the kernel, sometimes there is a need for significant architectural changes. The best-known example (though not necessarily directly security-related) is the Big Kernel Lock, or BKL, an architectural feature of the Linux kernel until 2.6.39 in 2011, which had been introduced to aid concurrency management, but ended up having significant negative impacts on performance.

In some cases, older hardware may be unable to accept such changes, or, even worse, maintaining support for older hardware may impose such constraints on architectural changes – or require such baroque and complex work-arounds – that it is in the best interests of the broader security of the kernel to drop support. Luckily, the Linux kernel’s modular design means that such cases should be few and far between, but they do need to be taken into consideration.

Conclusion

Some of the arguments I’ve made above apply not only to hardware, but to software as well: people often keep wanting to run software well past its expected support life. The difference with software is that it is often possible to emulate the hardware or software environment on which it is expected to run, often via virtual machines (VMs). Maintaining these environments is a challenge in itself, but may actually offer a viable alternative to trying to keep old hardware running.

End of Life is an important consideration for hardware and software, and, much as we may enjoy nursing old hardware along, it doesn’t make sense to delay the inevitable – End of Life – beyond a certain point. When that point is will depend on many things, but security considerations should be included.

Acting (and coding) your age

With seniority comes perks, but it also comes with responsibilities.

I dropped a post on LinkedIn a few days ago:

I’m now 50 years old and writing the most complex code in my career (for Enarx) in a language (Rust) that I only started learning 9 months ago and I’ve just finished the first draft of a book (for Wiley). Not sure what’s going on (and I wouldn’t have believed you if you’d told me this 25 years ago). #codingtips #writing #security #confidentialcomputing #rustlang

I’ve never received such attention. Lots of comments, lots of “likes” and other reactions, lots of people wanting to connect. It was supposed to be a throw-away comment, and I certainly had no intention either to boast or elicit sympathy: I am genuinely surprised by all of the facts mentioned – including my age, given that I feel that I’m somewhere between 23 and 31 (both primes, of course).

I remember in my mid- to late-twenties thinking “this business stuff is pretty simple: why don’t the oldies move aside and let talented youngsters[1] take over, or at least provide them some inspired advice?” Even at the time I realised that this was a little naive, and that there is something to be said for breadth of experience and decades of acquired knowledge, but I’m pretty certain that this set of questions has been asked by pretty much every generation since Ogg looked at the failings in his elders’ flint spear-head knapping technique and later got into a huff when his mum wouldn’t let him lead the mammoth hunt that afternoon.

Why expertise matters

Sadly (for young people), there really are benefits associated with praxis (actually doing things), even if you’ve absorbed all of the theory (and you haven’t, which is one of the things you learn with age). Of course, there’s also the Dunning-Kruger effect, which is a cognitive bias (Trust you? I can’t trust myself.) which leads the inexperienced to overestimate their own ability and experts to underestimate theirs.

Given this, there are some interesting and bizarre myths around about software/coding being a “young man’s game”. Leaving aside the glaring gender bias in that statement[2], this is rather odd. I know some extremely talented over-40 and over-50 software engineers, and I’m sure that you can think of quite a few if you try. There are probably a few factors at play here:

  • the lionisation of the trope of young (mainly white) “start-up in the garage” coders turning their company into a “unicorn”;
  • the (over-)association of programming with mathematical ability, where a certain set of mathematicians are considered to have done their best work in their twenties;
  • the relative scarcity (particularly in organisations which aren’t tech-specific) of “individual contributor” career tracks, where it’s possible to rise in seniority (and pay) without managing other people;
  • a possible tendency (which I’m positing without much evidence) for a sizeable proportion of senior software folks to take a broader view of the discipline and to move into architectural roles which are required by the industry but are difficult to perform without a good grounding in engineering basics.

In my case, I moved away from writing software maybe 15 years ago, and honestly never thought I’d do any serious coding again, only to discover a gap in the project I’m working on (Enarx) which nobody else had the time to fill, but which I felt merited some attention. That, and a continuous desire to learn new things, which had led me to starting to learn Rust, brought me to some serious programming, which I’ve really enjoyed.

We need old coders: people who have been around the block a few times, have made the mistakes and learned from them. People who can look at competing technologies and make reasoned decisions about which is the best fit for a project, rather than just choosing the newest and “coolest”[3].

Why old people should step aside

Having got all of the above out of my system, I’m now going to put forward an extremely important counter-argument. First, some context. I volunteer for the East of England Ambulance Service Trust as a Community First Responder, a role where I attend patients in (possible) emergency situations and work with ambulance staff, paramedics, etc. I’ve become very interested in some of the theory around patient safety, which it turns out is currently being strongly influenced by lessons learned over the past few decades from transport safety, particularly aviation safety[5].

I need to do more study around this topic, as there are some really interesting lessons that can be applied to our sector (in fact, some are already being learned from our sector, particularly in how DevOps/WebOps teams respond to incidents), but there are two points that have really hit home for me this week, and which are relevant to the point at hand. They are specifically discussed in relation to high-intensity, stressful situations, but I think there’s broader applicability.

1. With experience comes expectation

While experience is enormously useful – bringing insights and knowledge that others may not have, or will find difficult to synthesise – it can also lead you down paths which are incorrect. If you’ve seen the same thing 99 times, you’re likely to assume that the 100th will be the same: bringing in other voices, including less experienced ones, allows other views to be considered, giving a better chance that the correct conclusion will be reached. You increase diversity of opinion and allow alternatives to be brought into the mix. The less experienced team members may be wrong, but from time to time you’ll be wrong, and everyone will benefit from this. By allowing other people a voice, you’re also setting an example that speaking up and offering alternative views is not only acceptable, but valued. You and the team get to learn from each other, whether you’re wrong or right, and you get to discuss with others how you came to your conclusions, welcoming their probing and questions around how you got there.

2. Sometimes you need to step aside to apply yourself elsewhere

Perhaps equally important is that sometimes, tempting as it may be to get your hands dirty and apply your expertise to a particular problem (particularly one which is possibly trivial to you), there are times when it’s best to step aside and let someone less experienced than you do it. Not only because they need the experience themselves, but also because your skills may be better applied at a systems level or dealing with other problems in other contexts (such as funding or resource management). The example sometimes given in healthcare is when a senior clinician arrives on scene at an incident: rather than their taking over the treatment of patients (however skilled the senior clinician may be), their role is to see the larger situation, to prioritise patients for treatment, assess risks to staff on scene, manage transport and the rest. Sometimes they may need to knuckle down and apply their clinical skills directly (much as senior techies may end up coding to meet a demo deadline, for instance), but most of the time, they are best deployed in stepping aside.

Conclusion

With seniority comes perks: getting to do the interesting stuff, taking decisions, having junior folks make the tea and bring the doughnuts in[6]. But it also comes with responsibilities: helping other people learn, seeing the bigger picture, giving less experienced team members the chance to make mistakes, removing barriers imposed by organisational hierarchy and getting the first round in at the pub[7]. Look back at what you were thinking at the beginning of your career, and give your successors (because they will be your successors) the chances that you were so keen for back then. Show them respect, and you (and your organisation) will benefit.


1 – I think that the “like me” is pretty implicit here, yes?

2 – which, sadly, reflects another bias in the market.

3 – there’s an important point here: many of us older folks love new shiny things just as much as the youngsters, and are aware of the problems of the old approaches and languages – but we’re also aware that there are risks and pain points associated with the new, which need to be taken into account[4].

4 – that really made me sound old, didn’t it?

5 – in large part influenced by the work of Martin Bromiley, a civil aviation pilot whose wife Elaine died in a “routine” operation in 2005 and who has worked (and is working) to help the health care sector transition to a no-blame, “just” culture around patient safety.

6 – this is a joke: if you ever, ever find yourself in an office or team where this is the norm, and hierarchy shows in this sort of way, either get out or change that culture just as soon as you can. It’s toxic.

7 – I’m writing this in the middle of the UK’s second Covid-19 lockdown, and can barely remember what a “pub” or a “round” even is.

7 steps to a (bad) tech demo

Give the demo, look condescending, go home.

I’m currently involved with putting together a demo to show off the amazing progress we’ve made with Enarx recently. I’ve watched – and given – quite a few demos in my time, and I considered writing a guide to presenting a good demo. But then I thought: why? Demos should be about showing how clever we are, not about the audience – that’s pretty clear from most of the ones I’ve seen – so I decided to write a guide to what many audiences would consider a bad demo, in other words, the type that most of us in IT are most practised at. Follow this guide, and you can be pretty certain that you will join the ranks of know-it-all, disdainful techies who are better than their audiences and have no interest in engaging with lower, less intelligent beings such as colleagues, user groups or potential customers. If you’re in marketing, sales, documentation or another function, you can still learn from this guide, and use it as part of your journey as you strive to achieve the arrogant sanctimoniousness which we techies cultivate and for which we are (in)famous. Luckily, many demos already exist which exhibit these characteristics, so you shouldn’t need to look far to find examples.

1. Assume your audience is you

You only need one demo, because everybody who matters will come from the same background as you: ultra-technical. You will, of course, be using a terminal, which means text commands to drive the demo. Don’t pander to lesser mortals by expanding parameters, for instance: why use df --all --human-readable --print-type /home/mbursell/ when df -ahT ~/ is available and will save you space and time? If you really must provide graphical output (sometimes even hard-core techies have to work with GUIs, to their intense annoyance and disappointment), ensure that you’re using non-standard colours and icons. Why not use a smiley-face emoji for the “irretrievable delete” function or a thumbs-up icon for “cancel”? And you get extra points if you use a browser which still supports the <blink> tag: everybody’s favourite from the mid-90s.

Your audience shouldn’t need any context before the demo, either: if they’ve come to hear you speak, all you’re doing is giving them an update on exactly how far you have come – in other words, how clever you are. Find ways to exhibit that, and they’ll be impressed by your expertise and intelligence, which is what demos are for in the first place.

2. Don’t check it works

Your demo will, of course, work perfectly. Every time. Which means that there’s no need to check it just before delivering it. In fact, you might as well make a few last tweaks just beforehand – preferably without saving the “last good state”. Everyone will be amused if anything goes wrong – and if it does, it absolutely won’t be your fault. Here’s a list of useful things/people to blame if anything does happen:

  • the conference wifi (more difficult if you’re presenting virtually)
  • the VPN (no need to specify what VPN, or even if you’re using one)
  • other developers (who you can accuse of making last minute changes)
  • the Cloud Service Provider (again, no need to specify which one, or even if you’re using one)
  • DNS
  • certificate expiry
  • neutrinos
  • the marketing department.

3. Use small fonts and icons

Assume that everyone watching your demo:

  1. has perfect eyesight
  2. has perfect colour perception
  3. is sufficiently close to the projected image to see it (physical demo)
  4. is viewing on a screen as large as yours (online)
  5. is viewing on a screen at equal resolution to yours (online)
  6. has sufficient bandwidth that everything displays quickly and clearly enough for them to be able to see (online)
  7. can read and decipher unfamiliar text, commands, icons, obscure diagrams and dread sigils at least as quickly as you can display – and then hide – them.

4. Don’t explain

As noted above, anybody who is worthy of viewing the demo that you have put together should be immediately able to understand any context that is relevant, including any assumptions that you have made when creating the demo. “Obviously, I’ve created this WebAssembly binary directly from the Rust source file with the release flag, which means that we get good portability and type safety but decent on-the-wire speed and encryption time” is more than enough detail. You should probably have gone with something shorter like: “here’s an ls -al of the .wasm file. It’s cross-platform, safe and quick. Compiled with cargo +nightly build --release --target wasm32-wasi, obviously.” Who needs a long and boring-to-deliver explanation of why you chose WebAssembly to allow users to run their applications on multiple different types of system, and the design and build decisions you made to reduce loading time over the network? No-one you care about, certainly.

5. Be very quick or very slow

This important point is related to several of the ones before. If you’ve been paying attention, you’ll know that there’s no good reason to worry about your audience and whether they’re following along, because if they’re the right type of audience (basically, they’re you), then they will already understand what’s going on. You can therefore speak as fast as you like, and whether you’re speaking their first language or not, they should pick up enough to appreciate the brilliance of the demo (in other words, your brilliance).

There is an alternative, of course, which is to speak really slowly. This should never be because you’re allowing your audience to pay attention and catch up, however, but instead because you’re going into extreme detail about every single aspect of your demo, from the choice of your compilation options (see above) to the font family you use in your (many) terminals. This isn’t boring: it’s about you and your choices, so it’s interesting (to anyone who matters – see above).

6. Don’t record the demo

Your demo will work first time (unless you’re hit by problems caused by someone or something else – see 2 above), so there’s no need to record it, is there? What is more, anyone who is sufficiently interested in you, your project and your demo will attend in real-time, so there’s no need to record it for late-comers or for people to watch later. You’ll have made lots of changes within the next couple of weeks anyway, so what’s the point?

7. Don’t answer questions

Demos are for you to tell people about your project, and about you. They are not excuses for postulants to ask their questions of you. Questions generally fall into three types, none of which are of any interest to you:

  1. Stupid questions which betray how little an audience member understands about your demo. This is their fault, as they are not clever enough to get what you’re doing. Ignore.
  2. Annoying questions which would be clear if people paid enough attention. These may be questions which are relevant to the demo, but which should be obvious to anyone who has done sufficient research into your work. Why should you clarify your work for lazy people? Ignore.
  3. Dangerous questions which point out possible mistakes or “improvements” to what you’ve done. You know best – you’re not showing a demo to get suggestions, but in order to expose your expertise. Ignore.

Conclusion

You are brilliant. Your demo is brilliant. Anybody who doesn’t see that is at fault, and it’s not your job to make their lives easier. Give the demo, look condescending, go home.

Enarx end-to-end complete!

We now have a fully working end-to-end proof of concept, with no smoke and mirrors.

I’ve written lots about the Enarx project, a completely open source project around deploying workloads to Trusted Execution Environments, and you can find a number of articles about it elsewhere on this blog.

I have some very exciting news to announce.

A team effort

Yesterday was a huge day for the Enarx project, in that we now have a fully working end-to-end proof of concept, with no smoke and mirrors (we don’t believe in those). The engineers on the team have been working really hard on getting all of the low-level pieces in place, with support from other members on CI/CD, infrastructure, documentation, community outreach and beyond. I won’t mention everyone, as I don’t want to miss anyone out, and I also don’t have their permission, but it’s been fantastic working with everyone. We’ve been edging closer and closer to having all the main pieces ready to go, and just before Christmas/New Year we got attested AMD SEV Keeps working, with the ability to access information from that attestation within the Keep. This allowed us to move to the final step, which is creating an end-to-end client-server architecture. It is this that we got running yesterday.

I happened to be the lucky person to be able to complete this part of the puzzle, building on work by the rest of the team. I don’t have the low-level expertise that many of the team have, but my background is in client-server and peer-to-peer distributed systems, and after I started learning Rust around March 2020, I decided to see if I could do something useful for the project code base: this is my contribution to the engineering. To give you an idea of what we’ve implemented, let’s look at a simple architectural diagram of an Enarx deployment.

Simple Enarx architectural diagram

Much of the work that’s been going on has been concentrated in the Enarx runtime component, getting WebAssembly working in SGX and SEV Trusted Execution Environments, working on syscall implementations and attestation. There’s also been quite a lot of work on glue – how we transfer information around the system in a standards-compliant way (we’re using CBOR encoding throughout). The pieces that I’ve been putting together have been the Enarx client agent, the Enarx host agent (or Enarx Keep Manager) and two pieces which aren’t visible in this diagram (but are in the more detailed one below): the Enarx Keep Loader and Enarx Wasm Loader (“App loader” in the detailed view).

Detailed Enarx architectural diagram

The components

Let’s look at what these components do, and then explain exactly what we’ve achieved. The name in bold refers to the diagram, the name in italics relates to the Rust crate (and, where already merged, the github repository) associated with the component.

  • Enarx Client Agent (client) – responsible for talking to the enarx-keepmgr and requesting a Keep. It checks that the Keep is correctly set up and attested and then sends the workload (a WebAssembly package) to the enarx-wasmldr component, using HTTPS with a one-use certificate derived from the attestation process.
  • Enarx Keep Manager (enarx-keepmgr) – creates enarx-keepldr components at the request of the client, proxying communications to them from the client as required (for certain attestation flows, for instance). It is untrusted by the client.
  • Enarx Keep Loader (enarx-keepldr) – there is an enarx-keepldr per Keep, and it performs the loading of components into the Trusted Execution Environment itself. It sits outside the TEE instance, and is therefore untrusted by the client.
  • Enarx App Loader (enarx-wasmldr) – the enarx-wasmldr component resides within the TEE instance, and therefore has confidentiality and integrity protection from the rest of the host. It receives the WebAssembly (Wasm) workload from the client component and may access secret information provisioned into the Keep during the attestation process.
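
The messages passing between these components are CBOR-encoded, as mentioned above, so here’s a hedged sketch of what encoding and decoding one might look like, using the serde and serde_cbor crates. The KeepRequest type and its fields are purely illustrative, not the actual Enarx wire schema (which, as I admit below, currently lives in my head):

```rust
use serde::{Deserialize, Serialize};

// Illustrative only: not the real Enarx message format.
#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct KeepRequest {
    backend: String,        // e.g. "sev" or "sgx"
    workload_hash: Vec<u8>, // hash of the Wasm workload to run
}

fn main() -> Result<(), serde_cbor::Error> {
    let request = KeepRequest {
        backend: "sev".to_string(),
        workload_hash: vec![0xde, 0xad, 0xbe, 0xef],
    };

    // Encode to CBOR bytes for the wire...
    let bytes = serde_cbor::to_vec(&request)?;
    // ...and decode again at the other end.
    let decoded: KeepRequest = serde_cbor::from_slice(&bytes)?;

    assert_eq!(request, decoded);
    println!("round-tripped {} bytes of CBOR", bytes.len());
    Ok(())
}
```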

Here’s the post I made to the Enarx chat #development channel yesterday to announce what we managed to achieve:

  1. client -> keepmgr: “create sev keep”
  2. keepmgr launches sev keep via systemd
  3. client -> keepmgr: “perform attestation, include this private key” (note – private key is encrypted from keepmgr)
  4. keepmgr -> keepldr: “attestation + private key”
  5. keepldr creates keep, passes private key to it
  6. wasmldr creates certificate from private key
  7. wasmldr waits for workload
  8. client sends workload over HTTPS to wasmldr
  9. wasmldr accepts workload over HTTPS
  10. wasmldr executes workload

WE HAVE A FULLY WORKING END-TO-END DEMO! Thank you everyone

What does this mean? Well, everything works! The client requests a Keep backed by an AMD SEV instance, the Keep is created and attested, it listens for an incoming connection over HTTPS, and the client sends the workload, which then executes. The workload was written in Rust and compiled to WebAssembly – it’s a real application, in other words, and not a hand-crafted piece of WebAssembly for the purposes of testing.
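
For a sense of what “a real application” means here, a workload can be perfectly ordinary Rust. Something like the following stand-in (illustrative, not our actual test workload) compiles to WebAssembly with cargo build --target wasm32-wasi --release:

```rust
// An ordinary Rust program: nothing WebAssembly-specific is needed,
// as the wasm32-wasi target takes care of the platform details.
fn main() {
    let readings = vec![3, 1, 4, 1, 5, 9, 2, 6];
    let total: i32 = readings.iter().sum();
    println!("processed {} readings, total {}", readings.len(), total);
}
```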

What’s next?

There’s lots left to do, including:

  • merging all of the code into the main repositories (I was working in a separate set to avoid undue impact on other efforts)
  • tidying it to make it more presentable (both what the demo shows and the quality of the code!)
  • adding SGX support – we hope that we’re closing in on this very soon
  • making the various components production-ready (the keepmgr, for instance, doesn’t manage multiple enarx-keepldr components very well yet)
  • defining the wire protocol fully (somewhere other than in my head)
  • documenting everything!

But most of that’s easy: it’s just engineering. 🙂

We’d love you to become involved. If you’re interested, read some of my articles, visit the project home page and repositories, hang out on our chat server or watch some of our videos on YouTube. We really welcome involvement – and not just from engineers, either. Come and have a play!

7 tips on how not to write a book

If you’re in the unenviable position of having to write a book: read this.

Just before Christmas – about 6 months ago, it feels like – I published a blog post to announce that I’d finished writing my book: Trust in Computing and the Cloud. I’ve spent much of the time since then in shock that I managed it – and feeling smug that I delivered the text to Wiley some 4-5 months before my deadline. As well as the core text of the book, I’ve created diagrams (which I expect to be redrawn by someone with actual skills), compiled a bibliography, put together an introduction, written a dedication and rustled up a set of acknowledgements. I even added a playlist of some of the tracks to which I’ve listened while writing it all. The final piece of text that the publisher is expecting is, I believe, a biography – I’m waiting to hear what they’d like it to look like.

All that said, I’m aware that the process is far from over: there is going to be lots of editing to be done, from checking my writing to correcting glaring technical errors. There’s an index to be created (thankfully this is not my job – it’s a surprisingly complex task best carried out by those with skill and experience), renaming of some chapters and sections, and decisions on design issues like the cover (I hope to have some input, but don’t expect to be the final arbiter – I know my limits[1]). And then there’s the actual production process – in which I don’t expect to be particularly involved – followed by publicity and, well, selling copies. After which comes the inevitable fame, fortune and beach house in Malibu[2]. So, there’s lots more to do: I also expect to create a website to go with the book – I’ll work with my publisher on this closer to the time.

Having spent over a year writing a book (and having written a few fiction works which nobody seemed that interested in taking up), I’m still not entirely sure how I managed it, so instead of doing the obvious “how to write a book” article, I thought I’d provide an alternative, which I feel fairly well qualified to produce: how not to write a book. I’m going to assume that, for whatever reason, you are expected to write a book, but that you want to make sure that you avoid doing so, or, if you have to do it, that you’ll make the worst fist of it possible: a worthy goal. If you’re in the unenviable position of having to write a book: read this.

1. Avoid passion

If you don’t care about your subject, you’re on good ground. You’ll have little incentive to get your head into the right space for writing, because well, meh. If you’re not passionate about the subject, then actually buckling down and writing the text of the book probably won’t happen, and if, somehow, a book does get written, then it’s likely that any readers who pick it up will fast realise that the turgid, disinterested style[3] you have adopted reflects your ennui with the topic and won’t get much further than the first few pages. Your publisher won’t ask you to produce a second edition: you’re safe.

2. Don’t tell your family

I mean, they’ll probably notice anyway, but don’t tell them before you start, and certainly don’t attempt to get their support and understanding. Failing to write a book is going to be much easier if your nearest and dearest barge into your workspace demanding that you perform tasks like washing up, tidying, checking their homework, going shopping, fixing the Internet or “speaking to the children about their behaviour because I’ve had enough of the little darlings and if you don’t come out of your office right now and take over some of the childcare so that I can have that gin I’ve been promising myself, then I’m not going to be responsible for my actions, so help me.”[4]

3. Assume you know everything already

There’s a good chance that the book you’re writing is on a topic about which you know a fair amount. If this is the case, and you’re a bit of an expert, then there’s a danger that you’ll realise that you don’t know everything about the subject: there’s a famous theory[5] that those who are inexpert think they know more than they do, whereas those who are expert may actually believe they are less expert than they are. Going by this theory, if you don’t realise that you’re inexpert, then you’re sorted, and won’t try to find more information, but if you’re in the unhappy position of actually knowing what you’re up to, you will need to make an effort to avoid referencing other material, reading around the subject or similar. Just put down what you think about the issue, and assume that your aura of authority and the fact that your words are actually in print will be enough to convince your readers (should you get any).

4. Backups are for wimps

I usually find that when I forget to make a backup of a work and it gets lost through my incompetence, power cuts, cat keyboard interventions and the like, it comes out better when I rewrite it. For this reason, it’s best to avoid taking backups of your book as you produce it. My book came to almost exactly 125,000 words, and if I type at around 80wpm, that’s only 1,500 minutes[6], or 60(ish) days of writing. And it’ll be better second time around (see above!), so everybody wins.

5. Write for everyone

Your book is going to be a work of amazing scholarship, but accessible to humanities (arts) and science graduates, school children, liberals and conservatives alike: an easy read of great gravitas. Even if you’re not passionate about the subject (see 1), your publisher is keen enough on it to have agreed to publish your book, so there must be a market – and the wider the market, the more they can sell! For that reason, you clearly want to ensure that you don’t try to write for a particular audience (lectience?), but change your style chapter by chapter – or, even better, section by section or paragraph by paragraph.

6. Ignore deadlines

Douglas Adams said it best: “I love deadlines. I love the whooshing noise they make as they go by.” Your publisher has deadlines to keep themselves happy, not you. Write when you feel like it – your work is so good (see 5) that it’ll stay relevant whenever it finally gets published. Don’t give in to the tyranny of deadlines – even if you agreed to them previously. You’ll end up missing them anyway as you rewrite the entirety of the book when you lose the text and have no backup (see 4).

7. Expect no further involvement after completion

Once you’ve written the book, you’re done, right? You might tell a couple of friends or colleagues, but if you do any publicity for your publisher, or post anything on social media, you’re in danger of it becoming a success and having to produce a second edition (see 1). In fact, you need to put your foot down before you even get to that stage. Once you’ve sent your final text to the publisher, avoid further contact. Your editor will only want you to “revise” or “check” material. This is a waste of your time: you know that what you produced was perfect first time round, so why bother with anything further? Your job was authoring, not editing, revising or checking.


(I should apologise to everyone at Wiley for this post, and in particular the team with whom I’ve been working. You can rest assured that none[7] of these apply to me – or you.)


1 – my wife and family would dispute this. How about “I know some of my limits”?

2 – maybe not, if only because I associate Malibu with a certain rum-based liqueur and ill-advised attempts to appear sophisticated at parties in my youth.

3 – this is not to suggest that authors who are interested in their book’s subject don’t sometimes write in a turgid, disinterested style. I just hope that I’ve managed to avoid it.

4 – disclaimer: getting their support doesn’t mean that you won’t have to perform any of these tasks, just that there may be a little more scope for negotiation. For a couple of weeks or so, at least.

5 – I say it’s famous, but I can’t be bothered to look it up or reference it, because I assume that I know enough about the topic already. See? It’s easy when you know.

6 – it’s also worth avoiding accurate figures in technical work: just round in whichever direction you prefer.

7 – well, probably none. Or not all of them, anyway.

Trust in Computing and the Cloud

I wrote a book.

I usually write and post articles first thing in the morning, before starting work, but today is different. For a start, I’m officially on holiday (so definitely not planning to write any code for Enarx, oh no), and second, I decided that today would be the day that I should finish my book, if I could.

Towards the end of 2019, I signed a contract with Wiley to write a book (which, to be honest, I’d already started) on trust. There’s lots of literature out there on human trust, organisational trust and how humans trust each other, but despite a growing interest in concepts such as zero trust, precious little on how computer systems establish and manage trust relationships to each other. I decided it was time to write a book on this, and also on how trust works (or maybe doesn’t) in the Cloud. I gave myself a target of 125,000 words, simply by looking at a couple of books at the same sort of level and then doing some simple arithmetic around words per page and number of pages. I found out later that I’d got it a bit wrong, and this will be quite a long book – but it turns out that the book that needed writing (or that I needed to write, which isn’t quite the same thing) was almost exactly this long, as when I finished around 1430 GMT today, I found that I was at 124,939 words. I have been tracking it, but not writing to the target, given that my editor told me that I had some latitude, but I’m quite amused by how close it was.

Anyway, I emailed my editor on completion, who replied (despite being on holiday), and given that I’m 5 months or so ahead of schedule, he seems happy (I think they prefer early to late).

I don’t have many more words in me today, so I’m going to wrap up here, but do encourage you to read articles on this blog labelled with “trust”, several of which are edited excerpts from the book. I promise I’ll keep you informed as I get information about publication dates, etc.

Keep safe and have a Merry Christmas and Happy New Year, or whatever you celebrate.

The importance of process (and people and rules)

If there is no process, you can throw technology at it as much as you want, but you are still likely to fail.

Those of us in Europe awoke to the news that the US electoral college have voted for Joe Biden as 46th President of the United States of America. Getting to this point has seemed (at least from the outside) to be a rather tortuous route, but from my understanding of how the US Constitution works[1], this is it: the process is complete and Joe Biden will be sworn in as President of the United States on a (probably very chilly) day next month, at the beginning of 2021. I have no intention of weighing the pros and cons of the candidates, nor even of examining the process (sometimes labelled “arcane” by journalists) by which US presidents are elected, but I do want to spend some time on the fact that there is a process, and thinking about how that works, and what supports it.

This is, first and foremost, a blog about IT security (though I have been known to post on a much wider range of issues from time to time), and so I unsurprisingly spend quite a lot of time discussing technology, but on this occasion I want to avoid doing that, as far as possible. If we look at the process for electing a US president, one of the most striking things about it, we might note, is the lack of technology. Yes, there are electronic voting machines to allow votes to be cast, and yes, myriad computers are deployed by psephologists[2] to forecast the results, but the actual process is lacking in much that we would normally think of as technology.

We often fixate on technology, but if there is no process in place to get from point A to point B, then you can throw technology at it as much as you want, but you are still likely to fail. Those points may be getting from having no president-elect to having a new president, completing a transaction to buy a house or a paperclip, hiring a new CEO or sous-chef, moving from a set of requirements to a working software program, or literally getting from a point A on a map to point B – they all require a process.

What is a process? Google, courtesy of Oxford Languages, offers the definition: a series of actions or steps taken in order to achieve a particular end. This seems like a useful description, but in the contexts we’re describing, it is the fact that the actions or steps are defined which is important. In the world of computing, we might say that there is an algorithm to be followed to complete the process. This algorithm allows a variety of things, all of which are important:

  1. the writing down and codification of the process;
  2. the allocation of different people to different roles in the process;
  3. norms, rules, regulation and/or legislation to be created to ensure the correct following of the process;
  4. the application of technology to simplify, speed up or automate parts of the process.

I don’t want to talk about point 4 particularly – I spend far too much of my time on that in most of my life – and the ways of achieving point 1 are so diverse as to defy consideration in this context, so let’s briefly discuss points 2 and 3.

Allocating people

If you have a process, you can break it into steps, and you can assign roles and responsibilities to those steps. This is useful in a variety of ways, the first of which is that you can start to scale the process by having different people working on different steps – sometimes in parallel. Imagine one person having to count all of the votes in the US presidential election, or even having multiple people doing it, but having to do so in series: it might work, but it’s going to take way too long. Another benefit is one on which the Industrial Revolution was built: specialisation. Some people will be good at some parts of the process, and others at other parts of the process. You can increase efficiency by putting those with expertise on the right pieces of the process. A third, unrelated to efficiency, is separation of responsibilities. Sometimes, it’s important that certain people, who are experts or certified to perform a particular role, are the ones who do that. Often, it’s even more important that certain people don’t perform those roles. An example of this would be if one of the candidates in an election was the one to perform the final tally of votes and hand the result to the person making the announcement, or if they made the announcement themselves. This is equally true for other types of process: your bank does not want you to be the person who provides the final approval for your loan, and a company does not want a spouse, partner or family member to be providing sign-off for a hiring decision.

Norms, rules, regulation and legislation

In the UK, we have strong social norms around the process of queuing, and you will be subject to social (and sometimes stronger!) censure if you break them. Rules around other processes may be stronger, and sometimes regulation by an industry body or even legislation at the national level (or multi-national level such as the EU or UN) is required to safeguard the appropriate execution of a process. The ability for courts to intervene where vote-rigging may have taken place is a good example in the US election process, but legislation and regulation around anything from wiring a house to what fertilisers are allowed on particular crops provide additional levels of checking and assurance that processes are followed correctly (by including censure or punishment for those who have contravened them) or can be remedied when not (through other processes such as legal review or court cases).

Legislation and regulation can be annoying, but without them (or equivalent rules and norms for other types of process), we cannot be sure of what we are getting into, or whether, if we get into it improperly, we will ever get out of it. People support and are subject to these checks and balances, and without the combination of all of them (not forgetting the technology as well), processes are next to useless.


1 – I am not a lawyer. Nor a constitutional expert. Nor even a US citizen. Basically, do not take my word for any of this.

2 – I love this word. We should use it more often.

Why physical compromise is game over

Systems have physical embodiments and are actually things we can touch.

This is an edited excerpt from my forthcoming book on Trust in Computing and the Cloud for Wiley.

We spend a lot of our time – certainly I do – worrying about how malicious attackers might break into systems that I’m running, architecting, designing, implementing or testing. And most of that time is spent thinking about logical – that is, software-based – attacks. But these systems don’t exist solely in the logical realm: they have physical embodiments, and are actually things we can touch. And we need to worry about that more than we typically do – or, more accurately, more of us need to worry about that.

If a system is compromised in software, it’s possible that you may be able to get it back to a clean state, but there is a general principle in IT security that if an attacker has physical access to a system, then that system should be considered compromised. In trust terms (something I care about a lot, given my professional interests), “if an attacker has physical access to a system, then any assurances around expected actions should be considered reduced or void, thereby requiring a re-evaluation of any related trust relationships”. Tampering (see my previous article on this, Not quantum-safe, not tamper-proof, not secure) is the typical concern when considering physical access, but exactly what an attacker will be able to achieve given physical access will depend on a number of factors, not least the skill of the attacker, the resources available to them and the amount of time they have physical access. Scenarios range from an unskilled person attaching a USB drive to a system in a short-duration Evil Maid[1] attack, to long-term access by national intelligence services. But it’s not just running (or provisioned, but not currently running) systems that we need to be worried about: we should extend our scope to those which have yet to be provisioned, or even necessarily assembled, and to those which have been decommissioned.

Many discussions in the IT security realm around the supply chain concentrate mainly on software, though there are some very high profile concerns that some governments (and organisations with nationally sensitive functions) have around hardware sourced from countries with whom they do not share entirely friendly diplomatic, military or commercial relations. Even this scope is too narrow: there are many opportunities for other types of attackers to attack systems at various points in their life-cycles. Dumpster diving, where attackers look for old computers and hardware which has been thrown out by organisations but not sufficiently cleansed of data, is an old and well-established technique. At the other end of the scale, an attacker who was able to get a job at a debit or credit card personalisation company and was then able to gain information about the cryptographic keys inserted in the bank card magnetic stripes or, better yet, chips, might be able to commit fraud which was both extensive and very difficult to track down. None of these attacks requires damage to systems, but they do require physical access to systems or to the manufacturing systems and processes which are part of the systems’ supply chain.

An exhaustive list and description of physical attacks on systems is beyond the scope of this article (readers are recommended to refer to Ross Anderson’s excellent Security Engineering: A Guide to Building Dependable Distributed Systems for more information on this and many other topics relevant to this blog), but some examples across the range of threats may serve to give an idea of the sort of issues that may be of concern.

Attack                                       | Level of sophistication | Time required | Defences
USB drive to retrieve data                   | Low                     | Seconds       | Disable USB ports / use software controls
USB drive to add malware to operating system | Low                     | Seconds       | Disable USB ports / use software controls
USB drive to change boot loader              | Medium                  | Minutes       | Change BIOS settings
Attacks on Thunderbolt ports[2]              | Medium                  | Minutes       | Firmware updates; turn off machine when unattended
Probes on buses and RAM                      | High                    | Hours         | Physical protection of machine
Cold boot attack[3]                          | High                    | Minutes       | Physical protection of machine / TPM integration
Chip scraping attacks                        | High                    | Days          | Physical protection of machine
Electron microscope probes                   | High                    | Days          | Physical protection of machine
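
As a concrete example of the “software controls” defence in the first two rows: on many Linux systems, one well-known technique is to prevent the kernel from ever loading its USB mass-storage driver. This is only a sketch of the idea – the file name below is mine, and note that it doesn’t disable the ports electrically, so a determined attacker still has other options:

    # /etc/modprobe.d/disable-usb-storage.conf (file name is illustrative)
    # Make any attempt to load the USB mass-storage driver run /bin/false instead.
    install usb-storage /bin/false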

The extent to which systems are vulnerable to these attacks varies enormously, and it is notable that systems deployed at the Edge are particularly vulnerable to some of them, compared to systems sitting in an on-premises data centre or in one run by a Cloud Service Provider. This is typically either because it is difficult to apply sufficient physical protections to such systems, or because attackers may be able to achieve long-term physical access with little likelihood that their attacks will be discovered – or, if they are, with little danger of attribution.

Another interesting point about the majority of the attacks noted above is that they do not involve physical damage to the system, and are therefore unlikely to show signs of tampering unless specific measures are in place to reveal them. Providing as much physical protection as possible against the more sophisticated and long-term attacks, alongside visual checks for tampering, is the best defence against techniques which can lead to major, low-level compromise of the Trusted Computing Base.


1 – An Evil Maid attack assumes that an attacker has fairly brief access to a hotel room where computer equipment such as a laptop is stored: whilst there, they have unfettered access, but are expected to leave the system looking and behaving the same way it did before they arrived. This places some bounds on the sorts of attacks available to them, but such attacks are notoriously difficult to defend against.

2 – I wrote an article on one of these: Thunderspy – should I care?

3 – A cold boot attack allows an attacker with access to a system’s RAM to recover data recently held in memory.

Uninterrupted power for the (CI)A

I’d really prefer not to have to restart every time we have a minor power cut.

Regular readers of my blog will know that I love the CIA triad – confidentiality, integrity and availability – as applied to security. Much of the time, I tend to talk about the first two, and spend little time on the third, so today I thought I’d redress the balance a little by talking about UPSes – Uninterruptible Power Supplies. For anyone who was hoping for a trolling piece on giving extra powers to US Government agencies, it’s time to move along to another blog. And anyone who thinks I’d stoop to a deliberately misleading article title just to draw people into reading the blog … well, you’re here now, so you might as well read on (and you’re welcome to visit some of my other articles, such as Do I trust this package?, A cybersecurity tip from Hazzard County, and, of course, Defending our homes).

Years ago, when I was young and had more time on my hands, I ran an email server for my own interest (and accounts). This was fairly soon after we moved to ADSL, so I had an “always-on” connection to the Internet for the first time. I kept it on a pretty basic box behind a pretty basic firewall, and it served email pretty well. Except for when it went *thud*. And the reason it went *thud* was usually power fluctuations. We live in a village in the East Anglian (UK) countryside where the electricity supply, though usually OK, does just stop from time to time. Usually for under a minute, admittedly, but from the point of view of most computer systems, even a second of interruption is enough to turn them off. Usually, I could reboot the machine and, after thinking to itself for a while, it would come back up – but sometimes it wouldn’t. It was around this time, if I remember correctly, that I started getting into journalling file systems to reduce the chance of unrecoverable file system errors. Today, such file systems are commonplace; in those days, they weren’t.

Even when the box did come back up, if I was out of the house, in the office for the day, on holiday or travelling for business, I had a problem: the machine was down, and somebody needed to go and physically turn it on if I wanted access to my email. What I needed was a way to provide uninterruptible power to the system if the electricity went off, and so I bought a UPS: an Uninterruptible Power Supply. A UPS is basically a box that sits between your power socket and your computer, containing a big battery which will keep your system going for a while in the event of a (short) power failure, plus the appropriate electronics to provide AC power from that battery. Most will also have some way to communicate with your system, such as a USB port, and software which you can install so that your system can decide whether or not to shut itself down – when, for instance, the power has been off for long enough that the battery is about to give out. If you’re running a data centre, you’ll typically have lots of UPS boxes keeping your most important servers up while you wait for your back-up generators to kick in, but for my purposes, knowing that my email server would ride out short power drops, and be able to shut down gracefully if the power was out for longer, was enough: I have no interest in running my own generator.
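
In practice, monitoring software such as nut’s upsmon makes this shut-down decision for you, but as an illustration of the logic, here’s a minimal Rust sketch. It assumes nut’s upsc command-line client is installed and that a UPS has been configured under the hypothetical name “myups” (and, of course, shutting down requires appropriate privileges):

    // Minimal sketch only: nut's upsmon normally handles this logic for you.
    // Assumes the `upsc` client is installed and a UPS is configured under
    // the (hypothetical) name "myups".
    use std::process::Command;
    use std::{thread, time::Duration};

    fn ups_status() -> Option<String> {
        let out = Command::new("upsc")
            .args(["myups@localhost", "ups.status"])
            .output()
            .ok()?;
        out.status
            .success()
            .then(|| String::from_utf8_lossy(&out.stdout).trim().to_owned())
    }

    fn main() {
        loop {
            // ups.status contains "OB" when on battery and "LB" when the battery is low.
            let status = ups_status().unwrap_or_default();
            if status.contains("OB") && status.contains("LB") {
                // The battery is about to give out: shut down gracefully.
                let _ = Command::new("shutdown").args(["-h", "now"]).status();
                break;
            }
            thread::sleep(Duration::from_secs(30));
        }
    }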

That old UPS died probably 15 years ago, and I didn’t replace it, as I’d come to my senses and transferred my email accounts to a commercial provider, but over the weekend I bought a new one. I’m running more systems now, some of which are fairly expensive and really don’t like power fluctuations, and there are some which I’d really prefer not to have to restart every time we have a minor power cut. Here’s what I decided I wanted:

  • a product with good open source software support;
  • something which came in under £150;
  • something with enough “juice” (battery power) to tide 2-3 systems over a sub-minute power cut;
  • something with enough juice to keep one low-powered box running for a little longer than that, to allow it to coordinate shutting down the other boxes first, and then take itself down if required;
  • something with enough ports to keep a couple of network switches up while the previous point happened (I thought ahead!);
  • a lithium-ion rather than lead-acid battery, if possible.

I ended up buying an APC BX950UI, which meets all of my requirements apart from the last one: it turns out that only high-end UPS systems currently seem to have moved to lithium-ion battery technology. There are two apparently well-maintained open source software suites that support APC UPS systems: apcupsd and nut, both of which are available for my Linux distribution of choice (Fedora). As it happens, they both also support Windows and Mac, so you can mix and match if needs be. I chose nut, which doesn’t explicitly list my model of UPS, but which supports most of the lower-priced product line with its usbhid-ups driver – I knew that I could move to apcupsd if this didn’t work out, but nut worked fine.
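
By way of illustration, the nut configuration which tells the driver about the UPS is pleasingly short. This is a sketch rather than my exact set-up – the section name “apc” is arbitrary, and on Fedora the file lives at /etc/ups/ups.conf:

    [apc]
        driver = usbhid-ups
        port = auto
        desc = "APC Back-UPS"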

Set-up wasn’t difficult, but required a little research (and borrowing a particular cable/lead from a kind techie friend…), and I plan to provide details of the steps I took in a separate article or articles to make things easier for people wishing to replicate something close to my set-up. However, I’m currently only using it on one system, so haven’t set up the coordinated shutdown configuration. I’ll wait until I’ve got things more fully set up before trying that.
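
For what it’s worth, nut is designed with exactly this sort of coordination in mind: one box talks to the UPS and runs the upsd server, and the other boxes run upsmon as network clients, shutting themselves down when the server reports that the battery is low. As a hedged sketch, the relevant line in a client’s upsmon.conf would look something like this – the host name, user and password are all made up, and older versions of nut say “slave” rather than “secondary”:

    MONITOR apc@primary-box 1 monuser mypassword secondary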

What has this got to do with security? Well, I’m planning to allow VPN access to at least one of the boxes, and I don’t want it suddenly to disappear, leaving a “hole in the network”. I may well move to a central authentication mechanism for this and other home systems (if you’re interested, check out projects such as yubico-pam), and I want the box that provides that service to stay up. I’m also planning some home automation projects where access to systems from outside the network (to view cameras, for instance) will be a pain if things just go down: IoT devices may well come back up by themselves after a power failure, but the machines which coordinate them are less likely to do so.