8 tips to rekindle Linux nostalgia (and pain)

Give us back our uber-geek status: make Linux hard again.

I bought a new machine the other day, and decided to put Fedora 35 on it. I’ve been a Fedora user since joining Red Hat in August 2016, and decided to continue using it when I left to (co-)found Profian in mid 2021, having gone through several Linux distros over the years (Red Hat Enterprise Linux before it was called that, Slackware, Debian, SuSE, Ubuntu, and probably some more I tried for a while and never adopted). As a side note, I like Fedora: it gives a good balance of stability, newish packages and the ability to mess at lower levels if you feel like it, and the move to Gnome Shell extensions to deliver UI widgets is an easy way to deliver more functionality on the desktop.

Anyway, the point of this article is that installing Linux on my most recent machines has become easy. Far too easy. When I started using Linux in around 1997, it was hard. Properly hard. You had to know what hardware might work, know which things you might do that could completely brick your machine, keep track of what was going on, have a detailed understanding of obscure kernel flags, and generally do everything yourself. There was a cachet to using Linux, because only very skilled[1] experts could build a Linux machine and run it successfully as their main desktop. Servers weren’t too difficult, but desktops? They required someone special[2].

I yearn for those days, and I’m sure that many of my readers do, too. What I thought I’d do in the rest of this article is to suggest ways that Linux distributions could allow those of us who enjoy pain to recapture the feeling of “specialness” which used to come with running Linux desktops. Much of the work they will need to do is to remove options from installers. Installers these days take away all of the fun. They’re graphical, guide you through various (sensible) options and just get on with things. I mean: what’s the point? Distributions: please rework the installers, and much of the joy will come back into my Linux distribution life.

1. Keyboards

I don’t want the installer to guess what keyboard I’m using, and, from there, also make intelligent suggestions about what the default language of the system should be. I’d much prefer an obscure list of keyboards, listing the number of keys (which I’ll have to count several times to be sure), layouts and types. Preferably by manufacturer, and preferably listing several which haven’t been available for retail sale for at least 15 years. As an added bonus, I want to have to plug my keyboard into a special dongle so that it will fit into the back of the motherboard, which will have several colour-coded plastic ports which disappear into the case if you push them too hard. USB? It’s for wimps.

2. Networking

Networking: where do I start? To begin with, if we’re going to have wifi support, I want to set it up by hand, because the card I’ve managed to find is so obscure that it’s not listed anywhere online, and the Access Point to which I’m connecting doesn’t actually support any standard protocol versions. Alternatively, it does, but the specific protocol it supports is so bleeding edge that I need to download a new firmware version to the card. This should be packaged for Windows (CE or ME, preferably), and only work if I disable all of the new features that the AP has introduced.

I’d frankly much prefer Ethernet only support. And to have to track down the actual chipset of the (off-board) network card in order to find the right drivers.

In fact, best of all would be to have to configure a modem. I used to love modems. The noises they made. The lights they flashed. The configuration options they provided.

Token ring enthusiasts are on their own.

3. Monitors

Monitors were interesting. It wasn’t usually too hard to get one working without X (that’s the windowing system we used to use – well, was it a windowing system? And why was the display the Xserver, and the, well, server the Xclient? Nobody ever really knew), but get X working? That was difficult and scary. Difficult because there was a set of three different variables you had to work with for at least three different components: X resolution, Y resolution and refresh rate, each with different levels of support by your monitor, your graphics card and your machine (which had to be powerful enough actually to drive the graphics card). Scary because there were frequent warnings that if you chose an unsupported mode, you could permanently damage your monitor.

Now, I should be clear that I never managed to damage a monitor by choosing an unsupported mode, and have never, to my knowledge, met anyone else who did, either, but the people who wrote the (generally impenetrable) documentation for configuring your XServer (or was it XClient…?) had put in some very explicit and concerning warnings which meant that every time you changed the smallest setting in your config file, you crossed your fingers as you moved from terminal to GUI.

Oh – for extra fun, you needed to configure your mouse in the same config file, and that never worked properly either. We should go back to that, too.

4. Disks

Ah, disks. These days, you buy a disk, you put it into your machine, and it works. In fact, it may not actually be a physical disk: it’s as likely to be a Solid State Device, with no moving parts at all. What a con. Back in the day, disks didn’t just work, and, like networking, seemed to be full of clever “new” features that the manufacturers used to con us into buying them, but which wouldn’t be supported by Linux for at least 7 months, and which were therefore completely irrelevant to us. Sometimes, if I recall correctly, a manufacturer would release a new disk which would be found to work with Linux, only for later iterations of the same model to cease working because they used subtly different components.

5. Filesystems

Once you’d got a disk spinning up (and yes, I mean that literally), you then needed to put a filesystem on it. There were a variety of these supported, most of which were irrelevant to all but a couple of dozen enthusiasts running 20-year-old DEC or IBM mini-computers. And how were you supposed to choose which filesystem type to use? What about journalling? Did you need it? What did it do? Did you want to be able to read the filesystem from Windows? Write it from Windows? Boot Windows from it? What about Mac? Did you plan to mount it remotely over a network? Which type of network? Was it SCSI? (Of course it wasn’t SCSI, unless you’d managed to blag an old drive enclosure + card from your employer, who was throwing out an old 640Kb drive which was already, frankly, on its last legs).

Assuming you’d got all that sorted out, you needed to work out how to partition it. Anyone who was anyone managed to create a swap partition too small to allow their machine to run without falling over regularly, or initialised a boot partition so small that they were unable to update their kernel when the next one came out. Do you need a separate home partition? What about /usr/lib? /usr/src? /opt? /var? All those decisions. So much complexity. Fantastic. And then you have to decide if you’re going to encrypt it. And at what level. There’s a whole other article right there.

6. Email

Assuming that you’ve got X up and running, that your keyboard types the right characters (most of the time), that your mouse can move at least to each edge of the screen, and that the desktop is appearing with the top at the top of the screen, rather than the bottom, or even the right or left, you’re probably ready to try doing something useful with your machine. Assuming also that you’ve managed to set up networking, then one task you might try is email.

First, you’ll need an email provider. There is no Google, no Hotmail. You’ll probably get email from your ISP, and if you’re lucky, they sent you printed instructions for setting up email at the same time they sent you details for the modem settings. You’d think that these could be applied directly to your email client, but they’re actually intended for Windows or Mac users, and your Linux mail client (Pine or Mutt if you’re hardcore, emacs if you’re frankly insane) will need completely different information set up[3]. What ports? IMAP or POP? Is it POP3? What type of authentication? If you’re feeling really brave, you might set up your own sendmail instance. Now, that’s real pain.

7. Gaming

Who does gaming on Linux? I mean, apart from Doom and Quake III Arena, all you really need is TuxRacer and a poorly configured wine installation which just about runs Minesweeper. Let’s get rid of all the other stuff. Steam? Machines should be steam-powered, not running games via Steam. Huh. *me tuts*

8. Kernels

Linux distros have always had a difficult line to tread: they want to support as many machine configurations as possible. This makes for a large, potentially slow default kernel. Newer distributions tune this automatically, to make a nice, slimline version which suits your particular set-up. This approach is heresy.

The way it should work is that you download the latest kernel source (over your 56k modem – this didn’t take as long as you might think, as the source was much smaller: only 45 minutes or so, assuming nobody tried to make a phone call on the line during the download process), read the latest change log in the hopes that the new piece of kit you’d purchased for your machine was now at least in experimental release state, find patches from a random website when you discovered it wasn’t, apply the patches, edit your config file by hand, cutting down the options to a bare minimum, run menuconfig (I know, I know: this isn’t as hardcore as it might be, but it’s probably already 11pm by now, and you’ve been at this since 6pm), deal with clashes and missing pieces, start compiling a kernel, go to get some food, come back, compress the kernel, save it to /boot (assuming you made the partition large enough – see above), reboot, and then swear when you get a kernel panic.

That’s the way it should be. I mean, I still compile my own kernels from time to time, but the joy’s gone from it. I don’t need to, and taking the time to strip it down to essentials takes so long it’s hardly worth it, particularly when I can compile it in what seems like milliseconds.

The good news

There is good news. Some things still don’t work easily on Linux. Some games manufacturers don’t bother with us (and Steam can’t run every game (yet?)). Fingerprint readers seem particularly resistant to Linuxification. And video-conferencing still fails to work properly on a number of platforms (I’m looking at you, Teams, and you, Zoom, at least where sharing is concerned).

Getting audio to work for high-end set-ups seems complex, too, but I’m led to believe that this is no different on Windows systems. Macs are so over-engineered that you can probably run a full professional recording studio without having to install any new software, but they don’t count.

Hopefully, someone will read this article and take pity on those of us who took pride in the pain we inflicted on ourselves. Give us back our uber-geek status: make Linux hard again.


1 – my wife prefers the words “sad”, “tragic” and “obsessed”.

2 – this is a word which my wife does apply to me, but possibly with a different usage to the one I’m employing.

3 – I should be honest: I still enjoy setting these up by hand, mainly because I can.

WebAssembly: the importance of language(s)

We provide a guide so that you can try each language for yourself.

Over at Enarx, we’re preparing for another release. They’re coming every four weeks now, and we’re getting into a good rhythm. Thanks to all contributors, and also those working on streamlining the release process. It’s a complex project with lots of dependencies – some internal, and some external – and we’re still feeling our way about how best to manage it all. One thing that you will be starting to see in our documentation, and which we intend to formalise in coming releases, is support for particular languages. I don’t mean human languages (though translations of Enarx documentation into different languages, to support as diverse a community as we can, are definitely of interest), but programming languages.

Enarx is, at its heart, a way to deploy applications into different environments: specifically, Trusted Execution Environments (though we do support testing in KVM). The important word here is “execution”, because applications need a runtime in which to execute. Runtimes come in many different flavours: ELF (“Executable and Linkable Format”, the main standard for Linux systems), JVM (“Java Virtual Machine”, for compiled Java classes) and PE (“Portable Executable”, used by Windows), to give but a few examples. Enarx uses WebAssembly, or, to be more exact, WASI, which you can think of as a “headless” version of WebAssembly: whereas WebAssembly was originally designed to run within browsers, WASI-compliant runtimes support server-type applications. The runtime which Enarx supports is called wasmtime, which is a Bytecode Alliance project, and written in Rust (like Enarx itself).

This is great, but (almost) nobody writes native WebAssembly code (there is actually a “human-readable” format supported by the standard, but I personally wouldn’t want to be writing in it directly!). One of the great things about WebAssembly is that it’s largely language-neutral: you can write in your favourite language and then compile your application to a “wasm” binary to be executed by the runtime (wasmtime, in our case). WebAssembly is attracting lots of attention within the computing community at the moment, and so people have created lots of different mechanisms to allow their favourite languages to create wasm binaries. There’s a curated list here, though it’s not always updated very frequently, and given the amount of interest in the space, it may be a little out of date when you visit the page. In the list, you’ll find common languages like C, C++, Rust, Golang, .Net, Python and Javascript, as well as less obvious ones like Haskell, COBOL and Scheme. Do have a look – you may be surprised to find that support for your favourite “obscure” language has already started, or is even quite mature.
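
As a rough illustration of that workflow, here is a minimal sketch in Rust. It assumes a rustup-managed toolchain with the wasm32-wasi target added and a local wasmtime install (these are my assumptions for the example, not steps taken from the Enarx documentation), and the exact commands may vary with your tool versions:

    // A minimal sketch of the "compile to wasm, run under a WASI runtime" flow.
    // Assumed setup (not from the Enarx docs): rustup toolchain plus wasmtime.
    //
    //   rustup target add wasm32-wasi
    //   cargo new hello-wasi && cd hello-wasi
    //   cargo build --release --target wasm32-wasi
    //   wasmtime target/wasm32-wasi/release/hello-wasi.wasm
    //
    // src/main.rs is just ordinary Rust; nothing WebAssembly-specific is needed.
    fn main() {
        println!("Hello from a wasm binary running under a WASI runtime!");
    }

The same .wasm binary is what Enarx consumes: at the time of writing, the equivalent step is an enarx run invocation pointed at the binary, though do check the current documentation, as the command line may change between releases.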

This proliferation of languages with what we could call “compile target support” for WebAssembly is excellent news for Enarx, because it means that people writing in these languages may be able to write applications that we can run. I say may, because there’s a slight complication, which is that not all of these compile targets support WASI, which is the specific interface supported by wasmtime, and therefore by Enarx.

This is where the Enarx community has started to step in. They – we – have been creating a list of languages which do allow you to compile wasm binaries that execute under wasmtime, and therefore in Enarx. You’ll find a list over at our WebAssembly Guide and, at time of writing, it includes Rust, C++, C, Golang, Ruby, .NET, TypeScript, AssemblyScript, Grain, Zig and JavaScript[1]. You can definitely expect to see more coming in the near future. With this list, we don’t just say “you can run applications compiled from this language”, but provide a guide so that you can try each language for yourself! Currently the structure of how the information is presented varies from language to language – we should probably try to regularise this – but in each case, there should be sufficient information for someone fairly familiar with the language to write a simple program and run it in Enarx.

As I noted above, not all languages with compile target support for WebAssembly will work yet, but we’re also doing “upstream” work in some cases to help particular languages get to a position where they will work by submitting patches to fix specific issues. This is an area where more involvement from the community (that means you!) can help: the more people contributing to this work, or noting how important it is to them, the quicker we’ll gain support for more languages.

And here’s where we hope to be: in upcoming releases, we want to be in a position where Enarx officially supports particular languages. What exactly that “support” entails is something we haven’t yet fully defined, but, at minimum, we hope to be able to say something like “applications written in this language using this set of capabilities/features are expected to work”, based on automated testing of “known good” code on a per-release basis. This will mean that users of Enarx will be able to have high confidence that an application working on one release will behave exactly the same on the next: a really important property for a project intended for commercial deployments.

How can you get involved? Well, the most obvious is to visit the page in our docs relating to your favourite language. Try it out, give us feedback or offer to improve the documentation if you think it needs it, or even go upstream and offer patches. If no such page exists, you could visit our chat channels and ask to see if anyone is working on support and/or create an issue requesting support, explaining why you think it’s important.

Finally, to encourage upstream developers to realise how important supporting “their” language is, you can give the project a GitHub star by visiting https://enarx.dev or https://github.com/enarx/enarx. “Starring” the project is a way to register your interest, and to show the community that Enarx is something you’re interested in.


1 – Huge thanks to everyone involved in these efforts, with a special shout-out to Deepanshu Arora, who’s done lots of work in this area.

WebAssembly logo: By Carlos Baraza – Own work / https://github.com/carlosbaraza/web-assembly-logo, CC0, https://commons.wikimedia.org/w/index.php?curid=56494100

7 tips for how not to design a self-driving car

Consider how a self-driving car should behave if you want it to seem like a(n inexperienced) human driver

About a month ago, one of my children started learning to drive. Much to our (mutual) surprise, it turns out that I’m the favoured parent to be sitting in the passenger seat, which means that I’ve spent quite a few hours watching a (human) driver learning how to operate a car safely. Most of the time, at least.

As a result, I consider myself an expert in general stuff about driving, automotive safety and the rest, and I’ve decided that this extends to the subject of self-driving cars. I’m not such an expert that I feel qualified to tell you how you should design such a car, but I do feel that I can give some pretty useful pointers as to what not to do, along the lines of some of my august previous articles.

What we’re looking to do here is consider how a self-driving car should behave if you want it to seem like a(n inexperienced) human driver. Here are some tips.

1. Teach it to stall

This is a key part of learning to drive in the UK, where most drivers (until recently, at least) have learned on cars with manual (that is “non-automatic”) gearboxes. As a result, learning to stall at inopportune moments is a key skill that the self-driving car needs to master. Alongside this is the ability to judder and “bunny hop” when trying to move off having recovered from the stall. Key places to do so are junctions, roundabouts, and traffic lights. If you can get your automatic gearbox-equipped vehicle to stall, you get extra points: impressive.

2. One thing at a time

Single-threaded programming is much, much simpler than multi-threading. This is a stroke of luck, because new drivers can only concentrate on one thing at a time. They can steer, change gear, watch their speed (more on this below), observe road signs or avoid other road-users, but you shouldn’t expect them to do them all at the same time. The same should go for the vehicle that you’re designing. Use a random number generator to decide which one of these to prioritise, and then ignore the others until you’re done.

3. Full beam all the way

When you’re driving in the dark, you need to be able to see what you’re doing. The best way to do this is to ensure that your headlights are on full beam. All of the time. Some self-driving cars don’t rely on visible light to negotiate the road, but headlights are also important for letting other road-users know you’re there. And the best way to do this is not only to put them on full beam when you set off, but also not to dip them when other vehicles approach. That way, everyone will know you’re there. They may not be happy, but they’ll know you’re there.

4. (Very) Slow is good

Exactly how fast you go shouldn’t be easily guessable – that would make life too simple for other road-users. It’s important to have some sort of algorithm for this, so let me suggest one:

speed = limit/5 * rand (limit/2)

This is a good start, but there are times when you want to add extra surprise: at random intervals, the car should accelerate to about 10% over the speed limit. For added realism, you should be careful to avoid this happening at obvious times such as at the beginning or end of a reduced speed limit – somewhere in the middle is perfect.

5. Fear birds

We live in a rural village, which means that there are lots of birds sitting in the middle of the road most of the time. Favourites are pheasants, partridges, magpies and wood pigeons. Nobody wants feathers (and worse) in their front grill, so it’s important to avoid them. The best way to do this is to make sure that you don’t get too close to them, but this is difficult if they’re everywhere. Fear them. Slow down wherever you can, and not just where you see birds, but where you suspect there might be some – but make sure that other road-users don’t have too much notice. For this, see our next point below.

6. Manoeuvre, signal, mirror (or something like that)

New drivers are taught that there are three important things you need to do every time you do anything like turn off the road, come to a stop or avoid an obstruction (such as birds – see above). These should allegedly be done in a particular order, but real human learner drivers don’t stick to this order, so it’s important to bring a little randomness into the equation. Obviously, self-driving cars don’t need to use mirrors, so in lieu of this, add a little swerve here or there – which is something that learner drivers do when they use their mirrors, so should serve instead.

7. Have a “tutting” passenger

One of the key parts of getting used to driving, as any learner will tell you, is having a passenger next to you, making disapproving noises at (for you) unexpected moments. You’ll need to ensure that there’s some mechanism in the self-driving vehicle to deliver such sound effects. For added realism, you can add panicked cries of “slow, slower, SLOWER, STOP, STOP NOW!”, followed by “Right. We were very lucky there. Get out of the driver’s seat: I’m taking over.” Once you get there, you know that you’ve really succeeded.


  1. I should point out that to make any extrapolations from this article to the actual driving of the child mentioned above would be grossly unfair.

Title image by Colin on Flickr.

More Enarx milestones

It’s been a big month for Enarx.

It’s been a big month for Enarx. Last week, I announced that we’d released Enarx 0.3.0 (Chittorgarh Fort), with some big back-end changes, and some new functionality as well. This week, the news is that we’ve hit a couple of milestones around activity and involvement in the project.

1500 commits

The first milestone is 1500 commits to the core project repository. When you use a git-based system, each time you make a change to a file or set of files (including deleting old ones, creating new ones and editing or removing sections), you create a new commit. Each commit has a rather long identifier, and its position in the project is also recorded, along with the name provided by the committer and any comments. Commit 1500 to the enarx repository was from Nathaniel McCallum, and was entitled feat(wasmldr): add Platform API. He committed it on Saturday, 2022-03-19, and its commit hash is 8ec77de0104c0f33e7dd735c245f3b4aa91bb4d2.

I should point out that this isn’t the 1500th commit to the Enarx project, but the 1500th commit to the enarx/enarx repository on GitHub. This is the core repository for the Enarx project, but there are quite a few others, some of which also have lots of commits. As an example, the enarx/enarx-shim-sgx repository, which provides some SGX-specific capabilities within Enarx, had 968 commits at time of writing.

500 GitHub stars

The second milestone is 500 GitHub stars. Stars are a measure of how popular a repository or project is, and you can think of them as the GitHub equivalent of a “like” on social media: people who are interested in it can easily click a button on the repository page to “star” it (they can “unstar” it, too, if they change their mind). We only tend to count stars on the main enarx/enarx repository, as that’s the core one for the Enarx project. The 500th star was given to the project by a GitHub user going by the username shebuel-oss, a self-described “Open Source contributor, Advocate, and Community builder”: we’re really pleased to have attracted their interest!

There’s a handy little website called GitHub Star History which allows you to track the stars given to a project over time, watch the addition (or removal!) of stars, and compare other projects. You can check the status of Enarx whenever you’re reading this by following this link, but for the purposes of this article, the important question is how did we get to 500? Here’s a graph:

Enarx GitHub star history to 500 stars

You’ll see a nice steep line towards the end which corresponds to Nick Vidal’s influence as community manager, actively working to encourage more interest and involvement, and contributions to the Enarx project.

Why do these numbers matter?

Objectively, they don’t, if I’m honest: we could equally easily have chosen a nice power of two (like 512) for the number of stars, or the year that Michelangelo started work on the statue David (1501) for the number of commits. Most humans, however, like round decimal numbers, and the fact that we hit 1500 and 500 commits and stars respectively within a couple of days of each other provides a nice visual symmetry.

Subjectively, there’s the fact that we get to track the growth in interest (and the acceleration in growth) and contribution via these two measurements and their historical figures. The Enarx project is doing very well by these criteria, and that means that we’re beginning to get more visibility of the project. This is good for the project, it’s good for Profian (the company Nathaniel and I founded last year to take Enarx to market) and I believe that it’s good for Confidential Computing and open source more generally.

But don’t take my word for it: come and find out about the project and get involved.

Enarx 0.3.0 (Chittorgarh Fort)

Write some applications and run them in an Enarx Keep.

I usually post on a Tuesday, but this week I wanted to wait for a significant event: the release of Enarx v0.3.0, codenamed “Chittorgarh Fort”. This happened after I’d gone to bed, so I don’t feel too bad about failing to post on time. I announced Enarx nearly three years ago, in the article Announcing Enarx on the 7th May 2019, and it’s admittedly taken us a long time to get to where we are now. That’s largely because we wanted to do it right, and building up a community, creating a start-up and hiring folks with the appropriate skills is difficult. The design has evolved over time, but the core principles and core architecture are the same as when we announced the project.

You can find more information about v0.3.0 at the release page, but I thought I’d give a few details here and also briefly add to what’s on the Enarx blog about the release.

What’s Enarx?

Enarx is a deployment framework for running applications within Trusted Execution Environments (TEEs). We provide a WebAssembly runtime and – this is new functionality that we’ve started adding in this release – attestation so that you can be sure that your application is protected within a TEE instance.

What’s new in v0.3.0?

A fair amount of the development for this release has been in functionality which won’t be visible to most users, including a major rewrite of the TEE/host interface component that we call sallyport. You will, however, notice that TLS support has been added to network connections from applications within the Keep. This is transparent to the application, so “Where does the certificate come from?” I hear you ask. The answer to that is from the attestation service that’s also part of this release. We’ll be talking more about that in further releases and articles, but key to the approach we’re taking is that interactions with the service (we call it the “Steward”) are pretty much transparent to users and applications.

How can I get involved?

What can you do to get involved? Well, visit the Enarx website, look at the code and docs over at our github repositories (please star the project!), get involved in the chat. The very best thing you can do, having looked around, is to write some applications and run them in an Enarx Keep. And then tell us about your experience. If it worked first time, then wow! We’re still very much in development, but we want to amass a list of applications that are known to work within Enarx, so tell us about it. If it doesn’t work, then please also tell us about it, and have a look at our issues page to see if you’re the first person to run across this problem. If you’re not, then please add your experiences to an existing issue, but if you are, then create a new one.

Enarx isn’t production ready, but it’s absolutely ready for initial investigations (as shown by our interns, who created a set of demos for v0.2.0, curated and aided by our community manager Nick Vidal).

Why Chittorgarh Fort?

It’s worth having a look at the Wikipedia entry for the fort: it’s really something! We decided, when we started creating official releases, that we wanted to go with the fortification theme that Enarx has adopted (that’s why you deploy applications to Enarx Keeps – a keep is the safest part of a castle). We started with Alamo, then went to Balmoral Castle, and then to Chittorgarh Fort (we’re trying to go with alphabetically sequential examples as far as we can!). I suggested Chittorgarh Fort to reflect the global nature of our community, which happens to include a number of contributors from India.

Who was involved?

I liked the fact that the Enarx blog post mentioned the names of some (most?) of those involved, so I thought I’d copy the list of github account names from there, with sincere thanks:

@MikeCamel @npmccallum @haraldh @connorkuehl @lkatalin @mbestavros @wgwoods @axelsimon @ueno @ziyi-yan @ambaxter @squidboylan @blazebissar @michiboo @matt-ross16 @jyotsna-penumaka @steveeJ @greyspectrum @rvolosatovs @lilienbm @CyberEpsilon @kubkon @nickvidal @uudiin @zeenix @sagiegurari @platten @greyspectrum @bstrie @jarkkojs @definitelynobody @Deepansharora27 @mayankkumar2 @moksh-pathak


Rahultalreja11 at English Wikipedia, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons

What’s a secure channel?

Always beware of products and services which call themselves “secure”. Because they’re not.

A friend asked me what I considered a secure channel a couple of months ago, and it made me think. Many of us have information that we wish to communicate which we’d rather other people can’t look at, for all sorts of reasons. These might range from present ideas for our spouse or partner sent by a friend to my phone to diplomatic communications about espionage targets sent between embassies over the Internet, with lots in between: intellectual property discussions, bank transactions and much else. Sometimes, we want to ensure that people can’t change what’s in the messages we send: it might be OK for other people to know that I pay £300 in rent, but not for them to be able to change the amount (or the bank account into which it goes). These two properties are referred to as confidentiality (keeping information secret) and integrity (keeping information unchangeable), and often you want to combine them – in the case of our espionage plans, I’d prefer that my enemies don’t know what targets are at risk, but also that they don’t change the targets I’ve selected to something less bothersome for them.
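
To make those two properties concrete, here is a minimal sketch using an AEAD (“Authenticated Encryption with Associated Data”) cipher, which is the standard way of getting both at once. I’m assuming the Rust aes-gcm crate and its 0.10-era API purely for illustration (this is my own example, not something from the original discussion); the point is simply that an attacker who flips a bit of the ciphertext cannot silently change the rent figure, all they can do is cause decryption to fail:

    // A sketch of confidentiality plus integrity using AES-256-GCM (an AEAD).
    // Assumption: the aes-gcm crate (0.10-era API) is declared as a dependency.
    use aes_gcm::{
        aead::{Aead, AeadCore, KeyInit, OsRng},
        Aes256Gcm,
    };

    fn main() {
        let key = Aes256Gcm::generate_key(OsRng);
        let cipher = Aes256Gcm::new(&key);
        // The nonce must be unique for every message sent under a given key.
        let nonce = Aes256Gcm::generate_nonce(&mut OsRng);

        let message: &[u8] = b"rent payment: 300";
        let ciphertext = cipher.encrypt(&nonce, message).expect("encrypt failed");

        // Integrity: flip a single bit of the ciphertext and decryption fails,
        // rather than quietly returning an altered amount.
        let mut tampered = ciphertext.clone();
        tampered[0] ^= 0x01;
        assert!(cipher.decrypt(&nonce, tampered.as_ref()).is_err());

        // Confidentiality: only someone holding the key recovers the plaintext.
        let recovered = cipher.decrypt(&nonce, ciphertext.as_ref()).expect("decrypt failed");
        assert_eq!(recovered.as_slice(), message);
    }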

Modern encryption systems generally provide both confidentiality and integrity for messages and data, so I’m going to treat these as standard properties for an encrypted channel. Which means that if I use encryption on a channel, it’s secure, right?

Hmm. Let’s step back a bit, because, unfortunately, there’s rather a lot more to unpack than that. Three of the questions we need to tackle should give us pause. They are: “secure from whom?”, “secure for how long?” and “secure where?”. The answers we give to these questions will be important, and though they are all somewhat intertwined, I’m going to deal with them in order, and I’m going to use the examples of the espionage message and the present ideas to discuss them. I’m also going to talk more about confidentiality than integrity – though we’ll assume that both properties are important to what we mean by “secure”.

Secure from whom?

In our examples, we have very different sets of people wanting to read our messages – a nation state and my spouse. Unless my spouse has access to skills and facilities of which I’m unaware (and I wouldn’t put it past her), the resources that she has at her disposal to try to break the security of my communication are both fewer and less powerful than those of the nation state. A nation state may be able to apply cryptologic attacks to messages, attack the software (and even firmware or hardware) implementations of the encryption system, mess with the amount of entropy available for key generation at either or both ends of the channel, perform interception (e.g. Person-In-The-Middle) attacks, coerce the sender or recipient of the message and more. I’m hoping that most of the above are not options for my wife (though coercion might be, I suppose!). The choice of encryption system, including entropy sources, cipher suite(s), hardware and software implementation are all vital in the diplomatic message case, as are vetting of staff and many other issues. In the case of gift ideas for my wife’s birthday, I’m assuming that a standard implementation of a commercial messaging system should be enough.

Secure for how long?

It’s only a few days till my wife’s birthday (yes, I have got her a present, though that does remind me; I need a card…), so I only have to keep the gift ideas secure for a little longer. It turns out that, in this case, the time sensitivity of the integrity of the message is different to that of the confidentiality: even if she managed to change what the gift idea in the message was, it wouldn’t make a difference to what I’ve got her at this point. However, I’d still prefer if she didn’t know what the gift ideas are.

In the case of the diplomatic espionage message, we can assume that confidentiality and the integrity are both important for a much longer time, but we’ll concentrate on the confidentiality. Obviously an attacking country would prefer it if the target were unaware of an attack before it happened, but if the enemy managed to prove an attack was performed by the message sender’s or recipient’s country, even a decade or more in the future, this could also lead to major (and negative) consequences. We want to ensure that whatever steps we take to protect the message are sufficient that access to a copy of the message taken when it was sent (via wire-tapping, for instance) or retrieved at a later date (via access to a message store in the future), is insufficient to allow it to be cracked. This is tricky, and the history of cryptologic attacks on encryption schemes, not to mention human failures (such as leaks) and advances in computation (such as quantum computing) should serve as a strong warning that we need to consider very carefully what mechanisms we should use to protect our messages.

Secure where?

Are the embassies secure? Are all the machines between the embassies secure? Is the message stored before delivery? If so, is it stored on a machine within the embassy or on a server elsewhere? Is it end-to-end encrypted, or is it decrypted before delivery and then re-encrypted (I really, really hope not)? While this is unlikely in the case of diplomatic messages, a good number of commercially sensitive messages (including much email) are not end-to-end encrypted, leading to vulnerabilities if someone trying to break the security can get access to the system where they are stored, or intercept them between decryption and re-encryption.

Typically, we have better control over different parts of the infrastructure which carry or host our communications than we do over others. For most of the article above, I’ve generally assumed that the nation state trying to read the embassy message is going to have more relevant resources to try to breach the security of the message than my wife does, but there’s a significant weakness in protecting my wife’s gift idea: she has easy access to my phone. I tend to keep it locked, and it has a PIN, but, if I’m honest, I don’t tend to go out of my way to keep her out: the PIN is to deter someone who might steal it. Equally, it’s entirely possible that I may be sharing some material (a video or news article) with her at exactly the time that the gift idea message arrives from our mutual friend, leading her to see the notification. In either case, there’s a good chance that the property of confidentiality is not that strong after all.

Conclusion

I’ve said it before, and I plan to say it again (and again, and again): there is no “secure”. When we talk about secure channels, we must be aware that what we mean should be “channels secured with appropriate measures to protect against the risks associated with the security being compromised”. This is a long way of saying “if I’m protecting diplomatic messages, I need to make greater efforts than if I’m trying to stop my wife finding out ahead of time what she’s getting for her birthday”, but it’s important to understand this. Part of the problem is that we’re bombarded with words like “secure”, which are unqualified, and may lead us to think that they’re absolute, when they’re absolutely not. Another part of the problem is the assumption that once we’ve put one type of security in place, particularly when it’s sold or marketed as “best in breed” or “best practice”, it addresses all of the issues we might have. This is clearly not the case – using the strongest encryption possible for messages between my friend and me isn’t going to stop my wife from knowing what I’ve bought her if she knows the PIN for my phone. Please, please, consider what you need when you’re protecting your communications (and other data, of course), and always beware of products and services which call themselves “secure”. Because they’re not.

Emotional about open source

Enarx is available to all, usable by all.

Around October 2019, Nathaniel McCallum and I founded the Enarx project. Well, we’d actually started it before then, but it’s around then that the main GitHub repo starts showing up, when I look at available info. In the middle of 2021, we secured funding for a start-up (now named Profian), and since then we’ve established a team of engineers to work on the project, which is itself part of the Confidential Computing Consortium. Enarx is completely open source, and that’s really central to the project. We want (and need) the community to get involved, try it out, improve it, and use it. And, of course, if it’s not open source, you can’t trust it, and that’s really important for security.

The journey has been hard at times, and there were times when we nearly gave up on the funding, but neither Nathaniel nor I could see ourselves working on anything else – we really, truly believe that there’s something truly special going on, and we want to bring it to the world. I’m glad (and relieved) that we persevered. Why? Because last week, on Thursday, was the day that this came true for me. The occasion was OC3, a conference on Confidential Computing organised by Edgeless Systems. I was giving a talk on Understanding trust relationships for Confidential Computing, which I was looking forward to, but Nick Vidal, Community Manager for the Enarx project, also had a session earlier on. His session was entitled From zero to hero: making Confidential Computing accessible, and wasn’t really his at all: it was taken up almost entirely by interns in the project, with a brief introduction and summing up by Nick. In his introduction, Nick explained that he’d be showing several videos of demos that the interns had recorded. These demos took the Enarx project and ran applications that they (the interns) had created within Keeps, using the WebAssembly runtime provided within Enarx. The interns and their demos were:

  • TCP Echo Server (Moksh Pathak & Deepanshu Arora) – Moksh and Deepanshu showed two demos: a ROT13 server which accepts connections, reads text from them and returns the input, ROT13ed; and a simple echo server.
  • Fibonacci number generator (Jennifer Chukwu) – a simple Fibonacci number generator running in a Keep
  • Machine learning with decision tree algorithm on Diabetes data set (Jennifer Kumar & Ajay Kumar) – implementation of Machine Learning, operating on a small dataset.
  • Zero Knowledge Proof using Bulletproof (Shraddha Inamdar) – implementation of a Zero Knowledge Proof with verification.

These demos are exciting for several reasons:

  1. three of them have direct real-world equivalent use cases:
    1. The ROT13 server, while simple, could be the basis for an encryption/decryption service (a minimal sketch of the ROT13 transform itself appears after this list).
    2. the Machine Learning service is directly relevant to organisations who wish to run ML workloads in the Cloud, but need assurances that the data is both confidentiality- and integrity-protected.
    3. the Zero Knowledge Proof demo provides an example of a primitive required for complex transaction services.
  2. none of the creators of the demos knew anything about Confidential Computing until a few months ago.
  3. none of the creators knew much – if anything – about WebAssembly before coming to the project.
  4. none of the creators is a software engineering professional (yet!). They are all young people with an interest in the field, but little experience.
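
For the curious, the transform at the heart of that first demo is tiny. Here is a minimal sketch of it in Rust (my own illustrative version, not the interns’ code, and it ignores the networking side entirely):

    // ROT13: rotate ASCII letters by 13 places and leave everything else alone.
    // Applying it twice returns the original text, which is why it is only
    // "encryption" in scare quotes.
    fn rot13(input: &str) -> String {
        input
            .chars()
            .map(|c| match c {
                'a'..='z' => (((c as u8 - b'a' + 13) % 26) + b'a') as char,
                'A'..='Z' => (((c as u8 - b'A' + 13) % 26) + b'A') as char,
                _ => c,
            })
            .collect()
    }

    fn main() {
        let message = "Confidential Computing";
        let encoded = rot13(message);
        assert_eq!(rot13(&encoded), message);
        println!("{} -> {}", message, encoded);
    }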

What this presentation showed me is that what we’re building with Enarx (though it’s not even finished at this point) is a framework that doesn’t require expertise to use. It’s accessible to beginners, who can easily write and deploy applications with obvious value. This is what made me emotional: Enarx is available to all, usable by all. Not just security experts. Not just Confidential Computing gurus. Everyone. We always wanted to build something that would simplify access to Confidential Computing, and that’s what we, the community, have brought to the world.

I’m really passionate about this, and I’d love to encourage you to become passionate about it, too. If you’d like to know more about Enarx, and hopefully even try it yourself, here are some ways to do just that:

  • visit our website, with documentation, examples and a guide to getting started
  • join our chat and then one of our stand-ups
  • view the code over at GitHub (and please star the project: it encourages more people to get involved!)
  • read the Enarx blog
  • watch the video of the demos.

I’d like to finish this post by thanking not only the interns who created the demos, but also Nick Vidal, for the incredible (and tireless!) work he’s put into helping the interns and into growing the community. And, of course, everyone involved in the project for their efforts in getting us to where we are (and the vision to continue to the next exciting stages: subscribe to this blog for upcoming details).

Enarx 0.2.0 – Balmoral Castle

Now it’s possible to write applications that you can talk to over the network

The big news this week from the Enarx project is our latest release: 0.2.0, which is codenamed “Balmoral Castle”, to continue with our castle/fortification theme.

The big change in Enarx 0.2.0 is the addition of support for networking. Until now, there wasn’t much you could really do in an Enarx Keep, honestly: you could run an application, but all it could do for input and output was read from stdin and write to stdout or stderr. While this was enough to prove that you could write and compile applications to WebAssembly and run them, any more complex interaction with the world outside the Keep was impossible.

So, why is this big news? Well, now it’s possible to write applications that you can talk to over the network. The canonical example which we’ve provided as part of the release is a simple “echo” server, which you start in a Keep and which then listens on a port for incoming connections. You make a connection (for instance using the simple command-line utility ncat), and send it a line of text. The server accepts the connection, receives the text and sends it right back to you. It can handle multiple connections and will send the text back to the right one (hopefully!).
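
To give a flavour of what such an application looks like, here is a minimal sketch of a line-echo server in plain Rust using std::net. To be clear, this is my own illustrative version rather than the code shipped with the release: it handles one connection at a time, and under Enarx/WASI the listening socket is typically provided to the application by the host rather than being bound by the code itself.

    use std::io::{BufRead, BufReader, Write};
    use std::net::TcpListener;

    // A minimal line-echo server: accept a connection, read lines, send each
    // one straight back. Illustrative only: the release example obtains its
    // listening socket from the runtime and handles multiple connections.
    fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:9000")?;
        for stream in listener.incoming() {
            let stream = stream?;
            let mut writer = stream.try_clone()?;
            let reader = BufReader::new(stream);
            for line in reader.lines() {
                writer.write_all(line?.as_bytes())?;
                writer.write_all(b"\n")?;
            }
        }
        Ok(())
    }

You can then test it in the way described above, for instance with ncat 127.0.0.1 9000, typing a line and watching it come back.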

This is new functionality with Enarx 0.2.0, and the ability to use networking mirrors an important change in the WASI (WebAssembly System Interface) specification, the interface implemented by the runtime within an Enarx Keep. Specifically, WASI snapshot preview 1, released in January 2022, now supports the ACCEPT capability on sockets. The way that WASI works with managing permissions and capabilities is carefully designed, and we (the Profian folks working on Enarx) coordinated closely with the open source WASI/Wasm community to add this in a way which is consistent with the design philosophy of the project. Once the capability was added to the snapshot, there was one more step needed before Enarx could implement support, which was that it needed to appear in wasmtime, the WebAssembly runtime we use within Keeps to allow you to run your applications. This happened last week, in wasmtime release 0.34.0, and that allowed us to make this new release of Enarx.

This may not sound very exciting … but with this in place, you can start to build proper applications and micro-services. What about an HTTP server? A ROT13 “encryption” service? A chatbot? An email server? A Wordle implementation[1]? And it’s not just text that you can send over a network connection, of course. What might you write to process other types of data? A timestamp server? A logging service? With a network connection, you have the ability to write any of these. Micro-services are all about accepting connections, processing the data that’s come in, and then sending out the results. All of that is possible with this new release.

What we’d love you to do is to start writing applications (using networking) and running them in Enarx. Tell us what works – even better, tell us what doesn’t by creating an issue in our github repository. Please publish examples, join our chat channels, give us a github star, get involved.

What’s coming next? Well, keep an eye on the Enarx site, but be assured that I’ll announce major news here as well. You can expect work in attestation and deployment in the near future – watch this space…


1 – at time of writing, everyone’s talking about Wordle. For those of you coming from the future (say a couple of weeks from now), you can probably ignore this example.

[Image of Edward VII at Balmoral Castle from Wikimedia].