Enarx and Pi (and Wasm)

It’s not just Raspberry Pi, but also Macs.

A few weeks ago, I wrote a blog post entitled WebAssembly: the importance of language(s), in which I talked about how important it is for Enarx that WebAssembly supports multiple languages. We want to make it easy for as many people as possible to use Enarx. Today, we have a new release of Enarx – Elmina Castle – and with it comes something else very exciting: Raspberry Pi support. In fact, there’s loads more in this release – it’s not just Raspberry Pi, but also Macs – but I’d like to concentrate on what the Raspberry Pi support means.

As of this release, you can run WebAssembly applications on your Raspberry Pi, using Enarx. Yes, that’s right: you can take your existing Raspberry Pi (as long as it’s running a 64-bit kernel), and run Wasm apps with the Enarx framework.

While the Enarx framework provides the ability to deploy applications in Keeps (TEE[1] instances), one of the important features that it also brings is the ability to run applications outside these TEEs so that you can debug and test your apps. The ability to do this much more simply is what we’re announcing today.

3 reasons this is important

1. WebAssembly just got simpler

WebAssembly is very, very hot at the moment, and there’s a huge movement behind adoption of WASI, which is designed for server-based (that is, non-browser) applications which want to take advantage of all the benefits that Wasm brings – cross-architecture support, strong security model, performance and the rest.

As noted above, Enarx is about running apps within Keeps, protected within TEE instances, but access to the appropriate hardware to do this is difficult. We wanted to make it simple for people without direct access to the hardware to create and test their applications on whatever hardware they have, and lots of people have Raspberry Pis (or Macs).

Some people may just want to use Enarx to run their Wasm applications, and while that’s not the main goal of the project, that’s just fine, of course!

2. Tapping the Pi dev community

The Raspberry Pi community is one of the most creative and vibrant communities out there. It’s very open source friendly, and Raspberry Pi hardware is designed to be cheap and accessible to as many people as possible. We’re very excited about allowing anyone with access to a Pi to start developing WebAssembly and deploying apps with Enarx.

The Raspberry Pi community also has a (deserved) reputation for coming up with new and unexpected uses for technology, and we’re really interested to see what new applications arise: please tell us.

3. Preparing for Armv9 Realms

Last, and far from least, is the fact that in 2021, Arm announced their CCA (Confidential Compute Architecture), coming out with the Armv9 architecture. This will allow the creation of TEEs called Realms, which we’re looking forward to supporting with Enarx. Running Enarx on existing Arm architecture (which is what powers Raspberry Pis) is an important step towards that goal. Extending Enarx Keeps beyond the x86 architecture (as embodied by the Intel SGX and AMD SEV architectures) has always been a goal of the project, and this provides a very important first step which will allow us to move much faster when chips with the appropriate capabilities start becoming available.

How do I try it on my Raspberry Pi?

First, you’ll need a Raspberry Pi running a 64-bit kernel. Instructions for this are available over at the Raspberry Pi OS pages, and the good news is that the default installer can easily put this on all of the more recent hardware models.

Next, you’ll need to follow the instructions over at the Enarx installation guide. That will walk you through it, and if you have any problems, you can (and should!) report them, by chatting with the community over at our chat or by searching for/adding bug issues at our issue tracker.
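
If you’d like a feel for the end-to-end flow before you dive in, here’s a minimal sketch using Rust. The program itself is ordinary Rust, and the commands in the comments are illustrative only (the exact Enarx invocation may vary by release, so check the installation guide):

    // src/main.rs – an ordinary Rust program; nothing Wasm-specific in it.
    //
    // Illustrative build-and-run commands, assuming rustup and the Enarx CLI
    // are installed (check the installation guide for the exact invocation):
    //
    //   rustup target add wasm32-wasi
    //   cargo build --release --target wasm32-wasi
    //   enarx run target/wasm32-wasi/release/hello.wasm
    //
    fn main() {
        println!("Hello from a Wasm app on a Raspberry Pi!");
    }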

We look forward to hearing how you’re doing. If you think this is cool (and we certainly do!), then please head to our main repository at https://github.com/enarx/enarx and give us a star.


1 – Trusted Execution Environments, such as Intel’s SGX and AMD’s SEV.

Image: Michael H. („Laserlicht“) / Wikimedia Commons

8 tips to rekindle Linux nostalgia (and pain)

Give us back our uber-geek status: make Linux hard again.

I bought a new machine the other day, and decided to put Fedora 35 on it. I’ve been a Fedora user since joining Red Hat in August 2016, and decided to continue using it when I left to (co-)found Profian in mid 2021, having gone through several Linux distros over the years (Red Hat Enterprise Linux before it was called that, Slackware, Debian, SuSE, Ubuntu, and probably some more I tried for a while and never adopted). As a side note, I like Fedora: it gives a good balance of stability, newish packages and the ability to mess at lower levels if you feel like it, and the move to Gnome Shell extensions to deliver UI widgets is an easy way to add more functionality on the desktop.

Anyway, the point of this article is that installing Linux on my most recent machines has become easy. Far too easy. When I started using Linux in around 1997, it was hard. Properly hard. You had to know what hardware might work, be careful of things you might do which could completely brick your machine, understand what was going on, have a detailed understanding of obscure kernel flags, and generally have to do everything yourself. There was a cachet to using Linux, because only very skilled[1] experts could build a Linux machine and run it successfully as their main desktop. Servers weren’t too difficult, but desktops? They required someone special[2].

I yearn for those days, and I’m sure that many of my readers do, too. What I thought I’d do in the rest of this article is to suggest ways that Linux distributions could allow those of us who enjoy pain to recapture the feeling of “specialness” which used to come with running Linux desktops. Much of the work they will need to do is to remove options from installers. Installers these days take away all of the fun. They’re graphical, guide you through various (sensible) options and just get on with things. I mean: what’s the point? Distributions: please rework the installers, and much of the joy will come back into my Linux distribution life.

1. Keyboards

I don’t want the installer to guess what keyboard I’m using, and, from there, also make intelligent suggestions about what the default language of the system should be. I’d much prefer an obscure list of keyboards, listing the number of keys (which I’ll have to count several times to be sure), layouts and types. Preferably by manufacturer, and preferably listing several which haven’t been available for retail sale for at least 15 years. As an added bonus, I want to have to plug my keyboard into a special dongle so that it will fit into the back of the motherboard, which will have several colour-coded plastic ports which disappear into the case if you push them too hard. USB? It’s for wimps.

2. Networking

Networking: where do I start? To begin with, if we’re going to have wifi support, I want to set it up by hand, because the card I’ve managed to find is so obscure that it’s not listed anywhere online, and the Access Point to which I’m connecting doesn’t actually support any standard protocol versions. Alternatively, it does, but the specific protocol it supports is so bleeding edge that I need to download a new firmware version to the card. This should be packaged for Windows (CE or ME, preferably), and only work if I disable all of the new features that the AP has introduced.

I’d frankly much prefer Ethernet-only support. And to have to track down the actual chipset of the (off-board) network card in order to find the right drivers.

In fact, best of all would be to have to configure a modem. I used to love modems. The noises they made. The lights they flashed. The configuration options they provided.

Token ring enthusiasts are on their own.

3. Monitors

Monitors were interesting. It wasn’t usually too hard to get one working without X (that’s the windowing system we used to use – well, was it a windowing system? And why was the display the Xserver, and the, well, server the Xclient? Nobody ever really knew), but get X working? That was difficult and scary. Difficult because there was a set of three different variables you had to work with for at least three different components: X resolution, Y resolution and refresh rate, each with different levels of support by your monitor, your graphics card and your machine (which had to be powerful enough actually to drive the graphics card). Scary because there were frequent warnings that if you chose an unsupported mode, you could permanently damage your monitor.

Now, I should be clear that I never managed to damage a monitor by choosing an unsupported mode, and have never, to my knowledge, met anyone else who did, either, but the people who wrote the (generally impenetrable) documentation for configuring your XServer (or was it XClient…?) had put in some very explicit and concerning warnings which meant that every time you changed the smallest setting in your config file, you crossed your fingers as you moved from terminal to GUI.

Oh – for extra fun, you needed to configure your mouse in the same config file, and that never worked properly either. We should go back to that, too.

4. Disks

Ah, disks. These days, you buy a disk, you put it into your machine, and it works. In fact, it may not actually be a physical disk: it’s as likely to be a Solid State Device, with no moving parts at all. What a con. Back in the day, disks didn’t just work, and, like networking, seemed to be full of clever “new” features that the manufacturers used to con us into buying them, but which wouldn’t be supported by Linux for at least 7 months, and which were therefore completely irrelevant to us. Sometimes, if I recall correctly, a manufacturer would release a new disk which would be found to work with Linux, only for later iterations of the same model to cease working because they used subtly different components.

5. Filesystems

Once you’d got a disk spinning up (and yes, I mean that literally), you then needed to put a filesystem on it. There were a variety of these supported, most of which were irrelevant to all but a couple of dozen enthusiasts running 20-year-old DEC or IBM mini-computers. And how were you supposed to choose which filesystem type to use? What about journalling? Did you need it? What did it do? Did you want to be able to read the filesystem from Windows? Write it from Windows? Boot Windows from it? What about Mac? Did you plan to mount it remotely over a network? Which type of network? Was it SCSI? (Of course it wasn’t SCSI, unless you’d managed to blag an old drive enclosure + card from your employer, who was throwing out an old 640Kb drive which was already, frankly, on its last legs).

Assuming you’d got all that sorted out, you needed to work out how to partition it. Anyone who was anyone managed to create too small a swap partition to allow their machine to run without falling over regularly, or managed to initialise so small a boot partition that they were unable to update their kernel when the next one came out. Do you need a separate home partition? What about /usr/lib? /usr/src? /opt? /var? All those decisions. So much complexity. Fantastic. And then you have to decide if you’re going to encrypt it. And at what level. There’s a whole other article right there.

6. Email

Assuming that you’ve got X up and running, that your keyboard types the right characters (most of the time), that your mouse can move at least to each edge of the screen, and that the desktop is appearing with the top at the top of the screen, rather than the bottom, or even the right or left, you’re probably ready to try doing something useful with your machine. Assuming also that you’ve managed to set up networking, then one task you might try is email.

First, you’ll need an email provider. There is no Google, no Hotmail. You’ll probably get email from your ISP, and if you’re lucky, they sent you printed instructions for setting up email at the same time they sent you details for the modem settings. You’d think that these could be applied directly to your email client, but they’re actually intended for Windows or Mac users, and your Linux mail client (Pine or Mutt if you’re hardcore, emacs if you’re frankly insane) will need completely different information set up[3]. What ports? IMAP or POP? Is it POP3? What type of authentication? If you’re feeling really brave, you might set up your own sendmail instance. Now, that’s real pain.

7. Gaming

Who does gaming on Linux? I mean, apart from Doom and Quake III Arena, all you really need is TuxRacer and a poorly configured wine installation which just about runs Minesweeper. Let’s get rid of all the other stuff. Steam? Machines should be steam-powered, not running games via Steam. Huh. *me tuts*

8. Kernels

Linux distros have always had a difficult line to tread: they want to support as many machine configurations as possible. This makes for a large, potentially slow default kernel. Newer distributions tune this automatically, to make a nice, slimline version which suits your particular set-up. This approach is heresy.

The way it should work is that you download the latest kernel source (over your 56k modem – this didn’t take as long as you might think, as the source was much smaller: only 45 minutes or so, assuming nobody tried to make a phone call on the line during the download process), read the latest change log in the hopes that the new piece of kit you’d purchased for your machine was now at least in experimental release state, find patches from a random website when you discovered it wasn’t, apply the patches, edit your config file by hand, cutting down the options to a bare minimum, run menuconfig (I know, I know: this isn’t as hardcore as it might be, but it’s probably already 11pm by now, and you’ve been at this since 6pm), deal with clashes and missing pieces, start compiling a kernel, go to get some food, come back, compress the kernel, save it to /boot (assuming you made the partition large enough – see above), reboot, and then swear when you get a kernel panic.

That’s the way it should be. I mean, I still compile my own kernels from time to time, but the joy’s gone from it. I don’t need to, and taking the time to strip it down to essentials takes so long it’s hardly worth it, particularly when I can compile it in what seems like milliseconds.

The good news

There is good news. Some things still don’t work easily on Linux. Some games manufacturers don’t bother with us (and Steam can’t run every game (yet?)). Fingerprint readers seem particularly resistant to Linuxification. And video-conferencing still fails to work properly on a number of platforms (I’m looking at you, Teams, and you, Zoom, at least where sharing is concerned).

Getting audio to work for high-end set-ups seems complex, too, but I’m led to believe that this is no different on Windows systems. Macs are so over-engineered that you can probably run a full professional recording studio without having to install any new software, but they don’t count.

Hopefully, someone will read this article and take pity on those of us who took pride in the pain we inflicted on ourselves. Give us back our uber-geek status: make Linux hard again.


1 – my wife prefers the words “sad”, “tragic” and “obsessed”.

2 – this is a word which my wife does apply to me, but possibly with a different usage to the one I’m employing.

3 – I should be honest: I still enjoy setting these up by hand, mainly because I can.

WebAssembly: the importance of language(s)

We provide a guide so that you can try each language for yourself.

Over at Enarx, we’re preparing for another release. They’re coming every four weeks now, and we’re getting into a good rhythm. Thanks to all contributors, and also those working on streamlining the release process. It’s a complex project with lots of dependencies – some internal, and some external – and we’re still feeling our way about how best to manage it all. One thing that you will be starting to see in our documentation, and which we intend to formalise in coming releases, is support for particular languages. I don’t mean human languages (though translations of Enarx documentation into different languages, to support as diverse a community as we can, is definitely of interest), but programming languages.

Enarx is, at its heart, a way to deploy applications into different environments: specifically, Trusted Execution Environments (though we do support testing in kvm). The important word here is “execution”, because applications need a runtime in which to execute. Runtimes come in many different flavours: ELF (“Executable and Linking Format”, the main standard for Linux systems), JVM (“Java Virtual Machine”, for compiled Java classes) and PE (“Portable Executable”, used by Windows), to give but a few examples. Enarx uses WebAssembly, or, to be more exact, WASI, which you can think of as a “headless” version of WebAssembly: whereas WebAssembly was originally designed to run within browsers, WASI-compliant runtimes support server-type applications. The runtime which Enarx supports is called wasmtime, which is a Bytecode Alliance project, and written in Rust (like Enarx itself).

This is great, but (almost) nobody writes native WebAssembly code (there is actually a “human-readable” format supported by the standard, but I personally wouldn’t want to be writing in it directly!). One of the great things about WebAssembly is that it’s largely language-neutral: you can write in your favourite language and then compile your application to a “wasm” binary to be executed by the runtime (wasmtime, in our case). WebAssembly is attracting lots of attention within the computing community at the moment, and so people have created lots of different mechanisms to allow their favourite languages to create wasm binaries. There’s a curated list here, though it’s not always updated very frequently, and given the amount of interest in the space, it may be a little out of date when you visit the page. In the list, you’ll find common languages like C, C++, Rust, Golang, .Net, Python and Javascript, as well as less obvious ones like Haskell, COBOL and Scheme. Do have a look – you may be surprised to find support for your favourite “obscure” language is already started, or even quite mature.
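
To make that concrete, here’s a minimal sketch in Rust (any of the languages above would do). The source is completely ordinary; the commands in the comments are illustrative, assuming rustup and wasmtime are installed, and compile it to a wasm binary which the wasmtime runtime executes directly:

    // hello.rs – ordinary Rust source; the wasm part happens at compile time.
    //
    //   rustup target add wasm32-wasi      # add the WASI compile target
    //   rustc --target wasm32-wasi hello.rs -o hello.wasm
    //   wasmtime hello.wasm                # execute the binary in the runtime
    //
    fn main() {
        println!("Hello, wasm!");
    }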

This proliferation of languages with what we could call “compile target support” for WebAssembly is excellent news for Enarx, because it means that people writing in these languages may be able to write applications that we can run. I say may, because there’s a slight complication, which is that not all of these compile targets support WASI, which is the specific interface supported by wasmtime, and therefore by Enarx.

This is where the Enarx community has started to step in. They – we – have been creating a list of languages which do allow you to compile wasm binaries that execute under wasmtime, and therefore in Enarx. You’ll find a list over at our WebAssembly Guide and, at time of writing, it includes Rust, C++, C, Golang, Ruby, .NET, TypeScript, AssemblyScript, Grain, Zig and JavaScript[1]. You can definitely expect to see more coming in the near future. With this list, we don’t just say “you can run applications compiled from this language”, but provide a guide so that you can try each language for yourself! Currently the structure of how the information is presented varies from language to language – we should probably try to regularise this – but in each case, there should be sufficient information for someone fairly familiar with the language to write a simple program and run it in Enarx.

As I noted above, not all languages with compile target support for WebAssembly will work yet, but we’re also doing “upstream” work in some cases to help particular languages get to a position where they will work by submitting patches to fix specific issues. This is an area where more involvement from the community (that means you!) can help: the more people contributing to this work, or noting how important it is to them, the quicker we’ll gain support for more languages.

And here’s where we hope to be: in upcoming releases, we want to be in a position where Enarx officially supports particular languages. What exactly that “support” entails is something we haven’t yet fully defined, but, at minimum, we hope to be able to say something like “applications written in this language using this set of capabilities/features are expected to work”, based on automated testing of “known good” code on a per-release basis. This will mean that users of Enarx will be able to have high confidence that an application working on one release will behave exactly the same on the next: a really important property for a project intended for commercial deployments.

How can you get involved? Well, the most obvious is to visit the page in our docs relating to your favourite language. Try it out, give us feedback or offer to improve the documentation if you think it needs it, or even go upstream and offer patches. If no such page exists, you could visit our chat channels and ask to see if anyone is working on support and/or create an issue requesting support, explaining why you think it’s important.

Finally, to encourage upstream developers to realise how important supporting “their” language is, you can provide a GitHub star by visiting https://enarx.dev or https://github.com/enarx/enarx. “Starring” the project is a way to register your interest, and to show the community that Enarx is something you’re interested in.


1 – Huge thanks to everyone involved in these efforts, with a special shout-out to Deepanshu Arora, who’s done lots of work in this area.

WebAssembly logo: By Carlos Baraza – Own work / https://github.com/carlosbaraza/web-assembly-logo, CC0, https://commons.wikimedia.org/w/index.php?curid=56494100

More Enarx milestones

It’s been a big month for Enarx.

It’s been a big month for Enarx. Last week, I announced that we’d released Enarx 0.3.0 (Chittorgarh Fort), with some big back-end changes, and some new functionality as well. This week, the news is that we’ve hit a couple of milestones around activity and involvement in the project.

1500 commits

The first milestone is 1500 commits to the core project repository. When you use a git-based system, each time you make a change to a file or set of files (including deleting old ones, creating new ones and editing or removing sections), you create a new commit. Each commit has a rather long identifier, and its position in the project is also recorded, along with the name provided by the committer and any comments. Commit 1500 to the enarx/enarx repository was from Nathaniel McCallum, and entitled “feat(wasmldr): add Platform API”. He committed it on Saturday, 2022-03-19, and its commit hash is 8ec77de0104c0f33e7dd735c245f3b4aa91bb4d2.

I should point out that this isn’t the 1500th commit to the Enarx project, but the 1500th commit to the enarx/enarx repository on GitHub. This is the core repository for the Enarx project, but there are quite a few others, some of which also have lots of commits. As an example, the enarx/enarx-shim-sgx repository, which provides some SGX-specific capabilities within Enarx, had 968 commits at time of writing.

500 GitHub stars

The second milestone is 500 GitHub stars. Stars are a measure of how popular a repository or project is, and you can think of them as the GitHub equivalent of a “like” on social media: people who are interested in it can easily click a button on the repository page to “star” it (they can “unstar” it, too, if they change their mind). We only tend to count stars on the main enarx/enarx repository, as that’s the core one for the Enarx project. The 500th star was given to the project by a GitHub user going by the username shebuel-oss, a self-described “Open Source contributor, Advocate, and Community builder”: we’re really pleased to have attracted their interest!

There’s a handy little website called GitHub Star History which allows you to track stars to a project: you can watch the addition (or removal!) of stars over time, and compare other projects. You can check the status of Enarx whenever you’re reading this by following this link, but for the purposes of this article, the important question is: how did we get to 500? Here’s a graph:

Enarx GitHub star history to 500 stars

You’ll see a nice steep line towards the end, which corresponds to Nick Vidal’s influence as community manager, actively working to encourage more interest in, involvement with and contributions to the Enarx project.

Why do these numbers matter?

Objectively, they don’t, if I’m honest: we could equally easily have chosen a nice power of two (like 512) for the number of stars, or the year that Michelangelo started work on the statue David (1501) for the number of commits. Most humans, however, like round decimal numbers, and the fact that we hit 1500 commits and 500 stars respectively within a couple of days of each other provides a nice visual symmetry.

Subjectively, there’s the fact that we get to track the growth in interest (and the acceleration in growth) and contribution via these two measurements and their historical figures. The Enarx project is doing very well by these criteria, and that means that we’re beginning to get more visibility of the project. This is good for the project, it’s good for Profian (the company Nathaniel and I founded last year to take Enarx to market) and I believe that it’s good for Confidential Computing and open source more generally.

But don’t take my word for it: come and find out about the project and get involved.

Enarx 0.3.0 (Chittorgarh Fort)

Write some applications and run them in an Enarx Keep.

I usually post on a Tuesday, but this week I wanted to wait for a significant event: the release of Enarx v0.3.0, codenamed “Chittorgarh Fort”. This happened after I’d gone to bed, so I don’t feel too bad about failing to post on time. I announced Enarx nearly three years ago, in the article Announcing Enarx on the 7th May 2019, and it’s admittedly taken us a long time to get to where we are now. That’s largely because we wanted to do it right, and building up a community, creating a start-up and hiring folks with the appropriate skills is difficult. The design has evolved over time, but the core principles and core architecture are the same as when we announced the project.

You can find more information about v0.3.0 at the release page, but I thought I’d give a few details here and also briefly add to what’s on the Enarx blog about the release.

What’s Enarx?

Enarx is a deployment framework for running applications within Trusted Execution Environments (TEEs). We provide a WebAssembly runtime and – this is new functionality that we’ve started adding in this release – attestation so that you can be sure that your application is protected within a TEE instance.

What’s new in v0.3.0?

A fair amount of the development for this release has been in functionality which won’t be visible to most users, including a major rewrite of the TEE/host interface component that we call sallyport. You will, however, notice that TLS support has been added to network connections from applications within the Keep. This is transparent to the application, so “Where does the certificate come from?” I hear you ask. The answer to that is from the attestation service that’s also part of this release. We’ll be talking more about that in further releases and articles, but key to the approach we’re taking is that interactions with the service (we call it the “Steward”) are pretty much transparent to users and applications.

How can I get involved?

What can you do to get involved? Well, visit the Enarx website, look at the code and docs over at our github repositories (please star the project!), get involved in the chat. The very best thing you can do, having looked around, is to write some applications and run them in an Enarx Keep. And then tell us about your experience. If it worked first time, then wow! We’re still very much in development, but we want to amass a list of applications that are known to work within Enarx, so tell us about it. If it doesn’t work, then please also tell us about it, and have a look at our issues page to see if you’re the first person to run across this problem. If you’re not, then please add your experiences to an existing issue, but if you are, then create a new one.

Enarx isn’t production ready, but it’s absolutely ready for initial investigations (as shown by our interns, who created a set of demos for v0.2.0, curated and aided by our community manager Nick Vidal).

Why Chittorgarh Fort?

It’s worth having a look at the Wikipedia entry for the fort: it’s really something! We decided, when we started creating official releases, that we wanted to go with the fortification theme that Enarx has adopted (that’s why you deploy applications to Enarx Keeps – a keep is the safest part of a castle). We started with Alamo, then went to Balmoral Castle, and then to Chittorgarh Fort (we’re trying to go with alphabetically sequential examples as far as we can!). I suggested Chittorgarh Fort to reflect the global nature of our community, which happens to include a number of contributors from India.

Who was involved?

I liked the fact that the Enarx blog post mentioned the names of some (most?) of those involved, so I thought I’d copy the list of github account names from there, with sincere thanks:

@MikeCamel @npmccallum @haraldh @connorkuehl @lkatalin @mbestavros @wgwoods @axelsimon @ueno @ziyi-yan @ambaxter @squidboylan @blazebissar @michiboo @matt-ross16 @jyotsna-penumaka @steveeJ @greyspectrum @rvolosatovs @lilienbm @CyberEpsilon @kubkon @nickvidal @uudiin @zeenix @sagiegurari @platten @bstrie @jarkkojs @definitelynobody @Deepansharora27 @mayankkumar2 @moksh-pathak


Rahultalreja11 at English Wikipedia, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons

Emotional about open source

Enarx is available to all, usable by all.

Around October 2019, Nathaniel McCallum and I founded the Enarx project. Well, we’d actually started it before then, but it’s around then that the main GitHub repo starts showing up, when I look at available info. In the middle of 2021, we secured funding for a start-up (now named Profian), and since then we’ve established a team of engineers to work on the project, which is itself part of the Confidential Computing Consortium. Enarx is completely open source, and that’s really central to the project. We want (and need) the community to get involved, try it out, improve it, and use it. And, of course, if it’s not open source, you can’t trust it, and that’s really important for security.

The journey has been hard at times, and there were times when we nearly gave up on the funding, but neither Nathaniel nor I could see ourselves working on anything else – we really, truly believe that there’s something truly special going on, and we want to bring it to the world. I’m glad (and relieved) that we persevered. Why? Because last week, on Thursday, was the day that this came true for me. The occasion was OC3, a conference on Confidential Computing organised by Edgeless Systems. I was giving a talk on Understanding trust relationships for Confidential Computing, which I was looking forward to, but Nick Vidal, Community Manager for the Enarx project, also had a session earlier on. His session was entitled From zero to hero: making Confidential Computing accessible, and wasn’t really his at all: it was taken up almost entirely by interns in the project, with a brief introduction and summing up by Nick. In his introduction, Nick explained that he’d be showing several videos the interns had recorded of demos they had created. These demos took the Enarx project and ran applications that they (the interns) had created within Keeps, using the WebAssembly runtime provided within Enarx. The interns and their demos were:

  • TCP Echo Server (Moksh Pathak & Deepanshu Arora) – Moksh and Deepanshu showed two demos: a ROT13 server which accepts connections, reads text from them and returns the input, ROT13ed (there’s a minimal sketch of the transform after this list); and a simple echo server.
  • Fibonacci number generator (Jennifer Chukwu) – a simple Fibonacci number generator running in a Keep
  • Machine learning with decision tree algorithm on Diabetes data set (Jennifer Kumar & Ajay Kumar) – implementation of Machine Learning, operating on a small dataset.
  • Zero Knowledge Proof using Bulletproof (Shraddha Inamdar) – implementation of a Zero Knowledge Proof with verification.
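
For a flavour of just how approachable these demos are, here’s a minimal sketch (mine, for illustration – not the interns’ actual code) of the ROT13 transform at the heart of the first demo:

    // Illustrative only: a minimal ROT13 transform in Rust – the core logic
    // of a server that reads text and returns it "encrypted".
    fn rot13(input: &str) -> String {
        input
            .chars()
            .map(|c| match c {
                'a'..='z' => (((c as u8 - b'a') + 13) % 26 + b'a') as char,
                'A'..='Z' => (((c as u8 - b'A') + 13) % 26 + b'A') as char,
                _ => c, // leave everything else untouched
            })
            .collect()
    }

    fn main() {
        println!("{}", rot13("Hello, Keep!")); // prints "Uryyb, Xrrc!"
    }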

What is exciting about these demos is several-fold:

  1. three of them have direct real-world equivalent use cases:
    1. The ROT13 server, while simple, could be the basis for an encryption/decryption service.
    2. the Machine Learning service is directly relevant to organisations who wish to run ML workloads in the Cloud, but need assurances that the data is confidentiality- and integrity-protected.
    3. the Zero Knowledge Proof demo provides an example of a primitive required for complex transaction services.
  2. none of the creators of the demos knew anything about Confidential Computing until a few months ago.
  3. none of the creators knew much – if anything – about WebAssembly before coming to the project.
  4. none of the creators is a software engineering professional (yet!). They are all young people with an interest in the field, but little experience.

What this presentation showed me is that what we’re building with Enarx (though it’s not even finished at this point) is a framework that doesn’t require expertise to use. It’s accessible to beginners, who can easily write and deploy applications with obvious value. This is what made me emotional: Enarx is available to all, usable by all. Not just security experts. Not just Confidential Computing gurus. Everyone. We always wanted to build something that would simplify access to Confidential Computing, and that’s what we, the community, have brought to the world.

I’m really passionate about this, and I’d love to encourage you to become passionate about it, too. If you’d like to know more about Enarx, and hopefully even try it yourself, here are some ways to do just that:

  • visit our website, with documentation, examples and a guide to getting started
  • join our chat and then one of our stand-ups
  • view the code over at GitHub (and please star the project: it encourages more people to get involved!)
  • read the Enarx blog
  • watch the video of the demos.

I’d like to finish this post by thanking not only the interns who created the demos, but also Nick Vidal, for the incredible (and tireless!) work he’s put into helping the interns and into growing the community. And, of course, everyone involved in the project for their efforts in getting us to where we are (and the vision to continue to the next exciting stages: subscribe to this blog for upcoming details).

Enarx 0.2.0 – Balmoral Castle

Now it’s possible to write applications that you can talk to over the network

The big news this week from the Enarx project is our latest release: 0.2.0, which is codenamed “Balmoral Castle”, to continue with our castle/fortification theme.

The big change in Enarx 0.2.0 is the addition of support for networking. Until now, there wasn’t much you could really do in an Enarx Keep, honestly: you could run an application, but all it could do for input and output was read from stdin and write to stdout or stderr. While this was enough to prove that you could write and compile applications to WebAssembly and run them, any more complex interaction with the world outside the Keep was impossible.

So, why is this big news? Well, now it’s possible to write applications that you can talk to over the network. The canonical example which we’ve provided as part of the release is a simple “echo” server, which you start in a Keep, and which then listens on a port for incoming connections. You make a connection (for instance using the simple command-line utility ncat), and send it a line of text. The server accepts the connection, receives the text and sends it right back to you. It can handle multiple connections and will send the text back to the right one (hopefully!).
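
If you’re wondering what such an application looks like from the inside, here’s a minimal sketch in Rust. I’m assuming here that the runtime hands the application an already-listening TCP socket as file descriptor 3 (the convention used by wasmtime’s --tcplisten option; check the release’s example code for the exact setup Enarx expects), and it handles one connection at a time for brevity, where the real example juggles several:

    // A minimal WASI echo-server sketch: compile for the wasm32-wasi target
    // (depending on your toolchain, the WASI fd extensions may need nightly).
    // Assumption: the runtime pre-opens a listening socket as FD 3.
    use std::io::{Read, Write};
    use std::net::TcpListener;
    use std::os::wasi::io::FromRawFd;

    fn main() -> std::io::Result<()> {
        // FD 3 is the first descriptor after stdin, stdout and stderr.
        let listener = unsafe { TcpListener::from_raw_fd(3) };
        for stream in listener.incoming() {
            let mut stream = stream?;
            let mut buf = [0u8; 1024];
            loop {
                let n = stream.read(&mut buf)?;
                if n == 0 {
                    break; // client closed the connection
                }
                stream.write_all(&buf[..n])?; // send the bytes straight back
            }
        }
        Ok(())
    }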

This is new functionality with Enarx 0.2.0, and the ability to use networking mirrors an important change within the WASI (WebAssembly System Interface) specification, the interface implemented by the runtime within an Enarx Keep. Specifically, WASI snapshot preview 1 gained support for the ACCEPT capability on sockets in January 2022. The way that WASI works with managing permissions and capabilities is carefully designed, and we (the Profian folks working on Enarx) coordinated closely with the open source WASI/Wasm community to add this in a way which is consistent with the design philosophy of the project. Once the capability was added to the snapshot, there was one more step needed before Enarx could implement support, which was that it needed to appear in wasmtime, the WebAssembly runtime we use within Keeps to allow you to run your applications. This happened last week, in wasmtime release 0.34.0, and that allowed us to make this new release of Enarx.

This may not sound very exciting … but with this in place, you can start to build proper applications and micro-services. What about an HTTP server? A ROT13 “encryption” service? A chatbot? An email server? A Wordle implementation[1]? And it’s not just text that you can send over a network connection, of course. What might you write to process other types of data? A timestamp server? A logging service? With a network connection, you have the ability to write any of these. Micro-services are all about accepting connections, processing the data that’s come in, and then sending out the results. All of that is possible with this new release.

What we’d love you to do is to start writing applications (using networking) and running them in Enarx. Tell us what works – even better, tell us what doesn’t by creating an issue in our GitHub repository. Please publish examples, join our chat channels, give us a GitHub star, get involved.

What’s coming next? Well, keep an eye on the Enarx site, but be assured that I’ll announce major news here as well. You can expect work in attestation and deployment in the near future – watch this space…


1 – at time of writing, everyone’s talking about Wordle. For those of you coming from the future (say a couple of weeks from now), you can probably ignore this example.

[Image of Edward VII at Balmoral Castle from Wikimedia].

Open source and cyberwar

If cyberattacks happen to the open source community, the impact may be greater than you expect.

There are some things that it’s more comfortable not thinking about, and one of them is war. For many of us, direct, physical violence is a long way from us, and that’s something for which we can be very thankful. As the threat of physical violence recedes, however, it’s clear that cyberattacks as part of a response to aggression – physical or virtual – are becoming more and more likely.

It’s well attested that many countries have “cyber-response capabilities”, and those will include aggressive as well as protective measures. And some nation states have made it clear not only that they consider cyberwarfare part of any conflict, but that they would be entirely comfortable with initiating such attacks.

What, you should probably be asking, has that to do with us? And by “us”, I mean the open source software community. I think that the answer, I’m afraid, is “a great deal”. I should make it clear that I’m not speaking from a place of privileged knowledge here, but rather from thoughtful and fairly informed opinion. But it occurs to me that the “old style” of cyberattacks, against standard “critical infrastructure” like military installations, power plants and the telephone service, was clearly obsolete when the Two Towers collapsed (if not in 1992, when the film Sneakers hypothesised attacks against targets like civil aviation). Which means that any type of infrastructure or economic system is a target, and I think that open source is up there. Let me explore two ways in which open source may be a target.

Active targets

If we had been able to pretend that open source wasn’t a core part of the infrastructure of nations all over the globe, that self-delusion was finally wiped away by the log4j vulnerabilities and attacks. Open source is everywhere now, and whether or not your applications are running any open source, the chances are that you deploy applications to public clouds running open source, at least some of your employees use an open source operating system on their phones, and that the servers running your chat channels, email providers, Internet providers and beyond make use – extensive use – of open source software: think apache, think bind, think kubernetes. At one level, this is great, because it means that it’s possible for bugs to be found and fixed before they can be turned into vulnerabilities, but that’s only true if enough attention is being paid to the code in the first place. We know that attackers will have been stockpiling exploits, and many of them will be against proprietary software, but given the amount of open source deployed out there, they’d be foolish not to be collecting exploits against that as well.

Passive targets

I hate to say it, but there also are what I’d call “passive targets”, those which aren’t necessarily first tier targets, but whose operation is important to the safe, continued working of our societies and economies, and which are intimately related to open source and open source communities. Two of the more obvious ones are GitHub and GitLab, which hold huge amounts of our core commonwealth, but long-term attacks on foundations such as the Apache Foundation and the Linux Foundation, let alone kernel.org, could also have an impact on how we, as a community, work. Things are maybe slightly better in terms of infrastructure like chat services (as there’s a choice of more than one, and it’s easier to host your own instance), but there aren’t that many public servers, and a major attack on either them or the underlying cloud services on which many of them rely could be crippling.

Of course, the impact on your community, business or organisation will depend on your usage of different pieces of infrastructure, how reliant you are on them for your day-to-day operation, and what mitigations you have available to you. Let’s quickly touch on that.

What can I do?

The Internet was famously designed to route around issues – attacks, in fact – and that helps. But, particularly where there’s a pretty homogeneous software stack, attacks on infrastructure could still have very major impact. Start thinking now:

  • how would I support my customers if my main chat server went down?
  • could I continue to develop if my main git provider became unavailable?
  • would we be able to offer at least reduced services if a cloud provider lost connectivity for more than an hour or two?

By doing an analysis of what your business dependencies are, you have the opportunity to plan for at least some of the contingencies (although, as I note in my book, Trust in Computer Systems and the Cloud, the chances of your being able to analyse the entire stack, or discover all of the dependencies, are lower than you might think).

What else can you do? Patch and upgrade – make sure that whatever you’re running is the highest (supported!) version. Make back-ups of anything which is business critical. This should include not just your code but issues and bug-tracking, documentation and sales information. Finally, consider having backup services available for time-critical services like a customer support chat line.

Cyberattacks may not happen to your business or organisation directly, but if they happen to the open source community, the impact may be greater than you expect. Analyse. Plan. Mitigate.

How to hire an open source developer

Our view was that a pure “algorithm coding” exercise was pretty much useless for what we wanted.

We’ve recently been hiring developers to work on the Enarx project, a security project, written almost exclusively in Rust (with a bit of Assembly), dealing with Confidential Computing. By “we”, I mean Profian, the start-up for which I’m the CEO and co-founder. We’ve now found all the people we’re looking for initially on the team (with a couple due to start in the next few weeks), though we absolutely welcome contributors to Enarx, and, if things continue to go well, we’ll definitely want to hire some more folks in the future.

Hiring people is not easy, and we were hit with a set of interesting requirements which made the task even more difficult. I thought it would be useful and interesting for the community to share how we approached the problem.

What were we looking for?

I mentioned above some interesting requirements. Here’s what the main ones were:

  • systems programming – we mainly need people who are happy programming at the systems layer. This is pretty far down the stack, with lots of interactions directly with hardware or the OS. Where we are creating client-server pieces, for instance, we’re having to write quite a lot of the protocols, manage the crypto, etc., and the tools we’re using aren’t all very mature (see “Rust” below).
  • Rust – almost all of the project is written in Rust, and what isn’t is written in Assembly language (currently exclusively x86, though that may change as we add more platforms). Rust is new, cool and exciting, but it’s still quite young, and some areas don’t have all the support you might like, or aren’t as mature as you might hope – everything from cryptography through multi-threading libraries and compiler/build infrastructure.
  • distributed team – we’re building a team of folks wherever we can find them: we have developers in Germany, Finland, the Netherlands, North Carolina (US), Massachusetts (US), Virginia (US) and Georgia (US), I’m in the UK, our community manager is in Brazil and we have interns in India and Nigeria. We knew from the beginning that we wouldn’t have everyone in one place, and this required people who we were confident would be able to communicate and collaborate with people via video, chat and (at worst) email.
  • security – Enarx is a security project, and although we weren’t specifically looking for security experts, we do need people who are able to think and work with security top of mind, and design and write code which is applicable and appropriate for the environment.
  • git – all of our code is stored in git (mainly GitHub, with a little bit of GitLab thrown in), and so much of our interaction around code revolves around git that anybody joining us would need to be very comfortable using it as a standard tool in their day-to-day work.
  • open source – open source isn’t just a licence, it’s a mindset, and, equally important, a way of collaborating. A great deal of open source software is created by people who aren’t geographically co-located, and who might not even see themselves as a team. We needed to be sure that the people we were hiring, while gelling as a close team within the company, will also be able to collaborate with people outside the organisation and be able to embrace Profian’s “open by default” culture not just for code, but for discussions, communications and documentation.

How did we find them?

As I’ve mentioned before, in Recruiting is hard, finding the right people is no easy task. We ended up using a variety of means to find candidates, with varying levels of success:

  • LinkedIn job adverts
  • LinkedIn searches
  • Language-specific discussion boards and hiring boards (e.g. Reddit)
  • An external recruiter (shout out to Gerald at Interstem)
  • Word-of-mouth/personal recommendations

It’s difficult to judge between them in terms of quality, but without an external recruiter, we’d certainly have struggled with quantity (and we had some great candidates from that pathway, too).

How did we select them?

We needed to measure all of the candidates against all of the requirements noted above, but not all of them were equal. For instance, although we were keen to hire Rust programmers, we were pretty sure that someone with strong C/C++ skills at the systems level would be able to pick up Rust quickly enough to be useful. On the other hand, a good knowledge of using git was absolutely vital, as we couldn’t spend time working with new team members to bring them up to speed on our way of working. A strong open source background was, possibly surprisingly, not a requirement, but the mindset to work in that sort of model was, and anyone with a history of open source involvement is likely to have a good knowledge of git. The same goes for the ability to work in a distributed team: so much of open source is distributed that involvement in almost any open source community was a positive indicator. Security, we decided, was a “nice-to-have”.

How to proceed? We wanted to keep the process simple and quick – we don’t have a dedicated HR or People function, and we’re busy trying to get code written. What we ended up with was this (with slight variations), which we tried to complete within 1-2 weeks:

  1. Initial CV/resume/github/gitlab/LinkedIn review – this to decide whether to interview
  2. 30-40 minute discussion with me as CEO, to find out if they might be a good cultural fit, to give them a chance to find out about us, and get an idea if they were as technically adept as they appeared from the first step
  3. Deep dive technical discussion led by Nathaniel, usually with me there
  4. Chat with other members of the team
  5. Coding exercise
  6. Quick decision (usually within 24 hours)

The coding exercise was key, but we decided against the usual approach. Our view was that a pure “algorithm coding” exercise of the type so beloved by many tech companies was pretty much useless for what we wanted. What we wanted to understand was whether candidates could quickly understand a piece of code, fix some problems and work with the team to do so. We created a github repository (in fact, we ended up using two – one for people a little higher up the stack) with some almost-working Rust code in it, some instructions to fix it, perform some git-related processes on it, and then improve it slightly, adding tests along the way. A very important part of the test was to get candidates to interact with the team via our chat room(s). We scheduled 15 minutes on a video call for set up and initial questions, 2 hours for the exercise (“open book” – as well as talking to the team, candidates were encouraged to use all resources available to them on the Internet), followed by a 30 minute wrap-up session where the team could ask questions and the candidate could also reflect on the task. This also allowed us to get an idea of how well the candidate was able to communicate with the team (combined with the chat interactions during the exercise). Afterwards, the candidate would drop off the call, and we’d generally make a decision within 5-10 minutes as to whether we wanted to hire them.
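
To give a flavour of the exercise (this snippet is my own, hypothetical illustration – not the real repository), the code was of the “almost-working” variety: a small, plausible bug to find and fix, and room to add a test or two:

    // Hypothetical example of the sort of almost-working code a candidate
    // might meet. The buggy version iterated over 0..values.len() - 1,
    // silently dropping the last element; the fix is the idiomatic iterator.
    fn sum(values: &[i64]) -> i64 {
        values.iter().sum()
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn sums_all_elements() {
            // This test would have caught the original off-by-one bug.
            assert_eq!(sum(&[1, 2, 3]), 6);
        }
    }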

This generally worked very well. Some candidates struggled with the task, some didn’t communicate well, some failed to do well with the git interactions – these were the people we didn’t hire. It doesn’t mean they’re not good coders, or that they might not be a good fit for the project or the company later on, but they didn’t immediately meet the criteria we need now. Of the ones we hired, the levels of Rust experience and need for interaction with the team varied, but the level of git expertise and their reactions to our discussions afterwards were always sufficient for us to decide to take them.

Reflections

On the whole, I don’t think we’d change a huge amount about the selection process – though I’m pretty sure we could do better with the search process. The route through to the coding exercise allowed us to filter out quite a few candidates, and the coding exercise did a great job of helping us pick the right people. Hopefully everyone who’s come through the process will be a great fit and will produce great code (and tests and documentation and …) for the project. Time will tell!

Open source Christmas presents

Give the gift of open source to more people.

If you find this post interesting, you’ll find a lot more about how community and open source are important in my book Trust in Computer Systems and the Cloud, published by Wiley.

Whether you celebrate Christmas or not (our family does, as it happens), this time of year is one where presents are often given and received. I thought it might be nice to think about what presents we could give in the spirit of open source. Now, there are lots of open source projects out there, and you could always use one to create something for a friend, colleague or loved one (video, audio, blog post, image, website) or go deeper with a project which combines open source software and hardware, such as Mycroft or Crowdsupply. Or you could go in the other direction, and get people involved in projects you’re part of or enjoy. That’s what I’d like to suggest in this article: give the gift of open source to more people, or just make open source more accessible to more people: that’s a gift in itself (to them and to the project!).

Invite

First of all, people need to know about projects. “Evangelism” is a word that’s often used around open source projects, because people need to be told about them before they can get involved. Everyone can do evangelism, whether it’s word of mouth, laptop stickers, blog posts, videos, speaking at conferences, LinkedIn mentions, podcasts, Slack, IRC, TikTok[1], Twitter, ICQ[2] or Reddit. Whatever is your preferred medium to talk to the world, use it. Tell people why it’s important. Tell people why it’s fun. Share the social side of the project. Explain some of the tricky design issues that face it. Tell people why it’s written in the language(s) it’s in. Point people at the sections of code you’ve written and are proud of. Even better, point people at the sections of code you’ve written and are ashamed of, but don’t have time to fix as you’re too busy at the moment. But most of all, invite them to look around, meet the contributors, read the code, test the executables, read the documentation. Make it easy for them to find the project. Once we get back to a world where in-person conferences are re-emerging, arrange meet-ups, provide swag and get together (safely!) IRL[3].

Include

Once your invitees have started looking around, interacting with the community, submitting issues, documentation or patches, find ways to include them. There’s nothing more alienating than, well, being alienated. I think the very worst thing anyone can say to a person new to a project is something along the lines of “go and read the documentation – this is a ridiculous question/terrible piece of documentation/truly horrible piece of code”. It may be all of those things, but how does that help anyone? If you find people giving these reactions – if you find yourself giving these reactions – you need to sort it out. Everyone was a n00b once, and everyone has a different learning style, way of interacting, cultural background and level of expertise. If there are concerns that senior project members’ time is being “wasted” by interactions, nominate (and agree!) that someone will take time to mentor newcomers. Better yet, take turns mentoring, so that information and expertise is spread widely and experts in the project get to see the questions and concerns that non-experts are having. There are limits to this, of course, but you need to find ways not just to welcome people into the project, but actually include them in the functioning, processes, social interactions and day-to-day working of the project which make it a community.

You should also strongly consider a code of conduct such as the Contributor Covenant to model, encourage and, if necessary, enforce appropriate and inclusive behaviour. Diversity and Inclusion are complex topics, but there’s a wealth of material out there if you want to engage – and you should.

Encourage

Encouragement is a little different to inclusion. It’s possible to feel part of a community, but not actually to be participating to the development and growth of the project. Encouragement may be what people need to move into active engagement, contributing more than lurking. And there’s a difference between avoiding negative comments (as outlined above) and promoting positive interactions. The former discourage, and the latter can encourage. If someone contributes their first patch, and gets an “accepted, merged” message, that’s great, but it’s pretty clear that they’re much more likely to contribute again if, instead, they receive a message along the lines of “thanks for this: great to see. We need more contributions in this area: have you looked at issues #452, #599 and #1023?”.

These sorts of interactions are time-consuming, and it may not always be the maintainers who are providing them: as above, the project may need to have someone whose role includes this sort of encouragement. If you’re using something like Github, you may be able to automate notifications of first-time contributions so that you know that it’s time to send an encouraging message. The same could go for someone who was making a few contributions, but has slowed down or dropped off: a quick message or two might be enough to get them involved in the project again.

Celebrate

I see celebration as a step on again from simple encouragement – though it can certainly reinforce it. Celebration isn’t just about acknowledging something positive, but is also a broader social interaction. When somebody’s achievements are celebrated, other people in the community come together to say well done and congratulate them. This is great for the person whose work is being celebrated, as the acknowledgement from others reinforces the network of people with whom they’re connected, bringing them closer into the community.

Celebrating a project-related event like a release and including new members of the community in that celebration can be even more powerful. When new members are part of a celebration, and are made to feel that their contributions, though small, have made up part of what’s being celebrated, their engagement in the project is likely to increase. Their feelings of inclusion in the community are also likely to go up. Celebrations in person (again, when possible) allow for better network-building and closer ties, but even virtual meet-ups can bring peripherally-involved or new members closer to the core of the project.

Summary

Getting people involved in your open source project is important for its health and its growth, but telling people about it isn’t enough. You need to take conscious steps to increase involvement and ensure that initial contributions to a project are followed up, tying people into the project and making them part of the community.


1 – I’m going to be honest: I wouldn’t know where to start with TikTok. My kids will probably be appalled that I even mentioned it, but hey, why not? The chances are that you, dear reader, are younger and (almost certainly) cooler than I am.

2 – I’m guessing the take up will be a bit lower here.

3 – In Real Life. It seems odd to be re-using this term, which had all but disappeared from what I could tell, but which seems to need to be re-popularised.