Patents for software start-ups – an introduction

Having a budget assigned and time set aside for patent creation and filing should be an important part of your company strategy.

To learn more about creating and applying for patents, please visit my consultancy, P2P Consulting, for details of how we can help you.


Disclaimer: I’m not a lawyer! Don’t treat any of this article as legal advice: always consult your legal counsel for legal matters.

When I was fund-raising for our start-up (just a couple of years ago at time of writing), one of the questions that frequently came up was “what about IP”? For the techies amongst my readership, this isn’t “Internet Protocol”, but “Intellectual Property”, and, for most start-ups, what the question really meant was “do you have any patents to protect your business idea and the technology behind it?”. When the question wasn’t forthcoming, I would always raise it myself, because, as it happened, we did have a good story around Intellectual Property.

Unlike many other start-ups, it turns out.

There are four types of Intellectual Property, which we can list thus, including their relevance to start-ups (I’m concentrating on software start-ups, as the situation is somewhat different for other types of business):

  • Copyright – protects the implementation and expression of code
  • Trademarks – protect things like your logo and colour scheme
  • Patents – protect the functionality of the code
  • Trade secrets – have to be protected through secrecy, NDAs, etc.

If you decide that you need to do more than just rely on trade secrets (which are all very well for soft drink recipes and things which can’t be reverse engineered, but aren’t great for software), and you’re more interested in the software side of IP than trademarks, that leaves two key types: copyright and patents. People get these confused, and although I’m not a lawyer (see disclaimer above…), the way I understand the difference is this: copyright just protects the bits and bytes of the code in the way that it’s written, whereas a patent protects what it does. A competent engineer can look at your code (or its effects, sometimes) and rewrite it (in another language, using different patterns, using subtly different processes) to get the same effects, so copyright doesn’t really help here. A patent protects you from someone implementing the same effects – or, more accurately, the processes, methods and mechanisms that you use to create these effects – and this is almost always the type of protection you want.

What can you patent?

Now, you can’t just patent anything, and there are actually differences between what you can patent depending on the authority granting the patent (the US patent authorities’ rules differ from those of the European Union, for instance), but a couple of rules of thumb are useful as starting points:

  1. you can’t generally patent mathematical equations or algorithms;
  2. you can’t patent business processes.

What you generally can patent (depending on your jurisdiction, etc.) are processes and mechanisms that would be difficult or impossible for humans to do on their own and which also make or cause changes to external systems (such as causing things to happen in the physical world).

Another important test is that the idea should be both novel and also not immediately obvious to someone skilled in the art. This is often a lower bar than most engineers think, it turns out: once an idea is explained to you, it often feels obvious, but that doesn’t mean it was to start with!

What should I patent?

Subject to the points mentioned above, you can patent pretty much anything you want, but it probably doesn’t make sense to patent everything you come up with: if you’re in the AI business, then patenting an idea around 3D printing for efficient traffic lights probably isn’t sensible (unless your AI is great at CAD/CAM, maybe). The patent process is resource-intensive (typically consuming the time and effort of senior engineers and staff who you’d prefer to be spending their time on getting your product or service out of the door) and fairly expensive. You should work on a strategy to decide what is key to your business now, what is likely to be key in the future, what might help you in possible technical or business pivots and what might protect you from competitors now and in the future. The exact priorities between those will vary from company to company, but understanding these – and having a budget assigned and time set aside for patent creation and filing – should be an important part of your company strategy.

When can you patent?

The obvious answer is “as soon as you have the idea” – you absolutely don’t need to have an implementation of it. Beyond that, things vary (again) between jurisdictions. Generally, good advice is to apply for a patent before you disclose anything about the idea to anyone else via an academic paper, GitHub repository, conference session, LinkedIn post or similar (though conversations under NDA should generally be OK). For US patents, you generally have a year after first disclosure, but in other jurisdictions, you don’t, so be careful!

Why should I patent?

The best answer, I think, is “to protect your core business” – aligned, therefore, with your answers to the question “what should I patent?” above. Some key reasons that people create patents include:

  • Company valuation
  • Defensive / fight back
  • License/sell
  • Market/partner
  • Sue

Offensive use of patents is unlikely for start-ups, but all the others can be very useful, even if licensing or selling them may seem like a way down the road for many early stage companies. The two which are likely to be most interesting for early stage companies are valuation and defensive. Showing that your company has real ideas (which are, what’s more, protected by law) is a great signal of value for almost any type of exit. On the other hand, if there are companies out there who threaten you, alleging that you are impinging on their space and ideas, being able to say “we have a patent in this area: back off” can seriously reduce the amount of time and money that you spend on lawyers. And that makes everyone happy (apart, maybe, from the lawyers).

There are sometimes reasons that people are reluctant to apply for patents. These include a lack of knowledge of the process (which this article is hopefully addressing), a decision to protect trade secrets instead and moral qualms around the whole question of whether software should be patented at all. Many of those concerned about this last point worry about “patent trolls” and the techniques they use to attack and restrict expression of ideas, particularly in open source. Luckily, there are some very good models and organisations designed to address exactly this point: if you want more information, I strongly advise reading up about the Open Invention Network, the LOT Network and Red Hat Patent Promise.

How should I start?

I plan to write more articles on how to get started with patents, but there are three steps that I’d strongly advise all start-ups to consider as soon as they are able:

  1. consider an IP strategy, and discuss it at the Board level (this will, I promise, send good signals to your investors!).
  2. set time aside every few weeks to discuss possible patent ideas and record the idea, the date it was created, and who was involved in its invention. This is important information that you’ll need when you do start the patenting process.
  3. think hard before sharing any important, business-critical technical information externally, as disclosure may hinder your ability to patent it in the future. You should talk to all your employees about this (not just the techies!).

The other thing you can do, of course, is talk to an expert in patent creation (often called “harvesting”) and filing. Intellectual Property lawyers specialise in the latter part of the process, but the actual creation and preparation of ideas in such a way that lawyers can efficiently help move you through the filing process (sometimes called, somewhat scarily, “prosecution”) is a different set of skills. This part of the process is something I’m very happy to help you with through my consultancy, P2P Consulting: do get in touch for more details.

Announcing P2P Consulting

A consulting practice reflecting the expertise and experience I’ve built up over the past 25+ years in the industry.

It’s been a few months since we decided to close down Profian, the start-up we created around the Enarx project, and I’ve been working on what my next steps should be. The first, and most obvious, is that I started a couple of months back as Executive Director for the Confidential Computing Consortium, part of the Linux Foundation. I’ve also got far too good at a number of online games – too embarrassing to list here. But the other thing that I’ve been working on is starting a consulting practice, reflecting the expertise and experience I’ve built up over the past 25+ years in the industry.

There are a number of services that I’m offering:

  • software patent strategy and harvesting
  • open source strategy
  • start-up strategy
  • VC and PE due diligence
  • cybersecurity

Some of them speak for themselves: I’ve been in what’s now called “cybersecurity” for over 20 years, and my previous role was as CEO and Co-founder of a start-up. I’ve also been involved in due diligence, which explains the Venture Capital and Private Equity offerings. I plan to write more about all of the offerings in future articles, but the other two – around software patents and open source strategy – probably deserve a little more detail at this point.

Here are the basic descriptions of these services – feedback is definitely welcome:

Patent strategy and harvesting

Intellectual property is a valuable resource for start-ups: for valuation, partnership and competitive advantage. Many start-ups know that they should be managing their Intellectual Property – in particular filing patents – but few have the skills or time to do so efficiently. P2P Consulting runs in-person patent workshops to generate ideas (“harvesting”) and works with management on the appropriate company strategy, selecting harvested ideas that are best aligned. P2P Consulting can then work through the process of taking each patent idea through the write-up, discussion and filing stages with patent lawyers, saving valuable staff time and helping the company internalise the skills and gain the experience needed to manage the process in future.

Open source strategy

P2P Consulting offers services to companies looking to build a strong strategy for their involvement with open source projects and communities which is consistent with the commercial goals of the organisation. Mike Bursell, P2P Consulting’s founder, has been involved with open source strategy for over 15 years, in companies ranging from multi-nationals to start-ups, considering issues ranging from community growth and involvement to open source licensing decisions, intellectual property protection and go-to-market. P2P Consulting provides expertise and links in the open source ecosystem and insights into the opportunities and challenges associated with embracing open source as a strategic differentiator.

I look forward to growing the consultancy alongside my other activities, and offering these services particularly to start-ups looking to consolidate their patent portfolios and expand their open source involvement. For queries, please visit the P2P Consulting LinkedIn page or the website at https://p2pconsulting.dev, or email me at mike@p2pconsulting.dev.

E2E encryption in danger (again) – sign the petition

You should sign the petition on the official UK government site.

The UK Government is at it again: trying to require technical changes to products and protocols which will severely impact (read “destroy”) the security of users. This time, it’s the Online Safety Bill and, like pretty much all similar attempts, it requires the implementation of backdoors to things like messaging services. Lots of people have stood up and made the point that this is counter-productive and dangerous.

This isn’t the first time I’ve written about this (The Backdoor Fallacy: explaining it slowly for governments and Helping our governments – differently, for a start), and I fear that it won’t be the last. The problem is that none of these technical approaches work: none of them can work. Privacy and backdoors (and this is a backdoor, make no mistake about it) are fundamentally incompatible. And everyone with an ounce (or gram, I suppose) of technical expertise agrees: we know (and we can prove) that what’s being suggested won’t and can’t work.

We gain enormous benefits from technology, and with those benefits come risks, and some downsides which malicious actors exploit. The problem is that you can’t have one without the other. If you try to fix (and this approach won’t fix – it might reduce, but not fix) the problem that malicious actors and criminals use online messaging services, you open up a huge number of opportunities for other actors, including malicious governments (now or in the future), to do very, very bad things, whilst significantly reducing the benefits to private individuals, businesses, human rights organisations, charities and the rest. There is no zero sum game here.

What can you do? You can read up about the problem, you can add your voice to the technical discussions and/or, if you’re a British citizen or resident, you should sign the petition on the official UK government site. This needs 10,000 signatures, so please get signing!

Executive Director, Confidential Computing Consortium

I look forward to furthering the aims of the CCC

I’m very pleased to announce that I’ve just started a new role as part-time Executive Director for the Confidential Computing Consortium, which is a project of the Linux Foundation. I have been involved from the very earliest days of the consortium, which was founded in 2019, and I’m delighted to be joining as an officer of the project as we move into the next phase of our growth. I look forward to working with existing and future members and helping to expand industry adoption of Confidential Computing.

For those of you who’ve been following what I’ve been up to over the years, this may not be a huge surprise, at least in terms of my involvement, which started right at the beginning of the CCC. In fact, Enarx, the open source project of which I was co-founder, was the very first project to be accepted into the CCC, and Red Hat, where I was Chief Security Architect (in the Office of the CTO) at the time, was one of the founding members. Since then, I’ve served on the Governing Board (twice: once as Red Hat’s representative as a Premier member, and once as an elected representative of the General members), acted as Treasurer, been Co-chair of the Attestation SIG and been extremely active in the Technical Advisory Council. I was instrumental in initiating the creation of the first analyst report into Confidential Computing and helped in the creation of the two technical and one general white paper published by the CCC. I’ve enjoyed working with the brilliant industry leaders who more than ably lead the CCC, many of whom I now count not only as valued colleagues but also as friends.

The position – Executive Director – however, is news. For a while, the CCC has been looking to extend its activities beyond what the current officers of the consortium can manage, given that they have full-time jobs outside the CCC. The consortium has grown to over 40 members now – 8 Premier, 35 General and 8 Associate – and with that comes both the opportunity to engage in a whole new set of activities and the responsibility to listen to the various voices of the membership and to ensure that the consortium’s activities are aligned with the expectations and ambitions of the members. Beyond that, as Confidential Computing becomes more pervasive, it’s time to ensure that (as far as possible) there’s a consistent, crisp and compelling set of messages going out to potential adopters of the technology, as well as academics and regulators.

I plan to be working on the issues above. I’ve only just started and there’s a lot to be done – and the role is only part-time! – but I look forward to furthering the aims of the CCC:

The Confidential Computing Consortium is a community focused on projects securing data in use and accelerating the adoption of confidential computing through open collaboration.

The core mission of the CCC

Wish me luck, or, even better, get in touch and get involved yourself.

Enarx hits 750 stars

Yesterday, Enarx, the open source security project of which I’m co-founder and for which Profian is custodian, gained its 750th GitHub star. This is an outstanding achievement, and I’m very proud of everyone involved. Particular plaudits to Nathaniel McCallum, my co-founder for Enarx and Profian, Nick Vidal, the community manager for Enarx, everyone who’s been involved in committing code, design, tests and documentation for the project, and everyone who manages the running of the project and its infrastructure. We’ve been lucky enough to be joined by a number of stellar interns along the way, who have also contributed enormously to the project.

Enarx has also been supported by a number of organisations and companies, and it’s worth listing as many of them as I can think of:

  • Profian, the current custodian
  • Red Hat, under whose auspices the initial development began
  • the Confidential Computing Consortium, a Linux Foundation Project, which owns the project
  • Equinix, who have donated computing resources
  • PhoenixNAP, who have donated computing resources
  • Rocket.Chat, who have donated chat resources
  • Intel, who have worked with us along the way and donated various resources
  • AMD, who have worked with us along the way and donated various resources
  • Outreachy, with whom we worked to get some of our fine interns

When it all comes down to it, however, it’s the community that makes the project. We strive to create a friendly, open community, and we want more and more people to get involved. To that end, we’ll soon be announcing some new ways to get involved with trying and using Enarx, in association with Profian. Keep an eye out, and keep visiting and giving us stars!

My book at RSA Conference NA

Attend RSA and get 20% off my book!

I’m immensely proud (as you can probably tell from the photo) to be able to say that my book is available in the book store at the RSA Conference in San Francisco this week. You’ll find the store in Moscone South, up the escalators on the Esplanade.

If you ever needed a reason to attend RSA, this is clearly the one, particularly with the 20% discount. If anyone’s interested in getting a copy signed, please contact me via LinkedIn – I currently expect to be around till Friday morning. It would be great to meet you.

Back in the (conference) groove

Ah, yes: conferences. We love them, we hate them.

Ah, yes: conferences. We love them, we hate them, but they used to be part of the job, and they’re coming back. At least in the IT world that I inhabit, things are beginning to start happening in person again. I attended my first conference in over two years in Valencia a couple of weeks ago: Kubecon + CloudNativeCon Europe. I’d not visited Valencia before, and it’s a lovely city. I wasn’t entirely well (I’m taking a while to recover from Covid-19 – cannot recommend), which didn’t help, but we had some great meetings, Nathaniel (my Enarx & Profian co-founder) spoke at the co-located WasmDay event on WASI networking, and I got to walk the exhibition hall picking up (small amounts of) swag (see Buying my own t-shirts, OR “what I miss about conferences”).

For the last few years, when I’ve been attending conferences, I’ve been doing it as the employee of large companies – Red Hat and Intel – and things are somewhat different when you’re attending as a start-up. We (Profian) haven’t exhibited at any conferences yet (keep an eye out for announcements on social media for that), but you look at things with a different eye when you’re a start-up – or at least I do.

One of the differences, of course, is that as CEO, my main focus has to be on the business side, which means that attending interesting talks on mildly-related technologies isn’t likely to be a good use of my time. That’s not always true – we’re not big enough to send that many people to these conferences, so it may be that I’m the best person available to check out something which we need to put on our radar – but I’m likely to restrict my session attendance to one of three types of session:

  1. a talk by a competitor (or possible competitor) to understand what they’re doing and how (and whether) we should react.
  2. a talk by a possible customer or representative from a sector in which we’re interested, to understand possible use cases.
  3. a talk about new advances or applications of the technologies in which we’re interested.

There will, of course, also be business-related talks, but so many of these are aimed at already-established companies that it’s difficult to find ones with obvious applicability.

What else? Well, there are the exhibition halls, as I mentioned. Again, we’re there to look at possible competitors, but also to assess possible use cases. These aren’t just likely to be use cases associated with potential customers – in fact, given the marketing dollars (euros, pounds, etc.) funnelled into these events, it’s likely to be difficult to find clear statements of use cases, let alone discover the right person to talk to on the booth. More likely, in fact, is finding possible partners or licensees among the attendees: realising that there are companies out there with a product or offering to which we could add value. Particularly for smaller players, there’s a decent chance that you might find someone with sufficient technical expertise to assess whether there might be a fit.

What else? Well, meetings. On site, off site: whichever fits. Breakfast, cocktails or dinner seem to be preferred, as lunch can be tricky, and there aren’t always good places to sit for a quiet chat. Investors – VCs and institutional capital – realise that conferences are a good place to meet with their investees or potential investees. The same goes for partners for whom setting aside a whole day of meetings with a start-up makes little obvious sense (and it probably doesn’t make sense for us to fly over specially to meet them either), but for whom finding a slot to discuss what’s going on and the state of the world is a good investment of their time if they’re already attending an event.

So – that’s what I’m going to be up to at events from now on, it seems. If you’re interested in catching up, I’ll be at RSA in San Francisco, Open Source Summit in Austin and Scale 19x in San Antonio in the next couple of months, with more to come. Do get in touch: it’s great to meet folks!

Enarx and Pi (and Wasm)

It’s not just Raspberry Pi, but also Macs.

A few weeks ago, I wrote a blog post entitled WebAssembly: the importance of language(s), in which I talked about how important it is for Enarx that WebAssembly supports multiple languages. We want to make it easy for as many people as possible to use Enarx. Today, we have a new release of Enarx – Elmina Castle – and with it comes something else very exciting: Raspberry Pi support. In fact, there’s loads more in this release – it’s not just Raspberry Pi, but also Macs – but I’d like to concentrate on what this means.

As of this release, you can run WebAssembly applications on your Raspberry Pi, using Enarx. Yes, that’s right: you can take your existing Raspberry Pi (as long as it’s running a 64-bit kernel), and run Wasm apps with the Enarx framework.

While the Enarx framework provides the ability to deploy applications in Keeps (TEE[1] instances), one of the important features that it also brings is the ability to run applications outside these TEEs so that you can debug and test your apps. The ability to do this much more simply is what we’re announcing today.

3 reasons this is important

1. WebAssembly just got simpler

WebAssembly is very, very hot at the moment, and there’s a huge movement behind adoption of WASI, which is designed for server-based (that is, non-browser) applications which want to take advantage of all the benefits that Wasm brings – cross-architecture support, strong security model, performance and the rest.

As noted above, Enarx is about running apps within Keeps, protected within TEE instances, but access to the appropriate hardware to do this is difficult. We wanted to make it simple for people without direct access to the hardware to create and test their applications on whatever hardware they have, and lots of people have Raspberry Pis (or Macs).

Of course, some people may just want to use Enarx to run their Wasm applications, and while that’s not the main goal of the project, that’s just fine!

2. Tapping the Pi dev community

The Raspberry Pi community is one of the most creative and vibrant communities out there. It’s very open source friendly, and Raspberry Pi hardware is designed to be cheap and accessible to as many people as possible. We’re very excited about allowing anyone with access to a Pi to start developing WebAssembly and deploying apps with Enarx.

The Raspberry Pi community also has a (deserved) reputation for coming up with new and unexpected uses for technology, and we’re really interested to see what new applications arise: please tell us.

3. Preparing for Armv9 Realms

Last, and far from least, is the fact that in 2021, Arm announced their CCA (Confidential Compute Architecture), coming out with the Armv9 architecture. This will allow the creation of TEEs called Realms, which we’re looking forward to supporting with Enarx. Running Enarx on existing Arm architecture (which is what powers Raspberry Pis) is an important step towards that goal. Extending Enarx Keeps beyond the x86 architecture (as embodied by the Intel SGX and AMD SEV architectures) has always been a goal of the project, and this provides a very important first step which will allow us to move much faster when chips with the appropriate capabilities start becoming available.

How do I try it on my Raspberry Pi?

First, you’ll need a Raspberry Pi running a 64-bit kernel. Instructions for this are available over at the Raspberry Pi OS pages, and the good news is that the default installer can easily put this on all of the more recent hardware models.

Next, you’ll need to follow the instructions over at the Enarx installation guide. That will walk you through it, and if you have any problems, you can (and should!) report them, by chatting with the community over at our chat or by searching for/adding bug issues at our issue tracker.
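If it helps to see the whole flow, here’s a minimal sketch of the sort of thing you can try once everything is installed (the file name and exact commands are illustrative; the installation guide is the authoritative reference for your release):

    // hello.rs - a tiny Rust program to run as a Wasm app under Enarx.
    // Illustrative commands, assuming a Rust toolchain and Enarx are installed:
    //   rustup target add wasm32-wasi                      # add the WASI compile target
    //   rustc --target wasm32-wasi hello.rs -o hello.wasm  # compile to a Wasm binary
    //   enarx run hello.wasm                               # execute it with Enarx
    fn main() {
        println!("Hello from Enarx on a Raspberry Pi!");
    }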

We look forward to hearing how you’re doing. If you think this is cool (and we certainly do!), then please head to our main repository at https://github.com/enarx/enarx and give us a star.


1 – Trusted Execution Environments, such as Intel’s SGX and AMD’s SEV.

Image: Michael H. („Laserlicht“) / Wikimedia Commons

8 tips to rekindle Linux nostalgia (and pain)

Give us back our uber-geek status: make Linux hard again.

I bought a new machine the other day, and decided to put Fedora 35 on it. I’ve been a Fedora user since joining Red Hat in August 2016, and decided to continue using it when I left to (co-)found Profian in mid 2021, having gone through several Linux distros over the years (Red Hat Enterprise Linux before it was called that, Slackware, Debian, SuSE, Ubuntu, and probably some more I tried for a while and never adopted). As a side note, I like Fedora: it gives a good balance of stability, newish packages and the ability to mess at lower levels if you feel like it, and the move to Gnome Shell extensions to deliver UI widgets is an easy way to deliver more functionality on the desktop.

Anyway, the point of this article is that installing Linux on my most recent machines has become easy. Far too easy. When I started using Linux in around 1997, it was hard. Properly hard. You had to know what hardware might work, what things you might do which could completely brick your machine, and what was actually going on; you needed a detailed understanding of obscure kernel flags, and generally had to do everything yourself. There was a cachet to using Linux, because only very skilled[1] experts could build a Linux machine and run it successfully as their main desktop. Servers weren’t too difficult, but desktops? They required someone special[2].

I yearn for those days, and I’m sure that many of my readers do, too. What I thought I’d do in the rest of this article is to suggest ways that Linux distributions could allow those of us who enjoy pain to recapture the feeling of “specialness” which used to come with running Linux desktops. Much of the work they will need to do is to remove options from installers. Installers these days take away all of the fun. They’re graphical, guide you through various (sensible) options and just get on with things. I mean: what’s the point? Distributions: please rework the installers, and much of the joy will come back into my Linux distribution life.

1. Keyboards

I don’t want the installer to guess what keyboard I’m using, and, from there, also make intelligent suggestions about what the default language of the system should be. I’d much prefer an obscure list of keyboards, listing the number of keys (which I’ll have to count several times to be sure), layouts and types. Preferably by manufacturer, and preferably listing several which haven’t been available for retail sale for at least 15 years. As an added bonus, I want to have to plug my keyboard into a special dongle so that it will fit into the back of the motherboard, which will have several colour-coded plastic ports which disappear into the case if you push them too hard. USB? It’s for wimps.

2. Networking

Networking: where do I start? To begin with, if we’re going to have wifi support, I want to set it up by hand, because the card I’ve managed to find is so obscure that it’s not listed anywhere online, and the Access Point to which I’m connecting doesn’t actually support any standard protocol versions. Alternatively, it does, but the specific protocol it supports is so bleeding edge that I need to download a new firmware version to the card. This should be packaged for Windows (CE or ME, preferably), and only work if I disable all of the new features that the AP has introduced.

I’d frankly much prefer Ethernet only support. And to have to track down the actual chipset of the (off-board) network card in order to find the right drivers.

In fact, best of all would be to have to configure a modem. I used to love modems. The noises they made. The lights they flashed. The configuration options they provided.

Token ring enthusiasts are on their own.

3. Monitors

Monitors were interesting. It wasn’t usually too hard to get one working without X (that’s the windowing system we used to use – well, was it a windowing system? And why was the display the Xserver, and the, well, server the Xclient? Nobody ever really knew), but get X working? That was difficult and scary. Difficult because there was a set of three different variables you had to work with for at least three different components: X resolution, Y resolution and refresh rate, each with different levels of support by your monitor, your graphics card and your machine (which had to be powerful enough actually to drive the graphics card). Scary because there were frequent warnings that if you chose an unsupported mode, you could permanently damage your monitor.

Now, I should be clear that I never managed to damage a monitor by choosing an unsupported mode, and have never, to my knowledge, met anyone else who did, either, but the people who wrote the (generally impenetrable) documentation for configuring your XServer (or was it XClient…?) had put in some very explicit and concerning warnings which meant that every time you changed the smallest setting in your config file, you crossed your fingers as you moved from terminal to GUI.

Oh – for extra fun, you needed to configure your mouse in the same config file, and that never worked properly either. We should go back to that, too.

4. Disks

Ah, disks. These days, you buy a disk, you put it into your machine, and it works. In fact, it may not actually be a physical disk: it’s as likely to be a Solid State Device, with no moving parts at all. What a con. Back in the day, disks didn’t just work, and, like networking, seemed to be full of clever “new” features that the manufacturers used to con us into buying them, but which wouldn’t be supported by Linux for at least 7 months, and which were therefore completely irrelevant to us. Sometimes, if I recall correctly, a manufacturer would release a new disk which would be found to work with Linux, only for later iterations of the same model to cease working because they used subtly different components.

5. Filesystems

Once you’d got a disk spinning up (and yes, I mean that literally), you then needed to put a filesystem on it. There were a variety of these supported, most of which were irrelevant to all but a couple of dozen enthusiasts running 20 year old DEC or IBM mini-computers. And how were you supposed to choose which filesystem type to use? What about journalling? Did you need it? What did it do? Did you want to be able to read the filesystem from Windows? Write it from Windows? Boot Windows from it? What about Mac? Did you plan to mount it remotely over a network? Which type of network? Was it SCSI? (Of course it wasn’t SCSI, unless you’d managed to blag an old drive enclosure + card from your employer, who was throwing out an old 640Kb drive which was already, frankly, on its last legs).

Assuming you’d got all that sorted out, you needed to work out how to partition it. Anyone who was anyone managed to create too small a swap partition to allow your machine to run without falling over regularly, or managed to initialise so small a boot partition that you were unable to update your kernel when the next one came out. Do you need a separate home partition? What about /usr/lib? /usr/src? /opt? /var? All those decisions. So much complexity. Fantastic. And then you have to decide if you’re going to encrypt it. And at what level. There’s a whole other article right there.

6. Email

Assuming that you’ve got X up and running, that your keyboard types the right characters (most of the time), that your mouse can move at least to each edge of the screen, and that the desktop is appearing with the top at the top of the screen, rather than the bottom, or even the right or left, you’re probably ready to try doing something useful with your machine. Assuming also that you’ve managed to set up networking, then one task you might try is email.

First, you’ll need an email provider. There is no Google, no Hotmail. You’ll probably get email from your ISP, and if you’re lucky, they sent you printed instructions for setting up email at the same time they sent you details for the modem settings. You’d think that these could be applied directly to your email client, but they’re actually intended for Windows or Mac users, and your Linux mail client (Pine or Mutt if you’re hardcore, emacs if you’re frankly insane) will need completely different information set up[3]. What ports? IMAP or POP? Is it POP3? What type of authentication? If you’re feeling really brave, you might set up your own sendmail instance. Now, that’s real pain.

7. Gaming

Who does gaming on Linux? I mean, apart from Doom and Quake III Arena, all you really need is TuxRacer and a poorly configured wine installation which just about runs Minesweeper. Let’s get rid of all the other stuff. Steam? Machines should be steam-powered, not running games via Steam. Huh. *me tuts*

8. Kernels

Linux distros have always had a difficult line to tread: they want to support as many machine configurations as possible. This makes for a large, potentially slow default kernel. Newer distributions tune this automatically, to make a nice, slimline version which suits your particular set-up. This approach is heresy.

The way it should work is that you download the latest kernel source (over your 56k6 modem – this didn’t take as long as you might think, as the source was much smaller: only 45 minutes or so, assuming nobody tried to make a phone call on the line during the download process), read the latest change log in the hopes that the new piece of kit you’d purchased for your machine was now at least in experimental release state, find patches from a random website when you discovered it wasn’t, apply the patches, edit your config file by hand, cutting down the options to a bare minimum, run menuconfig (I know, I know: this isn’t as hardcore as it might be, but it’s probably already 11pm by now, and you’ve been at this since 6pm), deal with clashes and missing pieces, start compiling a kernel, go to get some food, come back, compress the kernel, save it to /boot (assuming you made the partition large enough – see above), reboot, and then swear when you get a kernel panic.

That’s the way it should be. I mean, I still compile my own kernels from time to time, but the joy’s gone from it. I don’t need to, and taking the time to strip it down to essentials takes so long it’s hardly worth it, particularly when I can compile it in what seems like milliseconds.

The good news

There is good news. Some things still don’t work easily on Linux. Some games manufacturers don’t bother with us (and Steam can’t run every game (yet?)). Fingerprint readers seem particularly resistant to Linuxification. And video-conferencing still fails to work properly on a number of platforms (I’m looking at you, Teams, and you, Zoom, at least where sharing is concerned).

Getting audio to work for high-end set-ups seems complex, too, but I’m led to believe that this is no different on Windows systems. Macs are so over-engineered that you can probably run a full professional recording studio without having to install any new software, but they don’t count.

Hopefully, someone will read this article and take pity on those of us who took pride in the pain we inflicted on ourselves. Give us back our uber-geek status: make Linux hard again.


1 – my wife prefers the words “sad”, “tragic” and “obsessed”.

2 – this is a word which my wife does apply to me, but possibly with a different usage to the one I’m employing.

3 – I should be honest: I still enjoy setting these up by hand, mainly because I can.

WebAssembly: the importance of language(s)

We provide a guide so that you can try each language for yourself.

Over at Enarx, we’re preparing for another release. They’re coming every four weeks now, and we’re getting into a good rhythm. Thanks to all contributors, and also those working on streamlining the release process. It’s a complex project with lots of dependencies – some internal, and some external – and we’re still feeling our way about how best to manage it all. One thing that you will be starting to see in our documentation, and which we intend to formalise in coming releases, is support for particular languages. I don’t mean human languages (though translations of Enarx documentation into different languages, to support as diverse a community as we can, is definitely of interest), but programming languages.

Enarx is, at its heart, a way to deploy applications into different environments: specifically, Trusted Execution Environments (though we do support testing in kvm). The important word here is “execution”, because applications need a runtime in which to execute. Runtimes come in many different flavours: ELF (“Executable and Linkable Format”, the main standard for Linux systems), JVM (“Java Virtual Machine”, for compiled Java classes) and PE (“Portable Executable”, used by Windows), to give but a few examples. Enarx uses WebAssembly, or, to be more exact, WASI, which you can think of as a “headless” version of WebAssembly: whereas WebAssembly was originally designed to run within browsers, WASI-compliant runtimes support server-type applications. The runtime which Enarx supports is called wasmtime, which is a Bytecode Alliance project, and written in Rust (like Enarx itself).

This is great, but (almost) nobody writes native WebAssembly code (there is actually a “human-readable” format supported by the standard, but I personally wouldn’t want to be writing in it directly!). One of the great things about WebAssembly is that it’s largely language-neutral: you can write in your favourite language and then compile your application to a “wasm” binary to be executed by the runtime (wasmtime, in our case). WebAssembly is attracting lots of attention within the computing community at the moment, and so people have created lots of different mechanisms to allow their favourite languages to create wasm binaries. There’s a curated list here, though it’s not always updated very frequently, and given the amount of interest in the space, it may be a little out of date when you visit the page. In the list, you’ll find common languages like C, C++, Rust, Golang, .Net, Python and Javascript, as well as less obvious ones like Haskell, COBOL and Scheme. Do have a look – you may be surprised to find support for your favourite “obscure” language is already started, or even quite mature.

This proliferation of languages with what we could call “compile target support” for WebAssembly is excellent news for Enarx, because it means that people writing in these languages may be able to write applications that we can run. I say may, because there’s a slight complication, which is that not all of these compile targets support WASI, which is the specific interface supported by wasmtime, and therefore by Enarx.

This is where the Enarx community has started to step in. They – we – have been creating a list of languages which do allow you to compile wasm binaries that execute under wasmtime, and therefore in Enarx. You’ll find a list over at our WebAssembly Guide and, at time of writing, it includes Rust, C++, C, Golang, Ruby, .NET, TypeScript, AssemblyScript, Grain, Zig and JavaScript[1]. You can definitely expect to see more coming in the near future. With this list, we don’t just say “you can run applications compiled from this language”, but provide a guide so that you can try each language for yourself! Currently the structure of how the information is presented varies from language to language – we should probably try to regularise this – but in each case, there should be sufficient information for someone fairly familiar with the language to write a simple program and run it in Enarx.
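To make that concrete, here’s roughly what the Rust path looks like: a small WASI program, compiled to a wasm binary and run under wasmtime (the commands are illustrative and assume standard rustc and wasmtime installs; the guide has the authoritative per-language steps, and the same binary can also be run with Enarx):

    // echo.rs - a small WASI demo: read lines from stdin and echo them back.
    // Illustrative commands, assuming rustc and wasmtime are installed:
    //   rustup target add wasm32-wasi
    //   rustc --target wasm32-wasi echo.rs -o echo.wasm
    //   echo "hello, Wasm" | wasmtime echo.wasm
    use std::io::{self, BufRead};

    fn main() {
        // WASI provides stdin/stdout, so ordinary std::io works unchanged.
        let stdin = io::stdin();
        for line in stdin.lock().lines() {
            match line {
                Ok(text) => println!("echo: {}", text),
                Err(e) => {
                    eprintln!("read error: {}", e);
                    break;
                }
            }
        }
    }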

As I noted above, not all languages with compile target support for WebAssembly will work yet, but we’re also doing “upstream” work in some cases to help particular languages get to a position where they will work by submitting patches to fix specific issues. This is an area where more involvement from the community (that means you!) can help: the more people contributing to this work, or noting how important it is to them, the quicker we’ll gain support for more languages.

And here’s where we hope to be: in upcoming releases, we want to be in a position where Enarx officially supports particular languages. What exactly that “support” entails is something we haven’t yet fully defined, but, at minimum, we hope to be able to say something like “applications written in this language using this set of capabilities/features are expected to work”, based on automated testing of “known good” code on a per-release basis. This will mean that users of Enarx will be able to have high confidence that an application working on one release will behave exactly the same on the next: a really important property for a project intended for commercial deployments.

How can you get involved? Well, the most obvious is to visit the page in our docs relating to your favourite language. Try it out, give us feedback or offer to improve the documentation if you think it needs it, or even go upstream and offer patches. If no such page exists, you could visit our chat channels and ask to see if anyone is working on support and/or create an issue requesting support, explaining why you think it’s important.

Finally, to encourage upstream developers to realise how important supporting “their” language is, you can provide a GitHub star by visiting https://enarx.dev or https://github.com/enarx/enarx. “Starring” the project is a way to register your interest, and to show the community that Enarx is something you’re interested in.


1 – Huge thanks to everyone involved in these efforts, with a special shout-out to Deepanshu Arora, who’s done lots of work in this area.

WebAssembly logo: By Carlos Baraza – Own work / https://github.com/carlosbaraza/web-assembly-logo, CC0, https://commons.wikimedia.org/w/index.php?curid=56494100