Enarx 0.3.0 (Chittorgarh Fort)

Write some applications and run them in an Enarx Keep.

I usually post on a Tuesday, but this week I wanted to wait for a significant event: the release of Enarx v0.3.0, codenamed “Chittorgarh Fort”. This happened after I’d gone to bed, so I don’t feel too bad about failing to post on time. I announced Enarx nearly three years ago, in the article Announcing Enarx on 7th May 2019, and it’s admittedly taken us a long time to get to where we are now. That’s largely because we wanted to do it right, and building up a community, creating a start-up and hiring folks with the appropriate skills is difficult. The design has evolved over time, but the core principles and core architecture are the same as when we announced the project.

You can find more information about v0.3.0 at the release page, but I thought I’d give a few details here and also briefly add to what’s on the Enarx blog about the release.

What’s Enarx?

Enarx is a deployment framework for running applications within Trusted Execution Environments (TEEs). We provide a WebAssembly runtime and – this is new functionality that we’ve started adding in this release – attestation so that you can be sure that your application is protected within a TEE instance.

What’s new in v0.3.0?

A fair amount of the development for this release has been in functionality which won’t be visible to most users, including a major rewrite of the TEE/host interface component that we call sallyport. You will, however, notice that TLS support has been added to network connections from applications within the Keep. This is transparent to the application, so “Where does the certificate come from?” I hear you ask. The answer to that is from the attestation service that’s also part of this release. We’ll be talking more about that in further releases and articles, but key to the approach we’re taking is that interactions with the service (we call it the “Steward”) are pretty much transparent to users and applications.

How can I get involved?

What can you do to get involved? Well, visit the Enarx website, look at the code and docs over at our github repositories (please star the project!), get involved in the chat. The very best thing you can do, having looked around, is to write some applications and run them in an Enarx Keep. And then tell us about your experience. If it worked first time, then wow! We’re still very much in development, but we want to amass a list of applications that are known to work within Enarx, so tell us about it. If it doesn’t work, then please also tell us about it, and have a look at our issues page to see if you’re the first person to run across this problem. If you’re not, then please add your experiences to an existing issue, but if you are, then create a new one.

Enarx isn’t production ready, but it’s absolutely ready for initial investigations (as shown by our interns, who created a set of demos for v0.2.0, curated and aided by our community manager Nick Vidal).

Why Chittorgarh Fort?

It’s worth having a look at the Wikipedia entry for the fort: it’s really something! We decided, when we started creating official releases, that we wanted to go with the fortification theme that Enarx has adopted (that’s why you deploy applications to Enarx Keeps – a keep is the safest part of a castle). We started with Alamo, then went to Balmoral Castle, and then to Chittorgarh Fort (we’re trying to go with alphabetically sequential examples as far as we can!). I suggested Chittorgarh Fort to reflect the global nature of our community, which happens to include a number of contributors from India.

Who was involved?

I liked the fact that the Enarx blog post mentioned the names of some (most?) of those involved, so I thought I’d copy the list of github account names from there, with sincere thanks:

@MikeCamel @npmccallum @haraldh @connorkuehl @lkatalin @mbestavros @wgwoods @axelsimon @ueno @ziyi-yan @ambaxter @squidboylan @blazebissar @michiboo @matt-ross16 @jyotsna-penumaka @steveeJ @greyspectrum @rvolosatovs @lilienbm @CyberEpsilon @kubkon @nickvidal @uudiin @zeenix @sagiegurari @platten @greyspectrum @bstrie @jarkkojs @definitelynobody @Deepansharora27 @mayankkumar2 @moksh-pathak


Rahultalreja11 at English Wikipedia, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons

What’s a secure channel?

Always beware of products and services which call themselves “secure”. Because they’re not.

A friend asked me what I considered a secure channel a couple of months ago, and it made me think. Many of us have information that we wish to communicate which we’d rather other people can’t look at, for all sorts of reasons. These might range from present ideas for our spouse or partner sent by a friend to my phone to diplomatic communications about espionage targets sent between embassies over the Internet, with lots in between: intellectual property discussions, bank transactions and much else. Sometimes, we want to ensure that people can’t change what’s in the messages we send: it might be OK for other people to know that I pay £300 in rent, but not for them to be able to change the amount (or the bank account into which it goes). These two properties are referred to as confidentiality (keeping information secret) and integrity (keeping information unchangeable), and often you want to combine them – in the case of our espionage plans, I’d prefer that my enemies don’t know what targets are at risk, but also that they don’t change the targets I’ve selected to something less bothersome for them.

Modern encryption systems generally provide both confidentiality and integrity for messages and data, so I’m going to treat these as standard properties for an encrypted channel. Which means that if I use encryption on a channel, it’s secure, right?
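Real systems get both properties from authenticated-encryption modes such as AES-GCM, but the pattern underneath can be illustrated simply. The sketch below is a toy (the hash-counter keystream is for demonstration only and must not be mistaken for real cryptography; all function names are my own), showing the encrypt-then-MAC idea using only Python’s standard library: the ciphertext hides the message (confidentiality), and the HMAC tag lets the recipient detect any change to it (integrity).

```python
import hashlib
import hmac
import secrets


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy pseudo-random keystream: hash the key, nonce and a counter
    # until we have enough bytes. Illustration only, NOT secure.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]


def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes):
    # Confidentiality: XOR the message with a keystream derived from a
    # fresh nonce. Integrity: MAC the nonce and ciphertext together.
    nonce = secrets.token_bytes(16)
    ks = keystream(enc_key, nonce, len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))
    tag = hmac.new(mac_key, nonce + ciphertext, hashlib.sha256).digest()
    return nonce, ciphertext, tag


def decrypt(enc_key: bytes, mac_key: bytes, nonce: bytes,
            ciphertext: bytes, tag: bytes) -> bytes:
    # Check integrity first: refuse to decrypt a tampered message.
    expected = hmac.new(mac_key, nonce + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: message was tampered with")
    ks = keystream(enc_key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))
```

Note the order of operations on receipt: the tag is verified before any decryption happens, so a changed message is rejected rather than silently accepted with altered contents.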

Hmm. Let’s step back a bit, because, unfortunately, there’s rather a lot more to unpack than that. Three of the questions we need to tackle should give us pause. They are: “secure from whom?”, “secure for how long?” and “secure where?”. The answers we give to these questions will be important, and though they are all somewhat intertwined, I’m going to deal with them in order, and I’m going to use the examples of the espionage message and the present ideas to discuss them. I’m also going to talk more about confidentiality than integrity – though we’ll assume that both properties are important to what we mean by “secure”.

Secure from whom?

In our examples, we have very different sets of people wanting to read our messages – a nation state and my spouse. Unless my spouse has access to skills and facilities of which I’m unaware (and I wouldn’t put it past her), the resources that she has at her disposal to try to break the security of my communication are both fewer and less powerful than those of the nation state. A nation state may be able to apply cryptologic attacks to messages, attack the software (and even firmware or hardware) implementations of the encryption system, mess with the amount of entropy available for key generation at either or both ends of the channel, perform interception (e.g. Person-In-The-Middle) attacks, coerce the sender or recipient of the message and more. I’m hoping that most of the above are not options for my wife (though coercion might be, I suppose!). The choice of encryption system, including entropy sources, cipher suite(s), hardware and software implementation are all vital in the diplomatic message case, as are vetting of staff and many other issues. In the case of gift ideas for my wife’s birthday, I’m assuming that a standard implementation of a commercial messaging system should be enough.

Secure for how long?

It’s only a few days till my wife’s birthday (yes, I have got her a present, though that does remind me: I need a card…), so I only have to keep the gift ideas secure for a little longer. It turns out that, in this case, the time sensitivity of the integrity of the message is different to that of the confidentiality: even if she managed to change what the gift idea in the message was, it wouldn’t make a difference to what I’ve got her at this point. However, I’d still prefer it if she didn’t know what the gift ideas are.

In the case of the diplomatic espionage message, we can assume that confidentiality and the integrity are both important for a much longer time, but we’ll concentrate on the confidentiality. Obviously an attacking country would prefer it if the target were unaware of an attack before it happened, but if the enemy managed to prove an attack was performed by the message sender’s or recipient’s country, even a decade or more in the future, this could also lead to major (and negative) consequences. We want to ensure that whatever steps we take to protect the message are sufficient that access to a copy of the message taken when it was sent (via wire-tapping, for instance) or retrieved at a later date (via access to a message store in the future), is insufficient to allow it to be cracked. This is tricky, and the history of cryptologic attacks on encryption schemes, not to mention human failures (such as leaks) and advances in computation (such as quantum computing) should serve as a strong warning that we need to consider very carefully what mechanisms we should use to protect our messages.

Secure where?

Are the embassies secure? Are all the machines between the embassies secure? Is the message stored before delivery? If so, is it stored on a machine within the embassy or on a server elsewhere? Is it end-to-end encrypted, or is it decrypted before delivery and then re-encrypted (I really, really hope not)? While this is unlikely in the case of diplomatic messages, a good number of commercially sensitive messages (including much email) are not end-to-end encrypted, leading to vulnerabilities if someone trying to break the security can get access to the system where they are stored, or intercept them between decryption and re-encryption.

Typically, we have better control over different parts of the infrastructure which carry or host our communications than we do over others. For most of the article above, I’ve generally assumed that the nation state trying to read the embassy message is going to have more relevant resources to try to breach the security of the message than my wife does, but there’s a significant weakness in protecting my wife’s gift idea: she has easy access to my phone. I tend to keep it locked, and it has a PIN, but, if I’m honest, I don’t tend to go out of my way to keep her out: the PIN is to deter someone who might steal it. Equally, it’s entirely possible that I may be sharing some material (a video or news article) with her at exactly the time that the gift idea message arrives from our mutual friend, leading her to see the notification. In either case, there’s a good chance that the property of confidentiality is not that strong after all.

Conclusion

I’ve said it before, and I plan to say it again (and again, and again): there is no “secure”. When we talk about secure channels, we must be aware that what we mean should be “channels secured with appropriate measures to protect against the risks associated with the security being compromised”. This is a long way of saying “if I’m protecting diplomatic messages, I need to make greater efforts than if I’m trying to stop my wife finding out ahead of time what she’s getting for her birthday”, but it’s important to understand this. Part of the problem is that we’re bombarded with words like “secure”, which are unqualified, and may lead us to think that they’re absolute, when they’re absolutely not. Another part of the problem is that we assume, once we’ve put one type of security in place, particularly when it’s sold or marketed as “best in breed” or “best practice”, that it addresses all of the issues we might have. This is clearly not the case – using the strongest encryption possible for messages between my friend and me isn’t going to stop my wife from knowing what I’ve bought her if she knows the PIN for my phone. Please, please, consider what you need when you’re protecting your communications (and other data, of course), and always beware of products and services which call themselves “secure”. Because they’re not.

Emotional about open source

Enarx is available to all, usable by all.

Around October 2019, Nathaniel McCallum and I founded the Enarx project. Well, we’d actually started it before then, but it’s around then that the main GitHub repo starts showing up, when I look at available info. In the middle of 2021, we secured funding for a start-up (now named Profian), and since then we’ve established a team of engineers to work on the project, which is itself part of the Confidential Computing Consortium. Enarx is completely open source, and that’s really central to the project. We want (and need) the community to get involved, try it out, improve it, and use it. And, of course, if it’s not open source, you can’t trust it, and that’s really important for security.

The journey has been hard at times, and there were times when we nearly gave up on the funding, but neither Nathaniel nor I could see ourselves working on anything else – we really, truly believe that there’s something truly special going on, and we want to bring it to the world. I’m glad (and relieved) that we persevered. Why? Because last week, on Thursday, was the day that this came true for me. The occasion was OC3, a conference on Confidential Computing organised by Edgeless Systems. I was giving a talk on Understanding trust relationships for Confidential Computing, which I was looking forward to, but Nick Vidal, Community Manager for the Enarx project, also had a session earlier on. His session was entitled From zero to hero: making Confidential Computing accessible, and wasn’t really his at all: it was taken up almost entirely by interns in the project, with a brief introduction and summing up by Nick. In his introduction, Nick explained that he’d be showing several videos of demos recorded by the interns. These demos took the Enarx project and ran applications that they (the interns) had created within Keeps, using the WebAssembly runtime provided within Enarx. The interns and their demos were:

  • TCP Echo Server (Moksh Pathak & Deepanshu Arora) – Moksh and Deepanshu showed two demos: a ROT13 server which accepts connections, reads text from them and returns the input, ROT13ed; and a simple echo server.
  • Fibonacci number generator (Jennifer Chukwu) – a simple Fibonacci number generator running in a Keep
  • Machine learning with decision tree algorithm on Diabetes data set (Jennifer Kumar & Ajay Kumar) – implementation of Machine Learning, operating on a small dataset.
  • Zero Knowledge Proof using Bulletproof (Shraddha Inamdar) – implementation of a Zero Knowledge Proof with verification.
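The demos’ own code isn’t reproduced here, but the heart of the first one is easy to sketch. As a hypothetical illustration (this is my own Python, not the interns’ WebAssembly implementation), the ROT13 transform such a server applies to each line of text rotates every ASCII letter by 13 places and leaves everything else untouched:

```python
def rot13(text: str) -> str:
    # Rotate each ASCII letter 13 places around the alphabet;
    # punctuation, digits and spaces pass through unchanged.
    result = []
    for ch in text:
        if "a" <= ch <= "z":
            result.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            result.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            result.append(ch)
    return "".join(result)
```

Because 13 is half of 26, applying the transform twice returns the original text – which is exactly why ROT13 is a useful stand-in for a real encryption/decryption round trip in a demo, while offering no actual secrecy.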

These demos are exciting for several reasons:

  1. three of them have direct real-world equivalent use cases:
    1. The ROT13 server, while simple, could be the basis for an encryption/decryption service.
    2. the Machine Learning service is directly relevant to organisations who wish to run ML workloads in the Cloud, but need assurances that the data is confidentiality- and integrity-protected.
    3. the Zero Knowledge Proof demo provides an example of a primitive required for complex transaction services.
  2. none of the creators of the demos knew anything about Confidential Computing until a few months ago.
  3. none of the creators knew much – if anything – about WebAssembly before coming to the project.
  4. none of the creators is a software engineering professional (yet!). They are all young people with an interest in the field, but little experience.

What this presentation showed me is that what we’re building with Enarx (though it’s not even finished at this point) is a framework that doesn’t require expertise to use. It’s accessible to beginners, who can easily write and deploy applications with obvious value. This is what made me emotional: Enarx is available to all, usable by all. Not just security experts. Not just Confidential Computing gurus. Everyone. We always wanted to build something that would simplify access to Confidential Computing, and that’s what we, the community, have brought to the world.

I’m really passionate about this, and I’d love to encourage you to become passionate about it, too. If you’d like to know more about Enarx, and hopefully even try it yourself, here are some ways to do just that:

  • visit our website, with documentation, examples and a guide to getting started
  • join our chat and then one of our stand-ups
  • view the code over at GitHub (and please star the project: it encourages more people to get involved!)
  • read the Enarx blog
  • watch the video of the demos.

I’d like to finish this post by thanking not only the interns who created the demos, but also Nick Vidal, for the incredible (and tireless!) work he’s put into helping the interns and into growing the community. And, of course, everyone involved in the project for their efforts in getting us to where we are (and the vision to continue to the next exciting stages: subscribe to this blog for upcoming details).

Enarx 0.2.0 – Balmoral Castle

Now it’s possible to write applications that you can talk to over the network

The big news this week from the Enarx project is our latest release: 0.2.0, which is codenamed “Balmoral Castle”, to continue with our castle/fortification theme.

The big change in Enarx 0.2.0 is the addition of support for networking. Until now, there wasn’t much you could really do in an Enarx Keep, honestly: you could run an application, but all it could do for input and output was read from stdin and write to stdout or stderr. While this was enough to prove that you could write and compile applications to WebAssembly and run them, any more complex interaction with the world outside the Keep was impossible.

So, why is this big news? Well, now it’s possible to write applications that you can talk to over the network. The canonical example which we’ve provided as part of the release is a simple “echo” server, which you start in a Keep and which then listens on a port for incoming connections. You make a connection (for instance using the simple command-line utility ncat), and send it a line of text. The server accepts the connection, receives the text and sends it right back to you. It can handle multiple connections and will send the text back to the right one (hopefully!).
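The release’s example is written to compile to WebAssembly and run inside a Keep, and its code isn’t reproduced here. As a plain-Python sketch of the same behaviour (function names and structure are my own, purely for illustration), a minimal echo server that handles multiple connections might look like this:

```python
import socket
import threading


def handle(conn: socket.socket) -> None:
    # Echo everything received back to the client that sent it,
    # until the client closes its side of the connection.
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)


def accept_loop(server: socket.socket) -> None:
    # Accept connections forever, one handler thread per client,
    # so each client gets its own text echoed back.
    while True:
        try:
            conn, _addr = server.accept()
        except OSError:
            break  # server socket has been closed
        threading.Thread(target=handle, args=(conn,), daemon=True).start()


def serve(host: str = "127.0.0.1", port: int = 0) -> socket.socket:
    # port=0 asks the OS for any free port; the caller can read the
    # actual port back with server.getsockname()[1].
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))
    server.listen()
    threading.Thread(target=accept_loop, args=(server,), daemon=True).start()
    return server
```

You could exercise it with ncat much as described above: connect to the listening port, type a line, and watch the same line come back. The per-connection thread is what lets the server keep each client’s text separate.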

This is new functionality with Enarx 0.2.0, and the ability to use networking mirrors an important change within the WASI (WebAssembly System Interface) specification, the system interface implemented within an Enarx Keep. Specifically, WASI snapshot preview 1, released in January 2022, now supports the ACCEPT capability on sockets. The way that WASI works with managing permissions and capabilities is carefully designed, and we (the Profian folks working on Enarx) coordinated closely with the open source WASI/Wasm community to add this in a way which is consistent with the design philosophy of the project. Once the capability was added to the snapshot, there was one more step needed before Enarx could implement support, which was that it needed to appear in wasmtime, the WebAssembly runtime we use within Keeps to allow you to run your applications. This happened last week, in wasmtime release 0.34.0, and that allowed us to make this new release of Enarx.

This may not sound very exciting … but with this in place, you can start to build proper applications and micro-services. What about an HTTP server? A ROT13 “encryption” service? A chatbot? An email server? A Wordle implementation[1]? And it’s not just text that you can send over a network connection, of course. What might you write to process other types of data? A timestamp server? A logging service? With a network connection, you have the ability to write any of these. Micro-services are all about accepting connections, processing the data that’s come in, and then sending out the results. All of that is possible with this new release.

What we’d love you to do is to start writing applications (using networking) and running them in Enarx. Tell us what works – even better, tell us what doesn’t by creating an issue in our github repository. Please publish examples, join our chat channels, give us a github star, get involved.

What’s coming next? Well, keep an eye on the Enarx site, but be assured that I’ll announce major news here as well. You can expect work in attestation and deployment in the near future – watch this space…


1 – at time of writing, everyone’s talking about Wordle. For those of you coming from the future (say a couple of weeks from now), you can probably ignore this example.

[Image of Edward VII at Balmoral Castle from Wikimedia].

7 weird points about travelling again

There’s going to be more travel happening

After nearly 23 months without getting on a plane or leaving the UK at all, due to Covid, I’m back travelling. I had a trip to the US last month, and I’m off there again next week. For the past 10+ years, I’ve mainly worked from home, so not going into an office hasn’t been an issue for me, but the flip side of that is that I rarely get any chance to meet colleagues, partners and customers face-to-face except when I do travel. Before the pandemic, I was generally out of the country once a month – a schedule which suited me and the family pretty well, on the whole – so having nearly two years of minimal external contact has been strange.

I’ve blogged about travel before (see Travelling, keeping well, Travelling and the (frankly ill-fated) 5 resolutions for travellers in 2020) and I quite enjoy travelling, on the whole, though I’m not always good at it, and I don’t really enjoy being away from home (which I know is somewhat strange). As we move into a world where there’s going to be more travel happening, conferences move from virtual only to hybrid or in person and face-to-face business meetings become something closer to the norm, I thought it might be interesting to add some personal thoughts about some points that I’ve noticed, and which might be interesting to those considering travel or elicit comments from those who’ve already started in earnest (or never really stopped).

1. Regulations keep changing

Last month, when I went to the US from the UK, I needed a negative Covid test within 72 hours of arrival. That has changed, in the intervening weeks, to a test taken the day before. You need to be on the ball and work out the very latest regulations not only for where you’re going, but also for any countries through which you’re transiting. If you don’t get it right, you may be refused entry, or have to quarantine, which may be not only disruptive to your trip, but very expensive.

2. Masks are everywhere

This may feel normal now, but the default in most places is “mask on”. I’ve found myself keeping a mask on even outside, if I’m making a quick trip to a store or coffee shop from the office, rather than taking it on and off. It’s really worth packing a good supply of (quality) masks with you, and remembering to change or wash them every day: there’s a difference between wearing the same one a few times for 10 minutes each time and wearing one for several hours. You don’t want to be wearing the same one for more than a day if you can avoid it.

3. Airlines have strange rules

Cabin crew are trying really hard, and it’s not their fault that there are new rules which you have to follow. One airline I travelled on last month had a rule that you weren’t supposed to spend more than 15 minutes unmasked to eat your meal. That’s difficult to abide by (particularly when the crew are serving different parts of it at different times) and really difficult to enforce, but I see what they’re trying to do. Stick with it, realise that the crew aren’t doing it to make your life hard or because they enjoy it, and try to have empathy with them. A major tip (whether in a pandemic or not): always be nice to the cabin crew, as they have the power to make life really difficult for you, or to ease the way in certain circumstances.

4. You’ll get paranoid about surfaces

Well, I did. While most of the focus on transmission of Covid is around avoiding airborne particles these days, I became aware that many, many people had probably been touching the same surfaces that I’d been touching, and that some of them were probably contagious. Luckily, many shops and places of work are making hand sanitiser available at the entrance/exit these days. I found myself using it on my way in and the way out. It can’t do any harm.

5. It’s quiet out there

I feel for retail and hospitality businesses, I really do. Getting out and about made me realise how quiet things still are – and a little nervous when I was in environments where it was a little more busy. Don’t expect to see as many people on the street, at the airport, in the malls. They’re unlikely to be empty, but things certainly felt abnormally quiet to me. Be pleasant and friendly to those who serve you, and tip well when you get good service.

6. Colleagues are making an extra effort

This isn’t particularly weird if you work with nice people, but I’ve noticed a trend for people to ask just a little bit more about each others’ health – physical and mental – both on calls and in person. I’ve also noticed more awareness of colleagues’ possible risks, such as elderly relatives or immuno-suppressed close family members, and offers to take particular care or implement specific measures to protect those they work with, whether asked for or not. Long may this continue.

7. Long-haul flights in a mask aren’t fun

Top tip? Buy a couple of “ear savers” for your masks if you’re using the type which sit behind your ears. These attach to the loops and then fasten behind your head, relieving the pressure on your ears. I may have a particularly large head, but I found that even twenty minutes of wearing a mask without one of these started giving me a splitting headache. I ended up fashioning one from pieces of an old mask to save my head until a colleague was able to buy some online. Even with this, I can’t say that it was fun wearing one, and getting sleep was much more difficult than it would normally have been. Beyond ear savers, I’m not sure what to suggest beyond finding a comfortable mask, and making sure that you try it out for an extended period before you travel.

A new state of mind

I’m quite proud; though maybe slightly ashamed that I didn’t do it before.

Last year, I co-founded Profian with Nathaniel McCallum, a colleague from Red Hat. It’s a security start-up in the Confidential Computing Space, based on the open source Enarx project. There’s an update on that on the Profian blog with an article entitled Design to Roadmap to Product.

It’s an article on what we’ve been up to in the company, and it records the realisation that it’s time for me to step into yet another role as one of the founders: moving beyond “let’s make sure that we have a team and that the basic day-to-day running of the company is working” to “OK, let’s really map out our product roadmap and how we present it to customers.”

A new state of mind

Which leads me to the main point of this short article. This is not an easy transition – it’s yet another new thing to learn, discover which bits I’m good at, improve the bits I’m not, get internal or external help to scale with, etc. – but it’s a vital part of being the CEO of a start-up.

It’s also something which I had, to be honest, been resisting. Most of us prefer to stick to stuff which we know – whether we’re good at it or not, sometimes! – rather than “embracing change”. Sometimes that’s OK, but in the position I’m in at the moment, it’s not. I have responsibility to the company and everyone involved in it to ensure that we can be successful. And that means doing something. So I’ve been listening to people say, “these are the things you need to do”, “here are the ways we can help you”, “this is what you should be looking for” and, while listening, just, well, putting it off, I suppose. Towards the end of last week, I ordered a book (The Founder Handbook) to try to get my head round it a bit more. There are loads of this type of book, but I did a little research, and this looked like it might be one of the better ones.

So, it arrived, and I started reading it. And, darn it, it made sense. It made me start seeing the world in a new way – a way which might not have been relevant to me (or the company) a few months ago, but really is, now. And I really need to embrace lots of the things the authors are discussing. I’m not saying that it’s a perfect book, or that no other book would have prompted this response, but at some point over the weekend, I thought: “right, it’s time to change and to move into this persona, thinking about these issues, being proactive and not putting it off anymore”.

I’m quite proud, to be honest; though maybe slightly ashamed that I didn’t do it before. I cemented the decision to jump into a new mindset by doing what I’ve done on a couple of occasions before (including when I decided to commit to writing my book): I told a few people what I was planning to do. This really works for me on several levels:

  1. I’ve made a public commitment (even if it’s to a few people[1]), so it’s difficult to roll it back;
  2. I’ve made a commitment to myself, so I can’t pretend that I haven’t and let myself drift back into the old mindset;
  3. it sets expectations from other people as to what I’m going to do;
  4. people are predisposed to being helpful when you struggle, or ask for help.

These are all big positives, and while telling people you’ve made a big decision may not work for everyone, it certainly helps for me. This is going to be only one of many changes I need to make if we’re to build a successful company out of Profian and Enarx, but acknowledging that it needed to be made – and that I was the one who was going to have to effect that change – is important to me, the company, our investors and our employees. Now all I need to do is make a success of it! Wish me luck (and keep an eye out for more…).


1 – a few more people now, I suppose, now that I’ve published this article!

Open source and cyberwar

If cyberattacks happen to the open source community, the impact may be greater than you expect.

There are some things that it’s more comfortable not thinking about, and one of them is war. For many of us, direct, physical violence is a long way from us, and that’s something for which we can be very thankful. As the threat of physical violence recedes, however, it’s clear that the spectre of cyberattacks as part of a response to aggression – physical or virtual – is becoming more and more likely.

It’s well attested that many countries have “cyber-response capabilities”, and those will include aggressive as well as protective measures. And some nation states have made it clear not only that they consider cyberwarfare part of any conflict, but that they would be entirely comfortable with initiating cyberwarfare with attacks.

What, you should probably be asking, has that to do with us? And by “us”, I mean the open source software community. I think that the answer, I’m afraid, is “a great deal”. I should make it clear that I’m not speaking from a place of privileged knowledge here, but rather from thoughtful and fairly informed opinion. But it occurs to me that the “old style” of cyberattacks, against standard “critical infrastructure” like military installations, power plants and the telephone service, was clearly obsolete when the Two Towers collapsed (if not in 1992, when the film Sneakers hypothesised attacks against targets like civil aviation). Which means that any type of infrastructure or economic system is a target, and I think that open source is up there. Let me explore two ways in which open source may be a target.

Active targets

If we had been able to pretend that open source wasn’t a core part of the infrastructure of nations all over the globe, that self-delusion was finally wiped away by the log4j vulnerabilities and attacks. Open source is everywhere now, and whether or not your applications are running any open source, the chances are that you deploy applications to public clouds running open source, at least some of your employees use an open source operating system on their phones, and that the servers running your chat channels, email providers, Internet providers and beyond make use – extensive use – of open source software: think Apache, think BIND, think Kubernetes. At one level, this is great, because it means that it’s possible for bugs to be found and fixed before they can be turned into vulnerabilities, but that’s only true if enough attention is being paid to the code in the first place. We know that attackers will have been stockpiling exploits, and many of them will be against proprietary software, but given the amount of open source deployed out there, they’d be foolish not to be collecting exploits against that as well.

Passive targets

I hate to say it, but there are also what I’d call “passive targets”, those which aren’t necessarily first-tier targets, but whose operation is important to the safe, continued working of our societies and economies, and which are intimately related to open source and open source communities. Two of the more obvious ones are GitHub and GitLab, which hold huge amounts of our core commonwealth, but long-term attacks on foundations such as the Apache Foundation and the Linux Foundation, let alone kernel.org, could also have an impact on how we, as a community, work. Things are maybe slightly better in terms of infrastructure like chat services (as there’s a choice of more than one, and it’s easier to host your own instance), but there aren’t that many public servers, and a major attack on either them or the underlying cloud services on which many of them rely could be crippling.

Of course, the impact on your community, business or organisation will depend on your usage of different pieces of infrastructure, how reliant you are on them for your day-to-day operation, and what mitigations you have available to you. Let’s quickly touch on that.

What can I do?

The Internet was famously designed to route around issues – attacks, in fact – and that helps. But, particularly where there’s a pretty homogeneous software stack, attacks on infrastructure could still have very major impact. Start thinking now:

  • how would I support my customers if my main chat server went down?
  • could I continue to develop if my main git provider became unavailable?
  • would we be able to offer at least reduced services if a cloud provider lost connectivity for more than an hour or two?

By doing an analysis of what your business dependencies are, you have the opportunity to plan for at least some of the contingencies (although, as I note in my book, Trust in Computer Systems and the Cloud, the chances of your being able to analyse the entire stack, or discover all of the dependencies, are lower than you might think).

What else can you do? Patch and upgrade – make sure that whatever you’re running is the highest (supported!) version. Make back-ups of anything which is business critical. This should include not just your code but issues and bug-tracking, documentation and sales information. Finally, consider having backup services available for time-critical services like a customer support chat line.
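To make the back-up advice a little more concrete, here’s a minimal sketch of what such a routine might look like. The repository location is a placeholder, and the use of GitHub’s gh CLI for exporting issues is my assumption – substitute whatever your forge or issue tracker provides:

```shell
# Sketch only: one way to automate back-ups of business-critical
# repositories and their issue metadata. Repository names and the
# use of GitHub's `gh` CLI are illustrative assumptions.

# Mirror-clone a repository: unlike a plain clone, --mirror copies
# every branch, tag and ref, which is what you want for recovery.
backup_repo() {
    repo="$1"      # URL or local path of the repository
    dest_dir="$2"  # directory in which to store the back-up
    mkdir -p "$dest_dir"
    git clone --quiet --mirror "$repo" "$dest_dir/$(basename "$repo" .git).git"
}

# Issues and bug-tracking live outside git, so export those too --
# here via the GitHub CLI, if it happens to be installed and
# authenticated (GitLab and others have their own equivalents).
backup_issues() {
    dest_dir="$1"
    command -v gh >/dev/null 2>&1 || return 0
    gh issue list --state all --json number,title,state,body \
        > "$dest_dir/issues.json"
}

# Example (hypothetical repository):
# backup_repo "https://github.com/example/critical-service.git" "backups/$(date +%Y%m%d)"
```

Running something like this on a schedule, with the output stored somewhere independent of your main providers, covers the “code plus issues” part; documentation and sales information will need their own equivalents.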

Cyberattacks may not happen to your business or organisation directly, but if they happen to the open source community, the impact may be greater than you expect. Analyse. Plan. Mitigate.

How to hire an open source developer

Our view was that a pure “algorithm coding” exercise was pretty much useless for what we wanted.

We’ve recently been hiring developers to work on the Enarx project, a security project, written almost exclusively in Rust (with a bit of Assembly), dealing with Confidential Computing. By “we”, I mean Profian, the start-up for which I’m the CEO and co-founder. We’ve now found all the people we’re looking for initially on the team (with a couple due to start in the next few weeks), though we absolutely welcome contributors to Enarx, and, if things continue to go well, we’ll definitely want to hire some more folks in the future.

Hiring people is not easy, and we were hit with a set of interesting requirements which made the task even more difficult. I thought it would be useful and interesting for the community to share how we approached the problem.

What were we looking for?

I mentioned above some interesting requirements. Here’s what the main ones were:

  • systems programming – we mainly need people who are happy programming at the systems layer. This is pretty far down the stack, with lots of interactions directly with hardware or the OS. Where we are creating client-server pieces, for instance, we’re having to write quite a lot of the protocols, manage the crypto, etc., and the tools we’re using aren’t all very mature (see “Rust” below).
  • Rust – almost all of the project is written in Rust, and what isn’t is written in Assembly language (currently exclusively x86, though that may change as we add more platforms). Rust is new, cool and exciting, but it’s still quite young, and some areas don’t have all the support you might like, or aren’t as mature as you might hope – everything from cryptography through multi-threading libraries and compiler/build infrastructure.
  • distributed team – we’re building a team of folks wherever we can find them: we have developers in Germany, Finland, the Netherlands, North Carolina (US), Massachusetts (US), Virginia (US) and Georgia (US), I’m in the UK, our community manager is in Brazil and we have interns in India and Nigeria. We knew from the beginning that we wouldn’t have everyone in one place, and this required people who we were confident would be able to communicate and collaborate with people via video, chat and (at worst) email.
  • security – Enarx is a security project, and although we weren’t specifically looking for security experts, we do need people who are able to think and work with security top of mind, and design and write code which is applicable and appropriate for the environment.
  • git – all of our code is stored in git (mainly GitHub, with a little bit of GitLab thrown in), and so much of our interaction around code revolves around git that anybody joining us would need to be very comfortable using it as a standard tool in their day-to-day work.
  • open source – open source isn’t just a licence, it’s a mindset, and, equally important, a way of collaborating. A great deal of open source software is created by people who aren’t geographically co-located, and who might not even see themselves as a team. We needed to be sure that the people we were hiring, while gelling as a close team within the company, would also be able to collaborate with people outside the organisation and embrace Profian’s “open by default” culture not just for code, but for discussions, communications and documentation.

How did we find them?

As I’ve mentioned before, in Recruiting is hard, finding good people is not easy. We ended up using a variety of means to find candidates, with varying levels of success:

  • LinkedIn job adverts
  • LinkedIn searches
  • Language-specific discussion boards and hiring boards (e.g. Reddit)
  • An external recruiter (shout out to Gerald at Interstem)
  • Word-of-mouth/personal recommendations

It’s difficult to judge between them in terms of quality, but without an external recruiter, we’d certainly have struggled with quantity (and we had some great candidates from that pathway, too).

How did we select them?

We needed to measure all of the candidates against all of the requirements noted above, but not all of them were equal. For instance, although we were keen to hire Rust programmers, we were pretty sure that someone with strong C/C++ skills at the systems level would be able to pick up Rust quickly enough to be useful. On the other hand, a good knowledge of using git was absolutely vital, as we couldn’t spend time working with new team members to bring them up-to-speed on our way of working. A strong open source background was, possibly surprisingly, not a requirement, but the mindset to work in that sort of model was, and anyone with a history of open source involvement is likely to have a good knowledge of git. The same goes for the ability to work in a distributed team: so much of open source is distributed that involvement in almost any open source community was a positive indicator. Security we decided was a “nice-to-have”.

How to proceed? We wanted to keep the process simple and quick – we don’t have a dedicated HR or People function, and we’re busy trying to get code written. What we ended up with was this (with slight variations), which we tried to complete within 1-2 weeks:

  1. Initial CV/resume/GitHub/GitLab/LinkedIn review – to decide whether to interview
  2. 30-40 minute discussion with me as CEO, to find out if they might be a good cultural fit, to give them a chance to find out about us, and get an idea if they were as technically adept as they appeared from the first step
  3. Deep dive technical discussion led by Nathaniel, usually with me there
  4. Chat with other members of the team
  5. Coding exercise
  6. Quick decision (usually within 24 hours)

The coding exercise was key, but we decided against the usual approach. Our view was that a pure “algorithm coding” exercise of the type so beloved by many tech companies was pretty much useless for what we wanted. What we wanted to understand was whether candidates could quickly understand a piece of code, fix some problems and work with the team to do so. We created a GitHub repository (in fact, we ended up using two – one for people a little higher up the stack) with some almost-working Rust code in it, some instructions to fix it, perform some git-related processes on it, and then improve it slightly, adding tests along the way. A very important part of the test was to get candidates to interact with the team via our chat room(s). We scheduled 15 minutes on a video call for setup and initial questions, 2 hours for the exercise (“open book” – as well as talking to the team, candidates were encouraged to use all resources available to them on the Internet), followed by a 30-minute wrap-up session where the team could ask questions and the candidate could also reflect on the task. This also allowed us to get an idea of how well the candidate was able to communicate with the team (combined with the chat interactions during the exercise). Afterwards, the candidate would drop off the call, and we’d generally make a decision within 5-10 minutes as to whether we wanted to hire them.

This generally worked very well. Some candidates struggled with the task, some didn’t communicate well, some failed to do well with the git interactions – these were the people we didn’t hire. It doesn’t mean they’re not good coders, or that they might not be a good fit for the project or the company later on, but they didn’t immediately meet the criteria we need now. Of the ones we hired, the levels of Rust experience and need for interaction with the team varied, but the level of git expertise and their reactions to our discussions afterwards were always sufficient for us to decide to take them.

Reflections

On the whole, I don’t think we’d change a huge amount about the selection process – though I’m pretty sure we could do better with the search process. The route through to the coding exercise allowed us to filter out quite a few candidates, and the coding exercise did a great job of helping us pick the right people. Hopefully everyone who’s come through the process will be a great fit and will produce great code (and tests and documentation and …) for the project. Time will tell!

Trust book – playlist!

A playlist of music to which I’d listened and which I’d enjoyed over the months it took to write the book.

I had probably more fun than I deserved to have writing the acknowledgements section of my book, Trust in Computer Systems and the Cloud (published by Wiley at the end of December 2021). There was another section which I decided to add to the book purely for fun: a playlist of music to which I’d listened and which I’d enjoyed over the months it took to write. I listen to a lot of music, and the list is very far from a complete one, but it does represent a fair cross-section of my general listening tastes. Here’s the list, with a few words about each one.

One thing that’s missing is any of the classical music that I listen to. I decided against including this, as I’d rarely choose single tracks, but adding full albums seemed to miss the point. I do listen to lots of classical music, in particular sacred choral and organ music – happy to let people have some suggestions if they’d like.

  • Secret Messages – ELO – I just had to have something related (or that could be considered to be related) to cryptography and security. This song isn’t, really, but it’s a good song, and I like it.
  • Bleed to Love Her – Fleetwood Mac – Choosing just one Fleetwood Mac song was a challenge, but I settled on this one. I particularly like the harmonics in the version recorded live at Warner Brothers Studio in Burbank.
  • Alone in Kyoto – Air – This is a song that I put on when I want to relax. Chiiiiilllll.
  • She’s So Lovely – Scouting for Girls – Canonically, this song is known as “She’s A Lovely” in our family, as that’s what we discovered our daughters singing along to when we played it in the car many years ago.
  • Prime – Shearwater – This is much more of an “up” song for when I want to get an edge on. Shearwater have a broad range of output, but this is a particular favourite.
  • Stay – Gabrielle Aplin – I like the way this song flips expectations on its head. A great song by a talented artist.
  • The Way I Feel – Keane – A song about mental health.
  • Come On, Dreamer – Tom Adams – Adams has an amazing voice, and this is a haunting song about hope.
  • Congregation – Low – I discovered this song watching DEVS on Amazon Prime (it was originally on Hulu). Low write (and perform) some astonishing songs, and it’s really worth going through their discography if you like this one.
  • Go! – Public Service Broadcasting – You either love this or hate it, but I’m in the “love” camp. It takes original audio from the Apollo 11 moon landing and puts it to energising, exciting music.
  • The Son of Flynn (From “TRON: Legacy”/Score) – Daft Punk – TRON: Legacy may not be the best film ever released, but the soundtrack from Daft Punk is outstanding Electronica.
  • Lilo – The Japanese House – A song about loss? About hope? Another one to chill to (and the band are great live, too).
  • Scooby Snacks – Fun Lovin’ Criminals – Warning: explicit lyrics (from the very beginning!) A ridiculous song which makes me smile every time I listen to it.
  • My Own Worst Enemy – Stereophonics – I slightly surprised myself by choosing this song from the Stereophonics, as I love so many of their songs, but it really does represent much of what I love about their oeuvre.
  • All Night – Parov Stelar – If you ever needed a song to dance to as if nobody’s watching, this is the one.
  • Long Tall Sally (The Thing) – Little Richard – Sometimes you need some classic Rock ‘n’ Roll in your life, and who better to provide it?
  • Sharp Dressed Man – ZZ Top – “Black tie…” An all-time classic by men with beards. Mostly.
  • Dueling Banjos – Eric Weissberg – I first heard this song at university. It still calls out to me. There are some good versions out there, but the original from the soundtrack to Deliverance is the canonical one. And what a film.
  • The Starship Avalon (Main Title) – Thomas Newman – This (with some of the others above) is on a playlist I have called “Architecting”, designed to get me in the zone. Another great film.
  • A Change is Gonna Come – Sam Cooke – A song of sadness, pain and hope.
  • This Place – Jamie Webster – A song about Liverpool, and a family favourite. Listen and enjoy (the accent and the song!).

If you’d like to listen to these tracks yourself, I’ve made playlists on my two preferred audio streaming sites: I hope you enjoy.

Spotify – Trust in Computer Systems and the Cloud – Bursell

Qobuz – Trust in Computer Systems and the Cloud – Bursell

As always, I love to get feedback from readers – do let me know what you think, or suggest other tracks or artists I or other readers might appreciate.