Open source and cyberwar

If cyberattacks happen to the open source community, the impact may be greater than you expect.

There are some things that it’s more comfortable not thinking about, and one of them is war. For many of us, direct, physical violence is a long way away, and that’s something for which we can be very thankful. As the threat of physical violence recedes, however, it’s clear that the spectre of cyberattacks as part of a response to aggression – physical or virtual – is looming larger and larger.

It’s well attested that many countries have “cyber-response capabilities”, and those will include aggressive as well as protective measures. And some nation states have made it clear not only that they consider cyberwarfare part of any conflict, but that they would be entirely comfortable initiating conflict with cyberattacks.

What, you should probably be asking, has that to do with us? And by “us”, I mean the open source software community. The answer, I’m afraid, is “a great deal”. I should make it clear that I’m not speaking from a place of privileged knowledge here, but rather from thoughtful and fairly informed opinion. It occurs to me, though, that the “old style” of cyberattacks – against standard “critical infrastructure” like military installations, power plants and the telephone service – was clearly obsolete by the time the Twin Towers collapsed (if not in 1992, when the film Sneakers hypothesised attacks against targets like civil aviation). That means that any type of infrastructure or economic system is a target, and I think that open source is high on the list. Let me explore two ways in which open source may be a target.

Active targets

If we had been able to pretend that open source wasn’t a core part of the infrastructure of nations all over the globe, that self-delusion was finally wiped away by the log4j vulnerabilities and attacks. Open source is everywhere now, and whether or not your applications are running any open source, the chances are that you deploy applications to public clouds running open source, at least some of your employees use an open source operating system on their phones, and that the servers running your chat channels, email providers, Internet providers and beyond make use – extensive use – of open source software: think Apache, think BIND, think Kubernetes. At one level, this is great, because it means that it’s possible for bugs to be found and fixed before they can be turned into vulnerabilities, but that’s only true if enough attention is being paid to the code in the first place. We know that attackers will have been stockpiling exploits, and many of them will be against proprietary software, but given the amount of open source deployed out there, they’d be foolish not to be collecting exploits against that as well.

Passive targets

I hate to say it, but there also are what I’d call “passive targets”: those which aren’t necessarily first-tier targets, but whose operation is important to the safe, continued working of our societies and economies, and which are intimately related to open source and open source communities. Two of the more obvious ones are GitHub and GitLab, which hold huge amounts of our core commonwealth, but long-term attacks on foundations such as the Apache Foundation and the Linux Foundation, let alone kernel.org, could also have an impact on how we, as a community, work. Things are maybe slightly better in terms of infrastructure like chat services (as there’s a choice of more than one, and it’s easier to host your own instance), but there aren’t that many public servers, and a major attack on either them or the underlying cloud services on which many of them rely could be crippling.

Of course, the impact on your community, business or organisation will depend on your usage of different pieces of infrastructure, how reliant you are on them for your day-to-day operation, and what mitigations you have available to you. Let’s quickly touch on that.

What can I do?

The Internet was famously designed to route around issues – attacks, in fact – and that helps. But, particularly where there’s a pretty homogeneous software stack, attacks on infrastructure could still have a very major impact. Start thinking now:

  • how would I support my customers if my main chat server went down?
  • could I continue to develop if my main git provider became unavailable?
  • would I be able to offer at least reduced services if a cloud provider lost connectivity for more than an hour or two?

By doing an analysis of what your business dependencies are, you have the opportunity to plan for at least some of the contingencies (although, as I note in my book, Trust in Computer Systems and the Cloud, the chances of your being able to analyse the entire stack, or discover all of the dependencies, are lower than you might think).

What else can you do? Patch and upgrade – make sure that whatever you’re running is the latest (supported!) version. Make back-ups of anything which is business critical. This should include not just your code but issues and bug-tracking, documentation and sales information. Finally, consider having backup services available for time-critical services like a customer support chat line.

Cyberattacks may not happen to your business or organisation directly, but if they happen to the open source community, the impact may be greater than you expect. Analyse. Plan. Mitigate.

How to hire an open source developer

Our view was that a pure “algorithm coding” exercise was pretty much useless for what we wanted.

We’ve recently been hiring developers to work on the Enarx project, a security project, written almost exclusively in Rust (with a bit of Assembly), dealing with Confidential Computing. By “we”, I mean Profian, the start-up for which I’m the CEO and co-founder. We’ve now found all the people we were initially looking for (with a couple due to start in the next few weeks), though we absolutely welcome contributors to Enarx, and, if things continue to go well, we’ll definitely want to hire some more folks in the future.

Hiring people is not easy, and we were hit with a set of interesting requirements which made the task even more difficult. I thought it would be useful and interesting for the community to share how we approached the problem.

What were we looking for?

I mentioned above some interesting requirements. Here’s what the main ones were:

  • systems programming – we mainly need people who are happy programming at the systems layer. This is pretty far down the stack, with lots of interactions directly with hardware or the OS. Where we are creating client-server pieces, for instance, we’re having to write quite a lot of the protocols, manage the crypto, etc., and the tools we’re using aren’t all very mature (see “Rust” below).
  • Rust – almost all of the project is written in Rust, and what isn’t is written in Assembly language (currently exclusively x86, though that may change as we add more platforms). Rust is new, cool and exciting, but it’s still quite young, and some areas don’t have all the support you might like, or aren’t as mature as you might hope – everything from cryptography through multi-threading libraries and compiler/build infrastructure.
  • distributed team – we’re building a team of folks wherever we can find them: we have developers in Germany, Finland, the Netherlands, North Carolina (US), Massachusetts (US), Virginia (US) and Georgia (US), I’m in the UK, our community manager is in Brazil and we have interns in India and Nigeria. We knew from the beginning that we wouldn’t have everyone in one place, and this required people who we were confident would be able to communicate and collaborate with people via video, chat and (at worst) email.
  • security – Enarx is a security project, and although we weren’t specifically looking for security experts, we do need people who are able to think and work with security top of mind, and design and write code which is applicable and appropriate for the environment.
  • git – all of our code is stored in git (mainly GitHub, with a little bit of GitLab thrown in), and so much of our interaction around code revolves around git that anybody joining us would need to be very comfortable using it as a standard tool in their day-to-day work.
  • open source – open source isn’t just a licence, it’s a mindset, and, equally important, a way of collaborating. A great deal of open source software is created by people who aren’t geographically co-located, and who might not even see themselves as a team. We needed to be sure that the people we were hiring, while gelling as a close team within the company, would also be able to collaborate with people outside the organisation and embrace Profian’s “open by default” culture not just for code, but for discussions, communications and documentation.

How did we find them?

As I’ve mentioned before in Recruiting is hard, we ended up using a variety of means to find candidates, with varying levels of success:

  • LinkedIn job adverts
  • LinkedIn searches
  • Language-specific discussion boards and hiring boards (e.g. Reddit)
  • An external recruiter (shout out to Gerald at Interstem)
  • Word-of-mouth/personal recommendations

It’s difficult to judge between them in terms of quality, but without an external recruiter, we’d certainly have struggled with quantity (and we had some great candidates from that pathway, too).

How did we select them?

We needed to measure all of the candidates against all of the requirements noted above, but not all of them were equal. For instance, although we were keen to hire Rust programmers, we were pretty sure that someone with strong C/C++ skills at the systems level would be able to pick up Rust quickly enough to be useful. On the other hand, a good knowledge of using git was absolutely vital, as we couldn’t spend time working with new team members to bring them up to speed on our way of working. A strong open source background was, possibly surprisingly, not a requirement, but the mindset to work in that sort of model was, and anyone with a history of open source involvement is likely to have a good knowledge of git. The same goes for the ability to work in a distributed team: so much of open source is distributed that involvement in almost any open source community was a positive indicator. Security, we decided, was a “nice-to-have”.

How to proceed? We wanted to keep the process simple and quick – we don’t have a dedicated HR or People function, and we’re busy trying to get code written. What we ended up with was this (with slight variations), which we tried to complete within 1-2 weeks:

  1. Initial CV/resume/GitHub/GitLab/LinkedIn review – this was to decide whether to interview
  2. 30-40 minute discussion with me as CEO, to find out if they might be a good cultural fit, to give them a chance to find out about us, and get an idea if they were as technically adept as they appeared from the first step
  3. Deep dive technical discussion led by Nathaniel, usually with me there
  4. Chat with other members of the team
  5. Coding exercise
  6. Quick decision (usually within 24 hours)

The coding exercise was key, but we decided against the usual approach. Our view was that a pure “algorithm coding” exercise of the type so beloved by many tech companies was pretty much useless for what we wanted. What we wanted to understand was whether candidates could quickly understand a piece of code, fix some problems and work with the team to do so. We created a GitHub repository (in fact, we ended up using two – one for people a little higher up the stack) with some almost-working Rust code in it, some instructions to fix it, to perform some git-related processes on it, and then to improve it slightly, adding tests along the way. A very important part of the test was to get candidates to interact with the team via our chat room(s). We scheduled 15 minutes on a video call for set-up and initial questions, 2 hours for the exercise (“open book” – as well as talking to the team, candidates were encouraged to use all resources available to them on the Internet), followed by a 30-minute wrap-up session where the team could ask questions and the candidate could also reflect on the task. This also allowed us to get an idea of how well the candidate was able to communicate with the team (combined with the chat interactions during the exercise). Afterwards, the candidate would drop off the call, and we’d generally make a decision within 5-10 minutes as to whether we wanted to hire them.

This generally worked very well. Some candidates struggled with the task, some didn’t communicate well, some failed to do well with the git interactions – these were the people we didn’t hire. It doesn’t mean they’re not good coders, or that they might not be a good fit for the project or the company later on, but they didn’t immediately meet the criteria we need now. Of the ones we hired, the levels of Rust experience and need for interaction with the team varied, but the level of git expertise and their reactions to our discussions afterwards were always sufficient for us to decide to take them.

Reflections

On the whole, I don’t think we’d change a huge amount about the selection process – though I’m pretty sure we could do better with the search process. The route through to the coding exercise allowed us to filter out quite a few candidates, and the coding exercise did a great job of helping us pick the right people. Hopefully everyone who’s come through the process will be a great fit and will produce great code (and tests and documentation and …) for the project. Time will tell!

Open source Christmas presents

Give the gift of open source to more people.

If you find this post interesting, you’ll find a lot more about how community and open source are important in my book Trust in Computer Systems and the Cloud, published by Wiley.

Whether you celebrate Christmas or not (our family does, as it happens), this time of year is one where presents are often given and received. I thought it might be nice to think about what presents we could give in the spirit of open source. Now, there are lots of open source projects out there, and you could always use one to create something for a friend, colleague or loved one (video, audio, blog post, image, website) or go deeper with a project which combines open source software and hardware, such as Mycroft or Crowdsupply. Or you could go in the other direction, and get people involved in projects you’re part of or enjoy. That’s what I’d like to suggest in this article: give the gift of open source to more people, or just make open source more accessible: that’s a gift in itself (to them and to the project!).

Invite

First of all, people need to know about projects. “Evangelism” is a word that’s often used around open source projects, because people need to be told about them before they can get involved. Everyone can do evangelism, whether it’s word of mouth, laptop stickers, blog posts, videos, speaking at conferences, LinkedIn mentions, podcasts, Slack, IRC, TikTok[1], Twitter, ICQ[2] or Reddit. Whatever your preferred medium for talking to the world, use it. Tell people why it’s important. Tell people why it’s fun. Share the social side of the project. Explain some of the tricky design issues that face it. Tell people why it’s written in the language(s) it’s in. Point people at the sections of code you’ve written and are proud of. Even better, point people at the sections of code you’ve written and are ashamed of, but don’t have time to fix as you’re too busy at the moment. But most of all, invite them to look around, meet the contributors, read the code, test the executables, read the documentation. Make it easy for them to find the project. Once we get back to a world where in-person conferences are re-emerging, arrange meet-ups, provide swag and get together (safely!) IRL[3].

Include

Once your invitees have started looking around, interacting with the community, submitting issues, documentation or patches, find ways to include them. There’s nothing more alienating than, well, being alienated. I think the very worst thing anyone can say to a person new to a project is something along the lines of “go and read the documentation – this is a ridiculous question/terrible piece of documentation/truly horrible piece of code”. It may be all of those things, but how does that help anyone? If you find people giving these reactions – if you find yourself giving these reactions – you need to sort it out. Everyone was a n00b once, and everyone has a different learning style, way of interacting, cultural background and level of expertise. If there are concerns that senior project members’ time is being “wasted” by interactions, nominate (and agree!) that someone will take time to mentor newcomers. Better yet, take turns mentoring, so that information and expertise are spread widely and experts in the project get to see the questions and concerns that non-experts are having. There are limits to this, of course, but you need to find ways not just to welcome people into the project, but actually include them in the functioning, processes, social interactions and day-to-day working of the project which make it a community.

You should also strongly consider a code of conduct such as the Contributor Covenant to model, encourage and, if necessary, enforce appropriate and inclusive behaviour. Diversity and Inclusion are complex topics, but there’s a wealth of material out there if you want to engage – and you should.

Encourage

Encouragement is a little different to inclusion. It’s possible to feel part of a community, but not actually to be participating in the development and growth of the project. Encouragement may be what people need to move into active engagement, contributing more than lurking. And there’s a difference between avoiding negative comments (as outlined above) and promoting positive interactions. The former discourage, and the latter can encourage. If someone contributes their first patch, and gets an “accepted, merged” message, that’s great, but it’s pretty clear that they’re much more likely to contribute again if, instead, they receive a message along the lines of “thanks for this: great to see. We need more contributions in this area: have you looked at issues #452, #599 and #1023?”.

These sorts of interactions are time-consuming, and it may not always be the maintainers who are providing them: as above, the project may need to have someone whose role includes this sort of encouragement. If you’re using something like GitHub, you may be able to automate notifications of first-time contributions so that you know that it’s time to send an encouraging message. The same could go for someone who was making a few contributions, but has slowed down or dropped off: a quick message or two might be enough to get them involved in the project again.
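However you wire up such automation to your forge (fetching the existing-contributor list from the GitHub API is not shown, and the function name and message text below are made up for illustration), the underlying decision is simple. A minimal sketch in Rust:

```rust
use std::collections::HashSet;

// Decide whether a newly merged contribution warrants a first-timer
// welcome. `contributors` holds the logins of everyone who has
// contributed before (in practice you'd fetch this from your forge).
fn welcome_message(author: &str, contributors: &HashSet<String>) -> Option<String> {
    if contributors.contains(author) {
        None // a returning contributor: no special message needed
    } else {
        Some(format!(
            "Thanks for your first contribution, @{author}: great to see. \
             Have you looked at our other open issues?"
        ))
    }
}
```

The point of returning an `Option` is that the automation stays silent for regulars, so the welcome keeps its meaning for genuine first-timers.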

Celebrate

I see celebration as a step on again from simple encouragement – though it can certainly reinforce it. Celebration isn’t just about acknowledging something positive, but is also a broader social interaction. When somebody’s achievements are celebrated, other people in the community come together to say well done and congratulate them. This is great for the person whose work is being celebrated, as the acknowledgement from others reinforces the network of people with whom they’re connected, bringing them closer into the community.

Celebrating a project-related event like a release and including new members of the community in that celebration can be even more powerful. When new members are part of a celebration, and are made to feel that their contributions, though small, have made up part of what’s being celebrated, their engagement in the project is likely to increase. Their feelings of inclusion in the community are also likely to go up. Celebrations in person (again, when possible) allow for better network-building and closer ties, but even virtual meet-ups can bring peripherally-involved or new members closer to the core of the project.

Summary

Getting people involved in your open source project is important for its health and its growth, but telling people about it isn’t enough. You need to take conscious steps to increase involvement and ensure that initial contributions to a project are followed up, tying people into the project and making them part of the community.


1 – I’m going to be honest: I wouldn’t know where to start with TikTok. My kids will probably be appalled that I even mentioned it, but hey, why not? The chances are that you, dear reader, are younger and (almost certainly) cooler than I am.

2 – I’m guessing the take up will be a bit lower here.

3 – In Real Life. It seems odd to be re-using this term, which had all but disappeared from what I could tell, but which seems to need to be re-popularised.

Recruiting is hard

It’s going to be easier to outsource this work to somebody who is more of an expert than I’ll ever be, would ever want to be, or could ever be.

We (Profian) are currently looking to recruit some software engineers. Now, I’ve been involved in hiring people before – on the interviewing side, at least – but actually doing the recruiting is a completely new experience for me. And it’s difficult. As the CEO of a start-up, however, it turns out that it’s pretty much down to me to manage the process, from identifying the right sort of person, to writing a job advert (see above), to finding places to place it, to short-listing candidates, interviewing them and then introducing them to the rest of the team. Not to mention agreeing a start date, “compensation package” (how much they get paid) and all that. Then there’s the process of on-boarding them (getting contracts sorted, getting them email addresses, etc.), at least some of which, I’m pleased to say, I have some help with.

The actual recruiting stuff is difficult, though. Recruitment consultants get a bad rap, and there are some dodgy ones, but I’m sure most of them are doing the best they can and are honest people. You might even be happy to introduce some of them to your family. Just a few. But, like so many other things about being a start-up founder, it turns out that there comes a time when you have to say to yourself: “well, I could probably learn to do this – maybe not well, but with some degree of competence – but it’s just not worth my time. It’s going to be easier, and actually cheaper in the long run, to outsource this work to somebody who is, frankly, more of an expert than I’ll ever be, would ever want to be, or could ever be.” And so I’ve found someone to work with.

What’s really interesting when you find somebody to help you with a new task is the time it takes to mesh your two worlds. I’m a software guy, and we’re looking for software people. I need to explain to the recruitment consultant not only what skills we’re looking for, but what phrases, when they appear on a LinkedIn page or CV[1], are actually red flags. In terms of phrases we’re looking for (or are nice-to-haves), I’d already mentioned “open source” to the recruitment consultant, but it was only on looking over some possible candidates that I realised that “FOSS” should be in there, too. A person whose current role is “Tech lead” is much more likely to be a fit than “Technical manager”. What’s the difference between a “cloud architect” and a “systems architect”? Is “Assembly” different to “WebAssembly”? (Yes! – oh, and the latter is sometimes shortened to “Wasm”.)

There are, of course, recruitment consultants who specialise in particular technical fields, but what we’re doing (see the Enarx project) is so specialised and so new that I really don’t think that there are likely to be any specialist recruiters anywhere in the world (yet).

So, I feel lucky that I’ve managed to find someone who seems to get not only where we’re coming from as a company, but also the sorts of people we’re looking for. He wisely suggested that we spend some time going over some possible candidates so he could watch me identifying people who were a definite “no” – as useful for him as a definite “must interview”. Hopefully we’ll start to find some really strong candidates soon. If you think you might be one of them, please get in touch!

(Oh – and yes, I’ve invited him to meet my family.)


1 – that’s “resume” for our US friends.

Enarx first release

Write an application, compile it to WebAssembly, and then run it in one of three Keep types.

I was on holiday last week, and I took the opportunity not to write a blog post, but while I was sunning myself[1] at the seaside, the team did a brilliant thing: we have our first release of Enarx, and a new look for the website, to boot.

To see the new website, head over to https://enarx.dev. There, you’ll find new, updated information about the project, details of how to get involved, and – here’s the big news – instructions for how to download and use Enarx. If you’re a keen Rustacean, you can also go straight to crates.io (https://crates.io/crates/enarx) and start off there. Up until now, in order to run Enarx, you’ve had to do quite a lot of low-level work to get things running, run your own GitHub branches, understand how everything fits together and manage your own development environment. This has now all changed.

This first release, version 0.1.1, is codenamed Alamo, and provides an easy way in to using Enarx. As always, it’s completely open source: you can look at every single line of our code. It doesn’t provide a full feature set, but what it does do is allow you, for the first time, to write an application, compile it to WebAssembly, and then run it in one of three Keep[2] types:

  1. KVM – this is basically a debugging Keep, in that it doesn’t provide any confidentiality or integrity protection, but it does allow you to get running and to try things even if you don’t have access to specialist hardware. A standard Linux machine should do you fine.
  2. SEV – this is a Keep using AMD’s SEV technology, specifically the newer version, SEV-SNP. This requires access to a machine which supports it[3].
  3. SGX – this is a Keep using Intel’s SGX technology. Again, this requires access to a machine which supports it[3].

The really important point here is that you’re running the same binary on each of these architectures. No recompilation for different architectures: just plain old WebAssembly[4].

Current support

There’s a lot more work to do, but what do we support at the moment?

  • running WebAssembly
  • KVM, SEV and SGX Keeps (see above)
  • stdin and stdout from/to the host – this is temporary, as the host is untrusted in the Enarx model, but until we have networking support (see below), we wanted to provide a simple way to manage input and output from a Keep.
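
Given that stdin and stdout are currently the only host I/O, a workload for a Keep can be sketched as a simple echo loop. This is an illustrative example of my own, not code from the Enarx repositories: it’s plain Rust with no Enarx-specific API at all, which is rather the point – the same source should compile unchanged for the wasm32-wasi target.

```rust
use std::io::{self, BufRead, Write};

// Keep the transformation in a pure function so it's easy to test.
fn echo_line(line: &str) -> String {
    format!("echo: {line}")
}

fn main() {
    let stdin = io::stdin();
    let stdout = io::stdout();
    let mut out = stdout.lock();
    // stdin and stdout are the only host I/O currently available to a
    // Keep, so the whole workload is a line-by-line echo.
    for line in stdin.lock().lines() {
        writeln!(out, "{}", echo_line(&line.unwrap())).unwrap();
    }
}
```

Building with `rustup target add wasm32-wasi` followed by `cargo build --target wasm32-wasi` should produce a single .wasm binary which, per the release described above, can then be handed to any of the three Keep types.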

There’s lots more to come – networking and attestation are both high on the list – but now anyone can start playing with Enarx. And, we hope, submitting enhancement and feature requests, not to mention filing bugs (we know there will be some!): to do so, hop over to https://github.com/enarx/enarx/issues.

To find out more, please head over to the website – there’s loads to see – or join us on our chat channel over at https://chat.enarx.dev and get involved.


1 – it’s the British seaside, in October, so “sunning” might be a little inaccurate.

2 – a Keep is what we call a TEE instance set up for you to run an application in.

3 – we have AMD and SGX machines available for people who contribute to the project – get in touch!

4 – WebAssembly is actually rather new, but “plain old” sounds better than “vanilla”. Not my favourite ice cream flavour[5].

5 – my favourite basic ice cream flavour is strawberry. Same for milkshakes.

Announcing Profian

Profian, a security start-up in the Confidential Computing space

I’m very excited to announce that Profian, a security start-up in the Confidential Computing space which I co-founded with Nathaniel McCallum, came out of stealth mode today with the news that we’ve completed our Seed Round – you can find the press release here. This is the culmination of months of hard work and about two years of a vision that we’ve shared and developed since coming up with the idea of Enarx. Profian will be creating products and services around Enarx, and we’re committed to keeping everything we do open source: not just because we believe in open source as an ethical choice, but also because we believe that it’s best for security.

Enarx grew out of a vision that we had to simplify use of Trusted Execution Environments like AMD’s SEV and Intel’s SGX[1], while not compromising on the security that we believe the industry wants and needs. Enarx aims to allow you to deploy applications to any of the supported platforms without needing to recompile for each one, and to simplify both the development and deployment process. It supports WebAssembly as its runtime, allowing a seamless execution environment across multiple hardware types. Engineering for Enarx was initially funded by Red Hat, and towards the end of 2020, we started looking for a way to ensure long-term resourcing: out of this Profian was born. We managed to secure funding from two VC funds – Project A (lead investor) and Illuminate Financial – and four amazing angel investors. Coming out of stealth means that we can now tell more people about what we’re doing.

Profian is a member of two great industry bodies: the Confidential Computing Consortium (a Linux Foundation project to promote open source around Trusted Execution Environments) and the Bytecode Alliance (an industry group to promote and nurture WebAssembly, the runtime which Enarx supports).

The other important thing to announce is that with funding of Profian comes our chance to develop Enarx and its community into something really special.

If it’s your thing, you can find the press release on Business Wire, and more information on the company press page.

A few questions and answers

What’s confidential computing?

I tend to follow the Confidential Computing Consortium’s definition: “Confidential Computing protects data in use by performing computation in a hardware-based Trusted Execution Environment”.

What does Profian mean?

It’s Anglo-Saxon, the language also sometimes called “Old English”, which was spoken in (modern day) England and parts of Scotland from around the mid-5th century CE to 1066, when Norman French had such an impact on the language that it changed (to Middle English).

One online Anglo-Saxon dictionary defines profian thus:

profian - 1. to esteem; regard as 2. to test; try; prove 3. to show evidence of; evince

It’s the root of the English word “to prove”, from which we also get “proof” and “proven”. We felt that this summed up much of what we want to be doing, and is nicely complementary to Enarx.

How is Profian pronounced?

Not the way most pre-Conquest Anglo-Saxons would probably have pronounced it, to be honest. We (well, I) thought about trying to go with a more “authentic” pronunciation, and decided (or was convinced…) that it was too much trouble. We’re going with “PROH-vee-uhn”[2].

What does Enarx mean?

You’ll find more information about this (and how to pronounce Enarx), over at the Enarx FAQ. TL;DR – we made it up.

Who’s part of the company?

Well, there’s me (I’m the CEO), Nathaniel McCallum (the CTO) and a small team of developers. We also have Nick Vidal, who we recruited as Community Manager for Enarx. By the beginning of October, we expect to have six employees in five different countries spread across three separate continents[3].

What’s next?

Well, lots of stuff. There’s so much to do when running a company, about which I knew next to nothing when we started. You would not believe the amount of work involved with registering a company, setting up bank accounts, recruiting people, paying people, paying invoices, etc. – and that’s not even about creating products. We absolutely plan to do this (or the investors are not going to be happy).

No – what’s next for this blog?

Ah, right. Well, I plan to keep it going. There will be more articles about my book on trust, security, open source and probably VCs, funding and the rest. There have been quite a few topics I’ve just not felt safe blogging about until Profian came out of stealth mode. Keep an eye out.


1 – there are more coming, such as Arm CCA (also known as “Realms”) and Intel’s TDX – we plan to support these as they become available.

2 – Anglo-Saxons would probably have gone with something more like “PRO-fee-an”, where the “o” sounds like the “o” in “pop”.

3 – yes, I know we’ve not made it easy on ourselves.

Trust book preview

What it means to trust in the context of computer and network security

Just over two years ago, I agreed a contract with Wiley to write a book about trust in computing. It was a long road to get there, starting over twenty years ago, but what pushed me to commit to writing something was a conference I’d been to earlier in 2019 where there was quite a lot of discussion around “trust”, but no obvious underlying agreement about what was actually meant by the term. “Zero trust”, “trusted systems”, “trusted boot”, “trusted compute base” – all terms referencing trust, but with varying levels of definition, and differing understanding of what was being expected, by what components, and to what end.

I’ve spent a lot of time thinking about trust over my career and also have a major professional interest in security and cloud computing, specifically around Confidential Computing (see Confidential computing – the new HTTPS? and Enarx for everyone (a quest) for some starting points), and although the idea of a book wasn’t a simple one, I decided to go for it. This week, we should have the copy-editing stage complete (technical editing already done), with the final stage being proof-reading. This means that the book is close to done. I can’t share a definitive publication date yet, but things are getting there, and I’ve just discovered that the publisher’s blurb has made it onto Amazon. Here, then, is what you can expect.


Learn to analyze and measure risk by exploring the nature of trust and its application to cybersecurity 

Trust in Computer Systems and the Cloud delivers an insightful and practical new take on what it means to trust in the context of computer and network security, and the impact on the emerging field of Confidential Computing. Author Mike Bursell’s experience, ranging from Chief Security Architect at Red Hat to CEO at a Confidential Computing start-up, grounds the reader in fundamental concepts of trust and related ideas before discussing the more sophisticated applications of these concepts to various areas in computing.

The book demonstrates the importance of understanding and quantifying risk and draws on the social and computer sciences to explain hardware and software security, complex systems, and open source communities. It takes a detailed look at the impact of Confidential Computing on security, trust and risk, and also describes the emerging concept of trust domains, which provide an alternative to standard layered security.

  • Foundational definitions of trust from sociology and other social sciences, how they evolved, and what modern concepts of trust mean to computer professionals 
  • A comprehensive examination of the importance of systems, from open-source communities to HSMs, TPMs, and Confidential Computing with TEEs
  • A thorough exploration of trust domains, including explorations of communities of practice, the centralization of control and policies, and monitoring 

Perfect for security architects at the CISSP level or higher, Trust in Computer Systems and the Cloud is also an indispensable addition to the libraries of system architects, security system engineers, and master’s students in software architecture and security. 

Buying my own t-shirts, OR “what I miss about conferences”

I can buy my own t-shirts, but friendships need nurturing.

A typical work year would involve my attending maybe six to eight conferences in person and speaking at quite a few of them. A few years ago, I stopped raiding random booths at the exhibitions usually associated with these for t-shirts, for the simple reason that I had too many of them. That’s not to say that I wouldn’t accept one here or there if it was particularly nice, or from an open source project which I particularly esteemed, for instance. Or ones which I thought my kids would like – they’re not “cool”, but are at least useful for sleepwear, apparently. I also picked up a lot of pens, and enough notebooks to keep me going for a while.

And then, at the beginning of 2020, Covid hit, I left San Francisco, where I’d been attending meetings co-located with RSA North America (my employer at the time, Red Hat, made the somewhat prescient decision not to allow us to go to the main conference), and I’ve not attended any in-person conferences since.

There are some good things about this, the most obvious being less travel, though, of late, my family has been dropping an increasing number of not-so-subtle hints about how it would be good if I left them alone for a few days so that they could eat food I don’t like (pizza and macaroni cheese, mainly) and watch films that I don’t enjoy (largely, but not exclusively, romcoms on Disney+). The downsides are manifold. Having to buy my own t-shirts and notebooks, obviously, though it turns out that I’d squirrelled away enough pens for the duration. It also turned out that the move to USB-C connectors hadn’t sufficiently hit the conference swag industry by the end of 2019 for me to have enough of those to keep me going, so I’ve had to purchase some of those. That’s the silly, minor stuff, though – what about areas where there’s real impact?

Virtual conferences aren’t honestly too bad, and the technology has definitely improved over the past few months. I’ve attended some very good sessions online (and given my share of sessions and panels, whose quality I won’t presume to judge), but I’ve realised that I’m much more likely to attend borderline-interesting talks not on my main list of “must-sees” (some of which turn out to be very valuable) if I’ve actually travelled to get to a venue. The same goes for attention. I’m much less likely to be checking email, writing emails and responding to chat messages at an in-person conference than at a virtual one. It’s partly about the venue, moving between rooms, and not bothering to get my laptop out all the time – not to mention the politeness factor of giving your attention to the speaker(s) or panellists. When I’m sitting at my desk at home, none of these is relevant, and the pull of the laptop (which is open anyway, to watch the session) is generally irresistible.

Two areas which have really suffered, though, are the booth experience and the “hall-way track”. I’ve had some very fruitful conversations both from dropping by booths (sometimes mainly for a t-shirt – see above) and from staffing a booth and meeting those who visit. I’ve yet to attend any virtual conferences where the booth experience has worked, particularly for small projects and organisations (many of the conferences I attend are open source-related). Online chat isn’t the same, and the serendipitous aspect of wandering past a booth and seeing something you’d like to talk about is pretty much entirely missing if you have to navigate a set of webpages of menu options with actual intent.

The hall-way track is meeting people outside the main sessions of a conference, either people you know already, or as conversations spill out of sessions that you’ve been attending. Knots of people asking questions of presenters or panellists can reveal shared interests, opposing but thought-provoking points of view or just similar approaches to a topic, which can lead to valuable professional relationships and even long-term friendships. I’m not a particularly gregarious person – particularly if I’m tired and jetlagged – but I really enjoy catching up with colleagues and friends over a drink or a meal from time to time. While that’s often difficult given the distributed nature of the companies and industries I’ve been involved with, conferences have presented great opportunities to meet up, have a chinwag and discuss the latest tech trends, mergers and acquisitions and fashion failures of our fellow attendees. This is what I miss most: I can buy my own t-shirts, but friendships need nurturing. I hope that we can safely start attending conferences again so that I can meet up with friends and share a drink. I just hope I’m not the one making the fashion mistakes (this time).

In praise of … the Community Manager

I am not – and could never be – a community manager

This is my first post in a while. Since Hanging up my Red Hat I’ve been busy doing … stuff. Stuff which I hope to be able to speak about soon. But in the meantime, I wanted to start blogging regularly again. Here’s my first post back, a celebration of an important role associated with open source projects: the community manager.

Open source communities don’t just happen. They require work. Sometimes the technical interest in an open source project is enough to attract a group of people to get involved, but after some time, things are going to get too big for those with a particular bent (documentation, coding, testing) to manage the interactions between the various participants, moderate awkward (or downright aggressive) communications, help encourage new members to contribute, raise the visibility of the project into new areas or market sectors and all the other pieces that go into keeping a project healthy.

Enter the Community Manager. The typical community manager is in that awkward position of having lots of responsibility, but no direct authority. Open source projects being what they are, few of them have empowered “officers”, and even when there are governance structures, they tend to operate by consent of those involved – by negotiated, rather than direct, authority. That said, by the point a community manager is appointed, it’s likely that at least one commercial entity is sufficiently deep into the project to fund or part-fund the position. This means that the community manager will hopefully have some support from at least one set of contributors, but will still need to build consensus across the rest of the community. There may also be tricky times when the community manager will need to decide whether their loyalties lie with their employer or with the community. A wise employer should set expectations about how to deal with such situations before they arise!

What does the community manager need to do, then? The answer to this will depend on a number of issues, and there is likely to be a balance between these tasks, but here’s a list of some that come to mind[1].

  • marketing/outreach – this is about raising the visibility of the project, either in areas where it is already known, or in new markets/sectors, but there are lots of sub-tasks, such as branding, swag ordering (and distribution!), and analyst and press relations.
  • event management – setting up meetups, hackathons, booths at larger events or, for really big projects, organising conferences.
  • community growth – spotting areas where the project could use more help (docs, testing, outreach, coding, diverse and inclusive representation, etc.) and finding ways to recruit contributors to help improve the project.
  • community lubrication – this is about finding ways to keep community members talking to each other, celebrate successes, mourn losses and generally keep conversations civil at least and enthusiastically friendly at best.
  • project strategy – there are times in a project when new pastures may beckon (a new piece of functionality might make the project exciting to the healthcare or the academic astronomy community for instance), and the community manager needs to recognise such opportunities, present them to the community, and help the community steer a path.
  • product management – in conjunction with project strategy, situations are likely to occur when a set of features or functionality are presented to the community which require decisions about their priority or the ability of the community to resource them. These may even create tensions between various parts of the community, including involved commercial interests. The community manager needs to help the community reason about how to make choices, and may even be called upon to lead the decision-making process.
  • partner management – as a project grows, partners (open source projects, academic institutions, charities, industry consortia, government departments or commercial organisations) may wish to be associated with the project. Managing expectations, understanding the benefits (or dangers) and relative value can be a complex and time-consuming task, and the community manager is likely to be the first person involved.
  • documentation management – while documentation is only one part of a project, it can often be overlooked by the core code contributors. It is, however, a vital resource when considering many of the tasks associated with the points above. Managing strategy, working with partners, creating press releases: all of these need good documentation, and while it’s unlikely that the community manager will need to write it (well, hopefully not all of it!), making sure that it’s there is likely to be their responsibility.
  • developer enablement – this is providing resources (including, but not restricted to, documentation) to help developers (particularly those new to the project) to get involved in the project. It is often considered a good idea to separate this set of tasks out into a role distinct from that of the community manager, partly because it may require a deeper technical focus than many of the other responsibilities associated with the role. This is probably sensible, but the community manager is likely to want to ensure that developer enablement is well-managed, as without new developers, almost any project will eventually calcify and die.
  • cat herding – programmers (who make up the core of any project) are notoriously difficult to manage. Working with them – particularly encouraging them to work to a specific set of goals – has been likened to herding cats. If you can’t herd cats, you’re likely to struggle as a community manager!

Nobody (well almost nobody) is going to be an expert in all of these sets of tasks, and many projects won’t need all of them at the same time. Two of the attributes of a well-established community manager are an awareness of the gaps in their expertise and a network of contacts who they can call on for advice or services to fill out those gaps.

I am not – and could never be – a community manager. I don’t have the skills (or the patience), and one of the joys of gaining experience and expertise in the world is realising when others do have skills that you lack, and being able to recognise and celebrate what they can bring to your world that you can’t. So thank you, community managers!


1 – as always, I welcome comments and suggestions for how to improve or extend this list.

Hanging up my Red Hat

It’s time to move on.

Friday (2021-06-11) was my last day at Red Hat. I’ve changed my LinkedIn, Facebook and Twitter profiles and updated the information on this blog, too. I’d been at Red Hat for just under 5 years, which is one of the longest stays I’ve had at any company in my career. When I started there, I realised that there was a huge amount about the company which really suited who I was, and my attitude to life, and, in particular, open source. That hasn’t changed, and although the company is very different to the one I joined in 2016 – it’s been acquired by IBM, got a new CEO and more than doubled in size – there’s still lots about it which feels very familiar and positive. Bright people, doing stuff they care about, and sharing what they’re doing with the rest of the company and the world: great values.

I’ve also made lots of friends, and got involved in lots of cool things and institutions. I’d particularly call out Opensource.com and the Confidential Computing Consortium. And, of course, the Enarx project.

But … it’s time to move on. I’ve developed an interest in something I care a whole lot about and, after lots of discussion and soul-searching, I’ve decided to move into it. I hope to be able to talk more about it in a few weeks, and until then, this blog may be a little quiet. In the meantime, have fun, keep safe and do all that good security stuff.