Enarx for everyone (a quest)

In your backpack, the only tool that you have to protect you is Enarx…

You are stuck in a deep, dark wood, with spooky noises and roots that seem to move and trip you up.  Behind every tree malevolent eyes look out at you.  You look in your backpack and realise that the only tool that you have for your protection is Enarx, the trusty open source project given you by the wizened old person at the beginning of your quest.  Hard as you try, you can’t remember what it does, or how to use it.  You realise that now is the time to find out.

What do you do next?

  • If you are a business person, go to 1. Why I need Enarx to reduce business risk.
  • If you are an architect, go to 2. How I can use Enarx to protect sensitive data.
  • If you are a techy, go to 3. Tell me more about Enarx technology (I can take it).

1. Why I need Enarx to reduce business risk

You are the wise head upon which your business relies to consider and manage risk.  One of the problems that you run into is that you have sensitive data that needs to be protected.  Financial data, customer data, legal data, payroll data: it’s all at risk of compromise if it’s not adequately protected.  Who can you trust, however?  You want to be able to use public clouds, but the risks of keeping and processing information on systems which are not under your direct control are many and difficult to quantify.  Even your own systems are vulnerable to outdated patches, insider attacks or compromises: confidentiality is difficult to ensure, but vital to your business.

Enarx is a project which allows you to run applications in the public cloud, on your premises – or wherever else – with significantly reduced and better quantifiable risk.  It uses hardware-based security called “Trusted Execution Environments” from CPU manufacturers, and cuts out many of the layers that can be compromised.  The only components that do need to be trusted are fully open source software, which means that they can be examined and audited by industry experts and your own teams.

Well done: you found out about Enarx.  Continue to 6. Well, what’s next?


2. How I can use Enarx to protect sensitive data

You are the expert architect who has to consider the best technologies and approaches for your organisation.  You worry about where best to deploy sensitive applications and data, given the number of layers in the stack that may have been compromised, and the number of entities – human and machine – that have the opportunity to peek into or mess with the integrity of your applications.  You can’t control the public cloud, nor know exactly what stack it’s running, but equally, the resources required to ensure that you can run sufficient numbers of hardened systems on premises are growing.

Enarx is an open source project which uses TEEs (Trusted Execution Environments) to allow you to run applications within “Keeps” on systems that you don’t trust.  Enarx manages the creation of these Keeps, providing cryptographic confidence that the Keeps are using valid CPU hardware and then encrypting and provisioning your applications and data to the Keep using one-time cryptographic keys.  Your applications run without any of the layers in the stack (e.g. hypervisor, kernel, user-space, middleware) being able to look into the Keep.  The Keep’s run-time can accept applications written in many different languages, including Rust, C, C++, C#, Go, Java, Python and Haskell.  It allows you to run on TEEs from various CPU manufacturers without having to worry about portability: Enarx manages that for you, along with attestation and deployment.

Well done: you found out about Enarx.  Continue to 6. Well, what’s next?


3. Tell me more about Enarx technology (I can take it)

You are a wily developer with technical skills beyond the ken of most of your peers.  A quick look at the github pages tells you more: Enarx is an open source project to allow you to deploy and run applications within TEEs (Trusted Execution Environments).

  • If you’d like to learn about how to use Enarx, proceed to 4. I want to use Enarx.
  • If you’d like to learn about contributing to the Enarx project, proceed to 5. I want to contribute to Enarx.

Well done: you found out about Enarx.  Continue to 6. Well, what’s next?


4. I want to use Enarx

You learn good news: Enarx is designed to be easy to use!

If you want to run applications that process sensitive data, or which implement sensitive algorithms themselves, Enarx is for you.  Enarx is a deployment framework for applications, rather than a development framework.  What this means is that you don’t have to write to particular SDKs, or manage the tricky attestation steps required to use TEEs.  You write your application in your favourite language, and as long as it has WebAssembly as a compile target, it should run within an Enarx “Keep”.  Enarx even manages portability across hardware platforms, so you don’t need to worry about that, either.  It’s all open source, so you can look at it yourself, audit it, or even contribute (if you’re interested in that, you might want to proceed to 5. I want to contribute to Enarx).
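In case a sketch helps, here’s roughly what that workflow might look like for Rust (the compilation target and CLI invocation below are illustrative assumptions on my part, rather than a documented interface):

    // hello.rs – an ordinary Rust program: note the complete absence of
    // any Enarx-specific SDK calls.
    //
    // A hypothetical workflow might be:
    //   rustc --target wasm32-wasi hello.rs    (compile to WebAssembly)
    //   enarx run hello.wasm                   (deploy into a Keep)
    fn main() {
        println!("Hello from inside a Keep!");
    }

The point is what’s missing: the application is just an application, and Enarx looks after the attestation, encryption and provisioning around it.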

Well done: you found out about Enarx.  Continue to 6. Well, what’s next?


5. I want to contribute to Enarx

Enarx is an open source project (under the Apache 2.0 licence), and we welcome contributions, whether you are a developer, tester, documentation guru or other enthusiastic bod with an interest in providing a way for the rest of the world to up the security level of the applications they’re running with minimal effort.  There are various components to Enarx, including attestation, hypervisor work, unikernel and WebAssembly run-time pieces.  We want to provide a simple and flexible framework to allow developers and operations folks to deploy applications to TEEs on any supported platform without recompilation, without having to choose an obscure language, and without writing to a particular SDK.  Please have a look around our github site and get in touch if you’re in a position to contribute.

Well done: you found out about Enarx.  Continue to 6. Well, what’s next?


6. Well, what’s next?

You now know enough to understand how Enarx can help you: well done!  At the time of writing, Enarx is still in development, but we’re working hard to make it available to all.

We’ve known for a long time that we need encryption for data at rest and in transit: Enarx helps you do encryption for data in use.

For more information, you may wish to visit the Enarx github site.

My 7 rules for remote-work sanity

If I need to get out of my office, I’ll take the dog for a walk

I work remotely, and have done, on and off, for a good percentage of the past 10-15 years.  I’m lucky that I’m in a role where this suits my responsibilities, and in a company – Red Hat – that is set up for it.  Not all roles – those with many customer onsite meetings, or those with a major service component – are suited to remote working, of course, but it’s clear that an increasing number of organisations are considering having at least some of their workers do so.

I’ve carefully avoided using either of the phrases “working from home” or “working at home” above.  I’ve seen discussion that the latter gives a better “vibe” for some reason, but it’s not accurate for many remote workers.  In fact, it doesn’t describe my role perfectly, either.  My role is remote, in that I have no company-provided “base” – with chair, desk, meeting rooms, phone, Internet access, etc. – but I don’t spend all of my time at home.  I spend maybe one and a half weeks a month, on average, travelling – to attend or speak at conferences, to have face-to-face (“F2F”) meetings, etc.  During these times, I’m generally expected to be contactable and to keep at least vaguely up-to-date on email – though the exact nature of the activities in which I’m engaged, and the urgency of the contacts and email, may increase or reduce my engagement.

Open source

One of the reasons that I can work remotely is that I work for a company that works with open source software.  I’m currently involved in a very exciting project called Enarx (which I first announced on this blog).  We have contributors in Europe and the US – and interest from further abroad.  Our stand-ups are all virtual, and we default to turning on video.  At least two of our regulars will participate from a treadmill, while I will typically stand at my desk.  We use github for all of our code (it’s all open source, of course), and there’s basically no reason for us to meet in person very often.  We try to celebrate together – agreeing to get cake, wherever we are, to mark special occasions, for instance – and have laptop stickers to brand ourselves and help team unity.  We have a shared chat and an IRC channel, and spend a lot of time communicating via different channels.  We’re still quite a small team, but it works for now.  If you’re looking for more tips about how to manage, coordinate and work in remote teams, particularly around open source projects, you’ll find lots of information at the brilliant Opensource.com.

The environment

When I’m not travelling around the place, I’m based at home.  There, I have a commute – depending on weather conditions – of around 30-45 seconds, which is generally pretty bearable.  My office is separate from the rest of the house (set in the garden), and outfitted with an office chair, desk, laptop dock, monitor, webcam, phone, keyboard and printer: these are the obvious work-related items in the room.

Equally important, however, are the other accoutrements that make for a good working environment.  These will vary from person to person, but I also have:

  • a Sonos, attached to an amplifier and good speakers
  • a sofa, often occupied by my dog, and sometimes one of the cats
  • a bookshelf, where the books which aren’t littering the floor reside
  • tea-making facilities (I’m British – this is important)
  • a fridge, filled with milk (for the tea), beer and wine (don’t worry: I don’t drink these during work hours, and it’s more that the fridge is good for “overflow” from our main kitchen one)
  • wide-opening windows and blinds for the summer (we have no air-conditioning: I’m British, remember?)
  • underfloor heating and a wood-burning stove for the winter (the former to keep the room above freezing until I get the latter warmed up)
  • a “NUC” computer and monitor for activities that aren’t specifically work-related
  • a few spiders.

What you have will depend on your work style, but these “non-work-related” items are important (bar the spiders, possibly) to my comfort and work practice.  For instance, I often like to listen to music to help me concentrate; I often sit on the sofa with the dog/cats to read long documents; and without the fridge and tea-making facilities, I might as well be American[1].

My rules

How does it work, then?  Well, first of all, most of us like human contact from time to time.  Some remote workers will rent space in a shared work environment, and work there most of the time: they prefer an office environment, or don’t have a dedicated space for working at home.  Others will mainly work in coffee shops, or on their boat[2], or may spend half of the year in the office, and the other half working from a second home.  Whatever you do, finding something that works for you is important.  Here’s what I tend to do, and why:

  1. I try to have fairly rigid work hours – officially (and as advertised on our intranet for the information of colleagues), I work 10am-6pm UK time.  This gives me a good overlap with the US (where many of my colleagues are based), and time in the morning to go for a run or a cycle and/or to walk the dog (see below).  I don’t always manage these times, but when I flex in one direction, I attempt to pull some time back the other way, as otherwise I know that I’ll just work ridiculous hours.
  2. I ensure that I get up and have a cup of tea – in an office environment, I would typically be interrupted from time to time by conversations, invitations to get tea, physical meetings in meeting rooms, lunch trips, etc.  This doesn’t happen at home, so it’s important to keep moving, or you’ll frequently be stuck at your desk for 3-4 hours at a time.  This isn’t good for your health, nor, often, for your productivity (and I enjoy drinking tea).
  3. I have an app which tells me when I’ve been inactive – this is new for me, but I like it.  If I’ve basically not moved for an hour, my watch (could be phone or laptop) tells me to do some exercise.  It even suggests something, but I’ll often ignore that, and get up for some tea, for instance[3].
  4. I use my standing desk’s up/down capability – I try to vary my position through the day from standing to sitting and back again.  It’s good for posture, and keeps me more alert.
  5. I walk the dog – if I just need to get out of my office and do some deep thinking (or just escape a particularly painful email thread!), I’ll take the dog for a walk.  Even if I’m not thinking about work for all of the time, I know that it’ll make me more productive, and if it’s a longish walk, I’ll make sure that I compensate with extra time spent working (which is always easy).
  6. I have family rules – the family knows that when I’m in my office, I’m at work.  They can message me on my phone (which I may ignore), or may come to the window to see if I’m available, but if I’m not, I’m not.  Emergencies (lack of milk for tea, for example) can be negotiated on a case-by-case basis.
  7. I go for tea (and usually cake) at a cafe – sometimes, I need to get into a different environment, and have a chat with actual people.  For me, popping into the car for 10 minutes and going to a cafe is the way to do this.  I’ve found one which makes good cakes (and tea).

These rules don’t describe my complete practice, but they are an important summary of what I try to do, and what keeps me (relatively) sane.  Your rules will be different, but I think it’s really important to have rules, and to make it clear to yourself, your colleagues, your friends and your family, what they are.  Remote working is not always easy, and requires discipline – but that discipline is, more often than not, in giving yourself some slack, rather than making yourself sit down for eight hours a day.


1 – I realise that many people, including many of my readers, are American.  That’s fine: you be you.  I actively like tea, however (and know how to make it properly, which seems to be an issue when I visit).

2 – I know a couple of these: lucky, lucky people!

3 – can you spot a pattern?

Open source and – well, bad people

For most people writing open source, it – open source software – seems like an unalloyed good.  You write code, and other nice people, like you, get to add to it, test it, document it and use it.  Look what good it can do to the world!  Even the Organisation-Formerly-Known-As-The-Evil-Empire has embraced open source software, and is becoming a happy and loving place, supporting the community and both espousing and proselytising the Good Thing[tm] that is open source.  Many open source licences are written in such a way that it’s well-nigh impossible for an organisation to make changes to open source and profit from it without releasing the code they’ve changed.  The very soul of open source – the licence – is doing our work for us: improving the world.

And on the whole, I’d agree.  But when I see uncritical assumptions being peddled – about anything, frankly – I start to wonder.  Because I know, and you know, when you think about it, that not everybody who uses open source is a good person.  Crackers (that’s “bad hackers”) use open source.  Drug dealers use open source.  People traffickers use open source.  Terrorists use open source.  Maybe some of them contribute patches and testing and documentation – I suppose it’s even quite likely that a few actually do – but they are, by pretty much anyone’s yardstick, not good people.  These are the sorts of people you probably shrug your shoulders about and say, “well, there’s only a few of them compared to all the others, and I can live with that”.  You’re happy to continue contributing to open source because many more people are helped by it than harmed by it.  The alternative – not contributing to open source – would fail to help as many people, and so the first option is the lesser of two evils and should be embraced. This is, basically, a utilitarian argument – the one popularised by John Stuart Mill: “classical utilitarianism”[1].  This is sometimes described as:

“Actions are right in proportion as they tend to promote overall human happiness.”

I certainly hope that open source does tend to promote overall human happiness.  The problem is that criminals are not the only people who will be using open source – your open source – code.  There will be businesses whose practices are shady, governments that oppress their detractors, police forces that spy on the citizens they watch.  This is your code, being used to do bad things.

But what even are bad things?  This is one of the standard complaints about utilitarian philosophies – it’s difficult to define objectively what is good, and, by extension, what is bad.  We (by which I mean law-abiding citizens in most countries) may be able to agree that people trafficking is bad, but there are many areas that we could call grey[2]:

  • tobacco manufacturers;
  • petrochemical and fracking companies;
  • plastics manufacturers;
  • organisations who don’t support LGBTQ+ people;
  • gun manufacturers.

There’s quite a range here, and that’s intentional.  Also, the last example is carefully chosen.  One of the early movers in what would become the open source movement is Eric Raymond (known to one and all by his initials “ESR”), who is a long-standing supporter of gun rights[3].  He has, as he has put it, “taken some public flak in the hacker community for vocally supporting firearms rights”.  For ESR, “it’s all about freedom”.  I disagree, although I don’t feel the need to attack him for it.  But it’s clear that his view about what constitutes good is different to mine.  I take a very liberal view of LGBTQ+ rights, but I know people in the open source community who wouldn’t take the same view.  Although we tend to characterise the open source community as liberal, this has never been a good generalisation.  According to the Jargon File (later published as “The Hacker’s Dictionary”), the politics of the average hacker are:

Vaguely liberal-moderate, except for the strong libertarian contingent which rejects conventional left-right politics entirely. The only safe generalization is that hackers tend to be rather anti-authoritarian; thus, both conventional conservatism and ‘hard’ leftism are rare. Hackers are far more likely than most non-hackers to either (a) be aggressively apolitical or (b) entertain peculiar or idiosyncratic political ideas and actually try to live by them day-to-day.

This may be somewhat out of date, but it still feels as though this description would resonate with many of those who self-consciously consider themselves part of the open source community.  Still, it’s clear that we, as a community, are never going to be able to agree on what counts as a “good use” of open source code by a “good” organisation.  Even if we could, the chances of anybody being able to create a set of licences that would stop the people that might be considered bad are fairly slim.

I’m still not too worried, though.  I think that we can extend the utilitarian argument to say that the majority of use of open source software would be considered good by most open source contributors, or at least that the balance of “good” over “bad” would be generally considered to lean towards the good side.  So – please keep contributing: we’re doing good things (whatever they might be).


1 – I am really not an ethicist or a philosopher, so apologies if I’m being a little rough round the edges here.

2 – you should be used to this by now: UK spelling throughout.

3 – “Yes, I cheerfully refer to myself as a gun nut.” – Eric’s Gun Nut Page

Immutability: my favourite superpower

As a security guy, I approve of defence in depth.

I’m a recent but dedicated convert to Silverblue, which I run on my main home laptop and which I’ll be putting onto my work laptop when I’m due a hardware upgrade in a few months’ time.  I wrote an article about Silverblue over at Enable Sysadmin, and over the weekend, I moved the laptop that one of my kids has over to it as well.  You can learn more about Silverblue over at the main Silverblue site, but in terms of usability, look and feel, it’s basically a version of Fedora.  There’s one key difference, however, which is that the operating system is mounted read-only, meaning that it’s immutable.

What does “immutable” mean?  It means that it can’t be changed.  To be more accurate, in a software context, it generally means that something can’t be changed during run-time.

Important digression – constant immutability

I realised as I wrote that final sentence that it might be a little misleading.  Many programming languages have the concept of “constants”.  A constant is a variable (or set, or data structure) which is constant – that is, not variable.  You can assign a value to a constant, and generally expect it not to change.  But – and this depends on the language you are using – it may be that the constant is not immutable.  This seems to go against common sense[1], but that’s just the way that some languages are designed.  The bottom line is this: if you have a variable that you intend to be immutable, check the syntax of the programming language you’re using and take any specific steps needed to maintain that immutability.
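To make this concrete, here’s a minimal sketch in Rust (picked purely for illustration) of a binding which looks constant but isn’t immutable, courtesy of what Rust calls “interior mutability”:

    use std::cell::Cell;

    fn main() {
        // `c` is not declared `mut`, so it looks as though it can't change...
        let c = Cell::new(5);
        // ...but interior mutability means the value can change anyway.
        c.set(6);
        println!("{}", c.get()); // prints 6
    }

Other languages have equivalent traps (a final reference to a mutable object in Java, for instance), which is exactly why it’s worth checking the semantics of the language you’re using.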

Operating System immutability

In Silverblue’s case, it’s the operating system that’s immutable.  You install applications in containers (of which more later), using Flatpak, rather than onto the root filesystem.  This means not only that the installation of applications is isolated from the core filesystem, but also that the ability for malicious applications to compromise your system is significantly reduced.  It’s not impossible[2], but the risk is significantly lower.

How do you update your system, then?  Well, what you do is create a new boot image which includes any updated packages that are needed, and when you’re ready, you boot into that.  Silverblue provides simple tools to do this: it’s arguably less hassle than the standard way of upgrading your system.  This approach also makes it very easy to maintain different versions of an operating system, or installations with different sets of packages.  If you need to test an application in a particular environment, you boot into the image that reflects that environment, and do the testing.  Another environment?  Another image.

We’re more interested in the security properties that this offers us, however.  Not only is it very difficult to compromise the core operating system as a standard user[3], but you are always operating in a known environment, and knowability is very much a desirable property for security, as you can test, monitor and perform forensic analysis from a known configuration.  From a security point of view (let alone what other benefits it delivers), immutability is definitely an asset in an operating system.

Container immutability

This isn’t the place to describe containers (also known as “Linux containers” or, less frequently or accurately these days, “Docker containers”) in detail, but they are basically collections of software that you create as images and then run as workloads on a host server (groups of containers are sometimes known as “pods”).  One of the great things about containers is that they’re generally very fast to spin up (provision and execute) from an image, and another is that the format of that image – the packaging format – is well-defined, so it’s easy to create the images themselves.

From our point of view, however, what’s great about containers is that you can choose to use them immutably.  In fact, that’s the way they’re generally used: using mutable containers is generally considered an anti-pattern.  The standard (and “correct”) way to use containers is to bundle each application component and its required dependencies into a well-defined (and hopefully small) container, and deploy that as required.  The way that containers are designed doesn’t prevent you from changing the software within a running container, but the way that they run discourages you from doing so, which is good, as you definitely shouldn’t.  Remember: immutable software gives better knowability, and improves your resistance to run-time compromise.  Instead, given how lightweight containers are, you should design your application in such a way that if you need to make a change, you can just kill the container instance and replace it with an instance from an updated image.

This brings us to two of the reasons that you should never run containers with root privilege:

  • there’s a temptation for legitimate users to use that privilege to update software in a running container, reducing knowability, and possibly introducing unexpected behaviour;
  • there are many more opportunities for compromise if a malicious actor – human or automated – can change the underlying software in the container.

Double immutability with Silverblue

I mentioned above that Silverblue runs applications in containers.  This means that you have two levels of security provided by default when you run applications on a Silverblue system:

  1. the operating system immutability;
  2. the container immutability.

As a security guy, I approve of defence in depth, and this is a classic example of that property.  I also like the fact that I can control what I’m running – and what versions – with a great deal more ease than if I were on a standard operating system.


1 – though, to be fair, the phrases “programming language” and “common sense” are rarely used positively in the same sentence in my experience.

2 – we generally try to avoid the word “impossible” when describing attacks or vulnerabilities in security.

3 – as with many security issues, once you have sudo or root access, the situation is significantly degraded.

Building Evolutionary Architectures – for security and for open source

Consider the fitness functions, state them upfront, have regular review.

Ford, N., Parsons, R. & Kua, P. (2017) Building Evolutionary Architectures: Support Constant Change. Sebastopol, CA: O’Reilly Media.

https://www.oreilly.com/library/view/building-evolutionary-architectures/9781491986356/

This is my first book review on this blog, I think, and although I don’t plan to make a habit of it, I really like this book, and the approach it describes, so I wanted to write about it.  Initially, this article was simply a review of the book, but as I got into it, I realised that I wanted to talk about how the approach it describes is applicable to a couple of different groups (security folks and open source projects), and so I’ve gone with it.

How, then, did I come across the book?  I was attending a conference a few months ago (DeveloperWeek San Diego), and decided to go to one of the sessions because it looked interesting.  The speaker was Dr Rebecca Parsons, and I liked what she was talking about so much that I ordered this book, whose subject was the topic of her talk, to arrive at home by the time I would return a couple of days later.

Building Evolutionary Architectures is not a book about security, but it deals with security as one application of its approach, and very convincingly.  The central issue that the authors – all employees of Thoughtworks – identify is, simplified, that although we’re good at creating features for applications, we’re less good at creating, and then maintaining, broader properties of systems.  This problem is compounded, they suggest, by the fast and ever-changing nature of modern development practices, where “enterprise architects can no longer rely on static planning”.

The alternative that they propose is to consider “fitness functions”, “objectives you want your architecture to exhibit or move towards”.  Crucially, these are properties of the architecture – or system – rather than features or specific functionality.  Tests should be created to monitor the specific functions, but they won’t be your standard unit tests, nor will they necessarily be “point in time” tests.  Instead, they will measure a variety of issues, possibly over a period of time, to let you know whether your system is meeting the particular fitness functions you are measuring.  There’s a lot of discussion of how to measure these fitness functions, but I would have liked even more: from my point of view, it was one of the most valuable topics covered.
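To give a flavour of what such a test might look like (this is a minimal sketch of my own, with an invented operation and threshold, not an example taken from the book), here’s a fitness-function-style test in Rust which measures an architectural property, mean latency, over repeated runs, rather than asserting a single point-in-time result:

    use std::time::Instant;

    // Stand-in for a real system operation: purely illustrative.
    fn operation_under_test() {
        let _ = (0..10_000u64).sum::<u64>();
    }

    #[test]
    fn latency_fitness_function() {
        let runs: u32 = 100;
        let start = Instant::now();
        for _ in 0..runs {
            operation_under_test();
        }
        let mean = start.elapsed() / runs;
        // The fitness function itself: mean latency must stay below a
        // (hypothetical) architectural target of 50ms.
        assert!(mean.as_millis() < 50, "mean latency {:?} exceeds target", mean);
    }

Unlike a unit test, the assertion here is about a property of the system as a whole, and the threshold is something you would expect to revisit as the architecture, and its priorities, evolve.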

Frankly, the above might be enough to recommend the book, but there’s more.  They advocate strongly for creating incremental change to meet your requirements (gradual, rather than major changes) and “evolvable architectures”, encouraging you to realise that:

  1. you may not meet all your fitness functions at the beginning;
  2. applications which may have met the fitness functions at one point may cease to meet them later on, for various reasons;
  3. your architecture is likely to change over time;
  4. your requirements, and therefore the priority that you give to each fitness function, will change over time;
  5. even if your fitness functions remain the same, the ways in which you need to monitor them may change.

All of these are, in my view, extremely useful insights for anybody designing and building a system: combining them with architectural thinking is even more valuable.

As is standard for modern O’Reilly books, there are examples throughout, including a worked fake consultancy journey of a particular company with specific needs, leading you through some of the practices in the book.  At times, this felt a little contrived, but the mechanism is generally helpful.  There were times when the book seemed to stray from its core approach – which is architectural, as per the title – into explanations through pseudo code, but these support one of the useful aspects of the book, which is giving examples of what architectures are more or less suited to the principles expounded in the more theoretical parts.  Some readers may feel more at home with the theoretical, others with the more example-based approach (I lean towards the former), but all in all, it seems like an appropriate balance.  Relating these to the impact of “architectural coupling” was particularly helpful, in my view.

Some of the advice has a useful grounding in Conway’s Law (“Organizations [sic] which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”), which led me to wonder how we could model open source projects – and their architectures – from this perspective.  There are also (as is also standard these days) patterns and anti-patterns: I would generally consider these a useful part of any book on design and architecture.

Why is this a book for security folks?

The most important thing about this book, from my point of view as a security systems architect, is that it isn’t about security.  Security is mentioned, but is not considered core enough to the book to merit a mention in the appendix.  The point, though, is that the security of a system – an embodiment of an architecture – is a perfect example of a fitness function.  Taking this as a starting point for a project will help you do two things:

  • avoid focussing on features and functionality, and look at the bigger picture;
  • consider what you really need from security in the system, and how that translates into issues such as the security posture to be adopted, and the measurements you will take to validate it through the lifecycle.

Possibly even more important than those two points is that it will force you to consider the priority of security in relation to other fitness functions (resilience, maybe, or ease of use?) and how the relative priorities will – and should – change over time.  A realisation that we don’t live in a bubble, and that our priorities are not always the same as those of other stakeholders in a project, is always useful.

Why is this a book for open source folks?

Very often – and for quite understandable and forgivable reasons – the architectures of open source projects grow organically at first, needing major overhauls and refactoring at various stages of their lifecycles.  This is not to say that this doesn’t happen in proprietary software projects as well, of course, but the sometimes frequent changes in open source projects’ emphasis and requirements, the ebb and flow of contributors and contributions and the sometimes, um, reduced levels of documentation aimed at end users can mean that features are significantly prioritised over what we could think of as the core vision of the project.  One way to remedy this would be to consider the appropriate fitness functions of the project, to state them upfront, and to have a regular cadence of review by the community, to ensure that they are:

  • still relevant;
  • correctly prioritised at this stage in the project;
  • actually being met.

If any of the above come into question, it’s a good time to consider a wider review by the community, and maybe a refactoring or partial redesign of the project.

Open source projects have – quite rightly – various different models of use and intended users.  One of the happenstances that can negatively affect a project is when it is identified as a possible fit for a use case for which it was not originally intended.  Academic software which is designed for accuracy over performance might not be a good fit for corporate research, for instance, in the same way that a project aimed at home users which prioritises minimal computing resources might not be appropriate for a high-availability enterprise roll-out.  One of the ways of making this clear is by being very clear up-front about the fitness functions that you expect your project to meet – and, vice versa, about the fitness functions you are looking to fulfil when you are looking to select a project.  It is easy to focus on features and functionality, and to overlook the more non-functional aspects of a system, and fitness functions allow us to make some informed choices about how to balance these decisions.

Trust & choosing open source

Your impact on open source can be equal to that of others.

A long time ago, in a standards body far, far away, I was involved in drafting a document about trust and security. That document rejoices in the name ETSI GS NFV-SEC 003: Network Functions Virtualisation (NFV); NFV Security; Security and Trust Guidance[1], and section 5.1.6.3[2] talks about “Transitive trust”.  Convoluted and lengthy as the document is, I’m very proud of it[3], and I think it tackles a number of very important issues including (unsurprisingly, given the title) a number of issues around trust.  It defines transitive trust thus:

“Transitive trust is the decision by an entity A to trust entity B because entity C trusts it.”

It goes on to disambiguate transitive trust from delegated trust, where C knows about the trust relationship.
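To make the distinction concrete, here’s a toy model in Rust (the entities and structure are entirely my own invention for illustration) of the transitive case, where A trusts B only because some C, whom A trusts directly, trusts B:

    use std::collections::{HashMap, HashSet};

    // Does `a` trust `b` transitively, i.e. via some intermediary `c`
    // whom `a` trusts directly and who directly trusts `b`?
    fn trusts_transitively(
        direct: &HashMap<&str, HashSet<&str>>,
        a: &str,
        b: &str,
    ) -> bool {
        direct.get(a).map_or(false, |trusted| {
            trusted
                .iter()
                .any(|c| direct.get(c).map_or(false, |t| t.contains(b)))
        })
    }

    fn main() {
        let mut direct: HashMap<&str, HashSet<&str>> = HashMap::new();
        direct.insert("A", HashSet::from(["C"]));
        direct.insert("C", HashSet::from(["B"]));
        // A has no direct relationship with B, but trusts it via C.
        assert!(trusts_transitively(&direct, "A", "B"));
    }

In the delegated case, by contrast, C would know about the trust relationship between A and B.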

At no point in the document does it mention open source software.  To be fair, we were trying to be even-handed and show no favour towards any type of software or vendors – many of the companies represented on the standards body were focused on proprietary software – and I wasn’t even working for Red Hat at the time.

My move to Red Hat, and, as it happens, generally away from the world of standards, has led me to think more about open source.  It’s also led me to think more about trust, and how people decide whether or not to use open source software in their businesses, organisations and enterprises.  I’ve written, in particular, about how, although open source software is not ipso facto more secure than proprietary software, the chances of it being more secure, or made more secure, are higher (in Disbelieving the many eyes hypothesis).

What has this to do with trust, specifically transitive trust?  Well, I’ve been doing more thinking about how open source and trust are linked together, and distributed trust is a big part of it.  Distributed trust and blockchain are often talked about in the same breath, and I’m glad, because I think that all too often we fall into the trap of ignoring the fact that there are definitely trust relationships associated with blockchain – they are just often implicit, rather than well-defined.

What I’m interested in here, though, is the distributed, transitive trust as a way of choosing whether or not to use open source software.  This is, I think, true not only when talking about non-functional properties such as the security of open source but also when talking about the software itself.  What are we doing when we say “I trust open source software”?  We are making a determination that enough of the people who have written and tested it have similar requirements to mine, and that their expertise, combined, is such that the risk to my using the software is acceptable.

There’s actually a lot going on here, some of which is very interesting:

  • we are trusting architects and designers to design software to meet our use cases and requirements;
  • we are trusting developers to implement code well, to those designs;
  • we are trusting developers to review each others’ code;
  • we are trusting documentation folks to document the software correctly;
  • we are trusting testers to write, run and check tests which are appropriate to our use cases;
  • we are trusting those who deploy the code to run it in ways which are similar to our use cases;
  • we are trusting those who deploy the code to report bugs;
  • we are trusting those who receive bug reports to fix them as expected.

There’s more, of course, but that’s definitely enough to get us going.  Of course, when we choose to use proprietary software, we’re trusting people to do all of those things, but in this case, the trust relationship is much clearer, and much tighter: if I don’t get what I expect, I can choose another vendor, or work with the original vendor to get what I want.

In the case of open source software, it’s all more nebulous: I may be able to identify at least some of the entities involved (designers, software engineers and testers, for example), but the amount of power that I as a consumer of the software have over their work is likely to be low.  There’s a weird almost-paradox here, though: you can argue that for proprietary software vendors, my power over the direction of the software is higher (I’m paying them or not paying them), but my direct visibility into what actually goes on, and my ability to ensure that I get what I want, are reduced when compared to the open source case.

That’s because, for open source, I can be any of the entities outlined above.  I – or those in my organisation – can be architect, designer, document writer, tester, and certainly deployer and bug reporter.  When you realise that your impact on open source can be equal to that of others, the distributed trust becomes less transitive.  You understand that you have a say in the creation, maintenance, requirements and quality of the software which you are running equal to that of all the other entities, and you then become part of a network of trust relationships which are distributed, but at less of a remove than those you’ll experience when buying proprietary software.

Why, then, would anybody buy or license open source software from a vendor?  Because that way, you can address other risks – around support, patching, training, etc. – whilst still enjoying the benefits of the distributed trust network that I’ve outlined above.  There’s a place for those who consume directly from the source, but it doesn’t suit the risk appetite of all software consumers – including those who are involved in the open source community themselves.

Trust is a complex issue, and the ways in which we trust other things and other people are complex, too (you’ll find a bit of an introduction in Of different types of trust), but I think it’s desperately important that we examine and try to understand the sorts of decisions we make, and why we make them, in order to allow us to make informed choices around risk.


1 – if you’ve not been involved in standards creation, this may fill you with horror, but if you have been so involved, this sort of title probably feels normal.  You may need help.

2 – see 1.

3 – I was one of two “rapporteurs”, or editors of the document, and wrote a significant part of it, particularly the sections around trust.

Of different types of trust

What is doing the trusting, and what does the word actually even mean?

As you may have noticed if you regularly read this blog, it’s not uncommon for me to talk about trust.  In fact, one of the earliest articles that I posted – over two years ago, now – was entitled “What is trust?”.  I started thinking about this topic seriously nearly twenty years ago, specifically when thinking about peer-to-peer systems, and how they might establish trust relationships, and my interest has continued since, with a particular fillip during my time on the Security Working Group for ETSI NFV[1], where we had some very particular issues that we wanted to explore and I had the opportunity to follow some very interesting lines of thought.  More recently, I introduced Enarx, whose main starting point is that we want to reduce the number of trust relationships that you need to manage when you deploy software.

It was around the time of the announcement that I realised quite how much of my working life I’ve spent thinking and talking about trust, and also:

  • how rarely most other people seem to have done the same;
  • how little literature there is on the subject; and
  • how interested people often are to talk about it when it comes up in a professional setting.

I’m going to clarify the middle bullet point in a minute, but let me get to my point first, which is this: I want to do a lot more talking about trust via this blog, with the possible intention of writing a book[2] on the subject.

Here’s the problem, though.  When you use the word trust, people think that they know what you mean.  It turns out that they almost never do.  Let’s try to tease out some of the reasons for that by starting with four fairly innocuous, simple-looking statements:

  1. I trust my brother and my sister.
  2. I trust my bank.
  3. My bank trusts its IT systems.
  4. My bank’s IT systems trust each other.

When you make four statements like this, it quickly becomes clear that something different is going on in each case.  I stand by my definition of trust and the three corollaries, as expressed in “What is trust?”.  I’ll restate them here in case you can’t be bothered to follow the link:

  • “Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation.”
  • My first corollary: “Trust is always contextual.”
  • My second corollary: “One of the contexts for trust is always time.”
  • My third corollary: “Trust relationships are not symmetrical.”

These all hold true for each of the statements above – although they may not be self-evident in the rather bald way that I’ve put them.  What’s more germane to the point I want to make today, however, and hopefully obvious to you, dear reader[4], is that the word “trust” signifies something very different in each of the four statements.

  • Case 1 – my trusting my brother and sister.  This is about trust between individual humans – specifically my trust relationship to my brother, and my trust relationship to my sister.
  • Case 2 – my trusting my bank.  This is about trust between an individual and an organisation: specifically a legal entity with particular services and structure.
  • Case 3 – the bank trusting its IT systems.  This is about an organisation trusting IT systems, and it suddenly feels like we’ve moved into a very different place from the initial two cases.  I would argue that there’s a huge difference between the first and second case as well, actually, but we are often lulled into a false sense of equivalence because when we interact with a bank, it’s staffed by people, and also has many of the legal protections afforded to an individual[5].  There are still humans in this case, though, in that one assumes that it is the intention of certain humans who represent the bank to have a trust relationship with certain IT systems.
  • Case 4 – the IT systems trusting each other.  We’re really not in Kansas anymore with this statement[6].  There are no humans involved in this set of trust relationships, unless you’re attributing agency to specific systems, and if so, which? What, then, is doing the trusting, and what does the word actually even mean?

It’s clear, then, that we can’t just apply the same word, “trust” to all of these different contexts and assume that it means the same thing in each case.  We need to differentiate between them.

I stated, above, that I intended to clarify my statement about the lack of literature around trust.  Actually, there’s lots and lots of literature around trust, but it deals almost exclusively with cases 1 and 2 above.  This is all well and good, but we spend so much time talking about trust with regards to systems (IT or computer systems) that we deserve, as a community, some clarity about what we mean, what assumptions we’re making, and what the ramifications of those assumptions are.

That, then, is my mission.  It’s certainly not going to be the only thing that I write about on this blog, but when I do write about trust, I’m going to try to set out my stall and add some better definition and clarification to what I – and we – are talking about.


0 – apropos of nothing in particular, I often use pixabay for my images.  This is one of the suggestions if you search on “trust”, but what exactly is going on here?  The child is trusting the squirrel thing to do what?  Not eat its nose?  Not stick its claws up its left nostril?  I mean, really?

1 – ETSI is a telco standards body, NFV is “Network Function Virtualisation”.

2 – which probably won’t just consist of a whole bunch of these articles in a random order, with the footnotes taken out[3].

3 – because, if nothing else, you know that I’m bound to keep the footnotes in.

4 – I always hope that there’s actually more than one of you, but maybe it’s just me, the solipsist, writing for a world conjured by my own brain.

5 – or it may do, depending on your jurisdiction.

6 – I think I’ve only been to Kansas once, actually.