Zero trust and Confidential Computing

Confidential Computing can provide two properties which are excellent starting points for zero/explicit trust.

I’ve been fairly scathing about “zero trust” before – see, for instance, my articles Thinking beyond “zero-trust” and “Zero-trust”: my love/hate relationship – and my view of how the industry talks about it hasn’t changed much. I still believe, in particular, that:

  1. the original idea, as conceived, has a great deal of merit;
  2. few people really understand what it means;
  3. it’s become an industry bandwagon that is sorely abused by some security companies;
  4. it would be better called “explicit trust”.

The reason for this last is that it’s impossible to have zero trust: any entity or component has to have some level of trust in the other entities/components with which it interacts. More specifically, it has to maintain trust relationships – and what they look like, how they’re established, evaluated, maintained and destroyed is the core point of discussion of my book Trust in Computer Systems and the Cloud. If you’re interested in a more complete and reasoned criticism of zero trust, you’ll find that in Chapter 5: The Importance of Systems.

But, as noted above, I’m actually in favour of the original idea of zero trust, and that’s why I wanted to write this article about how zero trust and Confidential Computing, when combined, can actually provide some real value and improvements over standard distributed architectures (particularly in the Cloud).

An important starting point, however, is to note that I’ll be using this definition of Confidential Computing:

Confidential Computing is the protection of data in use by performing computation in a hardware-based, attested Trusted Execution Environment.

Confidential Computing Consortium, https://confidentialcomputing.io/about/

Confidential Computing, as thus described, can provide two properties which are excellent starting points for components wishing to exercise zero/explicit trust, which we’ll examine individually:

  1. isolation from the host machine/system, particularly in terms of confidentiality of data;
  2. cryptographically verifiable identity.

Isolation

One of the main trust relationships that any executing component must establish and maintain is with the system that is providing the execution capabilities – the machine on which it is running (or virtual machine – but that presents similar issues). When you say that your component has “zero trust”, but has to trust the host machine on which it is running to maintain the confidentiality of the code and/or data associated with the component, then you have to accept the fact that you do actually have an enormous trust relationship: with the machine and whomever administers/controls it (and that includes anyone who may have compromised it). This can hardly form the basis for a “zero trust” architecture – but what can be done about it?

Where Confidential Computing helps is by allowing isolation from the machine which is doing the execution. The component still needs to trust the CPU/firmware that’s providing the execution context – something needs to run the code, after all! – but we can shrink the number of trust relationships required significantly, and provide cryptographic assurances on which to base this relationship (see Attestation, below).

Knowing that a component is isolated from another component gives it assurances about how it will operate, and also allows other components to build a trust relationship with it, confident that it is acting with its own agency rather than under that of a malicious actor.

Attestation

Attestation is the mechanism by which an entity can receive assurances that a Confidential Computing component has been correctly set up and can provide the expected properties of data confidentiality and integrity, and code integrity (and, in some cases, code confidentiality). These assurances are bound cryptographically to a particular Confidential Computing component (and the Trusted Execution Environment in which it executes), which allows for another property to be provided as well: a unique identity. If the attesting service binds this identity cryptographically to the Confidential Computing component by means of, for instance, a standard X.509 certificate, then this can provide one of the bases for trust relationships both to and from the component.
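To make that concrete, here’s a minimal sketch (in Rust) of the kind of decision a relying party might make once an attestation-backed identity has been presented. All of the type and field names here are hypothetical – they don’t come from any particular attestation service or TEE vendor API – and a real implementation would first verify the certificate chain back to the attestation service before trusting the claims at all.

```rust
/// Hypothetical claims that an attestation service might bind into an
/// X.509 certificate for a Confidential Computing component. Names are
/// illustrative only, not any real service's schema.
struct AttestedIdentity {
    /// Cryptographic measurement of the launched TEE contents.
    measurement: Vec<u8>,
    /// The TEE technology reported by the attestation (e.g. "SEV-SNP", "SGX").
    tee_kind: String,
}

/// Policy controlled by the relying party: which measurements and TEE types
/// it is prepared to trust.
struct TrustPolicy {
    expected_measurements: Vec<Vec<u8>>,
    accepted_tee_kinds: Vec<String>,
}

/// Decide whether to establish a trust relationship with the component.
/// In a real system these claims would be extracted from a certificate whose
/// chain had already been verified back to the attestation service.
fn accept_component(id: &AttestedIdentity, policy: &TrustPolicy) -> bool {
    policy.accepted_tee_kinds.iter().any(|k| k == &id.tee_kind)
        && policy
            .expected_measurements
            .iter()
            .any(|m| m.as_slice() == id.measurement.as_slice())
}

fn main() {
    // Dummy values, purely to exercise the check.
    let id = AttestedIdentity {
        measurement: vec![0xAB; 48],
        tee_kind: "SEV-SNP".to_string(),
    };
    let policy = TrustPolicy {
        expected_measurements: vec![vec![0xAB; 48]],
        accepted_tee_kinds: vec!["SEV-SNP".to_string()],
    };
    println!("establish trust relationship: {}", accept_component(&id, &policy));
}
```

The point of the sketch is that the trust decision becomes explicit policy held by the relying party, rather than blind faith in whatever the host claims to be running.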

Establishing a “zero trust” relationship

These properties allow zero (or “explicit”) trust relationships to be established with components that are operating within a Confidential Computing environment, and to do so in ways which have previously been impossible. Using classical computing approaches, any component is at the mercy of the environment within which it is executing, meaning that any trust relationship that is established to it is equally with the environment – that is, the system that is providing its execution environment. This is far from a zero trust relationship, and is also very unlikely to be explicit!

In a Confidential Computing environment, components can have a small number of trust relationships which are explicitly noted (typically these include the attestation service, the CPU/firmware provider and the provider of the executing code), allowing for a much better-defined trust architecture. It may not be exactly “zero trust”, but it is, at least, heading towards “minimal trust”.

What’s your website’s D&D alignment?

This is the impression – often the first impression – that users of the website get of the organisation.

Cookies and Dungeons and Dragons – a hypothesis

Recent privacy legislation has led organisations to adopt ways of allowing their users to register their cookie preferences – ways which expose the underlying motivations of the organisation. I have a related theory, and it goes like this: these different registration options allow you to map organisations to one of the nine Dungeons and Dragons character alignments.

This may seem like a bit of a leap, but stick with me. First, here’s a little bit of background for those readers who’ve never dabbled in (or got addicted to – there’s little room between the two extremes) the world of Dungeons and Dragons (or “D&D”). There are two axes used to describe a character: Lawful-Neutral-Chaotic and Good-Neutral-Evil. Each character has a position on each of these axes, so you could have someone who’s Lawful-Good, one who’s Chaotic-Neutral or one who’s Neutral-Good, for instance (a “Neutral-Neutral” character is described as “True Neutral”). A Lawful character follows the law – or a strong moral code – whereas a Chaotic one just can’t be bothered; a Neutral character tends to do so when it suits them. The Good-Neutral-Evil axis should be pretty clear.

Second bit of background: I never just accept cookies on a website. I always go through the preference registration options, and almost always remove permissions for all cookie tracking beyond the “minimum required for functionality”. I know I’m in a tiny minority in this, but I like to pretend that I can safeguard at least some of my private data, so I do it anyway (and so should you). I’ve noticed, over the past few months, that there are a variety of ways that cookie choices are presented to you, and I reckon that we can map several of these to the D&D alignments, giving us a chance to get a glimpse into the underlying motivation of the organisation whose website we’re visiting.

I’ve attempted to map the basic approaches I’ve seen to each of the nine alignments.

  • Lawful Good – Functional cookies only by default.
  • Neutral Good – No cookies, and a link to a long and confusing explanation about how the organisation doesn’t believe in them.
  • Chaotic Good – No cookies at all, no explanation.
  • Lawful Neutral – Functional and tracking cookies by default; it’s clear what the tracking cookies are, and all are easy to turn off.
  • True Neutral – Functional and tracking cookies by default; completely unclear what the cookies do.
  • Chaotic Neutral – Random selection of cookies, and it’s unclear what they do, but you can at least turn them off.
  • Lawful Evil – All cookies by default: functional, tracking and “legitimate uses”. Easy to remove with “reject all” or “object all”.
  • Neutral Evil – All cookies by default. “Legitimate uses” need to be deselected individually.
  • Chaotic Evil – All cookies by default, with 100s listed. You have to deselect them by hand (there’s no “reject all” or “object all”), and there’s a 2 to 5 minute process to complete the registration, which finishes on 100% but never completes.
D&D alignments and website cookie preference approaches

Clearly, this is a tongue-in-cheek post, but there’s an important point here, I think: even if this glimpse isn’t a true representation of the organisation, it’s the impression – often the first impression – that users of the website get. My view of an organisation is formed partly through my interaction with its website, and while design, layout and content are all important, of course, the view that is presented about how much (if at all) the organisation cares about my experience, my data and my privacy should be something that organisations really care about. If they don’t respect me, then why should I respect them?

If I’m trying to attract someone to work for me, partner with me or buy from me, then my marketing department should be aware of the impression that visitors to my website glean from all interactions. At the moment, this seems to be missing, and while it’s not difficult to address, it seems to have escaped the notice of most organisations up to this point.

Enarx hits 750 stars

Yesterday, Enarx, the open source security project of which I’m co-founder and for which Profian is custodian, gained its 750th GitHub star. This is an outstanding achievement, and I’m very proud of everyone involved. Particular plaudits to Nathaniel McCallum, my co-founder for Enarx and Profian, Nick Vidal, the community manager for Enarx, everyone who’s been involved in committing code, design, tests and documentation for the project, and everyone who manages the running of the project and its infrastructure. We’ve been lucky enough to be joined by a number of stellar interns along the way, who have also contributed enormously to the project.

Enarx has also been supported by a number of organisations and companies, and it’s worth listing as many of them as I can think of:

  • Profian, the current custodian
  • Red Hat, under whose auspices the initial development began
  • the Confidential Computing Consortium, a Linux Foundation Project, which owns the project
  • Equinix, who have donated computing resources
  • PhoenixNAP, who have donated computing resources
  • Rocket.Chat, who have donated chat resources
  • Intel, who have worked with us along the way and donated various resources
  • AMD, who have worked with us along the way and donated various resources
  • Outreachy, with whom we worked to get some of our fine interns

When it all comes down to it, however, it’s the community that makes the project. We strive to create a friendly, open community, and we want more and more people to get involved. To that end, we’ll soon be announcing some new ways to get involved with trying and using Enarx, in association with Profian. Keep an eye out, and keep visiting and giving us stars!

What is attestation for Confidential Computing?

Without attestation, you’re not really doing Confidential Computing.

This post – or the title of this post – has been sitting in my “draft” pile for about two years. I don’t know how this happened, because I’ve been writing about Confidential Computing for three years or so by now, and attestation is arguably the most important part of the entire subject.

I know I’ve mentioned attestation in passing multiple times, but this article is devoted entirely to it. If you’re interested in Confidential Computing, then you must be interested in attestation, because, without it, you’re not doing Confidential Computing right. Specifically, without attestation, any assurances you may think you have about Confidential Computing are worthless.

Let’s remind ourselves what Confidential Computing is: it’s the protection of applications and data in use by a hardware-based TEE (Trusted Execution Environment). The key benefit that this brings you is isolation from the host running your workload: you can run applications in the public cloud, on premises or in the Edge, and have cryptographic assurances that no one with access to the host system – hypervisor access, kernel access, admin access, even standard hardware access[1] – can tamper with your application. This, specifically, is Type 3 – workload from host – isolation (see my article Isolationism – not a 4 letter word (in the cloud) for more details), and is provided by TEEs such as AMD’s SEV and Intel’s SGX – though not, crucially, by AWS Nitro, which does not provide Confidential Computing capabilities as defined by the Confidential Computing Consortium.

Without attestation, you’re not really doing Confidential Computing. Let’s consider a scenario where you want to deploy an application using Confidential Computing on a public cloud. You ask your CSP (Cloud Service Provider) to deploy it. The CSP does so. Great – your application is now protected: or is it? Well, you have no way to tell, because your CSP could just have taken your application, deployed it in the normal way, and told you that it had deployed it using a TEE. What you need is to take advantage of a capability that TEE chips provide called an attestation measurement to check that a TEE instance was actually launched and that your application was deployed into it. You (or your application) ask the TEE-enabled chip to perform a cryptographically signed measurement of the TEE set-up (which is basically a set of encrypted memory pages). It does so, and that measurement can then be checked to ensure that the TEE has been correctly set up: now there’s a way to judge whether you’re actually doing Confidential Computing.
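To give a sense of what that check involves, here’s a minimal Rust sketch using the ring crate (added as a dependency). It assumes the vendor signs the report with an ECDSA P-256 key, that you already hold the vendor’s public key and an expected reference measurement, and that the measurement has been parsed out of the report by format-specific code; real report formats (SEV-SNP attestation reports, SGX quotes) are considerably more involved, so treat the function shape and parameters as assumptions rather than any vendor’s API.

```rust
// A minimal sketch of checking an attestation measurement – not a real
// implementation for any specific TEE. Assumes the signed report is raw
// bytes, the signature is ECDSA P-256 (ASN.1 encoded), and the measurement
// has already been extracted from the report.
use ring::signature::{UnparsedPublicKey, ECDSA_P256_SHA256_ASN1};

pub enum AttestationError {
    BadSignature,
    UnexpectedMeasurement,
}

/// Verify that `report` was signed by the TEE vendor key and that the
/// measurement it contains matches what we expected to be launched.
pub fn check_attestation(
    report: &[u8],
    signature: &[u8],
    vendor_public_key: &[u8],
    reported_measurement: &[u8], // parsed out of `report` by format-specific code
    expected_measurement: &[u8],
) -> Result<(), AttestationError> {
    // 1. Is the report genuinely from the hardware vendor?
    UnparsedPublicKey::new(&ECDSA_P256_SHA256_ASN1, vendor_public_key)
        .verify(report, signature)
        .map_err(|_| AttestationError::BadSignature)?;

    // 2. Does the launched TEE contain what we think it contains?
    if reported_measurement != expected_measurement {
        return Err(AttestationError::UnexpectedMeasurement);
    }
    Ok(())
}
```

A real flow also needs to check freshness (a nonce), the TEE type and firmware version, and the vendor’s certificate chain – which is where much of the trickiness lives.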

So, who does that checking? Doing a proper cryptographic check of an attestation measurement – the attestation itself – is surprisingly[2] tricky, and, unless you’re an expert in TEEs and Confidential Computing (and one of the points of Confidential Computing is to make it easy for anyone to use these capabilities), you probably don’t want to be doing it.

Who can perform the validation? Well, one option might be for the validation to be done on the host machine that’s running the TEE. But wait a moment – that makes no sense! You’re trying to isolate yourself from that machine and anyone who has access to it: that’s the whole point of Confidential Computing. You need a remote attestation service – a service running on a different machine which can be trusted to validate the attestation and either halt execution if it fails, or let you know so that you can halt execution.

So who can run that remote attestation service? The obvious – obvious but very, very wrong – answer is the CSP who’s running your workload. Obvious because, well, they presumably run Confidential Computing workloads for lots of people, but wrong because your CSP is part of your threat model. What does this mean? Well, we talked before about “trying to isolate yourself from that machine and anyone who has access to it”, and the “anyone who has access to it” is exactly your CSP. If the reason for using Confidential Computing is to be able to put workloads in the public cloud even when you can’t fully trust your CSP (for regulatory reasons, for auditing reasons, or just because you need higher levels of assurance than existing cloud computing provides), then you can’t trust your CSP to provide the remote attestation service. To be entirely clear: if you allow your CSP to do your attestation, you lose the benefits of Confidential Computing.

Attestation – remote attestation – is vital, but if we can’t trust the host or the CSP to do it, what are your options? Well, either you need to do the attestation yourself (which I already noted is surprisingly difficult), or you’re going to need to find a third party to do that. I’ll be discussing the options for this in a future article – keep an eye out.


1 – the TEEs used for Confidential Computing don’t aim to protect against long-term access to the CPU by a skilled malicious actor – but this isn’t a use case that’s relevant to most users.

2 – actually, not that surprising if you’ve done much work with cryptography, cryptographic libraries, system-level programming or interacted with any silicon vendor documentation.

My book at RSA Conference NA

Attend RSA and get 20% off my book!


I’m immensely proud (as you can probably tell from the photo) to be able to say that my book is available in the book store at the RSA Conference in San Francisco this week. You’ll find the store in Moscone South, up the escalators on the Esplanade.

If you ever needed a reason to attend RSA, this is clearly the one, particularly with the 20% discount. If anyone’s interested in getting a copy signed, please contact me via LinkedIn – I currently expect to be around till Friday morning. It would be great to meet you.

Enarx 0.3.0 (Chittorgarh Fort)

Write some applications and run them in an Enarx Keep.

I usually post on a Tuesday, but this week I wanted to wait for a significant event: the release of Enarx v0.3.0, codenamed “Chittorgarh Fort”. This happened after I’d gone to bed, so I don’t feel too bad about failing to post on time. I announced Enarx nearly three years ago, in the article Announcing Enarx on the 7th May 2019, and it’s admittedly taken us a long time to get to where we are now. That’s largely because we wanted to do it right, and building up a community, creating a start-up and hiring folks with the appropriate skills is difficult. The design has evolved over time, but the core principles and core architecture are the same as when we announced the project.

You can find more information about v0.3.0 at the release page, but I thought I’d give a few details here and also briefly add to what’s on the Enarx blog about the release.

What’s Enarx?

Enarx is a deployment framework for running applications within Trusted Execution Environments (TEEs). We provide a WebAssembly runtime and – this is new functionality that we’ve started adding in this release – attestation so that you can be sure that your application is protected within a TEE instance.
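If you’re wondering what “an application” means in practice, the smallest possible case is ordinary Rust compiled to WebAssembly – nothing Enarx-specific in the code at all. The build target and commands below are the standard wasm32-wasi workflow; the exact Enarx invocation varies by release, so check the getting-started guide.

```rust
// The simplest possible workload: plain Rust with no Enarx-specific code.
// Compiled to the wasm32-wasi target, the resulting .wasm file is what you
// hand to the WebAssembly runtime that Enarx provides inside a Keep.
fn main() {
    println!("Hello from inside a Keep (hopefully)!");
}
```

Built with something like `cargo build --release --target wasm32-wasi`, the output under `target/wasm32-wasi/release/` is the artefact you deploy (for example with `enarx run`, though the exact command line may differ between releases).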

What’s new in v0.3.0?

A fair amount of the development for this release has been in functionality which won’t be visible to most users, including a major rewrite of the TEE/host interface component that we call sallyport. You will, however, notice that TLS support has been added to network connections from applications within the Keep. This is transparent to the application, so “Where does the certificate come from?”, I hear you ask. The answer to that is: from the attestation service that’s also part of this release. We’ll be talking more about that in further releases and articles, but key to the approach we’re taking is that interactions with the service (we call it the “Steward”) are pretty much transparent to users and applications.
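To illustrate the “transparent” point, here’s what the application side might look like in plain Rust: there is no TLS code in it at all. This is a native sketch with a made-up endpoint – how a connection is actually obtained inside a Keep depends on the Enarx release and its configuration – but the idea from this release is that the Keep wraps the connection in TLS using a certificate from the Steward, without the application having to know.

```rust
// Sketch of an application making an outbound connection: no TLS code here.
// Run natively this is a plain TCP connection; run inside an Enarx Keep, it
// is the runtime that upgrades the connection to TLS, using a certificate
// obtained from the Steward (the attestation service).
use std::io::{Read, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Hypothetical endpoint, purely for illustration.
    let mut stream = TcpStream::connect("example.com:7000")?;
    stream.write_all(b"hello from the workload\n")?;

    let mut response = String::new();
    stream.read_to_string(&mut response)?;
    println!("received: {response}");
    Ok(())
}
```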

How can I get involved?

What can you do to get involved? Well, visit the Enarx website, look at the code and docs over at our github repositories (please star the project!), get involved in the chat. The very best thing you can do, having looked around, is to write some applications and run them in an Enarx Keep. And then tell us about your experience. If it worked first time, then wow! We’re still very much in development, but we want to amass a list of applications that are known to work within Enarx, so tell us about it. If it doesn’t work, then please also tell us about it, and have a look at our issues page to see if you’re the first person to run across this problem. If you’re not, then please add your experiences to an existing issue, but if you are, then create a new one.

Enarx isn’t production ready, but it’s absolutely ready for initial investigations (as shown by our interns, who created a set of demos for v0.2.0, curated and aided by our community manager Nick Vidal).

Why Chittorgarh Fort?

It’s worth having a look at the Wikipedia entry for the fort: it’s really something! We decided, when we started creating official releases, that we wanted to go with the fortification theme that Enarx has adopted (that’s why you deploy applications to Enarx Keeps – a keep is the safest part of a castle). We started with Alamo, then went to Balmoral Castle, and then to Chittorgarh Fort (we’re trying to go with alphabetically sequential examples as far as we can!). I suggested Chittorgarh Fort to reflect the global nature of our community, which happens to include a number of contributors from India.

Who was involved?

I liked the fact that the Enarx blog post mentioned the names of some (most?) of those involved, so I thought I’d copy the list of github account names from there, with sincere thanks:

@MikeCamel @npmccallum @haraldh @connorkuehl @lkatalin @mbestavros @wgwoods @axelsimon @ueno @ziyi-yan @ambaxter @squidboylan @blazebissar @michiboo @matt-ross16 @jyotsna-penumaka @steveeJ @greyspectrum @rvolosatovs @lilienbm @CyberEpsilon @kubkon @nickvidal @uudiin @zeenix @sagiegurari @platten @greyspectrum @bstrie @jarkkojs @definitelynobody @Deepansharora27 @mayankkumar2 @moksh-pathak


Rahultalreja11 at English Wikipedia, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons

Emotional about open source

Enarx is available to all, usable by all.

Around October 2019, Nathaniel McCallum and I founded the Enarx project. Well, we’d actually started it before then, but it’s around then that the main GitHub repo starts showing up, when I look at available info. In the middle of 2021, we secured funding for a start-up (now named Profian), and since then we’ve established a team of engineers to work on the project, which is itself part of the Confidential Computing Consortium. Enarx is completely open source, and that’s really central to the project. We want (and need) the community to get involved, try it out, improve it, and use it. And, of course, if it’s not open source, you can’t trust it, and that’s really important for security.

The journey has been hard at times, and there were times when we nearly gave up on the funding, but neither Nathaniel nor I could see ourselves working on anything else – we really, truly believe that there’s something special going on, and we want to bring it to the world. I’m glad (and relieved) that we persevered. Why? Because last week, on Thursday, was the day that this came true for me. The occasion was OC3, a conference on Confidential Computing organised by Edgeless Systems. I was giving a talk on Understanding trust relationships for Confidential Computing, which I was looking forward to, but Nick Vidal, Community Manager for the Enarx project, also had a session earlier on. His session was entitled From zero to hero: making Confidential Computing accessible, and wasn’t really his at all: it was taken up almost entirely by interns in the project, with a brief introduction and summing up by Nick. In his introduction, Nick explained that he’d be showing several videos of demos that the interns had recorded. These demos took the Enarx project and ran applications that the interns had created within Keeps, using the WebAssembly runtime provided within Enarx. The interns and their demos were:

  • TCP Echo Server (Moksh Pathak & Deepanshu Arora) – Moksh and Deepanshu showed two demos: a ROT13 server which accepts connections, reads text from them and returns the input, ROT13ed; and a simple echo server.
  • Fibonacci number generator (Jennifer Chukwu) – a simple Fibonacci number generator running in a Keep
  • Machine learning with decision tree algorithm on Diabetes data set (Jennifer Kumar & Ajay Kumar) – implementation of Machine Learning, operating on a small dataset.
  • Zero Knowledge Proof using Bulletproof (Shraddha Inamdar) – implementation of a Zero Knowledge Proof with verification.

These demos are exciting for several reasons:

  1. three of them have direct real-world equivalent use cases:
    1. The ROT13 server, while simple, could be the basis for an encryption/decryption service.
    2. the Machine Learning service is directly relevant to organisations who wish to run ML workloads in the Cloud, but need assurances that the data is confidentiality- and integrity-protected.
    3. the Zero Knowledge Proof demo provides an example of a primitive required for complex transaction services.
  2. none of the creators of the demos knew anything about Confidential Computing until a few months ago.
  3. none of the creators knew much – if anything – about WebAssembly before coming to the project.
  4. none of the creators is a software engineering professional (yet!). They are all young people with an interest in the field, but little experience.

What this presentation showed me is that what we’re building with Enarx (though it’s not even finished at this point) is a framework that doesn’t require expertise to use. It’s accessible to beginners, who can easily write and deploy applications with obvious value. This is what made me emotional: Enarx is available to all, usable by all. Not just security experts. Not just Confidential Computing gurus. Everyone. We always wanted to build something that would simplify access to Confidential Computing, and that’s what we, the community, have brought to the world.

I’m really passionate about this, and I’d love to encourage you to become passionate about it, too. If you’d like to know more about Enarx, and hopefully even try it yourself, here are some ways to do just that:

  • visit our website, with documentation, examples and a guide to getting started
  • join our chat and then one of our stand-ups
  • view the code over at GitHub (and please star the project: it encourages more people to get involved!)
  • read the Enarx blog
  • watch the video of the demos.

I’d like to finish this post by thanking not only the interns who created the demos, but also Nick Vidal, for the incredible (and tireless!) work he’s put into helping the interns and into growing the community. And, of course, everyone involved in the project for their efforts in getting us to where we are (and the vision to continue to the next exciting stages: subscribe to this blog for upcoming details).

Enarx 0.2.0 – Balmoral Castle

Now it’s possible to write applications that you can talk to over the network

The big news this week from the Enarx project is our latest release: 0.2.0, which is codenamed “Balmoral Castle”, to continue with our castle/fortification theme.

The big change in Enarx 0.2.0 is the addition of support for networking. Until now, there wasn’t much you could really do in an Enarx Keep, honestly: you could run an application, but all it could do for input and output was read from stdin and write to stdout or stderr. While this was enough to prove that you could write and compile applications to WebAssembly and run them, any more complex interaction with the world outside the Keep was impossible.

So, why is this big news? Well, now it’s possible to write applications that you can talk to over the network. The canonical example which we’ve provided as part of the release is a simple “echo” server, which you start in a Keep and which then listens on a port for incoming connections. You make a connection (for instance using the simple command-line utility ncat), and send it a line of text. The server accepts the connection, receives the text and sends it right back to you. It can handle multiple connections and will send the text back to the right one (hopefully!).
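For a flavour of what such an application looks like, here’s a minimal echo server sketch using only the Rust standard library. It isn’t the canonical Enarx example – inside a Keep the listening socket is provided by the runtime rather than bound to an address like this, so the address and threading model here are illustrative assumptions – but the application logic is essentially the same: accept, read, write back.

```rust
// A minimal echo server sketch using only the Rust standard library.
// Inside an Enarx Keep the listener would be supplied by the runtime rather
// than bound to an address here; this native version just shows the logic.
use std::io::{BufRead, BufReader, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

fn handle(stream: TcpStream) -> std::io::Result<()> {
    let mut writer = stream.try_clone()?;
    let reader = BufReader::new(stream);
    // Read a line at a time and send it straight back on the same connection.
    for line in reader.lines() {
        let line = line?;
        writeln!(writer, "{line}")?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:9000")?;
    println!("echo server listening on 127.0.0.1:9000");
    // One thread per connection, so multiple clients each get their own echo.
    for stream in listener.incoming() {
        let stream = stream?;
        thread::spawn(move || {
            let _ = handle(stream);
        });
    }
    Ok(())
}
```

Run natively, you can test it with `ncat 127.0.0.1 9000` and type a few lines.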

This is new functionality with Enarx 0.2.0, and the ability to use networking mirrors an important change within the WASI (WebAssembly System Interface) specification, the runtime interface implemented within an Enarx Keep. Specifically, WASI snapshot preview 1 now supports (as of January 2022) the ACCEPT capability on sockets. The way that WASI works with managing permissions and capabilities is carefully designed, and we (the Profian folks working on Enarx) coordinated closely with the open source WASI/Wasm community to add this in a way which is consistent with the design philosophy of the project. Once the capability was added to the snapshot, there was one more step needed before Enarx could implement support: it needed to appear in wasmtime, the WebAssembly runtime we use within Keeps to allow you to run your applications. This happened last week, in wasmtime release 0.34.0, and that allowed us to make this new release of Enarx.

This may not sound very exciting… but with this in place, you can start to build proper applications and micro-services. What about an HTTP server? A ROT13 “encryption” service? A chatbot? An email server? A Wordle implementation[1]? And it’s not just text that you can send over a network connection, of course. What might you write to process other types of data? A timestamp server? A logging service? With a network connection, you have the ability to write any of these. Micro-services are all about accepting connections, processing the data that’s come in, and then sending out the results. All of that is possible with this new release.

What we’d love you to do is to start writing applications (using networking) and running them in Enarx. Tell us what works – even better, tell us what doesn’t by creating an issue in our github repository. Please publish examples, join our chat channels, give us a github star, get involved.

What’s coming next? Well, keep an eye on the Enarx site, but be assured that I’ll announce major news here as well. You can expect work in attestation and deployment in the near future – watch this space…


1 – at time of writing, everyone’s talking about Wordle. For those of you coming from the future (say a couple of weeks from now), you can probably ignore this example.

[Image of Edward VII at Balmoral Castle from Wikimedia].

Trust book – playlist!

A playlist of music to which I’d listened and which I’d enjoyed over the months it took to write the book.

I had probably more fun than I deserved to have writing the acknowledgements section of my book, Trust in Computer Systems and the Cloud (published by Wiley at the end of December 2021). There was another section which I decided to add to the book purely for fun: a playlist of music to which I’d listened and which I’d enjoyed over the months it took to write. I listen to a lot of music, and the list is very far from a complete one, but it does represent a fair cross-section of my general listening tastes. Here’s the list, with a few words about each one.

One thing that’s missing is any of the classical music that I listen to. I decided against including this, as I’d rarely choose single tracks, but adding full albums seemed to miss the point. I do listen to lots of classical music, in particular sacred choral and organ music – happy to let people have some suggestions if they’d like.

  • Secret Messages – ELO – I just had to have something related (or that could be considered to be related) to cryptography and security. This song isn’t, really, but it’s a good song, and I like it.
  • Bleed to Love Her – Fleetwood Mac – Choosing just one Fleetwood Mac song was a challenge, but I settled on this one. I particularly like the harmonics in the version recorded live at Warner Brothers Studio in Burbank.
  • Alone in Kyoto – Air – This is a song that I put on when I want to relax. Chiiiiilllll.
  • She’s So Lovely – Scouting for Girls – Canonically, this song is known as “She’s A Lovely” in our family, as that’s what we discovered our daughters singing along to when we played it in the car many years ago.
  • Prime – Shearwater – This is much more of an “up” song for when I want to get an edge on. Shearwater have a broad range of output, but this is a particular favourite.
  • Stay – Gabrielle Aplin – I like the way this song flips expectations on its head. A great song by a talented artist.
  • The Way I Feel – Keane – A song about mental health.
  • Come On, Dreamer – Tom Adams – Adams has an amazing voice, and this is a haunting song about hope.
  • Congregation – Low – I discovered this song watching DEVS on Amazon Prime (it was originally on Hulu). Low write (and perform) some astonishing songs, and it’s really worth going through their discography if you like this one.
  • Go! – Public Service Broadcasting – You either love this or hate it, but I’m in the “love” camp. It takes original audio from the Apollo 11 moon landing and puts it to energising, exciting music.
  • The Son of Flynn (From “TRON: Legacy”/Score) – Daft Punk – TRON: Legacy may not be the best film ever released, but the soundtrack from Daft Punk is outstanding Electronica.
  • Lilo – The Japanese House – A song about loss? About hope? Another one to chill to (and the band are great live, too).
  • Scooby Snacks – Fun Lovin’ Criminals – Warning: explicit lyrics (from the very beginning!). A ridiculous song which makes me smile every time I listen to it.
  • My Own Worst Enemy – Stereophonics – I slightly surprised myself by choosing this song from the Stereophonics, as I love so many of their songs, but it really does represent much of what I love about their oeuvre.
  • All Night – Parov Stelar – If you ever needed a song to dance to as if nobody’s watching, this is the one.
  • Long Tall Sally (The Thing) – Little Richard – Sometimes you need some classic Rock ‘n’ Roll in your life, and who better to provide it?
  • Sharp Dressed Man – ZZ Top – “Black tie…” An all-time classic by men with beards. Mostly.
  • Dueling Banjos – Eric Weissberg – I first heard this song at university. It still calls out to me. There are some good versions out there, but the original from the soundtrack to Deliverance is the canonical one. And what a film.
  • The Starship Avalon (Main Title) – Thomas Newman – This (with some of the others above) is on a playlist I have called “Architecting”, designed to get me in the zone. Another great film.
  • A Change is Gonna Come – Sam Cooke – A song of sadness, pain and hope.
  • This Place – Jamie Webster – A song about Liverpool, and a family favourite. Listen and enjoy (the accent and the song!).

If you’d like to listen to these tracks yourself, I’ve made playlists on my two preferred audio streaming sites: I hope you enjoy.

Spotify – Trust in Computer Systems and the Cloud – Bursell

Qobuz – Trust in Computer Systems and the Cloud – Bursell

As always, I love to get feedback from readers – do let me know what you think, or suggest other tracks or artists I or other readers might appreciate.

“Trust in Computer Systems and the Cloud” published

I’ll probably have a glass or two of something tonight.

It’s official: my book is now published and available in the US! What’s more, my author copies have arrived, so I’ve actually got physical copies that I can hold in my hand.

You can buy the book at Wiley’s site here, and pre-order it from Amazon (the US site lists it as “currently unavailable”, and the UK site lists it as available from 22nd Feb 2022, though hopefully it’ll be a little earlier than that). Other bookstores are also stocking it.

I’m over the moon: it’s been a long slog, and I’d like to acknowledge not only those I mentioned in last week’s post (Who gets acknowledged?), but everybody else. Particularly, at this point, everyone at Wiley, calling out specifically Jim Minatel, my commissioning editor. I’m currently basking in the glow of something completed before getting back to my actual job, as CEO of Profian. I’ll probably have a glass or two of something tonight. In the meantime, here’s a quote from Bruce Schneier to get you thinking about reading the book.

Trust is a complex and important concept in network security. Bursell neatly unpacks it in this detailed and readable book.

Bruce Schneier, author of Liars and Outliers: Enabling the Trust that Society Needs to Thrive

At least you know what to buy your techy friends for Christmas!