“Trust in Computer Systems and the Cloud” published

I’ll probably have a glass or two of something tonight.

It’s official: my book is now published and available in the US! What’s more, my author copies have arrived, so I’ve actually got physical copies that I can hold in my hand.

You can buy the book at Wiley’s site here, and pre-order with Amazon (the US site lists it as “currently unavailable”, and the UK site lists it as available from 22nd February 2022, though hopefully it’ll be a little earlier than that). Other bookstores are also stocking it.

I’m over the moon: it’s been a long slog, and I’d like to acknowledge not only those I mentioned in last week’s post (Who gets acknowledged?), but everybody else – in particular, at this point, everyone at Wiley, and specifically Jim Minatel, my commissioning editor. I’m currently basking in the glow of something completed before getting back to my actual job, as CEO of Profian. I’ll probably have a glass or two of something tonight. In the meantime, here’s a quote from Bruce Schneier to get you thinking about reading the book.

Trust is a complex and important concept in network security. Bursell neatly unpacks it in this detailed and readable book.

Bruce Schneier, author of Liars and Outliers: Enabling the Trust that Society Needs to Thrive

At least you know what to buy your techy friends for Christmas!

Who gets acknowledged?

Some of the less obvious folks who get a mention in my book, and why

After last week’s post, noting that my book was likely to be delayed, it turns out that it may be available sooner than I’d thought. Those of you in the US should be able to get hold of a copy first – possibly sooner than I do. The rest of the world should have availability soon after. While you’re all waiting for your copy, however, I thought it might be fun for me to reveal a little about the acknowledgements: specifically, some of the less obvious folks who get a mention, and why they get a mention.

So, without further ado, here’s a list of some of them:

  • David Braben – in September 1984, not long after my 14th birthday, the game Elite came out on the BBC Micro. I was hooked, playing for as long and as often as I was allowed (which wasn’t as much as I would have liked, as we had no monitor, and I had to hook the BBC up to the family TV). I first had the game on cassette, and then convinced my parents that a (5.25″) floppy drive would be a good educational investment for me, thereby giving me the ability to play the extended (and much quicker loading) version of the game. Fast forward to now, and I’m still playing the game which, though it has changed and expanded in many ways, is still recognisably the same one that came out 37 years ago. David Braben was the initial author, and still runs the company (Frontier Developments) which creates, runs and supports the game. Elite excited me, back in the 80s, with what computers could do, leading me to look into wireframes, animation and graphics.
  • Richard D’Silva – Richard was the “head of computers” at the school I attended from 1984-1989. He encouraged me (and many others) to learn what computers could do, all the way up to learning Pascal and Assembly language to supplement the (excellent) BASIC available on the BBC Bs and BBC Masters which the school had (and, latterly, some RISC machines). There was a basic network, too, an “Econet”, and this brought me to my initial research into security – mainly as a few of us tried (generally unsuccessfully) to access machines and accounts we weren’t supposed to.
  • William Gibson – Gibson wrote Neuromancer – and then many other novels (and short stories) – in the cyberpunk genre. His vision of engagement with technology – always flawed, often leading to disaster – has yielded some of the most exciting and memorable situations and characters in scifi (Molly, we love you!).
  • Nick Harkaway – I met Nick Cornwell (who writes as Nick Harkaway) at the university Jiu Jitsu club, but he became a firm friend beyond that. Always a little wacky, interested maybe more in the social impacts of technology than in tech for tech’s sake (more my style at the time), he always had lots of interesting opinions to share. When he started writing, his wackiness and thoughtfulness around how technology shapes us informed his fiction (and non-fiction). If you haven’t read The Gone-Away World, order it now (and read it after you’ve finished my book!).
  • Anne McCaffrey – while I’m not an enormous fan of fantasy fiction (and why do scifi and fantasy always seem to be combined in the same section in bookshops?), Anne McCaffrey’s work was a staple in my teenage years. I devoured her Dragonriders of Pern series and also enjoyed her (scifi) Talents series as that emerged. One of the defining characteristics of her books was always strong female characters – a refreshing change for a genre which, at the time, seemed dominated by male protagonists. McCaffrey also got me writing fiction – at one point, my school report in English advised that “Michael has probably written enough science fiction for now”.
  • Mrs Macquarrie (Jenny) – Mrs Macquarrie was my Maths teacher from around 1978-1984. She was a redoubtable Scot, known to the wider world as wife of the eminent theologian John Macquarrie, but, in the universe of the boarding school I attended, she was the strict but fair teacher who not only gave me a good underpinning in Maths, but also provided a “computer club” at the weekends (for those of us who were boarding), with her ZX Spectrum.
  • Sid Meier – I’ve played most of the “Sid Meier’s Civilization” (sic) series of games over the past several decades(!) from the first, released in 1991. In my university years, there would be late night sessions with a bunch of us grouped around the monitor, eating snacks and drinking whatever we could afford. These days, having a game running on a different monitor can still be rewarding when there’s a boring meeting you have to attend…
  • Bishop Nick – this one is a trick, and shouldn’t really appear in this section, as Bishop Nick isn’t a person, but a local brewery. They brew some great beers, however, including “Heresy” and “Divine”. Strongly recommended if you’re in the Northeast Essex/West Suffolk area.
  • Melissa Scott – Scott’s work probably took the place of McCaffrey’s as my reading tastes matured. Night Sky Mine, Trouble and her Friends and The Jazz provided complex and nuanced futures, again with strong female protagonists. The queer undercurrents in her books – most if not all of her books have some connection with queer themes and cultures – were, for me, an introduction to a different viewpoint on writing and sexuality in “popular” fiction, beyond the more obvious and “worthy” literary treatments with which I was already fairly familiar.
  • Neal Stephenson – Nick Harkaway/Cornwell (see above) introduced me to Snow Crash when it first came out, and I managed to get a UK trade paperback copy. Stephenson’s view of a cyberpunk future, different from Gibson’s and full of linguistic and cultural craziness, hooked me, and I’ve devoured all of his work since. You can’t lose with Snow Crash, but my other favourite of his is Cryptonomicon, a book which zig-zags between the present day (well, the early 2000s, probably) and the Second World War, embracing cryptography, religion, computing, gold, civil engineering and start-up culture. It’s on the list of books I suggest for anyone considering getting into security, because the mindset shown by a couple of the characters really nails what it’s all about.

There are more people mentioned, but these are the ones furthest removed, either in time or in direct relevance, from the writing of the book. I’ll leave those more directly involved, or just a little more random, for you to discover as you read.

Don’t forget: if you follow this blog, you’re in with a chance to win a free copy of Trust in Computer Systems and the Cloud!

Cloud security asymmetry

We in the security world have to make people understand this issue.

My book, Trust in Computer Systems and the Cloud, is due out in the next few weeks, and I was wondering as I walked the dogs today (a key part of the day for thinking!) what the most important message in the book is. I did a bit of thinking and a bit of searching, and decided that the following two paragraphs express the core thesis of the book. I’ll quote them below and then explain briefly why (the long explanation would require me to post most of the book here!). The first paragraph is italicised in the book.

A CSP [Cloud Service Provider] can have computational assurances that a tenant’s workloads cannot affect its hosts’ normal operation, but no such computational assurances are available to a tenant that a CSP’s hosts will not affect their workloads’ normal operation.

In other words, the tenant has to rely on commercial relationships for trust establishment, whereas the CSP can rely on both commercial relationships and computational techniques. Worse yet, the tenant has no way to monitor the actions of the CSP and its host machines to establish whether the confidentiality of its workloads has been compromised (though integrity compromise may be detectable in some situations): so even the “trust, but verify” approach is not available to them.

What does this mean? There is, in cloud computing, a fundamental asymmetry: CSPs can protect themselves from you (their customer), but you can’t protect yourself from them.

Without Confidential Computing – the use of Trusted Execution Environments to protect your workloads – there are no technical measures that you can take which will stop Cloud Service Providers from looking into and/or altering not only your application, but also the data it is processing, storing and transmitting. CSPs can stop you from doing the same to them using standard virtualisation techniques, but those techniques provide you with no protection from a malicious or compromised host, or a malicious or compromised CSP.
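
To make this asymmetry concrete, here’s a deliberately simplified sketch in Rust. It’s a toy model, not any real hypervisor or CSP API, and all the names are mine: the host type owns its workloads’ memory and can read or alter it at will, while the workload type has no handle on the host at all.

```rust
// A toy model of the host/tenant asymmetry in standard virtualisation.
// Illustrative only: these are not real hypervisor or CSP APIs.

struct Workload {
    // In a real VM this would be guest memory, registers and so on.
    data: Vec<u8>,
}

struct Host {
    // The host owns every workload it runs, and can therefore inspect
    // or modify them: this is the crux of the asymmetry.
    workloads: Vec<Workload>,
}

impl Host {
    // The CSP can read a tenant's data...
    fn inspect(&self, idx: usize) -> &[u8] {
        &self.workloads[idx].data
    }

    // ...and can alter it, silently.
    fn tamper(&mut self, idx: usize) {
        if let Some(byte) = self.workloads[idx].data.first_mut() {
            *byte ^= 0xff;
        }
    }
}

impl Workload {
    // Note what is missing: a Workload holds no reference to its Host,
    // so there is no inspect_host() it could even implement.
    fn run(&self) {
        println!("processing {} bytes", self.data.len());
    }
}

fn main() {
    let mut host = Host {
        workloads: vec![Workload { data: b"sensitive tenant data".to_vec() }],
    };
    host.workloads[0].run();
    host.tamper(0); // the tenant gets no say, and no signal...
    println!("{:?}", host.inspect(0)); // ...and no confidentiality either
}
```

Confidential Computing changes this model by moving the workload into a TEE, where the hardware, rather than the host’s software stack, enforces confidentiality and integrity.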

I was recently at a conference attended by lots of people whose job it is to manage and process data for their customers. Many of them do so in the public cloud. And a scary number of them did not understand that all of this data is vulnerable, and that the only assurances they have are commercial and process-based.

We in the security world have to make people understand this issue, and realise that if they are looking after our data, they need to find ways to protect it with strong technical controls. These controls are few:

  • Architectural: never deploy sensitive data to the public cloud, ever.
  • HSMs: use Hardware Security Modules. These are expensive, difficult to use and don’t scale, but they are appropriate for some sensitive data.
  • Confidential Computing: use Trusted Execution Environments (TEEs) to protect data and applications in use[1].

Given my interest – and my drive to write and publish my book – it will probably come as no surprise that this is something I care about: I’m co-founder of the Enarx Project (an open source Confidential Computing project) and co-founder and CEO of Profian (a start-up based on Enarx). But I’m not alone: the industry is waking up to the issue, and you can find lots more about the subject at the Confidential Computing Consortium’s website (including a list of members of the consortium). If this matters to you – and if you’re an enterprise company who uses the cloud, it almost certainly already does, or will do so – then please do your research and consider joining as well. And my book is available for pre-order!

Logs – good or bad for Confidential Computing?

I wrote a simple workload for testing. It didn’t work.

A few weeks ago, we had a conversation on one of the Enarx calls about logging. We’re at the stage now (excitingly!) where people can write applications and run them using Enarx, in an unprotected Keep, or in an SEV or SGX Keep. This is great, and almost as soon as we got to this stage, I wrote a simple workload to test it all.

It didn’t work.

This is to be expected. First, I’m really not that good a software engineer, but also, software is buggy, and this was our very first release. Everyone expects bugs, and it appeared that I’d found one. My problem was tracing where the issue lay, and whether it was in my code, or the Enarx code. I was able to rule out some possibilities by trying the application in an unprotected (“plain KVM”) Keep, and I also discovered that it ran under SEV, but not SGX. It seemed, then, that the problem might be SGX-specific. But what could I do to look any closer? Well, with very little logging available from within a Keep, there was little I could do.

Which is good. And bad.

It’s good because one of the major points about using Confidential Computing (Enarx is a Confidential Computing framework) is that you don’t want to leak information to untrusted parties. Since logs and error messages can leak lots and lots of information, you want to restrict what’s made available, and to whom. Safe operation dictates that you should make as little information available as you possibly can: preferably none.

It’s bad because there are times when (like me) you need to work out what’s gone wrong, and find out whether it’s in your code or the environment that you’re running your application in.

This is where the conversation about logging came in. We’d started talking about it before this issue came up, but this made me realise how important it was. I started writing a short blog post about it, and then stopped when I realised that there are some really complex issues to consider. That’s why this article doesn’t go into them in depth: you can find a much more detailed discussion over on the Enarx blog. But I’m not going to leave you hanging: below, you’ll find the final paragraph of the Enarx blog article. I hope it piques your interest enough to go and find out more.

In a standard cloud deployment, there is little incentive to consider strong security controls around logging and debugging, simply because the host has access not only to all communications to and from a hosted workload, but also to all the code and data associated with the workload at runtime. For Confidential Computing workloads, the situation is very different, and designers and architects of the TEE infrastructure (e.g. the Enarx projects) and even, to a lesser extent, of potential workloads themselves, need to consider very carefully the impact of the host gaining access to messages associated with the workload and the infrastructure components. It is, realistically, infeasible to restrict all communication to levels appropriate for deployment, so it is recommended that various profiles are created which can be applied to different stages of a deployment, and whose use is carefully monitored, logged (!) and controlled by process.
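
To make the “profiles” idea from that paragraph a little more concrete, here’s a minimal sketch in Rust (the language Enarx is written in). This is my own illustration, not an API that Enarx provides: a profile sets a ceiling on what may be emitted from inside a Keep, and anything above that ceiling is dropped before it can ever reach the untrusted host.

```rust
// A minimal sketch of per-deployment-stage logging profiles.
// The names and levels are mine, for illustration only.

#[allow(dead_code)]
#[derive(PartialEq, PartialOrd)]
enum Level { Error, Warn, Info, Debug }

#[allow(dead_code)]
enum Profile {
    Production,  // leak as little as possible: errors only
    Staging,     // some operational visibility
    Development, // full visibility - never use with real data
}

impl Profile {
    fn ceiling(&self) -> Level {
        match self {
            Profile::Production => Level::Error,
            Profile::Staging => Level::Warn,
            Profile::Development => Level::Debug,
        }
    }
}

// Anything above the profile's ceiling is dropped *inside* the Keep,
// so it never reaches the (untrusted) host at all.
fn log(profile: &Profile, level: Level, msg: &str) {
    if level <= profile.ceiling() {
        eprintln!("{msg}");
    }
}

fn main() {
    let profile = Profile::Production;
    log(&profile, Level::Error, "attestation failed"); // emitted
    log(&profile, Level::Debug, "key material: ...");  // dropped
}
```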



Enarx first release

Write an application, compile it to WebAssembly, and then run it in one of three Keep types.

I was on holiday last week, and I took the opportunity not to write a blog post, but while I was sunning myself[1] at the seaside, the team did a brilliant thing: we have our first release of Enarx, and a new look for the website, to boot.

To see the new website, head over to https://enarx.dev. There, you’ll find updated information about the project, details of how to get involved, and – here’s the big news – instructions for how to download and use Enarx. If you’re a keen Rustacean, you can also go straight to crates.io (https://crates.io/crates/enarx) and start off there. Up until now, in order to run Enarx, you’ve had to do quite a lot of low-level work to get things going: running your own GitHub branches, understanding how everything fits together and managing your own development environment. This has now all changed.

This first release, version 0.1.1, is codenamed Alamo, and provides an easy way in to using Enarx. As always, it’s completely open source: you can look at every single line of our code. It doesn’t provide a full feature set, but what it does do is allow you, for the first time, to write an application, compile it to WebAssembly, and then run it in one of three Keep[2] types:

  1. KVM – this is basically a debugging Keep, in that it doesn’t provide any confidentiality or integrity protection, but it does allow you to get running and to try things even if you don’t have access to specialist hardware. A standard Linux machine should do you fine.
  2. SEV – this is a Keep using AMD’s SEV technology, specifically the newer version, SEV-SNP. This requires access to a machine which supports it[3].
  3. SGX – this is a Keep using Intel’s SGX technology. Again, this requires access to a machine which supports it[3].

The really important point here is that you’re running the same binary on each of these architectures. No recompilation for different architectures: just plain old WebAssembly[4].
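
To give you a feel for it, here’s about the smallest workload I can think of: an echo application using stdin and stdout, which is what this release supports (see the list below). The Rust source is standard; the build and run commands in the comments reflect my understanding of the 0.1.1 CLI and may differ in detail, so treat the instructions at https://enarx.dev as authoritative.

```rust
// A minimal Enarx workload: echo stdin back to stdout.
//
// Indicative build and run steps (check https://enarx.dev for the
// authoritative instructions - the exact CLI may differ):
//   rustup target add wasm32-wasi
//   cargo build --release --target wasm32-wasi
//   enarx run target/wasm32-wasi/release/echo.wasm
use std::io::{self, Read, Write};

fn main() -> io::Result<()> {
    let mut input = String::new();
    io::stdin().read_to_string(&mut input)?;
    // The same .wasm binary runs unchanged in a KVM, SEV or SGX Keep.
    writeln!(io::stdout(), "echoed from the Keep: {}", input.trim())
}
```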

Current support

There’s a lot more work to do, but what do we support at the moment?

  • running WebAssembly
  • KVM, SEV and SGX Keeps (see above)
  • stdin and stdout from/to the host – this is temporary, as the host is untrusted in the Enarx model, but until we have networking support (see below), we wanted to provide a simple way to manage input and output from a Keep.

There’s lots more to come – networking and attestation are both high on the list – but now anyone can start playing with Enarx. And, we hope, submitting enhancement and feature requests, not to mention filing bugs (we know there will be some!): to do so, hop over to https://github.com/enarx/enarx/issues.

To find out more, please head over to the website – there’s loads to see – or join our chat channel at https://chat.enarx.dev and get involved.


1 – it’s the British seaside, in October, so “sunning” might be a little inaccurate.

2 – a Keep is what we call a TEE instance set up for you to run an application in.

3 – we have AMD and SGX machines available for people who contribute to the project – get in touch!

4 – WebAssembly is actually rather new, but “plain old” sounds better than “vanilla”. Not my favourite ice cream flavour[5].

5 – my favourite basic ice cream flavour is strawberry. Same for milkshakes.

Win a copy of my book!

What’s better than excerpts? That’s right: the entire book.

As regular readers of this blog will know, I’ve got a book coming out with Wiley soon. It’s called “Trust in Computer Systems and the Cloud”, and the publisher’s blurb is available here. We’ve now got to the stage where we’ve completed not only the proof-reading for the main text, but also the front matter (acknowledgements, dedication, stuff like that), cover and “praise page”. I’d not heard the term before, but it’s where endorsements of the book go, and I’m very, very excited by the extremely kind comments from a variety of industry leaders which you’ll find quoted there and, in some cases, on the cover. You can find a copy of the cover (without endorsement) below.

Trust book front cover (without endorsement)

I’ve spent a lot of time on this book, and I’ve written a few articles about it, including providing a chapter index and summary to let you get a good idea of what it’s about. More than that, some of the articles here actually contain edited excerpts from the book.

What’s better than excerpts, though? That’s right: the entire book. Instead of an article today, however, I’m offering the opportunity to win a copy of the book. All you need to do is follow this blog (with email updates, as otherwise I can’t contact you), and when it’s published (soon, we hope – the March date should be beaten), I’ll choose one lucky follower to receive a copy.

No Wiley employees, please, but other than that, go for it, and I’ll endeavour to get you a copy as soon as I have any available. I’ll try to get it to you pretty much anywhere in the world, as well. So far, it’s only available in English, so apologies if you were hoping for an immediate copy in another language (hint: let me know, and I’ll lobby my publisher for a translation!).

Announcing Profian

Profian, a security start-up in the Confidential Computing space

I’m very excited to announce that Profian, a security start-up in the Confidential Computing space that I co-founded with Nathaniel McCallum, came out of stealth mode today with the announcement that we’ve completed our Seed Round – you can find the press release here. This is the culmination of months of hard work and about two years of a vision that we’ve shared and developed since coming up with the idea of Enarx. Profian will be creating products and services around Enarx, and we’re committed to keeping everything we do open source: not just because we believe in open source as an ethical choice, but also because we believe that it’s best for security.

Enarx grew out of a vision that we had to simplify use of Trusted Execution Environments like AMD’s SEV and Intel’s SGX[1], while not compromising on the security that we believe the industry wants and needs. Enarx aims to allow you to deploy applications to any of the supported platforms without needing to recompile for each one, and to simplify both the development and deployment process. It supports WebAssembly as its runtime, allowing a seamless execution environment across multiple hardware types. Engineering for Enarx was initially funded by Red Hat, and towards the end of 2020, we started looking for a way to ensure long-term resourcing: out of this Profian was born. We managed to secure funding from two VC funds – Project A (lead investor) and Illuminate Financial – and four amazing angel investors. Coming out of stealth means that we can now tell more people about what we’re doing.

Profian is a member of two great industry bodies: the Confidential Computing Consortium (a Linux Foundation project to promote open source around Trusted Execution Environments) and the Bytecode Alliance (an industry group to promote and nurture WebAssembly, the runtime which Enarx supports).

The other important thing to announce is that with the funding of Profian comes our chance to develop Enarx and its community into something really special.

If it’s your thing, you can find the press release on Business Wire, and more information on the company press page.

A few questions and answers

What’s confidential computing?

I tend to follow the Confidential Computing Consortium’s definition: “Confidential Computing protects data in use by performing computation in a hardware-based Trusted Execution Environment”.

What does Profian mean?

It’s Anglo-Saxon, the language also sometimes called “Old English”, which was spoken in (modern day) England and parts of Scotland from around the mid-5th century CE to 1066, when Norman French had such an impact on the language that it changed (to Middle English).

One online Anglo-Saxon dictionary defines profian thus:

profian – 1. to esteem; regard as 2. to test; try; prove 3. to show evidence of; evince

It’s the root of the English word “to prove”, from which we also get “proof” and “proven”. We felt that this summed up much of what we want to be doing, and is nicely complementary to Enarx.

How is Profian pronounced?

Not the way most pre-Conquest Anglo-Saxons would probably have pronounced it, to be honest. We (well, I) thought about trying to go with a more “authentic” pronunciation, and decided (or was convinced…) that it was too much trouble. We’re going with “PROH-vee-uhn”[2].

What does Enarx mean?

You’ll find more information about this (and how to pronounce Enarx), over at the Enarx FAQ. TL;DR – we made it up.

Who’s part of the company?

Well, there’s me (I’m the CEO), Nathaniel McCallum (the CTO) and a small team of developers. We also have Nick Vidal, who we recruited as Community Manager for Enarx. By the beginning of October, we expect to have six employees in five different countries spread across three separate continents[3].

What’s next?

Well, lots of stuff. There’s so much involved in running a company, about which I knew next to nothing when we started. You would not believe the amount of work involved in registering a company, setting up bank accounts, recruiting people, paying people, paying invoices, etc. – and that’s not even about creating products. We absolutely plan to do that (or the investors are not going to be happy).

No – what’s next for this blog?

Ah, right. Well, I plan to keep it going. There will be more articles about my book on trust, security, open source and probably VCs, funding and the rest. There have been quite a few topics I’ve just not felt safe blogging about until Profian came out of stealth mode. Keep an eye out.


1 – there are more coming, such as Arm CCA (also known as “Realms”), and Intel’s TDX – we plan to support these as they become available.

2 – Anglo-Saxons would probably have gone with something more like “PRO-fee-an”, where the “o” sounds like the “o” in “pop”.

3 – yes, I know we’ve not made it easy on ourselves.

Trust book – chapter index and summary

I thought it might be interesting to provide the chapter index and a brief summary of what each chapter addresses.

In a previous article, I presented the publisher’s blurb for my upcoming book with Wiley, Trust in Computer Systems and the Cloud. I thought it might be interesting, this time around, to provide the chapter index of the book and to give a brief summary of what each chapter addresses.

While it’s possible to read many of the chapters on their own, I have tried to maintain a logical progression of thought through the book, building on earlier concepts to provide a framework that can be used in the real world. It’s worth noting that the book is not about how humans trust – or don’t trust – computers (there’s a wealth of literature around this topic), but about how to consider the issue of trust between computing systems, or what we can say about assurances that computing systems can make, or can be made about them. This may sound complex, and it is – which is pretty much why I decided to write the book in the first place!

  • Introduction
    • Why I think this is important, and how I came to the subject.
  • Chapter 1 – Why Trust?
    • Trust as a concept, and why it’s important to security, organisations and risk management.
  • Chapter 2 – Humans and Trust
    • Though the book is really about computing and trust, and not humans and trust, we need a grounding in how trust is considered, defined and talked about within the human realm if we are to look at it in our context.
  • Chapter 3 – Trust Operations and Alternatives
    • What are the main things you might want to do around trust, how can we think about them, and what tools/operations are available to us?
  • Chapter 4 – Defining Trust in Computing
    • In this chapter, we delve into the factors which are specific to trust in computing, comparing and contrasting them with the concepts in chapter 2 and looking at what we can and can’t take from the human world of trust.
  • Chapter 5 – The Importance of Systems
    • Regular readers of this blog will be unsurprised that I’m interested in systems. This chapter examines why systems are important in computing and why we need to understand them before we can talk in detail about trust.
  • Chapter 6 – Blockchain and Trust
    • This was initially not a separate chapter, but is an important – and often misunderstood or misrepresented – topic. Blockchains don’t exist or operate in a logical or computational vacuum, and this chapter looks at how trust is important to understanding how blockchains work (or don’t) in the real world.
  • Chapter 7 – The Importance of Time
    • One of the important concepts introduced earlier in the book is the consideration of different contexts for trust, and none is more important to understand than time.
  • Chapter 8 – Systems and Trust
    • Having introduced the importance of systems in chapter 5, we move to considering what it means to establish a trust relationship from or to a system, and how the extent of what is considered part of the system is vital.
  • Chapter 9 – Open Source and Trust
    • Another topic whose inclusion is unlikely to surprise regular readers of this blog, this chapter looks at various aspects of open source and how it relates to trust.
  • Chapter 10 – Trust, the Cloud, and the Edge
    • Definitely a core chapter in the book, this addresses the complexities of trust in the modern computing environments of the public (and private) cloud and Edge networks.
  • Chapter 11 – Hardware, Trust, and Confidential Computing
    • Confidential Computing is a growing and important area within computing, but to understand its strengths and weaknesses, there needs to be a solid theoretical underpinning of how to talk about trust. This chapter also covers areas such as TPMs and HSMs.
  • Chapter 12 – Trust Domains
    • Trust domains are a concept that allow us to apply the lessons and frameworks we have discussed through the book to real-world situations at large scale. They also allow for modelling at the business level and for issues like risk management – introduced at the beginning of the book – to be considered more explicitly.
  • Chapter 13 – A World of Explicit Trust
    • Final musings on what a trust-centric (or at least trust-inclusive) view of the world enables and hopes for future work in the field.
  • References
    • List of works cited within the book.

Trust book preview

What it means to trust in the context of computer and network security

Just over two years ago, I agreed a contract with Wiley to write a book about trust in computing. It was a long road to get there, starting over twenty years ago, but what pushed me to commit to writing something was a conference I’d been to earlier in 2019 where there was quite a lot of discussion around “trust”, but no obvious underlying agreement about what was actually meant by the term. “Zero trust”, “trusted systems”, “trusted boot”, “trusted compute base” – all terms referencing trust, but with varying levels of definition, and differing understandings of what was being expected, by what components, and to what end.

I’ve spent a lot of time thinking about trust over my career and also have a major professional interest in security and cloud computing, specifically around Confidential Computing (see Confidential computing – the new HTTPS? and Enarx for everyone (a quest) for some starting points), and although the idea of a book wasn’t a simple one, I decided to go for it. This week, we should have the copy-editing stage complete (technical editing already done), with the final stage being proof-reading. This means that the book is close to done. I can’t share a definitive publication date yet, but things are getting there, and I’ve just discovered that the publisher’s blurb has made it onto Amazon. Here, then, is what you can expect.


Learn to analyze and measure risk by exploring the nature of trust and its application to cybersecurity 

Trust in Computer Systems and the Cloud delivers an insightful and practical new take on what it means to trust in the context of computer and network security and the impact on the emerging field of Confidential Computing. Author Mike Bursell’s experience, ranging from Chief Security Architect at Red Hat to CEO at a Confidential Computing start-up, grounds the reader in fundamental concepts of trust and related ideas before discussing the more sophisticated applications of these concepts to various areas in computing.

The book demonstrates the importance of understanding and quantifying risk and draws on the social and computer sciences to explain hardware and software security, complex systems, and open source communities. It takes a detailed look at the impact of Confidential Computing on security, trust and risk and also describes the emerging concept of trust domains, which provide an alternative to standard layered security.

  • Foundational definitions of trust from sociology and other social sciences, how they evolved, and what modern concepts of trust mean to computer professionals 
  • A comprehensive examination of the importance of systems, from open-source communities to HSMs, TPMs, and Confidential Computing with TEEs. 
  • A thorough exploration of trust domains, including explorations of communities of practice, the centralization of control and policies, and monitoring 

Perfect for security architects at the CISSP level or higher, Trust in Computer Systems and the Cloud is also an indispensable addition to the libraries of system architects, security system engineers, and master’s students in software architecture and security. 

Defining the Edge

How might we differentiate Edge computing from Cloud computing?

This is an edited excerpt from my forthcoming book on Trust in Computing and the Cloud for Wiley.

There’s been a lot of talk about the Edge, and almost as many definitions as there are articles out there. As usual on this blog, my main interest is around trust and security, so this brief look at the Edge concentrates on those aspects, and particularly on how we might differentiate Edge computing from Cloud computing.

The first difference we might identify is that Edge computing addresses use cases where consolidating compute resource in a centralised location (the typical Cloud computing case) is not necessarily appropriate, and pushes some or all of the computing power out to the edges of the network, where computing resources can process data which is generated at the fringes, rather than having to transfer all the data over what may be low-bandwidth networks for processing. There is no generally accepted single industry definition of Edge computing, but examples might include:

  • placing video processing systems in or near a sports stadium for pre-processing to reduce the amount of raw footage that needs to be transmitted to a centralised data centre or studio
  • providing analysis and safety control systems on an ocean-based oil rig to reduce reliance and contention on an unreliable and potentially low-bandwidth network connection
  • creating an Internet of Things (IoT) gateway to process and analyse data from environmental sensor units (IoT devices)
  • mobile edge computing, or multi-access edge computing (both abbreviated to MEC), where telecommunications services such as location and augmented reality (AR) applications are run on cellular base stations, rather than in the telecommunication provider’s centralised network location.

Unlike Cloud computing, where the hosting model is generally that computing resources are consumed by tenants – customers of a public cloud, for instance – in the Edge case, the consumer of computing resources is often the owner of the systems providing them (though this is not always the case). Another difference is the size of the host providing the computing resources, which may range from very large to very small (in the case of an IoT gateway, for example). One important factor about most modern Edge computing environments is that they employ the same virtualisation and orchestration techniques as the cloud, allowing more flexibility in deployment and lifecycle management over bare-metal deployments.

A table comparing the various properties typically associated with Cloud and Edge computing shows us a number of differences.


|                        | Public cloud computing | Private cloud computing | Edge computing      |
|------------------------|------------------------|-------------------------|---------------------|
| Location               | Centralised            | Centralised             | Distributed         |
| Hosting model          | Tenants                | Owner                   | Owner or tenant(s)  |
| Application type       | Generalised            | Generalised             | May be specialised  |
| Host system size       | Large                  | Large                   | Large to very small |
| Network bandwidth      | High                   | High                    | Medium to low       |
| Network availability   | High                   | High                    | High to low         |
| Host physical security | High                   | High                    | Low                 |

Differences between Edge computing and public/private cloud computing

In the table, I’ve described two different types of cloud computing: public and private. The latter is sometimes characterised as on premises or on-prem computing, but the point here is that rather than deploying applications to dedicated hosts, workloads are deployed using the same virtualisation and orchestration techniques employed in the public cloud, the key difference being that the hosts and software are owned and managed by the owner of the applications. Sometimes these services are actually managed by an external party, but in this case there is a close commercial (and concomitant trust) relationship to this managed services provider, and, equally important, single tenancy is assured (assuming that security is maintained), as only applications from the owner of the service are hosted[1]. Many organisations will mix and match workloads to different cloud deployments, employing both public and private clouds (a deployment model known as hybrid cloud) and/or different public clouds (a deployment model known as multi-cloud). All these models – public computing, private computing and Edge computing – share an approach in common: in most cases, workloads are not deployed to bare-metal servers, but to virtualisation platforms.

Deployment model differences

What is special about each of the models and their offerings if we are looking at trust and security?

One characteristic that the three approaches share is scale: they all assume that hosts will run multiple workloads per host – though the number of hosts and the actual size of the host systems is likely to be highest in the public cloud case, and lowest in the Edge case. It is this high workload density that makes public cloud computing in particular economically viable, and one of the reasons that it makes sense for organisations to deploy at least some of their workloads to public clouds, as Cloud Service Providers can employ economies of scale which allow them to schedule workloads onto their servers from multiple tenants, balancing load and bringing sets of servers in and out of commission (a computation- and time-costly exercise) infrequently. Owners and operators of private clouds, in contrast, need to ensure that they have sufficient resources available for possible maximum load at all times, and do not have the opportunities to balance loads from other tenants unless they open up their on premises deployment to other organisations, transforming themselves into Cloud Service Providers and putting them into direct competition with existing CSPs.

It is this push for high workload density which is one of the reasons for the need for strong workload-from-workload (type 1) isolation: in order to maintain high density, cloud owners need to be able to mix workloads from multiple tenants on the same host. Tenants are mutually untrusting; they are in fact likely to be completely unaware of each other, and, if the host is doing its job well, unaware of the presence of other workloads on the same host as them. More important than this property, however, is a strong assurance that their workloads will not be negatively impacted by workloads from other tenants. Although negative impact can occur in contexts other than computation – such as storage contention or network congestion – the focus is mainly on the isolation that hosts can provide.

The likelihood of malicious workloads increases with the number of tenants, but reduces significantly when the tenant is the same as the host owner – the case for private cloud deployments and some Edge deployments. Thus, the need for host-from-workload (type 2) isolation is higher for the public cloud – though the possibility of poorly written or compromised workloads means that it should not be neglected for the other types of deployment.

One final difference between the models is that for both public and private cloud deployments the physical vulnerability of hosts is generally considered to be low[2], whereas the opportunities for unauthorised physical access to Edge computing hosts are considered to be much higher. You can read a little more about the importance of hardware as part of the Trusted Compute Base in my article Turtles – and chains of trust, and it is a fundamental principle of computer security that if an attacker has physical access to a system, then the system must be considered compromised, as it is, in almost all cases, possible to compromise the confidentiality, integrity and availability of workloads executing on it.

All of the above are good reasons to apply Confidential Computing techniques not only to cloud computing, but to Edge computing as well: that’s a topic for another article.


1 – this is something of a simplification, but is a useful generalisation.

2 – Though this assumes that people with authorised access to physical machines are not malicious, a proposition which cannot be guaranteed, but for which monitoring can at least be put in place.