Enarx and Pi (and Wasm)

It’s not just Raspberry Pi, but also Macs.

A few weeks ago, I wrote a blog post entitled WebAssembly: the importance of language(s), in which I talked about how important it is for Enarx that WebAssembly supports multiple languages. We want to make it easy for as many people as possible to use Enarx. Today, we have a new release of Enarx – Elmina Castle – and with it comes something else very exciting: Raspberry Pi support. In fact, there’s loads more in this release – it’s not just Raspberry Pi, but also Macs – but I’d like to concentrate on what this means.

As of this release, you can run WebAssembly applications on your Raspberry Pi, using Enarx. Yes, that’s right: you can take your existing Raspberry Pi (as long as it’s running a 64-bit kernel) and run Wasm apps with the Enarx framework.

While the Enarx framework provides the ability to deploy applications in Keeps (TEE[1] instances), one of the important features that it also brings is the ability to run applications outside these TEEs so that you can debug and test your apps. The ability to do this much more simply is what we’re announcing today.

3 reasons this is important

1. WebAssembly just got simpler

WebAssembly is very, very hot at the moment, and there’s a huge movement behind adoption of WASI, which is designed for server-based (that is, non-browser) applications that want to take advantage of all the benefits that Wasm brings – cross-architecture support, a strong security model, performance and the rest.

As noted above, Enarx is about running apps within Keeps, protected within TEE instances, but access to the appropriate hardware to do this is difficult. We wanted to make it simple for people without direct access to the hardware to create and test their applications on whatever hardware they have, and lots of people have Raspberry Pis (or Macs).

Of course, some people may just want to use Enarx simply to run their Wasm applications and, while that’s not the main goal of the project, that’s just fine!

2. Tapping the Pi dev community

The Raspberry Pi community is one of the most creative and vibrant communities out there. It’s very open source friendly, and Raspberry Pi hardware is designed to be cheap and accessible to as many people as possible. We’re very excited about allowing anyone with access to a Pi to start developing WebAssembly and deploying apps with Enarx.

The Raspberry Pi community also has a (deserved) reputation for coming up with new and unexpected uses for technology, and we’re really interested to see what new applications arise: please tell us.

3. Preparing for Armv9 Realms

Last, and far from least, is the fact that in 2021, Arm announced their CCA (Confidential Compute Architecture), coming out with the Armv9 architecture. This will allow the creation of TEEs called Realms, which we’re looking forward to supporting with Enarx. Running Enarx on existing Arm hardware (which is what powers Raspberry Pis) is an important step towards that goal. Extending Enarx Keeps beyond the x86 architecture (as embodied by the Intel SGX and AMD SEV architectures) has always been a goal of the project, and this provides a very important first step which will allow us to move much faster when chips with the appropriate capabilities start becoming available.

How do I try it on my Raspberry Pi?

First, you’ll need a Raspberry Pi running a 64-bit kernel. Instructions for this are available over at the Raspberry Pi OS pages, and the good news is that the default installer can easily put this on all of the more recent hardware models.

Next, you’ll need to follow the instructions over at the Enarx installation guide. That will walk you through it, and if you have any problems, you can (and should!) report them by chatting with the community over at our chat, or by searching for/adding bug issues at our issue tracker.
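If you want a first workload to try once you’re set up, something as small as the following will do. This is just a sketch: the build commands in the comments assume a Rust toolchain with the wasm32-wasi target, and the enarx run invocation is the one I’d expect from the installation guide, so do check those docs for the exact steps for your release.

```rust
// hello.rs – a trivial workload to check the toolchain end to end.
//
// Build (assuming the wasm32-wasi target is installed via rustup):
//   rustup target add wasm32-wasi
//   rustc --target wasm32-wasi hello.rs -o hello.wasm
//
// Run it with Enarx (check the installation guide for the exact
// invocation for your release):
//   enarx run hello.wasm
fn main() {
    // stdout from the workload is passed back out of the Keep, so you
    // should see this line on your terminal.
    println!("Hello from a Wasm app running under Enarx!");
}
```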

We look forward to hearing how you’re doing. If you think this is cool (and we certainly do!), then please head to our main repository at https://github.com/enarx/enarx and give us a star.


1 – Trusted Execution Environments, such as Intel’s SGX and AMD’s SEV.

Image: Michael H. („Laserlicht“) / Wikimedia Commons

More Enarx milestones

It’s been a big month for Enarx.

It’s been a big month for Enarx. Last week, I announced that we’d released Enarx 0.3.0 (Chittorgarh Fort), with some big back-end changes, and some new functionality as well. This week, the news is that we’ve hit a couple of milestones around activity and involvement in the project.

1500 commits

The first milestone is 1500 commits to the core project repository. When you use a git-based system, each time you make a change to a file or set of files (including deleting old ones, creating new ones and editing or removing sections), you create a new commit. Each commit has a rather long identifier, and its position in the project is also recorded, along with the name provided by the committer and any comments. Commit 1500 to the enarx repository was from Nathaniel McCallum, and entitled feat(wasmldr): add Platform API. He committed it on Saturday, 2022-03-19, and its commit identifier is 8ec77de0104c0f33e7dd735c245f3b4aa91bb4d2.

I should point out that this isn’t the 1500th commit to the Enarx project, but the 1500th commit to the enarx/enarx repository on GitHub. This is the core repository for the Enarx project, but there are quite a few others, some of which also have lots of commits. As an example, the enarx/enarx-shim-sgx repository, which provides some SGX-specific capabilities within Enarx, had 968 commits at the time of writing.

500 GitHub stars

The second milestone is 500 GitHub stars. Stars are a measure of how popular a repository or project is, and you can think of them as GitHub’s equivalent of a “like” on social media: people who are interested in it can easily click a button on the repository page to “star” it (they can “unstar” it, too, if they change their mind). We only tend to count stars on the main enarx/enarx repository, as that’s the core one for the Enarx project. The 500th star was given to the project by a GitHub user going by the username shebuel-oss, a self-described “Open Source contributor, Advocate, and Community builder”: we’re really pleased to have attracted their interest!

There’s a handy little website called GitHub Star History which allows you to track the addition (or removal!) of stars to a project over time, and to compare projects with each other. You can check the status of Enarx whenever you’re reading this by following this link, but for the purposes of this article, the important question is: how did we get to 500? Here’s a graph:

Enarx GitHub star history to 500 stars

You’ll see a nice steep line towards the end, which corresponds to Nick Vidal’s influence as community manager, actively working to encourage more interest, involvement and contributions to the Enarx project.

Why do these numbers matter?

Objectively, they don’t, if I’m honest: we could equally easily have chosen a nice power of two (like 512) for the number of stars, or the year that Michelangelo started work on the statue of David (1501) for the number of commits. Most humans, however, like round decimal numbers, and the fact that we hit 1500 commits and 500 stars within a couple of days of each other provides a nice visual symmetry.

Subjectively, there’s the fact that we get to track the growth in interest (and the acceleration in growth) and contribution via these two measurements and their historical figures. The Enarx project is doing very well by these criteria, and that means that we’re beginning to get more visibility of the project. This is good for the project, it’s good for Profian (the company Nathaniel and I founded last year to take Enarx to market) and I believe that it’s good for Confidential Computing and open source more generally.

But don’t take my word for it: come and find out about the project and get involved.

Enarx 0.3.0 (Chittorgarh Fort)

Write some applications and run them in an Enarx Keep.

I usually post on a Tuesday, but this week I wanted to wait for a significant event: the release of Enarx v0.3.0, codenamed “Chittorgarh Fort”. This happened after I’d gone to bed, so I don’t feel too bad about failing to post on time. I announced Enarx nearly three years ago, in the article Announcing Enarx on the 7th May 2019, and it’s admittedly taken us a long time to get to where we are now. That’s largely because we wanted to do it right, and building up a community, creating a start-up and hiring folks with the appropriate skills is difficult. The design has evolved over time, but the core principles and core architecture are the same as when we announced the project.

You can find more information about v0.3.0 at the release page, but I thought I’d give a few details here and also briefly add to what’s on the Enarx blog about the release.

What’s Enarx?

Enarx is a deployment framework for running applications within Trusted Execution Environments (TEEs). We provide a WebAssembly runtime and – this is new functionality that we’ve started adding in this release – attestation so that you can be sure that your application is protected within a TEE instance.

What’s new in v0.3.0?

A fair amount of the development for this release has been in functionality which won’t be visible to most users, including a major rewrite of the TEE/host interface component that we call sallyport. You will, however, notice that TLS support has been added to network connections from applications within the Keep. This is transparent to the application, so “Where does the certificate come from?” I hear you ask. The answer is: from the attestation service that’s also part of this release. We’ll be talking more about that in further releases and articles, but key to the approach we’re taking is that interactions with the service (we call it the “Steward”) are pretty much transparent to users and applications.

How can I get involved?

What can you do to get involved? Well, visit the Enarx website, look at the code and docs over at our GitHub repositories (please star the project!), and get involved in the chat. The very best thing you can do, having looked around, is to write some applications and run them in an Enarx Keep. And then tell us about your experience. If it worked first time, then wow! We’re still very much in development, but we want to amass a list of applications that are known to work within Enarx, so tell us about it. If it doesn’t work, then please also tell us about it, and have a look at our issues page to see if you’re the first person to run across this problem. If you’re not, then please add your experiences to an existing issue, but if you are, then create a new one.

Enarx isn’t production ready, but it’s absolutely ready for initial investigations (as shown by our interns, who created a set of demos for v0.2.0, curated and aided by our community manager Nick Vidal).

Why Chittorgarh Fort?

It’s worth having a look at the Wikipedia entry for the fort: it’s really something! We decided, when we started creating official releases, that we wanted to go with the fortification theme that Enarx has adopted (that’s why you deploy applications to Enarx Keeps – a keep is the safest part of a castle). We started with Alamo, then went to Balmoral Castle, and then to Chittorgarh Fort (we’re trying to go with alphabetically sequential examples as far as we can!). I suggested Chittorgarh Fort to reflect the global nature of our community, which happens to include a number of contributors from India.

Who was involved?

I liked the fact that the Enarx blog post mentioned the names of some (most?) of those involved, so I thought I’d copy the list of github account names from there, with sincere thanks:

@MikeCamel @npmccallum @haraldh @connorkuehl @lkatalin @mbestavros @wgwoods @axelsimon @ueno @ziyi-yan @ambaxter @squidboylan @blazebissar @michiboo @matt-ross16 @jyotsna-penumaka @steveeJ @greyspectrum @rvolosatovs @lilienbm @CyberEpsilon @kubkon @nickvidal @uudiin @zeenix @sagiegurari @platten @greyspectrum @bstrie @jarkkojs @definitelynobody @Deepansharora27 @mayankkumar2 @moksh-pathak


Rahultalreja11 at English Wikipedia, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons

What’s a secure channel?

Always beware of products and services which call themselves “secure”. Because they’re not.

A friend asked me a couple of months ago what I considered a secure channel, and it made me think. Many of us have information that we wish to communicate which we’d rather other people couldn’t look at, for all sorts of reasons. These might range from present ideas for a spouse or partner, sent by a friend to my phone, to diplomatic communications about espionage targets sent between embassies over the Internet, with lots in between: intellectual property discussions, bank transactions and much else. Sometimes, we want to ensure that people can’t change what’s in the messages we send: it might be OK for other people to know that I pay £300 in rent, but not for them to be able to change the amount (or the bank account into which it goes). These two properties are referred to as confidentiality (keeping information secret) and integrity (keeping information unchangeable), and often you want to combine them – in the case of our espionage plans, I’d prefer that my enemies don’t know what targets are at risk, but also that they don’t change the targets I’ve selected to something less bothersome for them.
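For the technically minded, here’s what those two properties look like in code: a minimal sketch using Rust and the chacha20poly1305 crate (my choice purely for illustration; any modern AEAD library behaves the same way), in which a single authenticated-encryption call gives you both confidentiality and integrity.

```rust
// A minimal AEAD (authenticated encryption) sketch: the ciphertext
// reveals nothing about the message (confidentiality) and any
// tampering makes decryption fail (integrity).
// Cargo.toml: chacha20poly1305 = "0.10"
use chacha20poly1305::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    ChaCha20Poly1305,
};

fn main() {
    let key = ChaCha20Poly1305::generate_key(&mut OsRng);
    let cipher = ChaCha20Poly1305::new(&key);
    // The nonce must be unique for every message sent under a key.
    let nonce = ChaCha20Poly1305::generate_nonce(&mut OsRng);

    let message = b"pay 300 GBP rent to account 12345678";
    let ciphertext = cipher
        .encrypt(&nonce, message.as_slice())
        .expect("encryption failed");

    // The intended recipient, holding the key, recovers the message...
    let plaintext = cipher
        .decrypt(&nonce, ciphertext.as_slice())
        .expect("decryption failed");
    assert_eq!(plaintext.as_slice(), message.as_slice());

    // ...but if anyone flips even a single bit in transit, decryption
    // fails outright rather than yielding a silently altered message.
    let mut tampered = ciphertext.clone();
    tampered[0] ^= 0x01;
    assert!(cipher.decrypt(&nonce, tampered.as_slice()).is_err());
}
```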

Modern encryption systems generally provide both confidentiality and integrity for messages and data, so I’m going to treat these as standard properties for an encrypted channel. Which means that if I use encryption on a channel, it’s secure, right?

Hmm. Let’s step back a bit, because, unfortunately, there’s rather a lot more to unpack than that. Three of the questions we need to tackle should give us pause. They are: “secure from whom?”, “secure for how long?” and “secure where?”. The answers we give to these questions will be important, and though they are all somewhat intertwined, I’m going to deal with them in order, and I’m going to use the examples of the espionage message and the present ideas to discuss them. I’m also going to talk more about confidentiality than integrity – though we’ll assume that both properties are important to what we mean by “secure”.

Secure from whom?

In our examples, we have very different sets of people wanting to read our messages – a nation state and my spouse. Unless my spouse has access to skills and facilities of which I’m unaware (and I wouldn’t put it past her), the resources that she has at her disposal to try to break the security of my communication are both fewer and less powerful than those of the nation state. A nation state may be able to apply cryptologic attacks to messages, attack the software (and even firmware or hardware) implementations of the encryption system, mess with the amount of entropy available for key generation at either or both ends of the channel, perform interception (e.g. Person-In-The-Middle) attacks, coerce the sender or recipient of the message, and more. I’m hoping that most of the above are not options for my wife (though coercion might be, I suppose!). The choices of encryption system, entropy sources, cipher suite(s), and hardware and software implementation are all vital in the diplomatic message case, as are the vetting of staff and many other issues. In the case of gift ideas for my wife’s birthday, I’m assuming that a standard implementation of a commercial messaging system should be enough.

Secure for how long?

It’s only a few days till my wife’s birthday (yes, I have got her a present, though that does remind me: I need a card…), so I only have to keep the gift ideas secure for a little longer. It turns out that, in this case, the time sensitivity of the integrity of the message is different to that of the confidentiality: even if she managed to change what the gift idea in the message was, it wouldn’t make a difference to what I’ve got her at this point. However, I’d still prefer it if she didn’t know what the gift ideas are.

In the case of the diplomatic espionage message, we can assume that confidentiality and integrity are both important for a much longer time, but we’ll concentrate on the confidentiality. Obviously an attacking country would prefer it if the target were unaware of an attack before it happened, but if the enemy managed to prove that an attack was performed by the message sender’s or recipient’s country, even a decade or more in the future, this could also lead to major (and negative) consequences. We want to ensure that whatever steps we take to protect the message are sufficient that access to a copy of the message taken when it was sent (via wire-tapping, for instance) or retrieved at a later date (via access to a message store in the future) is insufficient to allow it to be cracked. This is tricky, and the history of cryptologic attacks on encryption schemes, not to mention human failures (such as leaks) and advances in computation (such as quantum computing), should serve as a strong warning that we need to consider very carefully what mechanisms we should use to protect our messages.

Secure where?

Are the embassies secure? Are all the machines between the embassies secure? Is the message stored before delivery? If so, is it stored on a machine within the embassy or on a server elsewhere? Is it end-to-end encrypted, or is it decrypted before delivery and then re-encrypted (I really, really hope not)? While this is unlikely in the case of diplomatic messages, a good number of commercially sensitive messages (including much email) are not end-to-end encrypted, leading to vulnerabilities if someone trying to break the security can get access to the system where they are stored, or intercept them between decryption and re-encryption.

Typically, we have better control over some parts of the infrastructure which carries or hosts our communications than we do over others. For most of the article above, I’ve generally assumed that the nation state trying to read the embassy messages is going to have more relevant resources to try to breach the security of the message than my wife does, but there’s a significant weakness in protecting my wife’s gift idea: she has easy access to my phone. I tend to keep it locked, and it has a PIN, but, if I’m honest, I don’t tend to go out of my way to keep her out: the PIN is to deter someone who might steal it. Equally, it’s entirely possible that I may be sharing some material (a video or news article) with her at exactly the time that the gift idea message arrives from our mutual friend, leading her to see the notification. In either case, there’s a good chance that the property of confidentiality is not that strong after all.

Conclusion

I’ve said it before, and I plan to say it again (and again, and again): there is no “secure”. When we talk about secure channels, we must be aware that what we mean should be “channels secured with appropriate measures to protect against the risks associated with the security being compromised”. This is a long way of saying “if I’m protecting diplomatic messages, I need to make greater efforts than if I’m trying to stop my wife finding out ahead of time what she’s getting for her birthday”, but it’s important to understand this. Part of the problem is that we’re bombarded with words like “secure”, which are unqualified, and which may lead us to think that they’re absolute, when they’re absolutely not. Another part of the problem is the assumption that once we’ve put one type of security in place, particularly when it’s sold or marketed as “best in breed” or “best practice”, it addresses all of the issues we might have. This is clearly not the case – using the strongest encryption possible for messages between my friend and me isn’t going to stop my wife from knowing what I’ve bought her if she knows the PIN for my phone. Please, please, consider what you need when you’re protecting your communications (and other data, of course), and always beware of products and services which call themselves “secure”. Because they’re not.

Enarx 0.2.0 – Balmoral Castle

Now it’s possible to write applications that you can talk to over the network

The big news this week from the Enarx project is our latest release: 0.2.0, which is codenamed “Balmoral Castle”, to continue with our castle/fortification theme.

The big change in Enarx 0.2.0 is the addition of support for networking. Until now, there wasn’t much you could really do in an Enarx Keep, honestly: you could run an application, but all it could do for input and output was read from stdin and write to stdout or stderr. While this was enough to prove that you could write and compile applications to WebAssembly and run them, any more complex interaction with the world outside the Keep was impossible.

So, why is this big news? Well, now it’s possible to write applications that you can talk to over the network. The canonical example which we’ve provided as part of the release is a simple “echo” server, which you start in a Keep and which then listens on a port for incoming connections. You make a connection (for instance using the simple command-line utility ncat), and send it a line of text. The server accepts the connection, receives the text and sends it right back to you. It can handle multiple connections and will send the text back to the right one (hopefully!).
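To give a flavour of what such a guest can look like, here’s a single-connection sketch in Rust, compiled to wasm32-wasi. It makes a couple of assumptions which you should check against the actual echo example shipped with the release: that the runtime pre-opens the listening socket and hands it to the workload as file descriptor 3, and that your toolchain’s wasm32-wasi standard library supports accepting connections.

```rust
// echo.rs – a single-connection echo server sketch for wasm32-wasi.
use std::io::{Read, Write};
use std::net::TcpListener;
use std::os::wasi::io::FromRawFd;

fn main() -> std::io::Result<()> {
    // Assumption: the runtime pre-opens the listening socket and passes
    // it to the workload as file descriptor 3.
    let listener = unsafe { TcpListener::from_raw_fd(3) };
    loop {
        // Wait for a client (for instance: ncat localhost <port>).
        let (mut stream, _addr) = listener.accept()?;
        let mut buf = [0u8; 1024];
        loop {
            let n = stream.read(&mut buf)?;
            if n == 0 {
                break; // client closed the connection
            }
            // Send the bytes straight back to the sender.
            stream.write_all(&buf[..n])?;
        }
    }
}
```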

This is new functionality with Enarx 0.2.0, and the ability to use networking mirrors an important change within the WASI (WebAssembly System Interface) specification, which is implemented by the runtime within an Enarx Keep. Specifically, an update to WASI snapshot preview 1, released in January 2022, added support for the ACCEPT capability on sockets. The way that WASI works with managing permissions and capabilities is carefully designed, and we (the Profian folks working on Enarx) coordinated closely with the open source WASI/Wasm community to add this in a way which is consistent with the design philosophy of the project. Once the capability was added to the snapshot, there was one more step needed before Enarx could implement support, which was that it needed to appear in wasmtime, the WebAssembly runtime we use within Keeps to allow you to run your applications. This happened last week, in wasmtime release 0.34.0, and that allowed us to make this new release of Enarx.

This may not sound very exciting … but with this in place, you can start to build proper applications and micro-services. What about an HTTP server? A ROT13 “encryption” service? A chatbot? An email server? A Wordle implementation[1]? And it’s not just text that you can send over a network connection, of course. What might you write to process other types of data? A timestamp server? A logging service? With a network connection, you have the ability to write any of these. Micro-services are all about accepting connections, processing the data that’s come in, and then sending out the results. All of that is possible with this new release.

What we’d love you to do is to start writing applications (using networking) and running them in Enarx. Tell us what works – even better, tell us what doesn’t – by creating an issue in our GitHub repository. Please publish examples, join our chat channels, give us a GitHub star, and get involved.

What’s coming next? Well, keep an eye on the Enarx site, but be assured that I’ll announce major news here as well. You can expect work in attestation and deployment in the near future – watch this space…


1 – at time of writing, everyone’s talking about Wordle. For those of you coming from the future (say a couple of weeks from now), you can probably ignore this example.

[Image of Edward VII at Balmoral Castle from Wikimedia].

Open source and cyberwar

If cyberattacks happen to the open source community, the impact may be greater than you expect.

There are some things that it’s more comfortable not thinking about, and one of them is war. For many of us, direct, physical violence is a long way from us, and that’s something for which we can be very thankful. As the threat of physical violence recedes, however, it’s clear that the spectre of cyberattacks as part of a response to aggression – physical or virtual – is becoming more and more likely.

It’s well attested that many countries have “cyber-response capabilities”, and those will include aggressive as well as protective measures. And some nation states have made it clear not only that they consider cyberwarfare part of any conflict, but that they would be entirely comfortable with initiating cyberwarfare with attacks.

What, you should probably be asking, has that to do with us? And by “us”, I mean the open source software community. I think that the answer, I’m afraid, is “a great deal”. I should make it clear that I’m not speaking from a place of privileged knowledge here, but rather from thoughtful and fairly informed opinion. But it occurs to me that the “old style” of cyberattacks, against standard “critical infrastructure” like military installations, power plants and the telephone service, was clearly obsolete when the Two Towers collapsed (if not in 1992, when the film Sneakers hypothesised attacks against targets like civil aviation). Which means that any type of infrastructure or economic system is a target, and I think that open source is up there. Let me explore two ways in which open source may be a target.

Active targets

If we had been able to pretend that open source wasn’t a core part of the infrastructure of nations all over the globe, that self-delusion was finally wiped away by the log4j vulnerabilities and attacks. Open source is everywhere now, and whether or not your applications are running any open source, the chances are that you deploy applications to public clouds running open source, at least some of your employees use an open source operating system on their phones, and that the servers running your chat channels, email providers, Internet providers and beyond make use – extensive use – of open source software: think apache, think bind, think kubernetes. At one level, this is great, because it means that it’s possible for bugs to be found and fixed before they can be turned into vulnerabilities, but that’s only true if enough attention is being paid to the code in the first place. We know that attackers will have been stockpiling exploits, and many of them will be against proprietary software, but given the amount of open source deployed out there, they’d be foolish not to be collecting exploits against that as well.

Passive targets

I hate to say it, but there also are what I’d call “passive targets”, those which aren’t necessarily first tier targets, but whose operation is important to the safe, continued working of our societies and economies, and which are intimately related to open source and open source communities. Two of the more obvious ones are GitHub and GitLab, which hold huge amounts of our core commonwealth, but long-term attacks on foundations such as the Apache Foundation and the Linux Foundation, let alone kernel.org, could also have impact on how we, as a community, work. Things are maybe slightly better in terms of infrastructure like chat services (as there’s a choice of more than one, and it’s easier to host your own instance), but there aren’t that many public servers, and a major attack on either them or the underlying cloud services on which many of them rely could be crippling.

Of course, the impact on your community, business or organisation will depend on your usage of different pieces of infrastructure, how reliant you are on them for your day-to-day operation, and what mitigations you have available to you. Let’s quickly touch on that.

What can I do?

The Internet was famously designed to route around issues – attacks, in fact – and that helps. But, particularly where there’s a pretty homogeneous software stack, attacks on infrastructure could still have very major impact. Start thinking now:

  • how would I support my customers if my main chat server went down?
  • could I continue to develop if my main git provider became unavailable?
  • would we be able to offer at least reduced services if a cloud provider lost connectivity for more than an hour or two?

By doing an analysis of what your business dependencies are, you have the opportunity to plan for at least some of the contingencies (although, as I note in my book, Trust in Computer Systems and the Cloud, the chances of your being able to analyse the entire stack, or discover all of the dependencies, is lower than you might think).

What else can you do? Patch and upgrade – make sure that whatever you’re running is the highest (supported!) version. Make back-ups of anything which is business critical. This should include not just your code but issues and bug-tracking, documentation and sales information. Finally, consider having backup services available for time-critical services like a customer support chat line.

Cyberattacks may not happen to your business or organisation directly, but if they happen to the open source community, the impact may be greater than you expect. Analyse. Plan. Mitigate.

“Trust in Computer Systems and the Cloud” published

I’ll probably have a glass or two of something tonight.

It’s official: my book is now published and available in the US! What’s more, my author copies have arrived, so I’ve actually got physical copies that I can hold in my hand.

You can buy the book at Wiley’s site here, and pre-order it from Amazon (the US site lists it as “currently unavailable”, and the UK site lists it as available from 22nd Feb, 2022, though hopefully it’ll be a little earlier than that). Other bookstores are also stocking it.

I’m over the moon: it’s been a long slog, and I’d like to acknowledge not only those I mentioned in last week’s post (Who gets acknowledged?), but everybody else. Particularly, at this point, everyone at Wiley, calling out specifically Jim Minatel, my commissioning editor. I’m currently basking in the glow of something completed before getting back to my actual job, as CEO of Profian. I’ll probably have a glass or two of something tonight. In the meantime, here’s a quote from Bruce Schneier to get you thinking about reading the book.

Trust is a complex and important concept in network security. Bursell neatly unpacks it in this detailed and readable book.

Bruce Schneier, author of Liars and Outliers: Enabling the Trust that Society Needs to Thrive

At least you know what to buy your techy friends for Christmas!

Who gets acknowledged?

Some of the less obvious folks who get a mention in my book, and why

After last week’s post, noting that my book was likely to be delayed, it turns out that it may be available sooner than I’d thought. Those of you in the US should be able to get hold of a copy first – possibly sooner than I do. The rest of the world should have availability soon after. While you’re all waiting for your copy, however, I thought it might be fun for me to reveal a little about the acknowledgements: specifically, some of the less obvious folks who get a mention, and why they get a mention.

So, without further ado, here’s a list of some of them:

  • David Braben – in September 1984, not long after my 14th birthday, the game Elite came out on the BBC micro. I was hooked, playing for as long and as often as I was allowed (which wasn’t as much as I would have liked, as we had no monitor, and I had to hook the BBC up to the family TV). I first had the game on cassette, and then convinced my parents that a (5.25″) floppy drive would be a good educational investment for me, thereby giving me the ability to play the extended (and much quicker loading) version of the game. Fast forward to now, and I’m still playing the game which, though it has changed and expanded in many ways, is still recognisably the same one that came out 37 years ago. David Braben was the initial author, and still runs the company (Frontier Developments) which creates, runs and supports the game. Elite excited me, back in the 80s, with what computers could do, leading me to look into wireframes, animation and graphics.
  • Richard D’Silva – Richard was the “head of computers” at the school I attended from 1984-1989. He encouraged me (and many others) to learn what computers could do, all the way up to learning Pascal and Assembly language to supplement the (excellent) BASIC available on the BBC Bs and BBC Masters which the school had (and, latterly, some RISC machines). There was a basic network, too, an “Econet”, and this brought me to my initial research into security – mainly as a few of us tried (generally unsuccessfully) to access machines and accounts we weren’t supposed to.
  • William Gibson – Gibson wrote Neuromancer – and then many other novels (and short stories) – in the cyberpunk genre. His vision of engagement with technology, always flawed and often leading to disaster, has yielded some of the most exciting and memorable situations and characters in scifi (Molly, we love you!).
  • Nick Harkaway – I met Nick Cornwell (who writes as Nick Harkaway) at the university Jiu Jitsu club, but he became a firm friend beyond that. Always a little wacky, interested maybe more in the social impacts of technology than tech for tech’s sake (more my style then), he always had lots of interesting opinions to share. When he started writing, his wackiness and thoughtfulness around how technology shapes us informed his fiction (and non-fiction). If you haven’t read The Gone-Away World, order it now (and read it after you’ve finished my book!).
  • Anne McCaffrey – while I’m not an enormous fan of fantasy fiction (and why do scifi and fantasy always seem to be combined in the same section in bookshops?), Anne McCaffrey’s work was a staple in my teenage years. I devoured her Dragonriders of Pern series and also enjoyed her (scifi) Talents series as that emerged. One of the defining characteristics of her books was always strong female characters – a refreshing change for a genre which, at the time, seemed dominated by male protagonists. McCaffrey also got me writing fiction – at one point, my school report in English advised that “Michael has probably written enough science fiction for now”.
  • Mrs Macquarrie (Jenny) – Mrs Macquarrie was my Maths teacher from around 1978-1984. She was a redoubtable Scot, known to the wider world as wife of the eminent theologian John Macquarrie, but, in the universe of the boarding school I attended, she was the strict but fair teacher who not only gave me a good underpinning in Maths, but also provided a “computer club” at the weekends (for those of us who were boarding), with her ZX Spectrum.
  • Sid Meier – I’ve played most of the “Sid Meier’s Civilization” (sic) series of games over the past several decades(!) from the first, released in 1991. In my university years, there would be late night sessions with a bunch of us grouped around the monitor, eating snacks and drinking whatever we could afford. These days, having a game running on a different monitor can still be rewarding when there’s a boring meeting you have to attend…
  • Bishop Nick – this one is a trick, and shouldn’t really appear in this section, as Bishop Nick isn’t a person, but a local brewery. They brew some great beers, however, including “Heresy” and “Divine”. Strongly recommended if you’re in the Northeast Essex/West Suffolk area.
  • Melissa Scott – Scott’s work probably took the place of McCaffrey’s as my reading tastes matured. Night Sky Mine, Trouble and her Friends and The Jazz provided complex and nuanced futures, again with strong female protagonists. The queer undercurrents in her books – most if not all of her books have some connection with queer themes and cultures – were for me an introduction to a different viewpoint on writing and sexuality in “popular” fiction, beyond the more obvious and “worthy” literary treatments with which I was already fairly familiar.
  • Neal Stephenson – Nick Harkaway/Cornwell (see above) introduced me to Snow Crash when it first came out, and I managed to get a UK trade paperback copy. Stephenson’s view of a cyberpunk future, different from Gibson’s and full of linguistic and cultural craziness, hooked me, and I’ve devoured all of his work since. You can’t lose with Snow Crash, but my other favourite of his is Cryptonomicon, a book which zig-zags between the present day (well, early 2000s, probably) and the Second World War, embracing cryptography, religion, computing, gold, civil engineering and start-up culture. It’s on the list of books I suggest for anyone considering getting into security, because the mindset shown by a couple of the characters really nails what it’s all about.

There are more people mentioned, but these are the ones most far removed either in time from or direct relevance to the writing of the book. I’ll leave those more directly involved, or just a little more random, for you to discover as you read.

Don’t forget: if you follow this blog, you’re in with a chance to win a free copy of Trust in Computer Systems and the Cloud!

Cloud security asymmetry

We in the security world have to make people understand this issue.

My book, Trust in Computer Systems and the Cloud, is due out in the next few weeks, and I was wondering as I walked the dogs today (a key part of the day for thinking!) what the most important message in the book is. I did a bit of thinking and a bit of searching, and decided that the following two paragraphs expose the core thesis of the book. I’ll quote them below and then explain briefly why (the long explanation would require me to post most of the book here!). The paragraph is italicised in the book.

A CSP [Cloud Service Provider] can have computational assurances that a tenant’s workloads cannot affect its hosts’ normal operation, but no such computational assurances are available to a tenant that a CSP’s hosts will not affect their workloads’ normal operation.

In other words, the tenant has to rely on commercial relationships for trust establishment, whereas the CSP can rely on both commercial relationships and computational techniques. Worse yet, the tenant has no way to monitor the actions of the CSP and its host machines to establish whether the confidentiality of its workloads has been compromised (though integrity compromise may be detectable in some situations): so even the “trust, but verify” approach is not available to them.

What does this mean? There is, in cloud computing, a fundamental asymmetry: CSPs can protect themselves from you (their customer), but you can’t protect yourself from them.

Without Confidential Computing – the use of Trusted Execution Environments to protect your workloads – there are no technical measures that you can take which will stop Cloud Service Providers from looking into and/or altering not only your application, but also the data it is processing, storing and transmitting. CSPs can stop you from doing the same to them using standard virtualisation techniques, but those techniques provide you with no protection from a malicious or compromised host, or a malicious or compromised CSP.

I recently attended a conference full of people whose job it is to manage and process data for their customers. Many of them do so in the public cloud. And a scary number of them did not understand that all of this data is vulnerable, and that the only assurances they have are commercial and process-based.

We in the security world have to make people understand this issue, and realise that if they are looking after our data, they need to find ways to protect it with strong technical controls. These controls are few:

  • architectural: never deploy sensitive data to the public cloud, ever.
  • HSMs: use Hardware Security Modules. These are expensive, difficult to use and don’t scale, but they are appropriate for some sensitive data.
  • Confidential Computing: use Trusted Execution Environments (TEEs) to protect data and applications in use[1].

Given my interest – and my drive to write and publish my book – it will probably come as no surprise that this is something I care about: I’m co-founder of the Enarx Project (an open source Confidential Computing project) and co-founder and CEO of Profian (a start-up based on Enarx). But I’m not alone: the industry is waking up to the issue, and you can find lots more about the subject at the Confidential Computing Consortium’s website (including a list of members of the consortium). If this matters to you – and if you’re an enterprise company who uses the cloud, it almost certainly already does, or will do so – then please do your research and consider joining as well. And my book is available for pre-order!

Logs – good or bad for Confidential Computing?

I wrote a simple workload for testing. It didn’t work.

A few weeks ago, we had a conversation on one of the Enarx calls about logging. We’re at the stage now (excitingly!) where people can write applications and run them using Enarx, in an unprotected Keep, or in an SEV or SGX Keep. This is great, and almost as soon as we got to this stage, I wrote a simple workload to test it all.

It didn’t work.

This is to be expected. First, I’m really not that good a software engineer, but also, software is buggy, and this was our very first release. Everyone expects bugs, and it appeared that I’d found one. My problem was tracing where the issue lay, and whether it was in my code, or the Enarx code. I was able to rule out some possibilities by trying the application in an unprotected (“plain KVM”) Keep, and I also discovered that it ran under SEV, but not SGX. It seemed, then, that the problem might be SGX-specific. But what could I do to look any closer? Well, with very little logging available from within a Keep, there was little I could do.

Which is good. And bad.

It’s good because one of the major points about using Confidential Computing (Enarx is a Confidential Computing framework) is that you don’t want to leak information to untrusted parties. Since logs and error messages can leak lots and lots of information, you want to restrict what’s made available, and to whom. Safe operation dictates that you should make as little information available as you possibly can: preferably none.

It’s bad because there are times when (like me) you need to work out what’s gone wrong, and find out whether it’s in your code or the environment that you’re running your application in.

This is where the conversation about logging came in. We’d started talking about it before this issue came up, but this made me realise how important it was. I started writing a short blog post about it, and then stopped when I realised that there are some really complex issues to consider. That’s why this article doesn’t go into them in depth: you can find a much more detailed discussion over on the Enarx blog. But I’m not going to leave you hanging: below, you’ll find the final paragraph of the Enarx blog article. I hope it piques your interest enough to go and find out more.

In a standard cloud deployment, there is little incentive to consider strong security controls around logging and debugging, simply because the host has access not only to all communications to and from a hosted workload, but also to all the code and data associated with the workload at runtime. For Confidential Computing workloads, the situation is very different, and designers and architects of the TEE infrastructure (e.g. the Enarx projects) and even, to a lesser extent, of potential workloads themselves, need to consider very carefully the impact of the host gaining access to messages associated with the workload and the infrastructure components. It is, realistically, infeasible to restrict all communication to levels appropriate for deployment, so it is recommended that various profiles are created which can be applied to different stages of a deployment, and whose use is carefully monitored, logged (!) and controlled by process.
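To make the “profiles” idea a little more concrete, here’s a minimal sketch (my own illustration, not the Enarx design) of how a workload might gate its diagnostic output on a deployment stage, so that a development Keep can be chatty while a production Keep says nothing.

```rust
// A sketch of stage-based logging profiles (illustrative only): debug
// output is emitted everywhere except production.
#[allow(dead_code)]
#[derive(PartialEq)]
enum Profile {
    Development,
    Staging,
    Production,
}

// In a real deployment this would be set per stage, ideally at build
// time so that diagnostic strings never exist in a production binary.
const PROFILE: Profile = Profile::Development;

fn log_debug(msg: &str) {
    // Logs can leak information to the (untrusted) host, so say
    // nothing at all in production.
    if PROFILE != Profile::Production {
        eprintln!("[debug] {msg}");
    }
}

fn main() {
    log_debug("workload starting");
    // ... application logic ...
    log_debug("workload finished");
}
```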


Header image by philm1310 from Pixabay.