SVB & finance stress: remember the (other) type of security

Now is the time for more vigilance, not less.

This is the week in which the start-up world has been reeling from the collapse of Silicon Valley Bank. There have been lots of articles about it and about how the wider ecosystem (lawyers, VCs, other banks and beyond) has rallied to support those affected, written (on the whole, at least!) by people much better qualified than me. But there’s another point that could get lost in the noise, and that’s the opportunity all of this presents to bad actors.

When humans are tired, stressed, confused or have too many inputs, they (we – I’ve not succumbed to the lure of ChatGPT yet…) are prone to make poor decisions, or to take less time over decisions – even important decisions – than they ought to. Sadly, bad people know this, and they will be going out of their way to exploit us (I’m very aware that I’m as vulnerable to this type of exploitation as anybody else). The problem is that when banks start looking dodgy, or when money is at stake, people need to do risky things – often things which involve an awful lot of money, such as:

  • withdrawing large amounts of money
  • moving large amounts of money between accounts
  • opening new accounts
  • changing administrative access permissions and privileges on accounts
  • adding new people as administrators on accounts.

All of the above are actions (or involve actions) which we would normally be very careful about and take very seriously (though that doesn’t stop us making the occasional mistake). The problem – and the opportunity for bad actors – is that when we’re stressed or in a hurry (as we’re likely to be in the current situation), we may pay less attention to important steps than we otherwise would. We might not enable multi-factor authentication, we might not check website certificates, we might click through on seemingly helpful offers in emails, or we might not check the email addresses to which we’re sending invitations. All of these could allow bad folks to get at our money. They know this, and they’ll be going out of their way to find ways to encourage us to make mistakes, be less careful or hurry through vital processes.

My plea, then, is simple: don’t drop your guard because of the stress of the current situation. Now is the time for more vigilance, not less.

What’s your website’s D&D alignment?

This is the impression – often the first impression – that users of the website get of the organisation.

Cookies and Dungeons and Dragons – a hypothesis

Recent privacy legislation has forced organisations to adopt mechanisms for letting their users register their cookie preferences, and those mechanisms often expose the underlying motivations of the organisation. I have a related theory, and it goes like this: these different registration options allow you to map organisations to one of the 9 Dungeons and Dragons character alignments.

This may seem like a bit of a leap, but stick with me. First, here’s a little bit of background for those readers who’ve never dabbled in (or got addicted to – there’s little room between the two extremes) the world of Dungeons and Dragons (or “D&D”). There are two axes used to describe a character: Lawful-Neutral-Chaotic and Good-Neutral-Evil. Each character has a position on each of these axes, so you could have someone who’s Lawful-Good, one who’s Chaotic-Neutral or one who’s Neutral-Good, for instance (a “Neutral-Neutral” character is described as “True Neutral”). A Lawful character follows the law – or a strong moral code – whereas a Chaotic one just can’t be bothered; a Neutral character tends to follow it when it suits them. The Good-Neutral-Evil axis should be pretty clear.

Second bit of background: I never just accept cookies on a website. I always go through the preferences registration options, and almost always remove permissions for all cookie tracking beyond the “minimum required for functionality”. I know I’m in a tiny minority in this, but I like to pretend that I can safeguard at least some of my private data, so I do it anyway (and so should you). I’ve noticed, over the past few months, that there are a variety of ways that cookie choices are presented to you, and I reckon that we can map several of these approaches to the D&D alignments, giving us a chance to get a glimpse into the underlying motivation of the organisation whose website we’re visiting.

I’ve attempted to map the basic approaches I’ve seen into this table.

  • Lawful Good: Functional cookies only by default.
  • Neutral Good: No cookies, and a link to a long and confusing explanation about how the organisation doesn’t believe in them.
  • Chaotic Good: No cookies at all, no explanation.
  • Lawful Neutral: Functional and tracking cookies by default; clear what the tracking cookies are; all easy to turn off.
  • True Neutral: Functional and tracking cookies by default; completely unclear what the cookies do.
  • Chaotic Neutral: Random selection of cookies, and it’s unclear what they do, but you can at least turn them off.
  • Lawful Evil: All cookies by default: functional, tracking and legitimate uses. Easy to remove with “reject all” or “object all”.
  • Neutral Evil: All cookies by default. “Legitimate uses” need to be deselected individually.
  • Chaotic Evil: All cookies by default, with 100s listed. You have to deselect them by hand (there’s no “reject all” or “object all”), and there’s a 2 to 5 minute process to complete the registration, which finishes on 100% but never completes.
D&D alignments and website cookie preference approaches

Clearly, this is a tongue-in-cheek post, but there’s an important point here, I think: even if this glimpse isn’t a true representation of the organisation, it’s the impression – often the first impression – that users of the website get. My view of an organisation is formed partly through my interaction with its website, and while design, layout and content are all important, of course, the view that is presented about how much (if at all) the organisation cares about my experience, my data and my privacy should be something that organisations really care about. If they don’t respect me, then why should I respect them?

If I’m trying to attract someone to work for me, partner with me or buy from me, then my marketing department should be aware of the impression that visitors to my website glean from all of these interactions. At the moment, that awareness seems to be missing: it’s not difficult to address, but it appears to have escaped the notice of most organisations so far.

The 9 development stages for software (and kids)

We can draw parallels in the stages through which software projects and children tend to progress.

This week, one of my kids turned 18, and is therefore an adult – at least in the eyes of the law in the UK, where we live. This is scary. For me, and probably for the rest of the UK.

It also got me thinking about how there are similarities between the development lifecycle for software and kids, and that we can probably draw some parallels in the stages through which they tend to progress. Here are the ones that occurred to me.

1. Creation

Creating a new software project is very easy, and a relatively quick process, though sometimes you have a huge number of false starts and things don’t go as planned. The same, it turns out, applies when creating children. Another similarity is that many software projects are created by people who don’t really know what they’re doing, and shouldn’t be allowed anywhere near the process at all. Equally useless at the process are people who have only a theoretical understanding of how it should work, having studied it at school, college or university, but who feel that they are perfectly qualified.

The people who are best qualified – those who have done it before – are either rather blasé about it and go about creating new ones all over the place, or are so damaged by it all that they swear they’ll never do it again. Beware: there are also numerous incidents of people starting software projects at very young ages, or when they had no intention of doing so.

2. Naming

Naming a software project – or a baby – is an important step, as it’s notoriously difficult to change once you’ve assigned a name. While there’s always a temptation to create a “clever” or “funny” name for your project, or to come up with an alternative spelling of a well-known word, your creation will suffer if you allow yourself to be so tempted. Use of non-ASCII characters will either be considered silly or, for non-Anglophone names, lead to complications when your project (or child) is exposed to other cultures.

3. Ownership

When you create a software project, you need to be careful that your employer (or educational establishment) doesn’t lay claim to it. This is rarely a problem for human progeny (if it is, you really need to check your employment contract), but issues can arise when two contributors are involved and wish to go their separate ways, or if other contributors – e.g. mothers-in-law – feel that their input should have more recognition, and expect more of their commits to be merged into main.

4. Language choice

The choice of language may be constrained by the main contributors’ expertise, but support for multiple languages can be very beneficial to a project or child. Be aware that confusion can occur, particularly at early stages, and it is generally worthwhile trying to avoid contributors attempting to use languages in which they are not fluent.

5. Documentation

While it is always worthwhile being aware of the available documentation, there are many self-proclaimed “experts” out there, and much conflicting advice. Some classic texts, e.g. The C Programming Language by Brian Kernighan and Dennis Ritchie or Baby and Child Care by Dr Benjamin Spock, are considered outdated in some circles, while others may lead to theological arguments. Some older non-core contributors (see, for example, “mothers-in-law”, above) may have particular attachments to approaches which are not considered “safe” in modern software or child development.

6. Maintenance

While the initial creation step is generally considered the most enjoyable in both software and child development processes, the vast majority of the development lifecycle revolves around maintenance. Keeping your project or child secure, resilient and operational or enabling them to scale outside the confines of the originally expected environment, where they come into contact with other projects, can quickly become a full-time job. Many contributors, at this point, will consider outside help to manage these complexities.

7. Scope creep

Software projects don’t always go in the direction you intend (or would like), discovering a mind of their own as they come into contact with external forces and start interacting in contexts which the original creators do not consider appropriate. Once a project reaches this stage, however, there is little that can be done, and community popularity – considered by most contributors as a positive attribute at earlier stages of the lifecycle – can lead to some unexpected and possibly negative impacts on the focus of the project as competing interests vie to influence its direction. Careful management of resources (see below) is the traditional approach to dealing with this issue, but it can backfire (withdrawal of privileges can have unexpected side effects in both software and human contexts).

8. Resource management

Any software project expands to consume the available resources. The same goes for children. In both cases, in fact, there will always appear to be insufficient resources to meet the “needs” of the project/child. Be strong. Don’t give in. Consider your other projects and how they could flourish if provided with sufficient resources. Not to mention your relationships with other contributors. And your own health/sanity.

9. Hand-over

At some point, it becomes time to hand over your project. Whether this is to a new lead maintainer (or multiple maintainers – we should be open-minded), or to an academic, government or commercial institution, letting go can be difficult. But you have to let go. Software projects – and children – can rarely grow and fulfil their potential under the control of their initial creators. When you do manage to let go, it can be a liberating experience for you and your creation. Just don’t expect that you’ll be entirely free of them: for some reason, as the initial creator, you may well be expected to arrange continued resources well past the point you had anticipated. Be generous, and enjoy the nostalgia, but you’re not in charge, so don’t expect the resources to be applied as you might prefer.

Conclusion

I’m aware that there are times when children – and even software projects – can actually cause pain and hurt, and I don’t want to minimise the negative impact that the inability to have children, their illness, injury or loss can have on individuals and families. Please accept this light-hearted offering in the spirit it is meant, and if you are negatively affected by this article, please consider accessing help and external support.

Closing Profian

In June 2021, a little under two years ago, I left Red Hat and joined Profian as the CEO – Chief Executive Officer. In mid-January 2023, we – the board – decided to close down the company. All 14 members of the company are looking for new jobs.

I’ve not been blogging much recently, and it’s been because I’ve been busy trying to sort out what we do with the company. We looked at many different options around getting more funding or even being acquired by another company, but none came to fruition, so we decided to close down the company as gracefully as we could. It’s not been an easy few weeks (or months, in fact), but I’ve pretty much come to peace with the decision.

I’ll be writing more posts about what happened, how we got there, and the rest, but here’s a quick version of what happened, as I posted in an internal chat room:

While pretty much everybody believes that Confidential Computing is on its way, there’s also general agreement in the market that it’s not ready for major market adoption for 12 or more months. This is partly because the tech is still regarded as immature (and prone to vulnerabilities) and largely because the recessionary pressures on all sectors mean that organisations are protecting their core existing services, rather than betting money on new tech. VCs are into “ARR”: Annual Recurring Revenue. They want to see fast growth, and paid pilots (even with big players) which don’t lead to fast scaling of the business aren’t considered sufficient. The amount of money available wouldn’t have been sufficient to allow us to grow and defend a market share in order to get to the next funding round. We also looked at acquisition, but nobody was ready to bet on new tech to the extent of buying the company: again, because they’re defending their existing services and staff (and, in many cases, laying people off already).

Me, on internal Profian chat room

I’m currently focussing on four things:

  1. helping the extremely talented Profian team find new jobs;
  2. winding the company down;
  3. taking some time to recover from the past few months – emotionally, mentally and physically;
  4. starting to look for a new job for myself.

If you can help with #1 or #4, please get in touch. Otherwise, keep an eye out on this blog, and expect more posts. See you soon.

Confidential Computing – become the expert

There really is no excuse for not protecting your (and your customers’!) data in use.

I came across this article recently: 81% of companies had a cloud security incident in the last year. I think it’s probably factually incorrect, and that the title should be “81% of companies are aware that they had a cloud security incident last year”. Honestly, it could well be much higher than that. When I work on IT security audits, I sometimes see statements like “[Company X] experienced no data or privacy breaches over the past 12 months”, and I always send it back, insisting on a change of wording to reflect the fact that all that is known is that the organisation is not aware of any data or privacy breaches over the past 12 months.

The other statistic that really struck me in the article, however, is that the top reported type of incident was “Security incidents during runtime”, with 34% of respondents reporting it. That’s over a third of respondents!

And near the top of concerns was “Privacy/data access issues, such as those from GDPR”, at 31%.

The problem with both of these types of issue is that there’s almost nothing you can do to protect yourself from them in the cloud. Cloud computing (and virtualisation in general) is pretty good at protecting you from other workloads (type 1 isolation) and protecting the host from your workloads (type 2 isolation), but offers nothing to protect your workload from the host (type 3 isolation). If you’re interested in a short introduction to why, please have a look at my article Isolationism – not a 4 letter word (in the cloud).

The good news is that there are solutions out there that do allow you to run sensitive applications (and applications with sensitive data) in the cloud: that’s what Confidential Computing is all about. Confidential Computing protects your data not just at rest (when it’s in storage) and in transit (on the network), but actually at runtime: “data in use”. And it seems that industry is beginning to realise that it’s time to be sitting up and paying attention: the problem is that not enough people know about Confidential Computing.

So – now’s the time to become the expert on Confidential Computing for your organisation, and show your manager, your C-levels and your board how to avoid becoming part of the 81% (or the larger, unknowing percentage). The industry body is the Confidential Computing Consortium, and they have lots of information, but if you want to dive straight in, I encourage you to visit Profian and download one or more of our white papers (there’s one about runtime isolation there, as well). There really is no excuse for not protecting your (and your customers’!) data in use.

Enarx hits 750 stars

Yesterday, Enarx, the open source security project of which I’m co-founder and for which Profian is custodian, gained its 750th GitHub star. This is an outstanding achievement, and I’m very proud of everyone involved. Particular plaudits to Nathaniel McCallum, my co-founder for Enarx and Profian, Nick Vidal, the community manager for Enarx, everyone who’s been involved in committing code, design, tests and documentation for the project, and everyone who manages the running of the project and its infrastructure. We’ve been lucky enough to be joined by a number of stellar interns along the way, who have also contributed enormously to the project.

Enarx has also been supported by a number of organisations and companies, and it’s worth listing as many of them as I can think of:

  • Profian, the current custodian
  • Red Hat, under whose auspices the initial development began
  • the Confidential Computing Consortium, a Linux Foundation Project, which owns the project
  • Equinix, who have donated computing resources
  • PhoenixNAP, who have donated computing resources
  • Rocket.Chat, who have donated chat resources
  • Intel, who have worked with us along the way and donated various resources
  • AMD, who have worked with us along the way and donated various resources
  • Outreachy, with whom we worked to get some of our fine interns

When it all comes down to it, however, it’s the community that makes the project. We strive to create a friendly, open community, and we want more and more people to get involved. To that end, we’ll soon be announcing some new ways to get involved with trying and using Enarx, in association with Profian. Keep an eye out, and keep visiting and giving us stars!

What is attestation for Confidential Computing?

Without attestation, you’re not really doing Confidential Computing.

This post – or the title of this post – has been sitting in my “draft” pile for about two years. I don’t know how this happened, because I’ve been writing about Confidential Computing for three years or so by now, and attestation is arguably the most important part of the entire subject.

I know I’ve mentioned attestation in passing multiple times, but this article is devoted entirely to it. If you’re interested in Confidential Computing, then you must be interested in attestation, because, without it, you’re not doing Confidential Computing right. Specifically, without attestation, any assurances you may think you have about Confidential Computing are worthless.

Let’s remind ourselves what Confidential Computing is: it’s the protection of applications and data in use by a hardware-based TEE (Trusted Execution Environment). The key benefit that this brings you is isolation from the host running your workload: you can run applications in the public cloud, on premises or at the Edge, and have cryptographic assurances that no one with access to the host system – hypervisor access, kernel access, admin access, even standard hardware access[1] – can tamper with your application. This, specifically, is Type 3 – workload from host – isolation (see my article Isolationism – not a 4 letter word (in the cloud) for more details), and is provided by TEEs such as AMD’s SEV and Intel’s SGX – though not, crucially, by AWS Nitro, which does not provide Confidential Computing capabilities as defined by the Confidential Computing Consortium.

Without attestation, you’re not really doing Confidential Computing. Let’s consider a scenario where you want to deploy an application using Confidential Computing on a public cloud. You ask your CSP (Cloud Service Provider) to deploy it. The CSP does so. Great – your application is now protected: or is it? Well, you have no way to tell, because your CSP could just have taken your application, deployed it in the normal way, and told you that it had deployed it using a TEE. What you need is to take advantage of a capability that TEE chips provide, called an attestation measurement, to check that a TEE instance was actually launched and that your application was deployed into it. You (or your application) ask the TEE-enabled chip to perform a cryptographically signed measurement of the TEE set-up (which is basically a set of encrypted memory pages). It does so, and that measurement can then be checked to ensure that the TEE has been correctly set up: now there’s a way to judge whether you’re actually doing Confidential Computing.
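
To make that flow a little more concrete, here’s a deliberately simplified sketch in Rust of the check a verifier performs. It’s an illustration only, not any vendor’s real format: I’ve assumed a single trusted vendor key and an Ed25519 signature, whereas real reports (SEV-SNP, SGX and friends) involve vendor certificate chains, nonces for freshness and rather more structure.

```rust
// A deliberately simplified attestation check.
// Cargo.toml (sketch): ring = "0.17"

use ring::signature::{UnparsedPublicKey, ED25519};

/// A toy attestation report. Real TEE reports (SEV-SNP, SGX, etc.) carry far
/// more structure, and each vendor uses its own signature scheme and format.
struct AttestationReport {
    /// Hash ("measurement") of the launched TEE contents.
    measurement: [u8; 32],
    /// Signature over the measurement, rooted in the silicon vendor's keys.
    signature: Vec<u8>,
}

/// Two questions must both be answered "yes":
/// 1. was this report signed by genuine TEE hardware?
/// 2. is the TEE running exactly the code/configuration we expected?
fn verify_attestation(
    report: &AttestationReport,
    expected_measurement: &[u8; 32],
    trusted_vendor_key: &[u8],
) -> bool {
    // In reality this step means walking a certificate chain back to the
    // silicon vendor's root, not checking one Ed25519 key as assumed here.
    let key = UnparsedPublicKey::new(&ED25519, trusted_vendor_key);
    if key.verify(&report.measurement, &report.signature).is_err() {
        return false;
    }
    report.measurement == *expected_measurement
}

fn main() {
    // Dummy data only: a real report comes from the TEE-enabled host and the
    // key from the silicon vendor, so this example (rightly) fails to verify.
    let report = AttestationReport {
        measurement: [0u8; 32],
        signature: vec![0u8; 64],
    };
    let ok = verify_attestation(&report, &[0u8; 32], &[0u8; 32]);
    println!("attestation verified: {ok}");
}
```

Even this toy version separates the two questions that matter: is this genuine TEE hardware, and is it running the code I expected?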

So, who does that checking? Doing a proper cryptographic check of an attestation measurement – the attestation itself – is surprisingly[2] tricky, and, unless you’re an expert in TEEs and Confidential Computing (and one of the points of Confidential Computing is to make it easy for anyone to use these capabilities), you probably don’t want to be doing it yourself.

Who can perform the validation? Well, one option might be for the validation to be done on the host machine that’s running the TEE. But wait a moment – that makes no sense! You’re trying to isolate yourself from that machine and anyone who has access to it: that’s the whole point of Confidential Computing. You need a remote attestation service – a service running on a different machine which can be trusted to validate the attestation and either halt execution if it fails, or let you know so that you can halt execution.

So who can run that remote attestation service? The obvious – obvious but very, very wrong – answer is the CSP who’s running your workload. Obvious because, well, they presumably run Confidential Computing workloads for lots of people, but wrong because your CSP is part of your threat model. What does this mean? Well, we talked before about “trying to isolate yourself from that machine and anyone who has access to it” – and the “anyone who has access to it” is exactly your CSP. If the reason to use Confidential Computing is to be able to put workloads in the public cloud even when you can’t fully trust your CSP (for regulatory reasons, for auditing reasons, or just because you need higher levels of assurance than existing cloud computing provides), then you can’t trust your CSP to provide the remote attestation service. To be entirely clear: if you allow your CSP to do your attestation, you lose the benefits of Confidential Computing.

Attestation – remote attestation – is vital, but if we can’t trust the host or the CSP to do it, what are your options? Well, either you need to do the attestation yourself (which I already noted is surprisingly difficult), or you’re going to need to find a third party to do that. I’ll be discussing the options for this in a future article – keep an eye out.


1 – the TEEs used for Confidential Computing don’t aim to protect against long-term access to the CPU by a skilled malicious actor – but this isn’t a use case that’s relevant to most users.

2 – actually, not that surprising if you’ve done much work with cryptography, cryptographic libraries, system-level programming or interacted with any silicon vendor documentation.

My book at RSA Conference NA

Attend RSA and get 20% off my book!

I’m immensely proud (as you can probably tell from the photo) to be able to say that my book is available in the book store at the RSA Conference in San Francisco this week. You’ll find the store in Moscone South, up the escalators on the Esplanade.

If you ever needed a reason to attend RSA, this is clearly the one, particularly with the 20% discount. If anyone’s interested in getting a copy signed, please contact me via LinkedIn – I currently expect to be around till Friday morning. It would be great to meet you.

Back in the (conference) groove

Ah, yes: conferences. We love them, we hate them.

Ah, yes: conferences. We love them, we hate them, but they used to be part of the job, and they’re coming back. At least in the IT world that I inhabit, things are beginning to happen in person again. I attended my first conference in over two years in Valencia a couple of weeks ago: Kubecon + CloudNativeCon Europe. I’d not visited Valencia before, and it’s a lovely city. I wasn’t entirely well (I’m taking a while to recover from Covid-19 – cannot recommend), which didn’t help, but we had some great meetings, Nathaniel (my Enarx & Profian co-founder) spoke at the co-located WasmDay event on WASI networking, and I got to walk the exhibition hall picking up (small amounts of) swag (see Buying my own t-shirts, OR “what I miss about conferences”).

For the last few years, when I’ve been attending conferences, I’ve been doing it as the employee of a large company – Red Hat and Intel – and things are somewhat different when you’re attending as a start-up. We (Profian) haven’t exhibited at any conferences yet (keep an eye out for announcements on social media for that), but you look at things with a different eye when you’re a start-up – or at least I do.

One of the differences, of course, is that as CEO, my main focus has to be on the business side, which means that attending interesting talks on mildly-related technologies isn’t likely to be a good use of my time. That’s not always true – we’re not big enough to send that many people to these conferences, so it may be that I’m the best person available to check out something which we need to put on our radar – but I’m likely to restrict my session attendance to one of three types of session:

  1. a talk by a competitor (or possible competitor) to understand what they’re doing and how (and whether) we should react.
  2. a talk by a possible customer or representative from a sector in which we’re interested, to understand possible use cases.
  3. a talk about new advances or applications of the technologies in which we’re interested.

There will, of course, also be business-related talks, but so many of these are aimed at already-established companies that it’s difficult to find ones with obvious applicability.

What else? Well, there are the exhibition halls, as I mentioned. Again, we’re there to look at possible competitors, but also to assess possible use cases. These aren’t just likely to be use cases associated with potential customers – in fact, given the marketing dollars (euros, pounds, etc.) funnelled into these events, it’s likely to be difficult to find clear statements of use cases, let alone discover the right person to talk to on the booth. More likely, in fact, is finding possible partners or licensees among the attendees: realising that there are companies out there with a product or offering to which we could add value. Particularly for smaller players, there’s a decent chance that you might find someone with sufficient technical expertise to assess whether there might be a fit.

What else? Well, meetings. On site, off site: whichever fits. Breakfast, cocktails or dinner seem to be preferred, as lunch can be tricky, and there aren’t always good places to sit for a quiet chat. Investors – VCs and institutional capital – realise that conferences are a good place to meet with their investees or potential investees. The same goes for partners for whom setting aside a whole day of meetings with a start-up makes little obvious sense (and it probably doesn’t make sense for us to fly over specially to meet them either), but for whom finding a slot to discuss what’s going on and the state of the world is a good investment of their time if they’re already attending an event.

So – that’s what I’m going to be up to at events from now on, it seems. If you’re interested in catching up, I’ll be at RSA in San Francisco, Open Source Summit in Austin and Scale 19x in San Antonio in the next couple of months, with more to come. Do get in touch: it’s great to meet folks!

Enarx and Pi (and Wasm)

It’s not just Raspberry Pi, but also Macs.

A few weeks ago, I wrote a blog post entitled WebAssembly: the importance of language(s), in which I talked about how important it is for Enarx that WebAssembly supports multiple languages. We want to make it easy for as many people as possible to use Enarx. Today, we have a new release of Enarx – Elmina Castle – and with it comes something else very exciting: Raspberry Pi support. In fact, there’s loads more in this release – it’s not just Raspberry Pi, but also Macs – but I’d like to concentrate on what this means.

As of this release, you can run WebAssembly applications on your Raspberry Pi, using Enarx. Yes, that’s right: you can take your existing Raspberry Pi (as long as it’s running a 64-bit kernel), and run Wasm apps with the Enarx framework.

While the Enarx framework provides the ability to deploy applications in Keeps (TEE[1] instances), one of the important features that it also brings is the ability to run applications outside these TEEs so that you can debug and test your apps. The ability to do this much more simply is what we’re announcing today.

3 reasons this is important

1. WebAssembly just got simpler

WebAssembly is very, very hot at the moment, and there’s a huge movement behind adoption of WASI, which is designed for server-based (that is, non-browser) applications which want to take advantage of all the benefits that Wasm brings – cross-architecture support, strong security model, performance and the rest.

As noted above, Enarx is about running apps within Keeps, protected within TEE instances, but access to the appropriate hardware to do this is difficult. We wanted to make it simple for people without direct access to the hardware to create and test their applications on whatever hardware they have, and lots of people have Raspberry Pis (or Macs).

Of course, some people may just want to use Enarx to run their Wasm applications, and while that’s not the main goal of the project, that’s just fine, of course!

2. Tapping the Pi dev community

The Raspberry Pi community is one of the most creative and vibrant communities out there. It’s very open source friendly, and Raspberry Pi hardware is designed to be cheap and accessible to as many people as possible. We’re very excited about allowing anyone with access to a Pi to start developing WebAssembly and deploying apps with Enarx.

The Raspberry Pi community also has a (deserved) reputation for coming up with new and unexpected uses for technology, and we’re really interested to see what new applications arise: please tell us.

3. Preparing for Armv9 Realms

Last, and far from least, is the fact that in 2021, Arm announced their CCA (Confidential Compute Architecture), arriving with the Armv9 architecture. This will allow the creation of TEEs called Realms, which we’re looking forward to supporting with Enarx. Running Enarx on existing Arm architectures (which is what powers Raspberry Pis) is an important step towards that goal. Extending Enarx Keeps beyond the x86 architecture (as embodied by the Intel SGX and AMD SEV architectures) has always been a goal of the project, and this provides a very important first step which will allow us to move much faster when chips with the appropriate capabilities start becoming available.

How do I try it on my Raspberry Pi?

First, you’ll need a Raspberry Pi running a 64-bit kernel. Instructions for this are available over at the Raspberry Pi OS pages, and the good news is that the default installer can easily put this on all of the more recent hardware models.

Next, you’ll need to follow the instructions over at the Enarx installation guide. That will walk you through it, and if you have any problems, you can (and should!) report them, by chatting with the community over at our chat or by searching for/adding bug issues at our issue tracker.
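
To give a feel for the workflow once the toolchain is set up, here’s a rough sketch: the installation guide is authoritative, the crate name below is just one I’ve made up, and paths will vary.

```rust
// src/main.rs in a crate I've called "hello-enarx" (the name is my own choice).
//
// Add the WASI target and build (sketch only – see the Enarx docs for the
// current, authoritative steps):
//
//     rustup target add wasm32-wasi
//     cargo build --release --target wasm32-wasi
//
// Then hand the resulting .wasm file to Enarx:
//
//     enarx run target/wasm32-wasi/release/hello-enarx.wasm
//
// The same .wasm binary runs on a Raspberry Pi, on a Mac or, on TEE-enabled
// hardware, inside an Enarx Keep.
fn main() {
    println!("Hello from a Wasm app under Enarx!");
}
```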

We look forward to hearing how you’re doing. If you think this is cool (and we certainly do!), then please head to our main repository at https://github.com/enarx/enarx and give us a star.


1 – Trusted Execution Environments, such as Intel’s SGX and AMD’s SEV.

Image: Michael H. („Laserlicht“) / Wikimedia Commons