Are you positive?

What do pregnancy tests and the missile strike on a Ukrainian airliner have in common?

Not everything in life is nicely binary, much as we[1] might like it to be. There are shades of grey[2] in many aspects of life, and though humans can often cope with uncertainty, computer systems are less good at it: they generally want a “yes” or “no” answer. This means that decisions sometimes need to be made on incomplete evidence, and, well, that means that the answers aren’t always correct. There’s a whole area of computer science related to this: fuzzy logic.

Let’s look at what the options are, assuming that we’re choosing between two: “yes” (a positive) and “no” (a negative). That means that there are two ways in which the answer can be incorrect:

  1. a “yes” answer was incorrectly chosen (false positive);
  2. a “no” answer was incorrectly chosen (false negative).

An example to allow us to explore this is pregnancy. It’s generally agreed that you can’t be a little bit pregnant: if you take a test, any result it gives you needs to be either positive or negative. If you are pregnant, and a test result comes back negative, then that’s a false negative. If you are not pregnant, and a test comes back positive, that’s a false positive. The implications of either a false positive or a false negative can be pretty major – as anybody who has received one will tell you. I spent a little time online trying to find expected false positive and false negative rates for pregnancy tests, but it turns out that the rates are so dependent on a variety of factors that it was difficult to find a sensible answer[3].

A tragic recent example of a false positive took place on Wednesday, 8th January 2020, when a Ukrainian International Airlines flight was shot down by an Iranian missile, killing all 176 people on board. It appears that an air defence radar system misidentified the aircraft as a cruise missile. As the radar system was looking for a positive identification of a threat, this can be counted as a false positive.

What might have been the alternative in this case? If the aircraft actually had been a cruise missile, but was identified as a civilian aircraft, this would have been a false negative, and the impact might well have been significant damage to an Iranian military installation.

Which is more damaging? Well, in the case of the aircraft, it would seem pretty clear to most observers that the false positive was worse, but from a military point of view, that might not be the case: the impact of a missile strike on a major military installation might be considered worse than the civilian loss of life in the other case. Here, as in many other situations, a decision needs to be made as to which is more important to reduce: the chance of a false negative or the chance of a false positive. In a perfect world, of course, there would be no false results, negative or positive. The problem with many systems that take analogue[4] inputs and turn them into digital outputs in this way is that avoiding false results is very costly, and sometimes impossible. Even worse news is that reducing the probability of one of the two types of false result tends to increase the probability of the other.
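To make the trade-off concrete, here is a minimal sketch in Python – with entirely made-up numbers, not a real detector – of a system that compares a single score against a threshold: raise the threshold and the false positives drop, but the false negatives climb, and vice versa.

    # (score, is_really_a_threat) pairs - invented purely for illustration
    samples = [
        (0.10, False), (0.25, False), (0.40, False), (0.55, False),
        (0.45, True),  (0.60, True),  (0.75, True),  (0.90, True),
    ]

    def error_counts(threshold):
        # Count false positives and false negatives at a given threshold.
        false_positives = sum(1 for score, threat in samples
                              if score >= threshold and not threat)
        false_negatives = sum(1 for score, threat in samples
                              if score < threshold and threat)
        return false_positives, false_negatives

    for threshold in (0.3, 0.5, 0.7):
        fp, fn = error_counts(threshold)
        print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")

Running it shows the see-saw: the low threshold gives two false positives and no false negatives, the high threshold gives the reverse, and the middle one gives one of each.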

A classic example of this is in the use of biometrics for user identification. Fingerprints, facial recognition, iris scanning and similar techniques have to balance the likelihood of a false positive against that of a false negative. Which is worse: the chance that the CEO will not be able to update the payroll details, or that a rogue employee will update her details to improve her salary package?[5]

One good piece of news is that AI/ML (Artificial Intelligence/Machine Learning) is improving the performance of biometric systems and, in fact, other areas of computing where “fuzzy logic” is required. In most cases, humans are still better at reducing messy sets of information to yes/no results, but that is changing, and where multiple automated decisions need to be made, then AI/ML is worth considering.

Whenever you are dealing with “messy” data[6] which needs to be reduced to a “yes/no” or “positive/negative” binary result, you need to consider the likelihood of false positives or negatives. Not only do you need to consider the likelihood of each, but also the impact of each. Once you have understood these, you can then decide which you want to try to minimise, and what techniques you should use to do so.
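As a quick worked example (the figures below are invented), you might weigh each type of error by multiplying its likelihood by its impact, and then put your effort into whichever product is bigger:

    # Hypothetical figures for a fraud detector: likelihoods per transaction,
    # impacts in pounds.
    fp_likelihood, fp_impact = 0.02, 50        # false positive: a legitimate sale blocked
    fn_likelihood, fn_impact = 0.001, 5_000    # false negative: a fraudulent sale allowed

    fp_weight = fp_likelihood * fp_impact      # 1.0
    fn_weight = fn_likelihood * fn_impact      # 5.0

    print(f"false positive weighting: {fp_weight}")
    print(f"false negative weighting: {fn_weight}")
    # Here the false negatives dominate, so they are the ones to minimise first.

On these made-up numbers, false negatives are the more expensive problem, even though they are twenty times rarer.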

We may be stuck with false results, but we need to understand what our choices are, and how we can get the best outcomes available from messy data.


1 – I’m talking about security folks here, but I’m sure this goes for lots of other people, too.

2 – “gray” for our non-Commonwealth readers.

3 – good advice seems to be to test several times over several days.

4 – “analog”, I suppose – see [2].

5 – this is one of the reasons that authentication systems generally use two factors from the three “something you are”, “something you know”, “something you have”.

6 – most real-world data, to be honest.

5 resolutions for travellers in 2020

Enjoy the time when you’re not travelling

I’m not a big one for New Year[1] resolutions.  To give you an example, my resolution for 2019 was “not to be mocked by my wife or daughters”.  Given that one of them (my daughters, that is) is a teenager, and the other nearly so, this went about as well as you might expect.  At the beginning of 2018, I wrote a blog post with the top 5 resolutions for security folks.  However, if I re-use the same ones this time round, somebody’s bound to notice[2], so I’m going to come up with some different ones[3].  I do quite a lot of travel, so I thought I’d provide my top 5 resolutions for this year, which I hope will be useful not only for me, but also others.

(I’ve written another article that covers in more depth some of the self-care aspects of this topic which you may find helpful: Of headphones, caffeine and self-care.)

1. Travel lighter

For business trips, I’ve tended to pack a big, heavy laptop, with a big, heavy power “brick” and cable, and then lots of other charging-type cables of different sizes and lengths, and a number of different plugs to fit everything into.  Honestly, there’s just no need for much of it, so this year, I suggest that we all first take stock, and go through all of those cables and see which ones we actually need.  Maybe take one spare for each USB type, but no more.  And we only need the one plug – that nice multi-socket one with a couple of USB sockets will do fine.  And if we lose it or forget it, the hotel will probably have one we can borrow, or we can get one as we go through the next airport.

And the laptop?  Well, I’ve just got a little Chromebook.  There are a variety of these: I managed to pick up a Pixelbook second-hand, with warranty, for about 40% off, and I love it.  I’m pretty sure that I can use it for all the day-to-day tasks I need to perform while travelling, and, as a bonus, the power connection is smaller and lighter than the one for my laptop.  I’ve picked up a port extender (2 x USB C, 1 x USB A, 1 x Ethernet, 1 x HDMI), and I think I’m sorted.  I’m going to try leaving the big laptop at home, and see what happens.

2. Take time

I’m not just talking about leaving early to get to the airport – though that is my standard practice – but also about just, well, taking more time about things.  It’s easy to rush here and there, and work yourself into a state, or feel that you need to fill every second of every day with something work-related, when you wouldn’t do that if you were at home.  It may be stepping aside to let other people off the plane, and strolling to the ground transportation exit, rather than hurrying there, or maybe stopping for a few minutes to look at some street art or enjoy the local architecture – whatever it is, give yourself permission not to hurry and not to rush, but just breathe and let the rest of the world slip by, even if it’s just for a few seconds.

3. Look after yourself

Headphones are a key tool to help me look after myself – and one of the things I won’t be discarding as part of my “travel lighter” resolution.  Sometimes I need to take myself away from the hubbub and to chill.  But they are just a tool: I need to remember that I need to stop, and put them on, and listen to some music.  It’s really easy to get caught up in the day, and the self-importance of being the Business Traveller, and forget that I’m not superhuman (and that my colleagues don’t expect me to be).  Taking time is the starting point – and sometimes all you have time for – but at some point you need to stop completely and do something for yourself.

4. Remember you’re tired

Most of us get grumpy when we’re tired[4].  And travelling is tiring, so when you’re at the end of a long trip, or just at the beginning of one, after a long day in cars and airports and planes, remember that you’re tired, and try to act accordingly.  Smile.  Don’t be rude.  Realise that the hotel receptionist is doing their best to sort your room out, or that the person in front of you in the queue for a taxi is just as frustrated with their four children as you are (well, maybe not quite as much).  When you get home, your partner or spouse has probably been picking up the slack of all the things that you’d normally do at home, so don’t snap at them: be nice, show you care.  Whatever you’re doing, expect things to take longer: you’re not at the top of your game.  Oh, and restrict alcohol intake, and go to bed early instead.  Booze may feel like it’s going to help, but it’s really, really not.

5. Enjoy not travelling

My final resolution was going to be “take exercise”, and this still matters, but I decided that even more important is the advice to enjoy the time when you’re not travelling.  Without “down-time”, travelling becomes – for most of us at least – a heavier and heavier burden.  It’s so easy, on returning from a work trip, to head straight back into the world of emails and documents and meetings, maybe catching up over the weekend on those items that you didn’t get done because you were away.  Don’t do this – or do it very sparingly, and if you can, claw back the time over the next few days, maybe taking a little longer over a cup of tea or coffee, or stopping yourself from checking work emails one evening.  Spend time with the family[5], hang out with some friends, run a 5k, go to see a film/movie, play some video games, complete that model railway set-up you’ve been working on[7].  Whatever it is that you’re doing, let your mind and your body know that you’re not “on-the-go”, and that it’s time to recover some of that energy and be ready when the next trip starts.  And you know it will, so be refreshed, and be ready.


1 – I’m using the Western (Gregorian) calendar, so this is timely.  If you’re using a different calendar, feel free to adjust.

2 – the list is literally right there if you follow the link.

3 – I considered reversing the order, but the middle one would just stay the same.

4 – I wondered if this is just me, but then remembered the stressed faces of those on aircraft, in airports and checking into hotels, and thought, “no, it’s not”.  And I am informed (frequently) by my family that this is definitely the case for me.

5 – if you have one[6].

6 – and if that’s actually a relaxing activity…

7 – don’t mock: it takes all kinds.

2019: a year of Enarx

We have big plans for demos and more in 2020

This year has, for me, been pretty much all about the Enarx project.  I’ve had other work that I’ve been doing, including meeting with customers, participating in work with IBM (who acquired the company I work for, Red Hat, in July), looking at Kubernetes security, interacting with partners and a variety of other important pieces, but it’s been Enarx that has defined 2019 for me from a work point of view.

We started off the year with a belief that we could do something, and a challenge from our internal leadership to prove that it was possible.  We did that with a demo on AMD’s SEV chipset at Red Hat Summit in Boston, MA in May, and an announcement of the project on this blog.  We followed up with a demo on Intel’s SGX chipset at Open Source Summit Europe in Lyon in October.  I thought I would mention some of the most important components for the development (in the broadest sense) of Enarx this year.

Team

Enarx is not mine: far from it.  I’m proud to be counted as one of the co-founders of the project with Nathaniel McCallum, but we wouldn’t be where we are without a broader team, and as an open source project, it belongs to everyone who contributes and to everyone who uses it.  You’ll find many of the members on the contributors page, but not everybody is up there yet, and there have been some very important people whose contribution has been advice, support and sponsorship of the project both within Red Hat and outside it.  I don’t have permission to mention everybody’s name, so I’m going to play it safe and mention none of them.  You know who you are, and we really appreciate your time.

Use cases – and partners

One of the most important things that we’ve done this year is to work out how people might want to use Enarx “in the wild”, as it were, and to perform some fairly detailed analysis and write-ups.  Not enough of these are externally available yet, which is down to me, but the fact that we had done the work was vital in finding partners who are actually interested in using Enarx for real.  I can’t talk about any of these in public yet, but we have some really interesting use cases from a number of multi-national organisations of whom you will definitely have heard, as well as some smaller start-ups about whom you may well be hearing more in the future.  Having this kind of interest was vital to get buy-in to the project and showed that Enarx wasn’t just a flight of fancy by a bunch of enthusiastic engineers.

Looking outside

The most significant event in the project’s year was the announcement of the Confidential Computing Consortium at the Linux Foundation’s Open Source Summit this year.  We at Red Hat realised that Enarx was a great match for this new group, and were very pleased to be a premier member at the official launch in October.  At the time of writing, there are 21 members, and it’s becoming clear that the consortium has identified an area of concern and interest for the wider industry: this is another great endorsement of the aims and principles of Enarx.

Joining the Consortium hasn’t been the only activity in which we’ve been involved this year.  We’ve spoken at conferences, had articles published (on Alice, Eve and Bob, on now + Next and on Opensource.com), spoken to press, recorded webcasts and more.  Most important (arguably), we have hex stickers (if you’re interested, get in touch!).

Last, but not least, we’ve gone external.  From being an internal project (though we always had our code as open source), we’ve taken a number of measures to try to encourage and simplify involvement by non-Red Hat contributors – see 7 tips for kicking off an open source project for a little more information.

Architecture and code

What else?  Oh, there’s code, and an increasingly mature set of architectures for the various components.  We absolutely plan to make all of this externally visible, but the reason we haven’t yet is that we’re just running to stand still at the moment: there’s just so much to do.  Our focus is on getting code out there for people to use and contribute to themselves and, without giving anything away, we have some pretty big plans for demos and more in 2020.

Finally

There’s one other thing that’s been important, of course, and that’s the fact that I’m writing a book for Wiley on trust, but I actually see that as very much related to Enarx.  Fundamentally, although the technology is cool, and we think that the Enarx project meets an existing need, both Nathaniel and I believe that there’s a real opportunity for it to change how people manage trust for workloads in the cloud, in IoT, at the Edge and wherever else sensitive data and algorithms need to be executed.

This blog is supposed to be about security, and I’m strongly of the opinion that trust is a very important part of that.  Enarx fits into that, so don’t be surprised to see more posts around trust and about Enarx over the coming year.  Please keep an eye out here and at https://enarx.io for the latest information.

7 tips for kicking off an open source project

It’s not really about project mechanics at all

I’m currently involved – heavily involved – in Enarx, an open source (of course!) project to allow you to run sensitive workloads on untrusted hosts.  I’ve had involvement in various open source projects over the years, but this is the first for which I’m one of the founders.  We’re at the stage now where we’ve got a fair amount of code, quite a lot of documentation, a logo and (important!) stickers.  The project should hopefully be included in a Linux Foundation group – the Confidential Computing Consortium – so things are going very well indeed.  It seemed like a good time to reflect on some of the things we did to get things going.  To be clear, Enarx is a particular type of project: one that we believe has commercial and enterprise applications.  It’s also not mature yet, and we’ll have hurdles and challenges along the way.  What’s more, the route we’ve taken won’t be right for all projects, but hopefully there’s enough here to give a few pointers to other projects, or people considering starting one up.

The first thing I’d say is that there’s lots of help to be had out there.  I’d start with Opensource.com, where you’ll find lots of guidance.  I’d then follow up by saying that however much of it you follow, you’ll still get things wrong.  Anyway, here’s my list of things to consider.

1. Aim for critical mass

I’m very lucky to work at the amazing Red Hat, where everything we do is open source, and where we take open source and community very seriously.  I’ve heard it called a “critical mass” company: in order to get something taken seriously, you need to get enough people interested in it that it’s difficult to ignore. The two co-founders – Nathaniel McCallum and I – are both very enthusiastic about the project, and have spent a lot of time gaining sponsors within the organisation (you know who you are, and we thank you – we also know we haven’t done a good enough job with you on all occasions!), and “selling” it to engineers to get them interested enough that it was difficult to stop.  Some projects just bobble along with one or two contributors, but if you want to attract people and attention, getting a good set of people together who can get momentum going is a must.

2. Create a demo

If you want to get people involved, then a demo is great.  It doesn’t necessarily need to be polished, but it does need to show that what you’re doing is possible, and that you know what you’re doing.  For early demos, you may just be showing command line output: that’s fine, if what you’re providing isn’t a UI product.  Being able to talk to what you’re doing, and convey both your passion and the importance of the project, is a great boon.  People like to be able to see or experience something, and it’s much easier to communicate your enthusiasm if they have something real which expresses it.

3. Choose a licence

Once you have code, and it’s open source, you want other people to be able to contribute.  This may seem like an unimportant step, but selecting an appropriate open source licence[1] will allow other people to contribute on well-understood and defined terms, making it easier for them, and easier for the organisations for which they work to allow them to be involved.

4. Get documentation

You might think that developer documentation is the most important to get out there – otherwise, how will other people get involved and start coding?  I’d disagree, at least to start with.  For a small project, you can probably scale to a few more people just by explaining what the code does, what it should do, and what’s missing.  However, if there’s no documentation available to explain what it’s for, and how it’s going to help people, then why would anyone bother even looking at it?  This doesn’t need to be polished marketing copy, and it doesn’t need to be serious, but it does need to convey to people why they should care.  It’s also going to help you with the first point I mentioned, attaining critical mass, as being able to point to documentation, use cases and the rest will help convince people that you’ve thought through the point of your project.  We’ve used a GitHub wiki as our main documentation hub, and we try to update it with new information as we generate it.  This is an area, to be clear, where we could do better.  But at least we know that.

5. Be visible

People aren’t going to find out about you unless you’re visible.  We were incredibly lucky in that just as we were beginning to get to a level of critical mass, the Confidential Computing Consortium was formed, and we immediately had a platform to increase our exposure.  We have a Twitter account, I publish articles on my blog and at Opensource.com, we’ve been lucky enough to have the chance to publish on Red Hat’s now + Next blog, I’ve done interviews with the press and we speak at conferences wherever and whenever we can.  We’re very lucky to have these opportunities, and it’s clear that not all these approaches are appropriate for all projects, but make use of what you can: the more people who know about you, the more people can contribute.

6. Be welcoming

Let’s assume that people have found out about you: what next?  Well, they’re hopefully going to want to get involved.  If they don’t feel welcome, then any involvement they have will soon taper off.  Yes, you need documentation (and, after a while, technical documentation, no matter what I said above), but you also need ways for them to talk to you, and for them to feel that they are valued.  We have Gitter channels (https://gitter.im/enarx/), and our daily stand-ups are open to anyone who wants to join.  Recently, someone opened an issue on our issues database, and during the conversation on that thread, it transpired that our daily stand-up time doesn’t work for them given their timezone, so we’re going to ensure that at least one stand-up a week does, and we’ve assured them that we’ll accommodate them.

7. Work with people you like

I really, really enjoy meeting and working with the members of the Enarx project team.  We get on well, we joke, we laugh and we share a common aim: to make Enarx successful.  I’m a firm believer in doing things you enjoy, where possible.  Particularly in the early stages of a project, you need people who are enthusiastic and enjoy working closely together – even if they’re separated by thousands of kilometres[2] geographically.  If they don’t get on, there’s a decent chance that your and their enthusiasm for the project will falter, that the momentum will be lost, and that the project will end up failing.  You won’t always get the chance to choose those with whom you work, but if you can, then choose people you like and get on with.

Conclusion – “people”

I didn’t realise it when I started writing this article, but it’s not really about project mechanics at all: it’s about people.  If you read back, you’ll find the importance of people visible in every tip, even including the one about choosing a licence.  Open source projects aren’t really about code: they’re about people, how they share, how they work together, and how they interact.

I’m certain that your experience of open source projects will vary, and I’d be very surprised if everyone agrees about the top seven things you should do for project success.  Arguably, Enarx isn’t a success yet, and I shouldn’t be giving advice at this stage of our maturity.  But when I think back to all of the open source projects that I can think of which are successful, people feature strongly, and I don’t think that’s a surprise at all.


1 – or “license”, if you’re from the US.

2 – or, in fact, miles.

Timely risk or risky times?

Being aware of “the long game”.

On Friday, 29th November 2019, Jack Merritt and Saskia Jones were killed in a terrorist attack.  A number of members of the public (some with improvised weapons) and of the emergency services acted with great heroism.  I wanted to mention the names of the victims and to praise those involved in stopping the attacker before mentioning his name: Usman Khan.  The victims and the attacker were taking part in an offender rehabilitation conference to help offenders released from prison to reintegrate into society: Khan had been sentenced to 16 years in prison for terrorist offences.

There’s an important formula that everyone involved in risk – and given that IT security is all about mitigating risk, that’s anyone involved in security – should know. It’s usually expressed thus:

Risk = likelihood x impact

Likelihood is sometimes expressed as “probability”, impact as “consequence” or “loss”, and I’ve seen some other variants as well, but the version above is generally sufficient for most purposes.

Using the formula

How should you use the formula? Well, it’s most useful for comparing risks and deciding how to mitigate them. Humans are terrible at calculating risk, and any tools that help them[1] are good.  In order to use this formula correctly, you want to compare risks over the same time period.  You could say that almost any eventuality may come to pass over the lifetime of the universe, but comparing the risk of losing broadband access to the risk of your lead developer quitting for another company between the Big Bang and the eventual heat death of the universe is probably not going to give you much actionable information.

Let’s look at the two variables that we need to have in order to calculate risk.  We’ll start with the impact, because I want to devote most of this article to the other part: likelihood.

Impact is the damage that will be done if the risk materialises.  In a business context, you might look at the risk of your order system being brought down for a week by malicious attackers.  You might calculate that you would lose £15,000 in orders.  On top of that, there might be a loss of reputation which you might calculate at £30,000.  Fixing the problem might add £10,000.  Add these together, and the impact is £55,000.

What’s the likelihood?  Well, remember that we need to consider a particular time period.  What you choose will depend on what you’re interested in, but a classic use is for budgeting, and so the length of time considered is often a year.  “What is the likelihood of my order system being brought down for a week by malicious attackers over the next twelve months?” is the question you want to ask.  If you decide that it’s 0.005 (or 0.5%), then your risk is calculated thus:

Risk = 0.005 x 55,000

Risk = 275

The units don’t really matter, because what you want to do is compare risks.  If the risk of your order system being brought down through hardware failure is higher (say 500), then you should probably balance the amount of resources you assign to mitigate these risks accordingly.
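Expressed as a few lines of Python, the comparison above looks like this (using the same figures as in the example):

    def risk(likelihood, impact):
        # Risk = likelihood x impact, over an agreed time period.
        return likelihood * impact

    attack_risk   = risk(0.005, 55_000)   # malicious attackers: 275
    hardware_risk = 500                   # hardware failure, as given above

    print(f"attack risk:   {attack_risk}")
    print(f"hardware risk: {hardware_risk}")
    # The units don't matter; the comparison does - on these numbers, hardware
    # failure deserves the larger share of the mitigation budget.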

Time, reputation, trust and risk

What I’m interested in is a set of rather more complicated risks, however: those associated with human behaviour.  I’m very interested in trust, and one of the interesting things about trust is how we decide to trust people.  One way is by their reputation: if someone keeps behaving well over a long period, then we tend to trust them more – or if badly, then to trust them less[2].  If we trust someone more, our calculation of risk is likely to be strongly based on that trust, as our view of the likelihood of a behaviour at odds with the reputation that person holds will be informed by that.

This makes sense: in the absence of perfect information about humans, their motivations and intentions, our view of risk must be based on something, and reputation is actually a fairly good measure for that.  We might say that the likelihood of a customer defaulting on payment terms reduces year by year as we start to think of them as a “trusted customer”.  As the likelihood reduces, we may decide to increase the amount we lend to them – and thereby the impact of defaulting – to keep the risk about the same, year on year.
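As a tiny illustration (the figures are invented), holding the risk constant while the estimated likelihood of default falls means that the exposure you are prepared to accept can grow year on year:

    # Invented figures: keep risk constant as the likelihood of default falls.
    target_risk = 50
    for year, likelihood in enumerate([0.05, 0.02, 0.01], start=1):
        credit_limit = target_risk / likelihood
        print(f"year {year}: likelihood={likelihood}, credit limit = {credit_limit:,.0f}")
    # year 1: 1,000 -> year 2: 2,500 -> year 3: 5,000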

The risk here is what is sometimes called “playing the long game”.  Humans sometimes manipulate their reputation, or build up a reputation, in order to perform an action once they have gained trust.  Online sellers may make lots of “good” sales in order to get a 5 star rating over time, only to wait and then make a set of “bad” sales, where they don’t ship goods at all, and then just pocket the money.  Or, they may make many small sales in order to build up a good reputation, and then use that reputation to make one big sale which they have no intention of fulfilling.  Online selling sites are wise to some of these tricks, and have algorithms to try to protect buyers (in fact, the same behaviour can be used by buyers against sellers in some cases), but these are not perfect.

I’d like to come back to the London Bridge attack.  In this case, it seems likely that the attacker bided his time over many years, behaving well, and raising his “reputation” among those who knew him – the prison staff, parole board, rehabilitation conference organisers, etc. – so that he had the opportunity to perform one major action at odds with that reputation.  The heroism of those around him stopped him being as successful as he may have hoped, but still at the cost of two innocent lives and several serious injuries.

There is no easy way to deal with such issues.  We need reputation, and we need to allow people to show that they have changed and can be integrated into society, but when we make risk calculations based on reputation in any sphere, we should take care to consider whether actors are playing a long game, and what the possible ramifications would be if they were to act at odds with that reputation.

I noted above that humans are bad at calculating risk, and to follow our example of the non-defaulting customer, one mistake might be to increase the credit we give to that customer beyond what the increase in their reputation justifies: actually accepting higher risk than we would have done previously, because we consider them trustworthy.  If we do this, we’ve ceased to use the risk formula, and have started to act irrationally.  Don’t do that.


1 – OK, then: “us”.

2 – I’m writing this in the lead up to a UK General Election, and it occurs to me that we actually don’t apply this to most of our politicians.

Confidential computing – the new HTTPS?

Security by default hasn’t arrived yet.

Over the past few years, it’s become difficult to find a website which is just “http://…”.  This is because the industry has finally realised that security on the web is “a thing”, and also because it has become easy for both servers and clients to set up and use HTTPS connections.  A similar shift may be on its way in computing across cloud, edge, IoT, blockchain, AI/ML and beyond.  We’ve known for a long time that we should encrypt data at rest (in storage) and in transit (on the network), but encrypting it in use (while processing) has been difficult and expensive.  Confidential computing – providing this type of protection for data and algorithms in use, using hardware capabilities such as Trusted Execution Environments (TEEs) – protects data on hosted systems and in vulnerable environments.

I’ve written several times about TEEs and, of course, the Enarx project of which I’m a co-founder with Nathaniel McCallum (see Enarx for everyone (a quest) and Enarx goes multi-platform for examples).  Enarx uses TEEs, and provides a platform- and language-independent deployment platform to allow you safely to deploy sensitive applications or components (such as micro-services) onto hosts that you don’t trust.  Enarx is, of course, completely open source (we’re using the Apache 2.0 licence, for those with an interest).  Being able to run workloads on hosts that you don’t trust is the promise of confidential computing, which extends normal practice for sensitive data at rest and in transit to data in use:

  • storage: you encrypt your data at rest because you don’t fully trust the underlying storage infrastructure;
  • networking: you encrypt your data in transit because you don’t fully trust the underlying network infrastructure;
  • compute: you encrypt your data in use because you don’t fully trust the underlying compute infrastructure.

I’ve got a lot to say about trust, and the word “fully” in the statements above is important (I actually added it on re-reading what I’d written).  In each case, you have to trust the underlying infrastructure to some degree, whether it’s to deliver your packets or store your blocks, for instance.  In the case of the compute infrastructure, you’re going to have to trust the CPU and associated firmware, just because you can’t really do computing without trusting them (there are techniques such as homomorphic encryption which are beginning to offer some opportunities here, but they’re limited, and the technology is still immature).

Questions sometimes come up about whether you should fully trust CPUs, given some of the security problems that have been found with them, and also about whether they are fully secure against physical attacks on the host in which they reside.

The answer to both questions is “no”, but this is the best technology we currently have available at scale and at a price point which makes it generally deployable.  To address the second question, nobody is pretending that this (or any other) technology is fully secure: what we need to do is consider our threat model and decide whether TEEs (in this case) provide sufficient security for our specific requirements.  In terms of the first question, the model that Enarx adopts is to allow decisions to be made at deployment time as to whether you trust a particular set of CPUs.  So, for example, if vendor Q’s generation R chips are found to contain a vulnerability, it will be easy to say “refuse to deploy my workloads to R-type CPUs from Q, but continue to deploy to S-type, T-type and U-type chips from Q and any CPUs from vendors P, M and N.”
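By way of illustration only – this is not Enarx’s actual configuration format or API – such a deployment-time policy might boil down to something like the following sketch:

    # Hypothetical policy sketch: which CPU types are acceptable for deployment.
    DENIED_PLATFORMS = {("Q", "R")}           # vendor Q's generation R: known vulnerability
    TRUSTED_VENDORS  = {"Q", "P", "M", "N"}   # vendors we are prepared to trust at all

    def may_deploy(vendor, generation):
        # Return True if a workload may be deployed to this CPU type.
        if vendor not in TRUSTED_VENDORS:
            return False
        return (vendor, generation) not in DENIED_PLATFORMS

    print(may_deploy("Q", "R"))   # False: explicitly denied
    print(may_deploy("Q", "S"))   # True: same vendor, unaffected generation
    print(may_deploy("P", "A"))   # True: trusted vendor, no known issues

The point is not the code itself, but that the decision is made at deployment time, and can be changed as new information about CPU vulnerabilities emerges.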