What is confidential computing?

Industry interest has been high, and overwhelmingly positive.

On Wednesday, 21st August, 2019 (just under a week ago, at time of writing), Jim Zemlin of the Linux Foundation announced the intent to form the Confidential Computing Consortium, with members including Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom and Tencent.  I’m particularly proud as Red Hat (my employer) is one of those[1], and I spent the preceding few weeks and days working very hard to ensure that we would be listed as one of the planned founding members.

“Confidential Computing” sounds like a lofty goal, and it is.  We’ve known for ages that you should encrypt sensitive data at rest (in storage) and in transit (on the network), but confidential computing, as defined by the consortium, is about doing the same for sensitive data – and algorithms – in use.  The consortium plans to encourage industry to use hardware technologies generally called Trusted Execution Environments to allow applications and processes to be encrypted as they are running.

This may sound somewhat familiar to those who follow my blog, and it should: Enarx, an open source project launched by Red Hat, was announced as one of the projects that should be part of the initial launch.  I’ve written about Enarx in several places.

Additionally, you’ll find lots of information on the introduction page of the Enarx wiki.

The press release from the Linux Foundation lists the following goals for the Confidential Computing Consortium (my emboldening):

The Confidential Computing Consortium will bring together hardware vendors, cloud providers, developers, open source experts and academics to accelerate the confidential computing market; influence technical and regulatory standards; and **build open source tools that provide the right environment for TEE development**. The organization will also anchor industry outreach and education initiatives.

Enarx, of course, fits perfectly into this description, as per the text in bold.  Beyond that, however, there is alignment with the other aims of the Enarx project, and there are the opportunities that a wider consortium presents.  The addition of hardware vendors gives us – and the other participants – opportunities to discuss implementations (hardware and software) in an open environment; cloud providers and other users will give us great use cases; and academic involvement broadens the likelihood of quick access to new ideas and research.

We also expect industry and regulatory standards to be forthcoming, and a need for education as more sectors and industries engage with confidential computing: the consortium provides a framework to engage in related activities.

It’s early days for the Confidential Computing Consortium, but I’m really hopeful and optimistic.  Already, the openness displayed between the planned members on both technical and non-technical collaboration has gone far beyond what I would have expected.  The industry interest – as evidenced by press and community activities – has been high, and overwhelmingly positive. Fans of Enarx – and confidential computing generally – should be excited by the prospect of greater visibility and collaboration.  After all, isn’t that what open source is about in the first place?


1 – this seems like a good place to point out that the views in this article and blog are my own, and may not represent those of my employer, of the Confidential Computing Consortium, the Linux Foundation or any other body.

Turtles – and chains of trust

There’s a story about turtles that I want to tell.

One of the things that confuses Brits is that many Americans[1] don’t know the difference between tortoises and turtles[2], whereas we (who have no species of either type native to our shores) seem to have no problem differentiating them[3].  This is the week when Americans[1] like to bash us Brits over the little revolution they had a couple of centuries ago, so I don’t feel too bad about giving them a little hassle about this.

As it happens, there’s a story about turtles that I want to tell. It’s important to security folks, to the extent that you may hear a security person just say “turtles” to a colleague in criticism of a particular scheme, which will just elicit a nod of agreement: they don’t like it. There are multiple versions of this story[4]: here’s the one I tell:

A learned gentleman[5] is giving a public lecture.  He has talked about the main tenets of modern science, such as the atomic model, evolution and cosmology.  At the end of the lecture, an elderly lady comes up to him.

“Young man,” she says.

“Yes,” says he.

“That was a very interesting lecture,” she continues.

“I’m glad you enjoyed it,” he replies.

“You are, however, completely wrong.”

“Really?” he says, somewhat taken aback.

“Yes.  All that rubbish about the Earth hovering in space, circling the sun.  Everybody knows that the Earth sits on the back of a turtle.”

The lecturer, spotting a hole in her logic, replies, “But madam, what does the turtle sit on?”

The elderly lady looks at him with a look of disdain.  “What a ridiculous question!  It’s turtles all the way down, of course!”

The problem with the elderly lady’s assertion, of course, is one of infinite regression: there has to be something at the bottom. The reason that this is interesting to security folks is that they know that systems need to have a “bottom turtle” at some point.  If you are to trust a system, it needs to sit on something: this is typically called the “TCB”, or Trusted Computing Base, and, in most cases, needs to be rooted in hardware.  Even saying “rooted in hardware” is not enough: exactly what hardware you trust, and where, depends on a number of factors, including what you know about your hardware supply chain; what you feel about motherboards; what your security posture is; how realistic it is that State Actors might try to attack you; how deeply you want to delve into the hardware stack; and, ultimately, just how paranoid you are.

Principles for chains of trust

When you are building a system which you need to have some trust in, you will typically talk about the chain of trust, from the bottom up.  This idea of a chain of trust is very important, and very pervasive, within security.  It allows for some important principles (there’s a small sketch of these in code after the list):

  • there has to be a root of trust somewhere (the “bottom turtle”);
  • the chain is only as strong as its weakest link (and attackers will find it);
  • be explicit about each of the links in the chain;
  • realise that some of the links in the chain may change (e.g. if software is updated);
  • be aware that once you have lost trust in a chain, you need to rebuild it from at least the layer below the one in which you have lost trust;
  • simple chains (with no “joins” with other chains of trust) are much, much simpler to validate and monitor than more complex ones.
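To make these principles a little more concrete, here’s a minimal sketch in Python.  The layer names, and the idea of a simple per-link trusted/untrusted flag, are entirely my own invention for illustration – real chains involve measurements, signatures and a great deal more – but it shows why, once you have lost trust in a link, you rebuild from the layer below it:

```python
# A toy chain of trust: each link vouches for the one above it.
# Layer names and "trusted" flags are purely illustrative.
chain = [
    {"name": "hardware root of trust", "trusted": True},   # the bottom turtle
    {"name": "firmware",               "trusted": True},
    {"name": "bootloader",             "trusted": True},
    {"name": "kernel",                 "trusted": False},  # trust lost here
    {"name": "application",            "trusted": True},
]

def first_broken_link(chain):
    """Return the index of the lowest untrusted link, or None if intact."""
    for i, link in enumerate(chain):
        if not link["trusted"]:
            return i
    return None

broken = first_broken_link(chain)
if broken is None:
    print("Chain intact - but it is only as strong as its weakest link.")
elif broken == 0:
    print("The bottom turtle itself is compromised: start again.")
else:
    # Everything from the break upwards is suspect: rebuild from the
    # layer below the one in which trust was lost.
    print(f"Trust lost at '{chain[broken]['name']}'; "
          f"rebuild from '{chain[broken - 1]['name']}' upwards.")
```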

Software/hardware systems are not the only place in which you will encounter chains of trust: in fact, you come across them every time you make a TLS[6] connection to a web site (you know: that green padlock icon in the address bar).  In this case, there’s a chain (sometimes short, sometimes long) of certificates from a “root CA” (a trusted party that your browser knows about) down to the organisation (or department or sub-organisation) running the web site to which you’re connecting.  Assuming that each link in the chain trusts the next link to be who they say they are, the chain of signatures (turned into a certificate) can be checked, to give an indication, at least, that the site you’re visiting isn’t a spoof set up by somebody pretending to be, for example, your bank.  In this case, the bottom turtle is the root CA[7], and its manifestation in the chain of trust is its root certificate.
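If you want to watch a real chain of trust being checked, the Python standard library will do it for you.  In this sketch (the hostname is just an example), the TLS handshake only succeeds if every certificate in the chain is verified back to a root CA in the local trust store – the root certificate being, as above, the manifestation of the bottom turtle:

```python
# A minimal sketch: make a TLS connection and let the standard library
# validate the certificate chain against the system's root CAs.
import socket
import ssl

context = ssl.create_default_context()  # loads the local root CA store

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        # Reaching this point means every link checked out: each
        # certificate was signed by the next, ending at a trusted root.
        cert = tls.getpeercert()
        print("Issued to:", dict(pair[0] for pair in cert["subject"]))
        print("Issued by:", dict(pair[0] for pair in cert["issuer"]))
```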

And chains of trust aren’t restricted to the world of IT, either: supply chains care a lot about chains of trust.  Can you be sure that the diamond in the ring you bought from your local jewellery store, who got it from an artisan goldsmith, who got it from a national diamond chain, did not originally come from a “blood diamond” nation?  Can you be sure that the replacement part for your car, which you got from your local independent dealership, is an original part, and can the manufacturer be sure of the quality of the materials they used?  Blockchains are offering some interesting ways to help track these sorts of supply chains, and can even be applied to supply chains in software.

Chains of trust are everywhere we look.  Some are short, and some are long.  In most cases, there will be a need to employ transitive trust – I need to believe that whoever created my browser checked the root CA, just as you need to believe that your local dealership verified that the replacement part came from the right place – because the number of links that we can verify ourselves is typically low.  This may be due to a variety of factors, including time, expertise and visibility.  But the more we are aware of the fact that there is a chain of trust in any particular situation, the more we can make conscious decisions about the amount of trust we should put in it, rather than making assumptions about the safety, security or validation of something we are buying or using.


1 – citizens of the US of A.

2 – have a look on a stock photography site like Pixabay if you don’t believe me.

3 – tortoises are land-based, turtles are aquatic, I believe.

4 – Wikipedia has a good article explaining both the concept and the story’s etymology.

5 – the genders of the protagonists are typically as I tell, which tells you a lot about the historical context, I’m afraid.

6 – this used to be “SSL”, but if you’re still using SSL, you’re in trouble: it’s got lots of holes in it!

7 – or is it?  You could argue that the HSM that (hopefully) houses the root CA, or the processes that protect it, could be considered the bottom turtle.  For the purposes of this discussion, however, the extent of the “system” is the certificate chain and its signers.

Trust & choosing open source

Your impact on open source can be equal to that of others.

A long time ago, in a standards body far, far away, I was involved in drafting a document about trust and security. That document rejoices in the name ETSI GS NFV-SEC 003: Network Functions Virtualisation (NFV); NFV Security; Security and Trust Guidance[1], and section 5.1.6.3[2] talks about “Transitive trust”.  Convoluted and lengthy as the document is, I’m very proud of it[3], and I think it tackles a number of very important issues, including (unsurprisingly, given the title) a number of issues around trust.  It defines transitive trust thus:

“Transitive trust is the decision by an entity A to trust entity B because entity C trusts it.”

It goes on to disambiguate transitive trust from delegated trust, where C knows about the trust relationship.
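Here’s a minimal sketch of that definition, with made-up entity names.  Representing trust as a bare yes/no is a gross simplification – it ignores context and time, of which more below – but it shows the shape of the relationship:

```python
# Transitive trust, per the definition above: A trusts B because C trusts B.
# The entities and relationships are invented; trust here is a bare yes/no.
direct_trust = {
    ("A", "C"),  # A directly trusts C
    ("C", "B"),  # C directly trusts B
}
entities = {e for pair in direct_trust for e in pair}

def trusts(truster, trustee):
    """True if truster trusts trustee directly or via one intermediary."""
    if (truster, trustee) in direct_trust:
        return True
    return any((truster, mid) in direct_trust and (mid, trustee) in direct_trust
               for mid in entities)

print(trusts("A", "B"))  # True - transitively, via C
# In *delegated* trust, C would also know that A is relying on its
# judgement; in transitive trust, C need not know at all.
```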

At no point in the document does it mention open source software.  To be fair, we were trying to be even-handed and show no favour towards any type of software or vendors – many of the companies represented on the standards body were focused on proprietary software – and I wasn’t even working for Red Hat at the time.

My move to Red Hat, and, as it happens, generally away from the world of standards, has led me to think more about open source.  It’s also led me to think more about trust, and how people decide whether or not to use open source software in their businesses, organisations and enterprises.  I’ve written, in particular, about how, although open source software is not ipso facto more secure than proprietary software, the chances of it being more secure, or made more secure, are higher (in Disbelieving the many eyes hypothesis).

What has this to do with trust, specifically transitive trust?  Well, I’ve been doing more thinking about how open source and trust are linked together, and distributed trust is a big part of it.  Distributed trust and blockchain are often talked about in the same breath, and I’m glad, because I think that all too often we fall into the trap of ignoring the fact that there definitely are trust relationships associated with blockchain – they are just often implicit, rather than well-defined.

What I’m interested in here, though, is distributed, transitive trust as a way of choosing whether or not to use open source software.  This is, I think, true not only when talking about non-functional properties such as the security of open source but also when talking about the software itself.  What are we doing when we say “I trust open source software”?  We are making a determination that enough of the people who have written and tested it have similar requirements to ours, and that their combined expertise is such that the risk we take in using the software is acceptable.

There’s actually a lot going on here, some of which is very interesting (I’ve sketched a toy version of the reasoning after the list):

  • we are trusting architects and designers to design software to meet our use cases and requirements;
  • we are trusting developers to implement code well, to those designs;
  • we are trusting developers to review each other’s code;
  • we are trusting documentation folks to document the software correctly;
  • we are trusting testers to write, run and check tests which are appropriate to our use cases;
  • we are trusting those who deploy the code to run it in ways which are similar to our use cases;
  • we are trusting those who deploy the code to report bugs;
  • we are trusting those who receive bug reports to fix them as expected.
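Here’s the toy sketch I promised: one (deliberately naive) way of modelling the determination is to score our confidence in each of those groups and check that no link falls below our risk threshold.  The roles echo the list above; the numbers are entirely invented:

```python
# A deliberately naive model of "is the risk acceptable?".
# Confidence scores (0.0-1.0) are invented for illustration.
confidence = {
    "architects/designers": 0.9,
    "developers (implementation)": 0.8,
    "developers (review)": 0.7,
    "documentation folks": 0.6,
    "testers": 0.8,
    "deployers (similar use cases)": 0.5,
    "bug reporters": 0.7,
    "maintainers (fixing bugs)": 0.8,
}

RISK_THRESHOLD = 0.5  # our (also invented) appetite

# Distributed or not, this is still a chain-of-trust problem at heart:
# one sufficiently weak link undermines the whole determination.
acceptable = all(score >= RISK_THRESHOLD for score in confidence.values())
print("Risk acceptable?", acceptable)
```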

There’s more, of course, but that’s definitely enough to get us going.  Of course, when we choose to use proprietary software, we’re trusting people to do all of those things, but in this case, the trust relationship is much clearer, and much tighter: if I don’t get what I expect, I can choose another vendor, or work with the original vendor to get what I want.

In the case of open source software, it’s all more nebulous: I may be able to identify at least some of the entities involved (designers, software engineers and testers, for example), but the amount of power that I as a consumer of the software have over their work is likely to be low.  There’s a weird almost-paradox here, though: you can argue that for proprietary software vendors, my power over the direction of the software is higher (I’m paying them or not paying them), but my direct visibility into what actually goes on, and my ability to ensure that I get what I want is reduced when compared to the open source case.

That’s because, for open source, I can be any of the entities outlined above.  I – or those in my organisation – can be architect, designer, document writer, tester, and certainly deployer and bug reporter.  When you realise that your impact on open source can be equal to that of others, the distributed trust becomes less transitive.  You understand that you have an equal say – along with all the other entities – in the creation, maintenance, requirements and quality of the software which you are running, and then you become part of a network of trust relationships which are distributed, but at less of a remove than those you’ll experience when buying proprietary software.

Why, then, would anybody buy or license open source software from a vendor?  Because that way, you can address other risks – around support, patching, training, etc. – whilst still enjoying the benefits of the distributed trust network that I’ve outlined above.  There’s a place for those who consume directly from the source, but that approach doesn’t meet the risk appetite of all software consumers – including some of those who are involved in the open source community themselves.

Trust is a complex issue, and the ways in which we trust other things and other people are complex, too (you’ll find a bit of an introduction in Of different types of trust), but I think it’s desperately important that we examine and try to understand the sorts of decisions we make, and why we make them, in order to allow us to make informed choices around risk.


1 – if you’ve not been involved in standards creation, this may fill you with horror, but if you have been so involved, this sort of title probably feels normal.  You may need help.

2 – see 1.

3 – I was one of two “rapporteurs”, or editors of the document, and wrote a significant part of it, particularly the sections around trust.

Of different types of trust

What is doing the trusting, and what does the word actually even mean?

As you may have noticed if you regularly read this blog, it’s not uncommon for me to talk about trust.  In fact, one of the earliest articles that I posted – over two years ago, now – was entitled “What is trust?”.  I started thinking about this topic seriously nearly twenty years ago, specifically when thinking about peer-to-peer systems, and how they might establish trust relationships, and my interest has continued since, with a particular fillip during my time on the Security Working Group for ETSI NFV[1], where we had some very specific issues that we wanted to explore and I had the opportunity to follow some very interesting lines of thought.  More recently, I introduced Enarx, whose main starting point is that we want to reduce the number of trust relationships that you need to manage when you deploy software.

It was around the time of the announcement that I realised quite how much of my working life I’ve spent thinking and talking about trust – and also:

  • how rarely most other people seem to have done the same;
  • how little literature there is on the subject; and
  • how interested people often are to talk about it when it comes up in a professional setting.

I’m going to clarify the middle bullet point in a minute, but let me get to my point first, which is this: I want to do a lot more talking about trust via this blog, with the possible intention of writing a book[2] on the subject.

Here’s the problem, though.  When you use the word trust, people think that they know what you mean.  It turns out that they almost never do.  Let’s try to tease out some of the reasons for that by starting with four fairly innocuous, simple-looking statements:

  1. I trust my brother and my sister.
  2. I trust my bank.
  3. My bank trusts its IT systems.
  4. My bank’s IT systems trust each other.

When you make four statements like this, it quickly becomes clear that something different is going on in each case.  I stand by my definition of trust and the three corollaries, as expressed in “What is trust?”.  I’ll restate them here in case you can’t be bothered to follow the link:

  • “Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation.”
  • My first corollary: “Trust is always contextual.”
  • My second corollary: “One of the contexts for trust is always time.”
  • My third corollary: “Trust relationships are not symmetrical.”

These all hold true for each of the statements above – although they may not be self-evident in the rather bald way that I’ve put them.  What’s more germane to the point I want to make today, however, and hopefully obvious to you, dear reader[4], is that the word “trust” signifies something very different in each of the four statements.

  • Case 1 – my trusting my brother and sister.  This is about trust between individual humans – specifically my trust relationship to my brother, and my trust relationship to my sister.
  • Case 2 – my trusting my bank.  This is about trust between an individual and an organisation: specifically a legal entity with particular services and structure.
  • Case 3 – the bank trusting its IT systems.  This is about an organisation trusting IT systems, and it suddenly feels like we’ve moved into a very different place from the initial two cases.  I would argue that there’s a huge difference between the first and second case as well, actually, but we are often lulled into a false sense of equivalence because when we interact with a bank, it’s staffed by people, and also has many of the legal protections afforded to an individual[5]. There are still humans in this case, though, in that one assumes that it is the intention of certain humans who represent the bank to have a trust relationship with certain IT systems.
  • Case 4 – the IT systems trusting each other.  We’re really not in Kansas anymore with this statement[6].  There are no humans involved in this set of trust relationships, unless you’re attributing agency to specific systems, and if so, which? What, then, is doing the trusting, and what does the word actually even mean?

It’s clear, then, that we can’t just apply the same word, “trust” to all of these different contexts and assume that it means the same thing in each case.  We need to differentiate between them.
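One way to force that differentiation is to write trust down with all of its parameters made explicit.  Here’s a minimal sketch based on my definition and corollaries – the field names and example values are my own, and case 4 is the sort of statement it’s aimed at:

```python
# Trust with its parameters made explicit, per the definition above.
# Field names and example values are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Trust:
    truster: str      # the entity holding the assurance
    trustee: str      # the entity expected to act
    actions: str      # the particular actions expected
    context: str      # first corollary: trust is always contextual
    valid_until: str  # second corollary: one of the contexts is always time

# Third corollary: this relationship is not symmetrical - it says nothing
# about whether the ledger system trusts the payment system in return.
case_4 = Trust(truster="payment system", trustee="ledger system",
               actions="record transactions accurately",
               context="production banking network",
               valid_until="next audit or attestation")
print(case_4)
```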

I stated, above, that I intended to clarify my statement about the lack of literature around trust.  Actually, there’s lots and lots of literature around trust, but it deals almost exclusively with cases 1 and 2 above.  This is all well and good, but we spend so much time talking about trust with regards to systems (IT or computer systems) that we deserve, as a community, some clarity about what we mean, what assumptions we’re making, and what the ramifications of those assumptions are.

That, then, is my mission.  It’s certainly not going to be the only thing that I write about on this blog, but when I do write about trust, I’m going to try to set out my stall and add some better definition and clarification to what I – and we – are talking about.


0 – apropos of nothing in particular, I often use pixabay for my images.  This is one of the suggestions if you search on “trust”, but what exactly is going on here?  The child is trusting the squirrel thing to do what?  Not eat its nose?  Not stick its claws up its left nostril?  I mean, really?

1 – ETSI is a telco standards body, NFV is “Network Function Virtualisation”.

2 – which probably won’t just consist of a whole bunch of these articles in a random order, with the footnotes taken out[3].

3 – because, if nothing else, you know that I’m bound to keep the footnotes in.

4 – I always hope that there’s actually more than one of you, but maybe it’s just me, the solipsist, writing for a world conjured by my own brain.

5 – or it may do, depending on your jurisdiction.

6 – I think I’ve only been to Kansas once, actually.

Announcing Enarx

If I’ve managed the process properly, this article should be posting at almost exactly the time that we show a demo at Red Hat Summit 2019 in Boston.  That demo, to be delivered by my colleague Nathaniel McCallum, will be of an early incarnation of Enarx, a project that a few of us at Red Hat have been working on for a few months now, and which we’re ready to start announcing to the world.  We have code, we have a demo, we have a github repository, we have a logo: what more could a project want?  Well, people – but we’ll get to that.

What’s the problem?

When you run software (a “workload”) on a system (a “host”) on the cloud or on your own premises, there are lots and lots of layers.  You often don’t see those layers, but they’re there.  Here’s an example of the layers that you might see in a standard cloud virtualisation architecture.  The different colours represent different entities that “own”  different layers or sets of layers.

[Figure: classic cloud virtualisation architecture]

Here’s a similar diagram depicting a standard cloud container architecture.  As before, each different colour represents a different “owner” of a layer or set of layers.

[Figure: cloud container architecture]

These owners may be of very different types, from hardware vendors to OEMs to Cloud Service Providers (CSPs) to middleware vendors to Operating System vendors to application vendors to you, the workload owner.  And for each workload that you run, on each host, the exact list of layers is likely to be different.  And even when they’re the same, the versions of the layer instances may be different, whether it’s a different BIOS version, a different bootloader, a different kernel version or whatever else.

Now, in many contexts, you might not worry about this, and your Cloud Service Provider goes out of its way to abstract these layers and their version details away from you.  But this is a security blog, for security people, and that means that anybody who’s reading this probably does care.

The reason we care is not just the different versions and the different layers, but the number of different things – and different entities – that we need to trust if we’re going to be happy running any sort of sensitive workload on these types of stacks.  I need to trust every single layer, and the owner of every single layer, not only to do what they say they will do, but also not to be compromised.  This is a big stretch when it comes to running my sensitive workloads.
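To put a (made-up) number on that stretch, here’s a sketch of the sort of stack shown in the diagrams above, flattened into a list.  The exact layers and owners will differ per host and per workload – which is rather the point:

```python
# An illustrative stack: (layer, owner) pairs. Both columns are examples;
# real stacks vary per host, per workload and per version.
layers = [
    ("CPU / management engine", "hardware vendor"),
    ("BIOS / firmware",         "OEM"),
    ("hypervisor",              "Cloud Service Provider"),
    ("kernel",                  "Operating System vendor"),
    ("middleware / runtime",    "middleware vendor"),
    ("application",             "workload owner"),
]

owners = {owner for _, owner in layers}
print(f"{len(layers)} layers to trust, owned by {len(owners)} different entities")
# ...and that is before you consider the versions of each layer, per host.
```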

What’s Enarx?

Enarx is a project which is trying to address this problem of having to trust all of those layers.  We made the decision that we wanted to allow people running workloads to be able to reduce the number of layers – and owners – that they need to trust to the absolute minimum.  We plan to use Trusted Execution Environments (“TEEs” – see Oh, how I love my TEE (or do I?)), and to provide an architecture that looks a little more like this:

[Figure: reduced-layer architecture using a TEE]

In a world like this, you have to trust the CPU and firmware, and you need to trust some middleware – of which Enarx is part – but you don’t need to trust all of the other layers, because we will leverage the capabilities of the TEE to ensure the integrity and confidentiality of your application.  The Enarx project will provide attestation of the TEE, so that you know you’re running on a true and trusted TEE, and will provide open source, auditable code to help you trust the layer directly beneath your application.
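To be clear about what that attestation step buys you, here’s a hand-waved sketch.  This is not Enarx’s actual API or protocol – in a real deployment the measurement arrives in a report signed by the hardware vendor’s keys, which must also be verified – but it shows the shape of the decision being made:

```python
# Hypothetical attestation check - NOT Enarx's real interface.
# Idea: only release the sensitive workload once the TEE has proved,
# via a measurement, that it is running exactly the code you expect.
import hmac

EXPECTED_MEASUREMENT = bytes.fromhex("00" * 32)  # placeholder value

def send_to_tee(workload: bytes) -> None:
    """Hypothetical delivery step - stands in for the real protocol."""
    print(f"delivering {len(workload)} encrypted bytes to the attested TEE")

def deploy_if_attested(reported: bytes, workload: bytes) -> None:
    # compare_digest avoids leaking information through timing differences
    if not hmac.compare_digest(reported, EXPECTED_MEASUREMENT):
        raise RuntimeError("attestation failed: do not send the workload")
    send_to_tee(workload)
```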

The initial code is out there – working on AMD’s SEV TEE at the moment – and enough of it works now that we’re ready to tell you about it.

Making sure that your application meets your own security requirements is down to you.  🙂

How do I find out more?

Easiest is to visit the Enarx github: https://github.com/enarx.

We’ll be adding more information there – it’s currently just code – but bear with us: there are only a few of us on the project at the moment. A blog is on the list of things we’d like to have, but I thought I’d start here for now.

We’d love to have people in the community getting involved in the project.  It’s currently quite low-level, and requires quite a lot of knowledge to get running, but we’ll work on that.  You will need some specific hardware to make it work, of course.  Oh, and if you’re an early-boot or low-level KVM hacker, we’re particularly interested in hearing from you.

I will, of course, respond to comments on this article.


Thinking beyond “zero-trust”

The components in which you have constant trust relationships are “islands in the stream”.

I wrote a fairly complex post a few months ago called “Zero-trust”: my love/hate relationship, in which I discussed in some detail what “zero-trust” networks are, and why I’m not convinced.  The key point turned out to be that I’m not happy about the way the word “zero” is being thrown around here, as I think what’s really going on is explicit trust.

Since then, there hasn’t been a widespread movement away from using the term, to my vast lack of surprise.  In fact, I’ve noticed the term “zero-trust” being used in a different context: in p2p (peer-to-peer) and Web 3.0 discussions.  The idea is that there are some components of the ecosystem that we don’t need to trust: they’re “just there”, doing what they were designed to do, and are basically neutral in terms of the rest of the actors in the network.  Now, I like the idea that there are neutral components in the ecosystem: I think it’s a really important distinction to draw between them and other parts of the system.  What I’m not happy about is the suggestion that we have zero trust in those components.  For me, these are the components that we must trust the most of all the entities in the system.  If they don’t do what we expect them to do, then everything falls apart pretty quickly.  I think the same argument probably applies to “zero-trust” networking, too.

I started thinking quite hard about this, and I think I understand where the confusion arises.  I’ve spent a lot of time over nearly twenty years thinking about trust, and what it means.  I described my definition of trust in another post, “What is trust?” (which goes into quite a lot of detail, and may be worth reading for a deeper understanding of what I’m going on about here):

  • “Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation.”

For the purposes of this discussion, it’s the words “will perform particular actions according to a specific expectation” that are key here.  This sounds to me like exactly what is being described in the requirement above that components are “doing what they were designed to do”.  It is this trust in their correct functioning which is a key foundation of the systems being described.  As someone with a background in security, I always (try to) have these sorts of properties in mind when I consider a system: they are, as above, typically foundational.

What I think most people are interested in, however – because it’s a visible and core property of many p2p systems – is the building, maintaining and decay of trust between components.  In this equation, the components have zero change in trust unless there’s a failure in the system (which, being a non-standard state, is not a property that is top-of-mind).  If you’re interested in a p2p world where you need constantly to be evaluating and re-evaluating the level of trust you have in other actors, then the components in which you have (hopefully) constant trust relationships are “islands in the stream”.  If they can truly be considered neutral in terms of their trust – they are neither able to be considered “friendly” nor “malevolent” as they are neither allied to nor can be suborned by any of the actors – then their static nature is uninteresting in terms of the standard operation of the system which you are building.

This does not mean that they are uninteresting or unimportant, however.  Their correct creation and maintenance are vital to the system itself.  It’s for this reason that I’m unhappy about the phrase “zero-trust”, as it seems to suggest that these components are not worthy of our attention.  As a security bod, I reckon they’re among the most fascinating parts of any system (partly because, if I were a Bad Person[tm], they would be my first point of attack).  So you can be sure that I’m going to be keeping an eye out for these sorts of components, and trying to raise people’s awareness of their importance.  You can trust me.

Who’s saying “hello”? – agency, intent and AI

Who is saying “hello world?”: you, or the computer?

I don’t yet have one of those Google or Amazon talking speaker thingies in my house or office.  A large part of this is that I’m just not happy about the security side: I know that the respective companies swear that they’re only “listening” when you say the device’s trigger word, but even if that’s the case, I like to pretend[1] that I have at least some semblance of privacy in my life.  Another reason, however, is that I’m not sure that I like what happens to people when they pretend that there’s a person listening to them, but it’s really just a machine.

It’s not just Alexa and the “OK, Google” persona, however.  When I connect to an automated phone-answering service, I worry when I hear “I’ll direct your call” from a non-human.  Who is “I”?  “We’ll direct your call” is better – “we” could be the organisation with whom I’m interacting.  But “I”?  “I” is the pronoun that people use.  When I hear “I”, I’m expecting sentience: if it’s a machine, I’m expecting AI – preferably fully Turing-compliant.

There’s a more important point here, though.  I’m entirely aware that there’s no sentience behind that “I”[2], but there’s an important issue about agency that we should unpack.

What, then, is “agency”?  I’m talking about the ability of an entity to act on its or another’s behalf, and I touched on this in a previous post, “Wow: autonomous agents!”.  When somebody writes some code, what they’re doing is giving ability to the system that will run that code to do something – that’s the first part.  But the agency doesn’t really occur, I’d say, until that code is run/instantiated/executed.  At this point, I would argue, the software instance has agency.

But whose agency, exactly?  For whom is this software acting?

Here are some answers.  I honestly don’t think that any of them is right.

  1. the person who owns the hardware (you own the Alexa hardware, right?  You paid Amazon for it…  Or what about running applications on the cloud?).
  2. the person who started the software (you turned on the Alexa hardware, which started the software…  And don’t forget software which is automatically executed in response to triggers or on a time schedule.)
  3. the person who gave the software the instructions (what do you mean, “gave it the instructions”?  Wrote its config file?  Spoke to it?  Set up initial settings?  Typed in commands?  And even if you gave it instructions, do you think that your OK Google hardware is implementing your wishes, or Google’s?  For whom is it actually acting?  And what side effects (like recording your search history and deciding what to suggest in your feed) are you happy to believe are “yours”?)
  4. the person who installed the software (your phone comes with all sorts of software installed, but surely you are the one who imbues it with agency?  If not, whom are you blaming: Google (for the Android apps) or Samsung (which actually put them on the phone)?)
  5. the person who wrote the software (I think we’ve already dealt with this, but even then, is it a single person, or an organisation?  What about open source software, which is typically written, compiled and documented by many different people?  Ascribing “ownership” or “authorship” is a distinctly tricky (and intentionally tricky) issue when you discuss open source)

Another way to think of this problem is to ask: when you write and execute a program, who is saying “hello world?”: you, or the computer?

There are some really interesting questions that come out of this.  Here are a couple that come to mind, which seem to me to be closely connected.

  • In the film WarGames[3], is the automatic dialling that Matthew Broderick’s character’s computer carries out an act with agency?  Or is it when it connects to another machine?  Or when it records the details of that machine?  I don’t think anyone would argue that the computer is acting with agency once David Lightman actually gets it to complete a connection and interact with it, but what about before?
  • Google used to run automated programs against messages received as part of the Gmail service looking for information and phrases which it could use to serve ads.  They were absolutely adamant that they, Google, weren’t doing the reading: it was just a computer program.  I’m not sure how clear or safe a distinction that is.

Why does this all matter?  Well, one of the more pressing reasons is because of self-driving cars.  Whose fault is it when one goes wrong and injures or kills someone?  What about autonomous defence systems?

And here’s the question that really interests – and vexes – me: is this different when the program which is executing can learn?  I don’t even mean strong AI: just that it can change what it does based on the behaviour it “sees”, “hears” or otherwise senses.  It feels to me that there’s a substantive difference between:

a) actions carried out at the explicit (asynchronous) request of a human operator, or according to sets of rules coded into a program

AND

b) actions carried out in response to rules that have been formed by the operation of the program itself.  There is what I’d call synchronous intent within the program.

You can argue that b) has pretty much always been around, in basic forms, but it seems to me to be different when programs are being created with the expectation that humans will not necessarily be able to decode the rules, and where the intent of the human designers is to allow rulesets to be created in this way.

There is some discussion at the moment as to how and/or whether rulesets generated by open source projects should be shared.  I think the general feeling is that there’s no requirement for them to be – in the same way that material I write using an open source text editor shouldn’t automatically be considered open source – but open data is valuable, and finding ways to share it is a good idea, IMHO.

In WarGames, that is the key difference between the system as originally planned and what it ends up doing: Joshua has synchronous intent.

I really don’t think this is all bad: we need these systems, and they’re going to improve our lives significantly.  But I do feel that it’s important that you and I start thinking hard about what is acting for whom, and how.

Now, if you wouldn’t mind opening the Pod bay doors, HAL…[5]


1 – and yes, I know it’s a pretence.

2 – yet…

3 – go on – re-watch it: you know you want to[4].

4 – and if you’ve never watched it, then stop reading this article and go and watch it NOW.

5 – I think you know the problem just as well as I do, Dave.