(NOTE: this post deals with a particular company, and though they didn’t sponsor it, I was the grateful recipient of some excellent swag from them at an industry conference, and promised to write an article as thanks!)
A year ago, I visited RSA Conference North America in San Francisco. This was far from my first trip to RSA, which is one of the great (and probably the biggest) global security conferences. There’s a huge exhibitor hall – in fact, several – and many people attend just this, rather than the full conference. I always make a point of having a look at all of the different booths to see if there are any new companies or organisations in the areas that interest me, or to find out about things I was previously unaware of. There are people using all kinds of incentives to try to get you to pay attention to them, from food to magicians, from give-away swag to prizes. I’d been doing a lot of walking around and was tired, and happened to discover quite a large booth which had some little seats to sit on. The deal, of course, was that if you sat down, you had to listen to the company pitch – and at the end, they’d do a prize draw and you might win something fun.
As it happened, I’d not heard of the company before and didn’t really have much interest in what they seemed to be talking about – DNS security, it looked like – but I really needed to rest my feet, so I sat down and reminded myself that I had a chance of winning something, even if the subject was as boring as many of the pitches I’ve heard over the years.
It turned out not to be. The company was Infoblox and, to my surprise, I went back several times to find out more about what they do and the research they publish. I went back even after I’d managed to secure one of their prizes: what they do is specialise in an area about which I had previously known almost nothing. On leaving the conference, I promised to write a blog post about what they do, as a gesture of thanks. And I realised as I was preparing to travel to RSA this year (it starts next week, at time of writing) that I’d never fulfilled my promise, and was feeling guilty about it, so this is the post, to assuage my guilt and maybe to prompt you, my dear reader, into finding out more about network security solutions, or what they call DDI (DNS, DHCP, and IPAM) management.
Most companies at exhibitions and conferences spend most of their time telling you about their products, but Infoblox took a different approach – which I heartily recommend to anyone in a similar situation. Rather than just pitching their products and services, they presented the research that they do into the various vulnerabilities, bad actors, criminal traffic distribution systems (TDS) and the rest. They had the researchers talking about the work, and made them available after the brief pitch for further questions. Did they mention their products and services? Well, yes, but that wasn’t the main thrust of the presentations. And the presentations were fascinating.
I had previously been (woefully) unaware of the opportunities for abusing the configuration and control of the various naming systems around which our digital lives revolve. I suppose that if I’d thought about it, I might have realised that there would be bad actors messing with these, but the extent to which criminal – and state-sponsored – actors are using these systems shocked me, if only because it’s an area of security that I’d hardly thought about in the 30 or so years that I’ve been in the field. Criminal gangs hijack domains, trick users, redirect traffic and sometimes camp out for years in quiet areas of the Internet, ready to deploy exploits when the rewards seem worthwhile enough. I’ve written over the years about attackers “playing the long game” and biding their time before employing particular techniques or exploiting specific vulnerabilities, but the sheer scale of these networks honestly astounded me. I can’t do justice to this topic, and the very best I can offer is to suggest that you have a look at some of the research that Infoblox provides. They do, of course, also provide services to help you protect your organisation from these threats and to mitigate the risks that you are exposed to, but as I’m not an expert in this particular area, I don’t feel qualified to comment on them: I recommend that you investigate them yourself. All I can say is that if Infoblox do as thorough and expert a job around the services they provide as they do in their research activities, then they’re definitely worth taking seriously.
Mike Bursell
Sponsored by Super Protocol
Introduction
One of the things that I enjoy the most is taking two different technologies, accelerating them at speed and seeing what comes out when they hit, rather in the style of a particle physicist with a Large Hadron Collider. Technologies which may not seem to be obvious fits for each other, when combined, sometimes yield fascinating – and commercially exciting – results, and the idea of putting Web3 and Confidential Computing together is certainly one of those occasions. Like most great ideas, once someone explained it to me, it was an “oh, well, of course that’s going to make sense!” moment, and I’m hoping that this article, which attempts to explain the combination of the two technologies, will give you the same reaction. I’ll start with an introduction to the two technologies separately and why they are interesting from a business context, and then look at what happens when you put them together. We’ll finish with more of a description of a particular implementation: that of Super Protocol, using the Polygon blockchain.
Introduction to the technologies
In this section, we look at blockchain in general and Web3 in particular, followed by a description of the key aspects of Confidential Computing. If you’re already an expert in either of these technologies, feel free to skip these, of course.
Blockchain
Blockchains offer a way for groups of people to agree about the truth of key aspects of the world. They let people say: “the information that is part of that blockchain is locked in, and we – the other people who use it and I – believe that it is correct and represents a true version of certain facts.” This is a powerful capability, but how does it arise? The key point about a blockchain is that it is immutable. More specifically, anything that is placed on the blockchain can’t be changed without such a change being obvious to anybody with access to it. Another key point about many blockchains is that they are public – that is, anybody with access to the Internet and the relevant software is able to access them. Such blockchains are sometimes called “permissionless”, in contrast to blockchains to which only authorised entities have access, which are known as “permissioned”. In both cases, the act of putting something on a blockchain is very important: if we want to view blockchains as providing a source of truth about the world, then the ability to put something onto the blockchain is a power that comes with great responsibility. The various consensus mechanisms employed vary between implementations, but all of them aim for consensus among the parties that are placing their trust in the blockchain – a consensus that what is being represented is correct and valid. Once such a consensus has been reached, a cryptographic hash is used to seal the latest information and anchor it to previous parts of the blockchain, adding a new block to it.
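To make that sealing and anchoring concrete, here is a deliberately minimal sketch in Python – nothing like a production blockchain, and with consensus, signatures and networking all omitted – showing how hashing each block over the previous block’s hash makes any tampering with history detectable:

```python
# Minimal sketch: each block's hash covers its contents plus the previous
# block's hash, so altering anything earlier in the chain is detectable.
import hashlib
import json


def block_hash(block: dict) -> str:
    # Serialise deterministically before hashing
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def append_block(chain: list, data: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})


def verify_chain(chain: list) -> bool:
    # Tampering with an earlier block changes its hash and breaks the link
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain))
    )


chain: list = []
append_block(chain, {"event": "payment", "amount": 10})
append_block(chain, {"event": "payment", "amount": 25})
print(verify_chain(chain))           # True
chain[0]["data"]["amount"] = 9999    # rewrite history...
print(verify_chain(chain))           # ...and the chain no longer verifies: False
```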
While this provides enough for some use cases, the addition of smart contracts provides a new dimension of capabilities. I’ve noted before that smart contracts aren’t very well named (they’re arguably neither smart nor contracts!), but what they basically allow is for programs and their results to be put on a blockchain. If I create a smart contract, there’s consensus that it produces deterministic results from known inputs, and it’s put onto the blockchain, then that means that when it’s run, if people can see the inputs – and be assured that the contract was run correctly, a point to which we’ll be returning later in this article – then they will be happy to put the results of that smart contract on the blockchain. What we’ve just created is a way to generate data that is known to be correct and valid, and which we can be happy to put directly on the blockchain without further checking: the blockchain can basically add results to itself!
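As a toy illustration (in Python rather than any real smart contract language, with all the names my own invention), the essential property is just determinism: if everyone agrees on the code and the inputs, anyone can recompute the output and check the claimed result before it is accepted onto the chain.

```python
# Toy example only: a "contract" here is just a deterministic Python function,
# not code in any real smart contract language.
def escrow_release(amount: int, buyer_approved: bool, seller_delivered: bool) -> dict:
    released = amount if (buyer_approved and seller_delivered) else 0
    return {"released_to_seller": released, "refunded_to_buyer": amount - released}


# Inputs and the claimed result, as they might appear alongside the contract.
inputs = {"amount": 100, "buyer_approved": True, "seller_delivered": True}
claimed_result = {"released_to_seller": 100, "refunded_to_buyer": 0}

# Anyone can re-run the agreed code on the agreed inputs and only accept the
# result onto the chain if it matches what was claimed.
recomputed = escrow_release(**inputs)
print(recomputed == claimed_result)  # True: safe to record on the blockchain
```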
Web3
Blockchains and smart contracts, on their own, are little more than a diverting combination of cryptography and computing: it’s the use cases that make things interesting. The first use case that everyone thinks of is crypto-currency, the use of blockchains to create wholly electronic currencies that can be (but don’t have to be) divorced from centralised government-backed banking systems. (Parenthetically, the fact that the field of use and study of these crypto-currencies has become known to its enthusiasts as “crypto” drives most experts in the much older and more academic field of cryptology wild.)
There are other uses of blockchains and smart contracts, however, and the one which occupies our attention here is Web3. I’m old (I’m not going to give a precise age, but let’s say early-to-mid Gen X, shall we?), so I cut my professional teeth on the technologies that make up what is now known as Web1. Web1 was the world of people running their own websites with fairly simple static pages and CGI interactions with online databases. Web2 came next and revolves around centralised platforms – often cloud-based – and user-generated data, typically processed and manipulated by large organisations. While data and information may be generated by users, it’s typically sucked into the platforms owned by these large organisations (banks, social media companies, governments, etc.), and passes almost entirely out of user control. Web3 is the next iteration, and the big change is that it’s a move to decentralised services, transparency and user control of data. Web3 is about open protocols – data and information isn’t owned by those processing it: Web3 provides a language of communication and says “let’s start here”. And Web3 would be impossible without the use of blockchains and smart contracts.
Confidential Computing
Confidential Computing is a set of technologies that arose in the mid 2010s, originally to address a number of the problems that people started to realise were associated with cloud computing and Web2. As organisations moved their applications to the cloud, it followed that the data they were processing also moved there, and this caused issues. It’s probably safe to say that the first concerns that surfaced were around the organisations’ own data. Keeping financial data, intellectual property, cryptographic keys and the like safe from prying eyes on servers operated in clouds owned and managed by completely different companies, sometimes in completely different jurisdictions, started to become a worry. But that worry was compounded by the rising tide of regulation being enacted to protect the data not of the organisations, but of the customers who they (supposedly) served. This, and the growing reputational damage associated with the loss of private data, required technologies that would allow the safeguarding of sensitive data and applications from the cloud service providers and, in some cases, from the organisations who “owned” – or at least processed – that data themselves.
Confidential Computing requires two main elements. The first is a hardware-based Trusted Execution Environment (TEE): a set of capabilities on a chip (typically a CPU or GPU at this point) that can isolate applications and their data from the rest of the system running them, including administrators, the operating system and even the lowest levels of the software stack, the kernel itself. Even someone with physical access to the machine cannot overcome the protection that a TEE provides, except in truly exceptional circumstances. The second element is remote attestation. It’s all very well setting up a TEE on a system in, say, a public cloud, but how can you know that it’s actually in place, or even that the application you wanted to load into it is the one that’s actually running? Remote attestation addresses this problem in a multi-step process. There are a number of ways to manage this, but the basic idea is that the application in the TEE asks the CPU (which understands how this works) to create a measurement of some or all of the memory in the TEE. The CPU does this, and signs it with a cryptographic key, creating an attestation measurement. This measurement is then passed to a different system (hence “remote”), which checks it to see if it conforms to the expectations of the party (or parties) running the application and, if it does, provides a verification that confirms that all is well. This basically allows a certificate to be created that attests to the correctness of the CPU, the validity of the TEE’s configuration and the state of any applications or data within the TEE.
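A drastically simplified sketch of that flow, in Python, might look like the following. Real schemes (SGX or SEV-SNP quotes, DCAP verification and so on) are far more involved, and the key pair below merely stands in for the key rooted in the CPU:

```python
# Drastically simplified: the signing key stands in for the hardware-rooted key,
# and the signed measurement for the quote/report a real TEE would produce.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

cpu_key = Ed25519PrivateKey.generate()   # hardware-rooted key (simplified)
cpu_pub = cpu_key.public_key()           # known to / verifiable by the verifier

# Inside the TEE: the CPU measures the loaded application and data...
tee_contents = b"application code and initial data"
measurement = hashlib.sha256(tee_contents).digest()
# ...and signs the measurement, producing the attestation measurement.
signed_measurement = cpu_key.sign(measurement)

# On the remote verifier: check the signature, then compare the measurement
# with the value expected for the application the client asked to run.
expected = hashlib.sha256(b"application code and initial data").digest()
try:
    cpu_pub.verify(signed_measurement, measurement)
    print("attestation OK" if measurement == expected else "unexpected workload")
except InvalidSignature:
    print("measurement was not signed by a genuine CPU")
```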
With these elements – TEEs and remote attestation – in place, organisations can use Confidential Computing to prove to themselves, their regulators and their customers that no unauthorised peeking or tampering is possible with those sensitive applications and data that need to be protected.
Combining blockchain & CC
One thing – possibly the key thing – about Web3 is that it’s decentralised. That means that anyone can offer to provide services and, most importantly, computing services, to anybody else. This means that you don’t need to go to one of the big (and expensive) cloud service providers to run your application – you can run a DApp (Decentralised Application) – or a standard application such as a simple container image – on the hardware of anyone willing to host it. The question, of course, is whether you can trust them with your application and your data; and the answer, of course, in many, if not most, use cases, is “no”. Cloud service providers may not be entirely worthy of organisations’ trust – hence the need for Confidential Computing – but at least they are publicly identifiable, have reputations and are both shameable and suable. It’s very difficult to say the same, in a Web3 world, about a provider of computing resources who may be anonymous or pseudonymous and with whom you have never had any interactions before – nor are likely to have any in the future. And while there is sometimes scepticism about whether independent actors can create complex computational infrastructure, we only need look at the example of Bitcoin and other cryptocurrency miners, who have built computational resources which rival those of even the largest cloud providers.
Luckily for Web3, it turns out that Confidential Computing, while designed primarily for Web2, has just the properties needed to build systems that allow us to do Web3 computing with confidence (I’ll walk through some of the key elements of one such implementation – by Super Protocol – below). TEEs allow DApps to be isolated from the underlying hardware and system software, and remote attestation can provide assurances to clients that everything has been set up correctly (and a number of other properties besides).
Open source
There is one important characteristic that Web3 and Confidential Computing share which is required to ensure the security and transparency that are key to a system that combines them: open source software. Where software is proprietary and closed from scrutiny (this is the “closed” from which open source is differentiated), the development of trust in the various components and how they interact is impossible. Where proprietary software might allow trust in a closed system of actors and clients who already have trust in each other – or external mechanisms to establish it – the same is not true in a system such as Web3, whose very decentralised nature doesn’t allow for such centralised authorities.
Open source software is not automatically or by its very nature more secure than proprietary software – it is written by humans, after all (for now!) – but its openness and availability to scrutiny mean that experts can examine it, check it and, where necessary, fix it. This allows the open source community and those that interact with it to establish that it is worthy of trust in particular contexts and use cases (see Chapter 9: Open Source and Trust in my book for more details of how this can work). Confidential Computing – using TEEs and remote attestation – can provide cryptographic assurances not only that the elements of a Web3 system are valid and have appropriate security properties, but also that the components of the TEE itself do as well.
Some readers may have noted the apparent circularity in this set-up: there are actually two trust relationships that are required for Confidential Computing to work – in the chip manufacturer and in the attestation verification service. The first of these is unavoidable with current systems, while the other can be managed in part by performing the attestation verification oneself. It turns out that allowing the creation of trust relationships between mutually untrusting parties is extremely complex, but one way that this can be done is what we will now address.
Super Protocol’s approach
Super Protocol have created a system which uses Confidential Computing to allow complex applications to be executed via a smart contract on the blockchain, and for all the parties in the transaction to have appropriate trust in the performance and result of that execution without having to know or trust each other. The key layers are:
Client Infrastructure, allowing a client to interact with the blockchain, initiate an instance and interact with it
Blockchain, including smart contracts
Various providers (TEE, Data, Solution, Storage).
Central to Super Protocol’s approach are two aspects of the system: that it is open source, and that remote attestation is required to allow the client to have sufficient assurance of the system’s security. Smart contracts – themselves open source – allow the resources made available by the various actors to be combined into an offer that is placed on the blockchain, where anyone with access to the blockchain can execute it, given sufficient resources from all involved. What makes this approach a Web3 approach, and differentiates it from a more Web2 system, is that none of these actors needs to be connected contractually.
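To give a flavour of what such an offer brings together – and to be clear, the field names below are my own invention for illustration, not Super Protocol’s actual on-chain schema – an offer effectively binds contributions from several otherwise unrelated parties:

```python
# Purely illustrative: these names are hypothetical, not Super Protocol's schema.
# The point is that one offer references resources from several independent
# providers, none of whom needs a contractual relationship with the others.
offer = {
    "solution": {"provider": "0xSolutionProvider", "encrypted_image_hash": "sha256-abc123"},
    "data":     {"provider": "0xDataProvider",     "encrypted_dataset_hash": "sha256-def456"},
    "compute":  {"provider": "0xTeeProvider",      "requirement": "TEE with remote attestation"},
    "storage":  {"provider": "0xStorageProvider",  "location": "decentralised storage"},
}

# A client accepting the offer relies on the on-chain hashes and the attestation
# evidence produced at run time, rather than trusting any provider directly.
print(sorted(offer.keys()))
```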
Benefits of this approach
How does this approach help? Well, you don’t need to store or process data (which may be sensitive or just very large) locally: TEEs can handle it, providing confidentiality and integrity assurances that would otherwise be impossible. And communications between the various applications are also encrypted transparently, reducing or removing risks of data leakage and exposure, without requiring complex key management by users, while keeping the flexibility and openness offered by decentralisation and Confidential Computing.
But the step change that this opens up is the network effect enabled by the possibility of building huge numbers of interconnected Web3 agents and applications, operating with the benefits of integrity and confidentiality offered by Confidential Computing, and backed up by remote attestation. One of the recurring criticisms of Web2 ecosystems is their fragility and lack of flexibility (not to mention the problems of securing them in the first place): here we have an opportunity to create complex, flexible and robust ecosystems where decentralised agents and applications can collaborate, with privacy controls designed in and clearly defined security assurances and policies.
Technical details
In this section, I dig a little further into some of the technical details of Super Protocol’s system. It is, of course, not the only approach to combining Confidential Computing and Web3, but it is available right now, seems carefully architected and designed with security foremost in mind and provides a good example of the technologies and the complexities involved.
You can think of Super Protocol’s service as being in two main parts: on-chain and off-chain. The marketplace, with smart contract offers, sits on a Polygon (Ethereum-compatible) blockchain, and the client interacts with that, never needing to know the details of how and where their application instance is running. The actual running applications are off-chain, supported by other infrastructure to allow initial configuration and then communication services between clients and running applications. The “bridge” between the two parts, which moves from an offer to an actual running instance of the application, is a component called a Trusted Loader, which sets up the various parts of the application and sets it running. The data it manages contains sensitive information such as cryptographic keys, which need to be protected as they provide security for all the other parts of the system. The Trusted Loader also manages the important actions of hash verification (ensuring that what is being loaded is what was originally offered) and order integrity (ensuring that no changes can be made while the loading is taking place and execution is starting).
Figure: Trusted Loader – configuration and deployment of Data and Application information into a TEE instance
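As a sketch of those two checks – with function and field names that are mine, not Super Protocol’s API – the essential operations are careful hash comparisons against values sealed on the chain:

```python
# Illustrative only: function and field names are mine, not Super Protocol's API.
import hashlib
import hmac


def verify_artifact(artifact: bytes, onchain_hash: str) -> bool:
    # Hash verification: does the fetched (still encrypted) artifact match the
    # hash recorded in the on-chain offer?
    return hmac.compare_digest(hashlib.sha256(artifact).hexdigest(), onchain_hash)


def verify_order(order_fields: dict, onchain_order_hash: str) -> bool:
    # Order integrity: re-hash the order as seen at load time and compare it
    # with the hash sealed on the chain when the order was placed.
    canonical = "|".join(f"{k}={order_fields[k]}" for k in sorted(order_fields))
    return hmac.compare_digest(
        hashlib.sha256(canonical.encode()).hexdigest(), onchain_order_hash
    )


image = b"encrypted container image bytes"
print(verify_artifact(image, hashlib.sha256(image).hexdigest()))  # True
```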
But what is actually running? The answer is that the unit of execution for an application in this service is a Kubernetes Pod, so each application is basically a container image which is run within a Pod, which itself executes within a TEE, isolating it from any unauthorised access. This Pod itself is – of course! – measured, creating an attestation measurement that can now be verified by clients of the application. We should also remember that the application itself – the container image – needs protection as well. This is part of the job of the Trusted Loader, as the container image is stored encrypted, and the Trusted Loader has appropriate keys to decrypt this and other resources required to allow execution. This is not the only thing that the Trusted Loader does: it also gathers and sets up resources from the smart contract for networking and storage, putting everything together, setting it running and connecting the client to the running instance.
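The “stored encrypted, decrypted only inside the TEE” part might be sketched as follows – again, a drastic simplification of the real key management, using an authenticated cipher so that tampering with the stored image is detected rather than silently accepted:

```python
# Drastic simplification of the real key handling: the image is stored
# encrypted and only decrypted inside the TEE, with an authenticated cipher
# so any modification of the stored ciphertext is detected.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in reality, held by the Trusted Loader
nonce = os.urandom(12)

container_image = b"container image bytes"
stored_ciphertext = AESGCM(key).encrypt(nonce, container_image, None)  # at rest

# Inside the TEE: decrypt before handing the image over to the Pod. A modified
# ciphertext would raise cryptography.exceptions.InvalidTag here.
decrypted = AESGCM(key).decrypt(nonce, stored_ciphertext, None)
assert decrypted == container_image
```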
There isn’t space in this article to go into deeper detail of how the system works, but by combining the capabilities offered by Confidential Computing and a system of cryptographic keys and certificates, the overall system enforces a variety of properties that are vital for sensitive, distributed and decentralised Web3 applications.
Decentralised storage: secrets are kept in multiple places instead of one, making them harder to access, steal or leak.
Developer independence: creators of applications can’t access these secrets, preserving the lack of need for trust relationships between the various actors. In other words, each instance of an application is isolated from its creator, maintaining data confidentiality.
Unique secrets: Each application gets its own unique secrets that nobody else can use or see and which are not shared between instances.
Thanks
Thanks to Super Protocol for sponsoring this article. Although they made suggestions and provided assistance around the technical details, this article represents my views, the text is mine and final editorial control (and with it the blame for any mistakes!) rests with the author.
Just six months ago, I started a YouTube channel, What is cybersecurity?, to provide short videos (most are under 4 minutes and all are currently well under 10 minutes) discussing topics and issues in cybersecurity. I’ve spent 25+ years in the field (well before anyone called it “cybersecurity”) and had been wondering how people get into it these days. In particular, I’m aware that not everyone processes information in the same way, and that for many people, short video content is their preferred way of gaining new knowledge. So I decided that this was what I’d do: create short videos, publish frequently and see how it went.
To celebrate this, here’s a post describing various aspects of the process.
Methodology
I thought it might be interesting to people to understand how I’ve gone about choosing the topics for videos. When I decided to do this, I created a long list of topics (the initial list was over 150) and realised very early on that I was going to have to start with simple issues and build up to more complicated ones if I wanted to be able to address sophisticated concepts. This meant that I’ve started off with some of the basic building blocks in computing which aren’t specifically security-related, just because I wanted to be able to provide basic starting points for people coming to the field.
I was slightly concerned when I started that I’d run out of ideas for topics: this hasn’t been a problem, and I don’t expect it to be any time in the future. Currently, with 100 videos published, I have over 250 topics that I want to cover (which I haven’t recorded yet). Whenever I come across a topic or concept, I add it to the list. There are a few books that I mine for ideas, of which the most notable are:
Trust in Computer Systems and the Cloud – Mike Bursell (my book!)
Security Engineering (3rd edition) – Ross Anderson
CISSP Exam Guide (9th edition) – Fernando Maymi, Shon Harris
As mentioned above, the videos are all short, and, so far, they’re all single-takes, in that each is a single recording, without editing pieces together. That doesn’t mean that I don’t have to re-record quite frequently – I’d say, on average, that 50% of videos require two or more takes to get right.
Audience
Who do I expect to be my audience? These are the personae that I’ve targeted to start with:
undergraduates reading Computer Science or similar, with an interest in cybersecurity
masters students looking to move into cybersecurity
computing professionals wanting more information on specific cybersecurity topics
managers or professionals in non-computing roles looking for a definition or explanation of a particular term
(looking at UK students in particular) A level students in Computer Science
Playlists
YouTube encourages you to create playlists to help people find related topics on your channel. These are the playlists that I currently have (I expect to create more as I get into more complex topics):
Cybersecurity concepts compared takes two or more topics and draws out the differences (and similarities). There are so many complex topics in cybersecurity which are really close to each other and it’s not always easy to differentiate them.
Camera: Razer Kiyo Pro (though currently I’m trying out a Sony ZV-E10, which provides lovely video, but requires a 175ms audio delay due to USB streaming processing performance)
Disclaimer: the views expressed in this article (and this blog) do not necessarily reflect those of any of the organisations or companies mentioned, including my employer (Red Hat) or the Confidential Computing Consortium.
The Confidential Computing Consortium was officially formed in October 2019, nearly a year and a half ago now. Despite not setting out to be a high membership organisation, nor going out of its way to recruit members, there are, at time of writing, 9 Premier members (of which Red Hat, my employer, is one), 22 General members, and 3 Associate members. You can find a list of each here, and a brief analysis I did of their business interests a few weeks ago in this article: Review of CCC members by business interests.
The CCC has two major committees (beyond the Governing Board):
Technical Advisory Council (TAC) – this coordinates all technical areas in which the CCC is involved. It recommends whether software projects should be accepted into the CCC (no hardware projects have been introduced so far, though it’s possible they might be), coordinates activities like special interest groups (we expect one on Attestation to start very soon), encourages work across projects, manages conversations with other technical bodies, and produces material such as the technical white paper listed here.
Outreach Committee – when we started the CCC, we decided against going with the title “Marketing Committee”, as we didn’t think it represented the work we hoped this committee would be doing, and this was a good decision. Though there are activities which might fall under this heading, the work of the Outreach Committee is much wider, including analyst and press relations, creation of other materials, community outreach, cross-project discussions, encouraging community discussions, event planning, webinar series and beyond.
These two committees have served the CCC well, but now that it’s fairly well established, and has a fairly broad industry membership of hardware manufacturers, CSPs, service providers and ISVs (see my other article), we decided that there was one set of interested parties who were not well-represented, and which the current organisational structure did not do a sufficient job of encouraging to get involved: end-users.
It’s all very well the industry doing amazing innovation, coming up with astonishingly well-designed, easy to integrate, security-optimised hardware-software systems for confidential computing if nobody wants to use them. Don’t get me wrong: we know from many conversations with organisations across multiple sectors that users absolutely want to be able to make use of TEEs and confidential computing. That is not the same, however, as understanding their use cases in detail and ensuring that we – the members of the CCC, who are focussed mainly on creating services and software – actually provide what users need. These users are across many sectors – finance, government, healthcare, pharmaceutical, Edge, to name but a few – and their use cases and requirements are going to be different.
This is why the CCC is currently working to create a User Advisory Council (UAC). The details are being worked out at the moment, but the idea is that potential and existing users of confidential computing technologies should have a forum in which they can connect with the leaders in the space (which hopefully describes the CCC members), share their use cases, find out more about the projects which are part of the CCC, and even take a close look at those projects most relevant to them and their needs. This sort of engagement isn’t likely, on the whole, to require attendance at lots of meetings, or to have frequent input into the sorts of discussions which the TAC and the Outreach Committee typically consider, and the general feeling is that as we (the CCC) are aiming to service these users, we shouldn’t be asking them to pay for the privilege (!) of talking to us. The intention, then, is to allow a low bar for involvement in the UAC, and for there to be no membership fee required. That’s not to stop UAC members from joining the CCC as members if they wish – it would be a great outcome if some felt that they were so keen to become more involved that membership was appropriate – but there should be no expectation of that level of commitment.
I should be clear that the plans for the UAC are not complete yet, and some of the above may change. Nor should you consider this a formal announcement – I’m writing this article because I think it’s interesting, and because I believe that this is a vital next step in how those involved with confidential computing engage with the broader world, not because I represent the CCC in this context. But there’s always a danger that “cool” new technologies develop into something which fits only the fundamentally imaginary needs of technologists (and I’ll put my hand up and say that I’m one of those), rather than the actual needs of businesses and organisations which are struggling to operate around difficult issues in the real world. The User Advisory Council, if it works as we hope, should allow the techies (me, again) to hear from people and organisations about what they want our technologies to do, and to allow the CCC to steer its efforts in these directions.
When a loved one calls you from the bathroom at 3.30 in the morning, and you find them collapsed, unconscious on the floor, what does technology do for you? I’ve had the opportunity to consider this over the past few days after a family member was rushed to hospital for an emergency operation which, I’m very pleased to say, seems to have been completely successful. Without it, or if it had failed (the success rate is around 50%), they would, quite simply, be dead now.
We are eternally grateful to all those directly involved in my family member’s care, and to the NHS, which means that there are no bills to pay, just continued National Insurance taken as tax from our monthly pay packets, and which we begrudge not one jot. But I thought it might be worth spending a few minutes just scratching the surface of the sets of technologies which led to the saving of a life, from the obvious to the less obvious. I have missed out many: our lives are so complex and interconnected that it is impossible to list everything, and it is only when they are missing that we realise how it all fits together. But I want to say a huge – a HUGE – thank you to anyone who has ever been involved in any of the systems or technologies, and to ask you to remind yourself that even if you are seldom thanked, your work saves lives every day.
The obvious
The combined ECG and blood pressure unit attached to the patient which allows the ambulance crew to react quickly enough to save the patient’s life
The satellite navigation systems which guided the crew to the patient’s door
The landline which allowed the call to the emergency systems
The triage and dispatch system which prioritised the sending of the crew
The mobile phone system which allowed a remote member of the family to talk to the crew before they transported the patient
The visible (and audible)
The anaesthesiology and monitoring equipment which kept the patient alive during the operation
The various scanning equipment at the hospital which allowed a diagnosis to be reached in time
The sirens and flashing lights on the ambulances
The technology behind the training (increasingly delivered at least partly online) for all of those involved in the patient’s care
The invisible
The drugs and medicines used in the patient’s care
Equipment: batteries for ambulances, scalpels for operating theatres, paper for charts, keyboards, CPUs and motherboards for computers, soles for shoes, soap for hand-washing, paint for hospital corridors, pillows and pillow cases for beds and everything else that allows the healthcare system to keep running
The infrastructure to get fuel to the ambulances and into the cars, trains and buses which transported the medical staff to hospital
The maintenance schedules and processes for the ambulances
The processes behind the ordering of PPE for all involved
The supply chains which allowed those involved to access the tea, coffee, milk, sugar and other (hopefully legal) stimulants to keep staff going through the day and night
Staff timetabling software for everyone from cleaners to theatre managers, maintenance people to on-call surgeons
The music, art, videos, TV shows and other entertainment that kept everyone involved sufficiently energised to function
The infrastructure
Clean water
Roads
Electricity
Internet access and routing
Safety processes and culture in healthcare
… and everything else I’ve neglected to mention.
A final note
I hope it’s clear that I’m aware that the technology is all interconnected, and too complex to allow every piece to be noted: I’m sorry if I missed your piece out. The same, however, goes for the people. I come from a family containing some medical professionals and volunteers, and I’m aware of the sacrifices made not only by them, but also by the people around them who they know and love, and who see less of them than they might like, or who have to work around difficult shift patterns, or see them come back home after a long shift, worn out or traumatised by what they’ve seen and experienced. The same goes for ancillary workers and for those working in other, supporting services and industries.
I thank you all, both those involved directly and those involved in any of the technologies which save lives, those I’ve noted and those I’ve missed. In a few days, I hope to see a member of my family who, without your involvement, I would not ever be seeing again in this life. That is down to you.