Defining the Edge

How might we differentiate Edge computing from Cloud computing?

This is an edited excerpt from my forthcoming book on Trust in Computing and the Cloud for Wiley.

There’s been a lot of talk about the Edge, and almost as many definitions as there are articles out there. As usual on this blog, my main interest is around trust and security, so this brief look at the Edge concentrates on those aspects, and particularly on how we might differentiate Edge computing from Cloud computing.

The first difference we might identify is that Edge computing addresses use cases where consolidating compute resources in a centralised location (the typical Cloud computing case) is not necessarily appropriate, and instead pushes some or all of the computing power out to the edges of the network, where it can process data generated at the fringes, rather than transferring all of that data over what may be low-bandwidth networks for central processing. There is no generally accepted single industry definition of Edge computing, but examples might include:

  • placing video processing systems in or near a sports stadium for pre-processing to reduce the amount of raw footage that needs to be transmitted to a centralised data centre or studio
  • providing analysis and safety control systems on an ocean-based oil rig to reduce reliance and contention on an unreliable and potentially low-bandwidth network connection
  • creating an Internet of Things (IoT) gateway to process and analyse data from environmental sensor units (IoT devices) – see the sketch after this list
  • mobile edge computing, or multi-access edge computing (both abbreviated to MEC), where telecommunications services such as location and augmented reality (AR) applications are run on cellular base stations, rather than in the telecommunication provider’s centralised network location.
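
To make the IoT gateway example concrete, here’s a minimal sketch (the function name, thresholds and data are all invented for illustration) of the kind of pre-processing an Edge gateway might do: reduce a batch of raw sensor samples to a compact summary, so that only the summary – plus any anomalous readings – crosses the low-bandwidth link to the central data centre.

```python
from statistics import mean

def summarise(readings, alert_threshold=75.0):
    """Reduce a batch of raw sensor samples to a small summary record."""
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
        # only anomalous samples travel upstream in full
        "alerts": [r for r in readings if r > alert_threshold],
    }

raw = [21.3, 22.1, 20.9, 78.4, 21.7]  # e.g. one minute of temperature samples
print(summarise(raw))  # this summary is all that crosses the low-bandwidth link
```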

Unlike Cloud computing, where the hosting model is generally that computing resources are consumed by tenants – customers of a public cloud, for instance – in the Edge case the consumer of computing resources is often the owner of the systems providing them (though this is not always the case). Another difference is the size of the host providing the computing resources, which may range from very large to very small (in the case of an IoT gateway, for example). One important factor about most modern Edge computing environments is that they employ the same virtualisation and orchestration techniques as the cloud, allowing more flexibility in deployment and lifecycle management than bare-metal deployments offer.

A table comparing the various properties typically associated with Cloud and Edge computing shows us a number of differences.


| Property | Public cloud computing | Private cloud computing | Edge computing |
|---|---|---|---|
| Location | Centralised | Centralised | Distributed |
| Hosting model | Tenants | Owner | Owner or tenant(s) |
| Application type | Generalised | Generalised | May be specialised |
| Host system size | Large | Large | Large to very small |
| Network bandwidth | High | High | Medium to low |
| Network availability | High | High | High to low |
| Host physical security | High | High | Low |

Differences between Edge computing and public/private cloud computing

In the table, I’ve described two different types of cloud computing: public and private. The latter is sometimes characterised as on premises or on-prem computing, but the point here is that rather than deploying applications to dedicated hosts, workloads are deployed using the same virtualisation and orchestration techniques employed in the public cloud, the key difference being that the hosts and software are owned and managed by the owner of the applications. Sometimes these services are actually managed by an external party, but in this case there is a close commercial (and concomitant trust) relationship with this managed services provider and, equally important, single tenancy is assured (assuming that security is maintained), as only applications from the owner of the service are hosted[1]. Many organisations will mix and match workloads across different cloud deployments, employing both public and private clouds (a deployment model known as hybrid cloud) and/or different public clouds (a deployment model known as multi-cloud). All these models – public cloud computing, private cloud computing and Edge computing – share an approach in common: in most cases, workloads are not deployed to bare-metal servers, but to virtualisation platforms.

Deployment model differences

What is special about each of the models and their offerings if we are looking at trust and security?

One characteristic that the three approaches share is scale: they all assume that hosts will run multiple workloads – though the number of hosts and the actual size of the host systems are likely to be highest in the public cloud case, and lowest in the Edge case. It is this high workload density that makes public cloud computing in particular economically viable, and one of the reasons it makes sense for organisations to deploy at least some of their workloads to public clouds: Cloud Service Providers can employ economies of scale which allow them to schedule workloads onto their servers from multiple tenants, balancing load and bringing sets of servers in and out of commission (a computation- and time-costly exercise) infrequently. Owners and operators of private clouds, in contrast, need to ensure that they have sufficient resources available for possible maximum load at all times, and do not have the opportunity to balance loads from other tenants unless they open up their on premises deployment to other organisations, transforming themselves into Cloud Service Providers and putting themselves into direct competition with existing CSPs.
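
A back-of-the-envelope simulation (the numbers are entirely invented, and real tenant loads are far less well-behaved) illustrates why this pooling works: because tenants’ peaks rarely coincide, a provider need only provision for the peak of the pooled load, not the sum of each tenant’s individual peak.

```python
import random

random.seed(1)
HOURS = 24 * 30  # one month of hourly load samples
tenants = [[random.randint(10, 100) for _ in range(HOURS)] for _ in range(50)]

# Fifty separate private clouds must each provision for their own peak...
sum_of_peaks = sum(max(load) for load in tenants)

# ...whereas one public cloud scheduling all fifty tenants onto shared hosts
# need only provision for the peak of the combined load.
pooled_peak = max(sum(hour) for hour in zip(*tenants))

print(f"sum of individual peaks: {sum_of_peaks}")
print(f"peak of pooled load:     {pooled_peak}")
print(f"capacity saved by pooling: {1 - pooled_peak / sum_of_peaks:.0%}")
```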

It is this push for high workload density which drives the need for strong workload-from-workload (type 1) isolation: in order to maintain high density, cloud owners need to be able to mix workloads from multiple tenants on the same host. Tenants are mutually untrusting; they are in fact likely to be completely unaware of each other and, if the host is doing its job well, unaware of the presence of other workloads on the same host. More important than this property, however, is a strong assurance that their workloads will not be negatively impacted by workloads from other tenants. Although negative impacts on computation can occur in other contexts – such as storage contention or network congestion – the focus here is mainly on the isolation that hosts can provide.

The likelihood of malicious workloads increases with the number of tenants, but reduces significantly when the tenant is the same as the host owner – the case for private cloud deployments and some Edge deployments. Thus, the need for host-from-workload (type 2) isolation is higher for the public cloud – though the possibility of poorly written or compromised workloads means that it should not be neglected for the other types of deployment.
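
A toy model (my own illustration, not a formal taxonomy) may help to summarise how these two isolation requirements shift with the deployment model:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    name: str
    tenants_share_hosts: bool   # mutually untrusting tenants on the same host?
    tenant_is_host_owner: bool  # does the workload owner also own the host?

def isolation_priorities(d: Deployment) -> dict:
    return {
        # type 1: protect workloads from other workloads
        "workload-from-workload": "high" if d.tenants_share_hosts else "lower",
        # type 2: protect the host from malicious or compromised workloads
        "host-from-workload": "high" if not d.tenant_is_host_owner else "lower",
    }

for d in (Deployment("public cloud", True, False),
          Deployment("private cloud", False, True),
          Deployment("edge (owner-operated)", False, True)):
    print(f"{d.name}: {isolation_priorities(d)}")
```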

One final difference between the models is that for both public and private cloud deployments the physical vulnerability of hosts is generally considered to be low[2], whereas the opportunities for unauthorised physical access to Edge computing hosts are considered to be much higher. You can read a little more about the importance of hardware as part of the Trusted Compute Base in my article Turtles – and chains of trust. It is a fundamental principle of computer security that if an attacker has physical access to a system, then the system must be considered compromised, as it is, in almost all cases, possible to compromise the confidentiality, integrity and availability of workloads executing on it.

All of the above are good reasons to apply Confidential Computing techniques not only to cloud computing, but to Edge computing as well: that’s a topic for another article.


1 – this is something of a simplification, but is a useful generalisation.

2 – Though this assumes that people with authorised access to physical machines are not malicious, a proposition which cannot be guaranteed, but for which monitoring can at least be put in place.

Arm joins the Confidential Computing party

Arm’s announcement of Realms isn’t just about the Edge

The Confidential Computing Consortium is a Linux Foundation project designed to encourage open source projects around confidential computing. Arm has been part of the consortium for a while – in fact, the company is a Premier Member – but things got interesting on the 30th March, 2021. That’s when Arm announced their latest architecture: Armv9. Armv9 includes a new set of features, called Realms. There’s not a huge amount of information in the announcement about Realms, but Arm is clear that this is their big play into Confidential Computing:

To address the greatest technology challenge today – securing the world’s data – the Armv9 roadmap introduces the Arm Confidential Compute Architecture (CCA).

I happen to live about 30 minutes’ drive from the main Arm campus in Cambridge (UK, of course), and know a number of Arm folks professionally and socially – I think I may even have interviewed for a job with them many moons ago – but I don’t want to write a puff piece about the company or the technology[1]. What I’m interested in, instead, is the impact this announcement is likely to have on the Confidential Computing landscape.

Arm has had an element in their architecture for a while called TrustZone which provides a number of capabilities around security, but TrustZone isn’t a TEE (Trusted Execution Environment) on its own. A TEE is the generally accepted unit of confidential computing – the minimum building block on which you can build. It is arguably possible to construct TEEs using TrustZone, but that’s not what it’s designed for, and Arm’s decision to introduce Realms strongly suggests that they want to address this. This is borne out by the press release.
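
For those unfamiliar with what a TEE actually buys you, here’s a deliberately simplified sketch of the attest-then-provision pattern that TEEs enable. This is generic – it is not Arm’s CCA or Realms protocol – and the HMAC below stands in for a signature rooted in hardware; a real scheme uses asymmetric keys and a vendor-backed certificate chain.

```python
import hashlib, hmac, secrets

VENDOR_KEY = secrets.token_bytes(32)  # stand-in for a hardware-rooted key

def measure(workload: bytes) -> bytes:
    # the TEE "measures" (hashes) the code loaded into it
    return hashlib.sha256(workload).digest()

def quote(measurement: bytes) -> bytes:
    # the TEE signs its measurement so a remote party can verify it
    return hmac.new(VENDOR_KEY, measurement, hashlib.sha256).digest()

def release_secret_if_attested(workload: bytes, signed: bytes):
    expected = hmac.new(VENDOR_KEY, measure(workload), hashlib.sha256).digest()
    if hmac.compare_digest(expected, signed):
        return b"database-password"  # only code that attests correctly gets secrets
    return None

wl = b"my confidential workload"
print(release_secret_if_attested(wl, quote(measure(wl))))           # secret released
print(release_secret_if_attested(b"tampered", quote(measure(wl))))  # None
```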

Why is all this important? I suspect that few of you have laptops or desktops that run on Arm (Raspberry Pi machines apart – see below). Few of the servers in the public cloud run Arm, and Realms are probably not aimed particularly at your mobile phone (for which TrustZone is a better fit). Why, then, is Arm bothering to make a fuss about this and to put such an enormous design effort into this new technology? There are two answers, it seems to me, one of which is probably pretty much a sure thing, and the other of which is more of a competitive gamble.

Answer 1 – the Edge

Despite recent intrusions by both AMD and Intel into the Edge space, the market is dominated by Arm-based[3] devices. And Edge security is huge, partly because we’re just seeing a large increase in the number of Edge devices, and partly because security is really hard at the Edge, where devices are more difficult to defend, both logically (they’re on remote networks, more vulnerable to malicious attack) and physically (many are out of the control of their owners, living on customer premises, up utility poles, on gas pipelines or in sports stadia, to give just a few examples). One of the problems that confidential computing aims to solve is the issue that, traditionally, a system to which an attacker has physical access should be considered compromised. TEEs allow some strong mitigations against that problem (at least against most attackers and timeframes), so making it easy to create and use TEEs on the Edge makes a lot of sense. With the addition of Realms to the Armv9 architecture, Arm is signalling its intent to address security on the Edge, and to defend and consolidate its position as leader in the market.

Answer 2 – the Cloud

I mentioned above that few public cloud hosts run Arm – this is true, but it’s likely to change. Arm would certainly like to see it change, and to see its chipsets move into the cloud mainstream. There has been a lot of work to improve support for server-scale Arm within Linux (in fact, open source support for Arm is generally excellent, not least because of the success of Arm-based chips in Raspberry Pi machines). Amazon Web Services (AWS) started offering Arm-based servers to customers as long ago as 2018. This is a market in which Arm would clearly love to be more active and carve out a larger share, and the growing importance of confidential computing in the cloud (both public and private) means that having a strong story in this space was important: Realms are Arm’s answer to this.

What next?

An announcement of an architecture is not the same as availability of hardware or software to run on it. We can expect it to be quite a few months before we see production chips running Armv9, though evaluation hardware should be available to trusted partners well before that, and software emulation for various components of the architecture will probably come even sooner. This means that those interested in working with Realms should be able to get things moving and have something ready pretty much by the time production hardware is available. We’ll need to see how easy Realms are to use, what performance impact they have, and so on, but Arm do have an advantage here: as they are not the first into the confidential computing space, they’ve had the opportunity to watch Intel and AMD and see what has worked, and what hasn’t, both technically and in terms of what the market seems to like. I have high hopes for Arm Realms, and Enarx, the open source confidential computing project with which I’m closely involved, has plans to support them when we can: our architecture was designed with multi-platform support from the beginning.


1 – I should also note that I participated in a panel session on Confidential Computing which was put together by Arm for their “Arm Vision Day”, but I was in no way compensated for this[2].

2 – in fact, the still for the video is such a terrible picture of me that I think maybe I have grounds to sue for it to be taken down.

3 – Arm doesn’t manufacture chips itself: it licenses its designs to other companies, who create, manufacture and ship devices themselves.

Saving one life

Scratching the surface of the technologies which led to the saving of a life

When a loved one calls you from the bathroom at 3.30 in the morning, and you find them collapsed, unconscious on the floor, what does technology do for you? I’ve had the opportunity to consider this over the past few days after a family member was rushed to hospital for an emergency operation which, I’m very pleased to say, seems to have been completely successful. Without it, or if it had failed (the success rate is around 50%), they would, quite simply, be dead now.

We are eternally grateful to all those directly involved in my family member’s care, and to the NHS, which means that there are no bills to pay, just continued National Insurance taken as tax from our monthly pay packets, and which we begrudge not one jot. But I thought it might be worth spending a few minutes just scratching the surface of the sets of technologies which led to the saving of a life, from the obvious to the less obvious. I have missed out many: our lives are so complex and interconnected that it is impossible to list everything, and it is only when they are missing that we realise how it all fits together. But I want to say a huge – a HUGE – thank you to anyone who has ever been involved in any of the systems or technologies, and to ask you to remind yourself that even if you are seldom thanked, your work saves lives every day.

The obvious

  • The combined ECG and blood pressure unit attached to the patient which allows the ambulance crew to react quickly enough to save the patient’s life
  • The satellite navigation systems which guided the crew to the patient’s door
  • The landline which allowed the call to the emergency systems
  • The triage and dispatch system which prioritised the sending of the crew
  • The mobile phone system which allowed a remote member of the family to talk to the crew before they transported the patient

The visible (and audible)

  • The anaesthesiology and monitoring equipment which kept the patient alive during the operation
  • The various scanning equipment at the hospital which allowed a diagnosis to be reached in time
  • The sirens and flashing lights on the ambulances
  • The technology behind the training (increasingly delivered at least partly online) for all of those involved in the patient’s care

The invisible

  • The drugs and medicines used in the patient’s care
  • Equipment: batteries for ambulances, scalpels for operating theatres, paper for charts, keyboards, CPUs and motherboards for computers, soles for shoes, soap for hand-washing, paint for hospital corridors, pillows and pillow cases for beds and everything else that allows the healthcare system to keep running
  • The infrastructure to get fuel to the ambulances and into the cars, trains and buses which transported the medical staff to hospital
  • The maintenance schedules and processes for the ambulances
  • The processes behind the ordering of PPE for all involved
  • The supply chains which allowed those involved to access the tea, coffee, milk, sugar and other (hopefully legal) stimulants to keep staff going through the day and night
  • Staff timetabling software for everyone from cleaners to theatre managers, maintenance people to on-call surgeons
  • The music, art, videos, TV shows and other entertainment that kept everyone involved sufficiently energised to function

The infrastructure

  • Clean water
  • Roads
  • Electricity
  • Internet access and routing
  • Safety processes and culture in healthcare
  • … and everything else I’ve neglected to mention.

A final note

I hope it’s clear that I’m aware that the technology is all interconnected, and too complex to allow every piece to be noted: I’m sorry if I missed your piece out. The same, however, goes for the people. I come from a family containing some medical professionals and volunteers, and I’m aware of the sacrifices made not only by them, but also by the people around them who they know and love, and who see less of them than they might like, or who have to work around difficult shift patterns, or see them come back home after a long shift, worn out or traumatised by what they’ve seen and experienced. The same goes for ancillary workers and those who work in other, supporting industries.

I thank you all, both those involved directly and those involved in any of the technologies which save lives, those I’ve noted and those I’ve missed. In a few days, I hope to see a member of my family who, without your involvement, I would not ever be seeing again in this life. That is down to you.

Review of CCC members by business interests

Reflections on the different types of member in the Confidential Computing Consortium

This is a brief post looking at the Confidential Computing Consortium (the “CCC”), a Linux Foundation project “to accelerate the adoption of Trusted Execution Environment (TEE) technologies and standards.” First, a triple disclaimer: I’m a co-founder of the Enarx project (a member project of the CCC), an employee of Red Hat (which donated Enarx to the CCC and is a member) and an officer (treasurer) and voting member of two parts of the CCC (the Governing Board and Technical Advisory Committee), and this article represents my personal views, not (necessarily) those of any of the august organisations with which I am associated.

The CCC was founded in October 2019, and is made up of three different membership types: Premier, General and Associate members. Premier members have a representative who gets a vote on various committees, and General members are represented by elected representatives on the Governing Board (one representative elected for every 10 General members – so the current 20 General members elect two). Premier members pay a higher subscription than General members. Associate membership is for government entities, academic and nonprofit organisations. All members are welcome to all meetings, with the exception of “closed” meetings (which are few and far between, and are intended to deal with issues such as hiring or disciplinary matters). At the time of writing, there are 9 Premier members, 20 General members and 3 Associate members. There’s work underway to create an “End-User Council” to allow interested organisations to discuss their requirements, use cases, etc. with members, and to influence the work of the consortium “from the outside” to some degree.

The rules of the consortium allow only one organisation from a “group of related companies” to appoint a representative (where they are Premier), with similar controls for General members. This means, for instance, that although Red Hat and IBM are both active within the Consortium, only one (Red Hat) has a representative on the Governing Board. If Nvidia’s acquisition of Arm goes ahead, the CCC will need to decide how to manage similar issues there.

What I really wanted to do in this article, however, was to reflect on the different types of member, not by membership type, but by their business(es). I think it’s interesting to look at various types of business, and to reflect on why the CCC and confidential computing in general are likely to be of interest to them. You’ll notice a number of companies – most notably Huawei and IBM (who I’ve added in addition to Red Hat, as they represent a wide range of business interests between them) – appearing in several of the categories. Another couple of disclaimers: I may be misrepresenting both the businesses of the companies represented and also their interests! This is particularly likely for some of the smaller start-up members with whom I’m less familiar. These are my thoughts, and I apologise for errors: please feel free to contact me with suggestions for corrections.

Cloud Service Providers (CSPs)

Cloud Service Providers are presented with two great opportunities by confidential computing: the ability to provide their customers with greater isolation from other customers’ workloads, and the chance to allow customers to avoid having to trust the CSP itself. The first is the easiest to implement, and the one on which the CSPs have so far concentrated, but I hope we’re going to see more of the latter in the future, as regulators (and customers’ CFOs/auditors) realise that deploying to the cloud does not require a complex trust relationship with the operators of the hosts running the workload.

  • Google
  • IBM
  • Microsoft

The most notable missing player in this list is Amazon, whose AWS offering would seem to make them a good fit for the CCC, but who have not joined up to this point.

Silicon vendors

Silicon vendors produce their own chips (or license their designs to other vendors). They are the ones providing the hardware technology to enable TEE-based confidential computing. All of the major silicon vendors are represented in the CCC, though not all of them have existing products in the market. It would be great to see more open source hardware (RISC-V is not represented in the CCC) to increase the trust users can have in confidential computing, but the move to open source hardware has been slow so far.

  • AMD
  • Arm
  • Huawei
  • IBM
  • Intel
  • Nvidia

Hardware manufacturers

Hardware manufacturers are those who will be putting TEE-enabled silicon in their equipment and providing services based on it. It is not surprising that we have no “commodity” hardware manufacturers represented, but interesting that there are a number of companies who create dedicated or specialist hardware.

  • Cisco
  • Google
  • Huawei
  • IBM
  • Nvidia
  • Western Digital
  • Xilinx

Service companies

In this category I have added companies which provide services of various kinds, rather than acting as ISVs or pure CSPs. We can expect a growing number of service companies to realise the potential of confidential computing as a way of differentiating their products and providing services with interesting new trust models for their customers.

  • Accenture
  • Ant Group
  • Bytedance
  • Facebook
  • Google
  • Huawei
  • IBM
  • Microsoft
  • Red Hat
  • Swisscom

ISVs

There are a number of ISVs (Independent Software Vendors) who are members of the CCC, and this heading is in some ways a “catch-all” for members who don’t necessarily fit cleanly under any of the other headings. There is a distinct subset, however, of blockchain-related companies which I’ve separated out below.

What is particularly interesting about the ISVs represented here is that although the CCC is dedicated to providing open source access to TEE-based confidential computing, most of the companies in this category do not provide open source code, or if they do, do so only for a small part of the offering. Membership of the CCC does not in any way require organisations to open source all of their related software, however, so their membership is not problematic, at least from the point of view of the charter. As a dedicated open source fan, however, I’d love to see more commitment to open source from all members.

  • Anjuna
  • Anqlave
  • Bytedance
  • Cosmian
  • Cysec
  • Decentriq
  • Edgeless Systems
  • Fortanix
  • Google
  • Huawei
  • IBM
  • r3
  • Red Hat
  • VMware

Blockchain

As permissioned blockchains gain traction for enterprise use, it is becoming clear that there are some aspects and components of their operation which require strong security and isolation to allow trust to be built into the operating model. Confidential computing provides ways to provide many of the capabilities required in these contexts, which is why it is unsurprising to see so many blockchain-related companies represented in the CCC.

  • Appliedblockchain
  • Google
  • IBM
  • iExec
  • Microsoft
  • Phala network
  • r3

The importance of hardware End of Life

Security considerations are important when considering End of Life.

Linus Torvalds’ announcement this week that Itanium support is “orphaned” in the Linux kernel means that we shouldn’t expect further development of that support, and possibly that it will be dropped entirely in the future. In 2019, floppy disk support was similarly marked as orphaned in the Linux kernel. In this article, I want to make the case that security considerations are important when considering End of Life for hardware platforms and components.

Dropping support for hardware which customers aren’t using is understandable if you’re a proprietary company and can decide which platforms and components to concentrate on, but why do so in open source software? Open source enthusiasts are likely to be running old hardware for years – sometimes decades – after anybody is still producing it. There’s a vibrant community, in fact, of enthusiasts who enjoy resurrecting old hardware and getting it running (and I mean really old: EDSAC (1947) old), some of whom enjoy getting Linux running on it, and some of whom enjoy running it on Linux – by which I mean emulating the old hardware on machines running Linux. It’s a fascinating set of communities, and if it’s your sort of thing, I encourage you to have a look.

But what about dropping open source software support (which tends to centre around Linux kernel support) for hardware which isn’t ancient, but is no longer manufactured and/or has a small or dwindling user base? One reason you might give would be that the size of the kernel for “normal” users (users of more recent hardware) is impacted by support for old hardware. This would be true if you had to compile the kernel with all options in it, but Linux distributions like Fedora, Ubuntu, Debian and RHEL already pare down the number of supported systems to something which they deem sensible, and it’s not that difficult to compile a kernel which cuts that down even further – my main home system is an AMD box (with AMD graphics card) running a kernel which I’ve compiled without most Intel-specific drivers, for instance.

There are other reasons, though, for dropping support for old hardware, and considering that it has met its End of Life. Here are three of the most important.

Resources

My first point isn’t specifically security related, but is an important consideration: while there are many volunteers (and paid folks!) working on the Linux kernel, we (the community) don’t have an unlimited number of skilled engineers. Many older hardware components and architectures are maintained by teams of dedicated people, and the option exists for communities who rely on older hardware to fund resources to ensure that they keep running, are patched against security holes, and so on. Once there ceases to be sufficient funding to keep these types of resources available, however, hardware is likely to become “orphaned”, as in the case of Itanium.

There is also a secondary impact, in that however modularised the kernel is, there is likely to be some requirement for resources and time to coordinate testing, patching, documentation and other tasks associated with kernel modules – work which needs to be performed by people who aren’t associated with that particular hardware. The community is generally very generous with its time and understanding around such issues, but once the resources and time required to keep such components “current” reach a certain level in relation to the amount of use being made of the hardware, it may not make sense to continue.

Security risk to named hardware

People expect the software they run to maintain certain levels of security, and the Linux kernel is no exception. Over the past 5-10 years or so, there’s been a surge in work to improve security for all hardware and platforms which Linux supports. A good example of a feature which is applicable across multiple platforms is Address Space Layout Randomisation (ASLR). The problem here is not only that there may be some such changes which are not applicable to older hardware platforms – meaning that Linux is less secure when running on older hardware – but also that, even when they are possible, the resources required to port the changes, or just to test that they work, may be unavailable. This relates to the point about resources above: even when there’s a core team dedicated to the hardware, it may not include security experts able to port and verify security features.
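
ASLR is easy to observe for yourself. This Linux-only snippet (it reads /proc, so it won’t work elsewhere) prints the address range of the process’s own stack; run it twice and, with ASLR enabled, the addresses differ between runs:

```python
# Linux-only: inspect this process's own memory map
with open("/proc/self/maps") as maps:
    for line in maps:
        if "[stack]" in line:
            # the address range changes from run to run when ASLR is on
            print(line.split()[0])
```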

The problem goes beyond this, however, in that it is not just new security features which are an issue. Over the past week, issues were discovered in the popular sudo tool which ships with most Linux systems, and in libgcrypt, a cryptographic library used by some Linux components. The sudo problem was years old, and the libgcrypt one so new that few distributions had picked up the affected version. Neither is directly related to the Linux kernel, but we know that security bugs can exist in the Linux kernel for many years before being discovered and patched. The ability to create and test patches across the range of supported hardware depends, yet again, not just on the availability of hardware to test on, or enthusiastic volunteers with general expertise in the platform, but on security experts willing, able and with the time to do the work.

Security risks to other hardware – and beyond

There is a final – and possibly surprising – point, which is that there may sometimes be occasions when continuing support for old hardware has a negative impact on security for other hardware, even if resources are available to test and implement changes. In order to make improvements to certain features and functionality of the kernel, significant architectural changes are sometimes needed. The best-known example (though not necessarily directly security-related) is the Big Kernel Lock, or BKL, an architectural feature of the Linux kernel until 2.6.39 in 2011, which had been introduced to aid concurrency management, but ended up having significant negative impacts on performance.
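
The effect of a single global lock is easy to demonstrate. In this sketch (an analogy only – the real BKL protected kernel data structures, not sleeps), four threads doing independent work serialise completely behind one shared lock, but run concurrently with per-resource locks:

```python
import threading, time

def run(workers):
    start = time.time()
    threads = [threading.Thread(target=w) for w in workers]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

big_lock = threading.Lock()  # analogue of the Big Kernel Lock

def coarse():
    with big_lock:       # every thread contends for the same lock
        time.sleep(0.1)  # stand-in for independent I/O work

per_resource = [threading.Lock() for _ in range(4)]

def fine(lock):
    def work():
        with lock:       # threads touching different resources don't contend
            time.sleep(0.1)
    return work

print(f"one global lock:    {run([coarse] * 4):.2f}s")                     # ~0.4s, serialised
print(f"per-resource locks: {run([fine(l) for l in per_resource]):.2f}s")  # ~0.1s
```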

In some cases, older hardware may be unable to accept such changes, or, even worse, maintaining support for older hardware may impose such constraints on architectural changes – or require such baroque and complex work-arounds – that it is in the best interests of the broader security of the kernel to drop support. Luckily, the Linux kernel’s modular design means that such cases should be few and far between, but they do need to be taken into consideration.

Conclusion

Some of the arguments I’ve made above apply not only to hardware, but to software as well: people often keep wanting to run software well past its expected support life. The difference with software is that it is often possible to emulate the hardware or software environment in which it is expected to run, often via virtual machines (VMs). Maintaining these environments is a challenge in itself, but may actually offer a viable alternative to trying to keep old hardware running.

End of Life is an important consideration for hardware and software, and, much as we may enjoy nursing old hardware along, it doesn’t make sense to delay the inevitable – End of Life – beyond a certain point. Exactly when that point arrives will depend on many things, but security considerations should be among them.

Why physical compromise is game over

Systems have physical embodiments and are actually things we can touch.

This is an edited excerpt from my forthcoming book on Trust in Computing and the Cloud for Wiley.

We spend a lot of our time – certainly I do – worrying about how malicious attackers might break into systems that I’m running, architecting, designing, implementing or testing. And most of that time is spent thinking about logical – that is, software-based – attacks. But these systems don’t exist solely in the logical realm: they have physical embodiments, and are actually things we can touch. And we need to worry about that more than we typically do – or, more accurately, more of us need to worry about it.

If a system is compromised in software, it’s possible that you may be able to recover it to a clean state, but there is a general principle in IT security that if an attacker has physical access to a system, then that system should be considered compromised. In trust terms (something I care about a lot, given my professional interests), “if an attacker has physical access to a system, then any assurances around expected actions should be considered reduced or void, thereby requiring a re-evaluation of any related trust relationships”. Tampering (see my previous article on this, Not quantum-safe, not tamper-proof, not secure) is the typical concern when considering physical access, but exactly what an attacker will be able to achieve given physical access will depend on a number of factors, not least the skill of the attacker, the resources available to them and the amount of time they have physical access for. Scenarios range from an unskilled person attaching a USB drive to a system in a short-duration Evil Maid[1] attack to long-term access by national intelligence services. But it’s not just running (or provisioned, but not currently running) systems that we need to be worried about: we should extend our scope to those which have yet to be provisioned, or even assembled, and to those which have been decommissioned.

Many discussions in the IT security realm around the supply chain concentrate mainly on software, though there are some very high profile concerns that some governments (and organisations with nationally sensitive functions) have around hardware sourced from countries with whom they do not share entirely friendly diplomatic, military or commercial relations. Even this scope is too narrow: there are many opportunities for other types of attackers to attack systems at various points in their life-cycles. Dumpster diving, where attackers look for old computers and hardware which has been thrown out by organisations but not sufficiently cleansed of data, is an old and well-established technique. At the other end of the scale, an attacker who was able to get a job at a debit or credit card personalisation company and then gain information about the cryptographic keys inserted in the bank cards’ magnetic stripes or, better yet, chips, might be able to commit fraud which was both extensive and very difficult to track down. None of these attacks requires damage to systems, but they do require physical access to systems or to the manufacturing systems and processes which are part of the systems’ supply chain.

An exhaustive list and description of physical attacks on systems is beyond the scope of this article (readers are recommended to refer to Ross Anderson’s excellent Security Engineering: A Guide to Building Dependable Distributed Systems for more information on this and many other topics relevant to this blog), but some examples across the range of threats may serve to give an idea of the sort of issues that may be of concern.

| Attack | Level of sophistication | Time required | Defences |
|---|---|---|---|
| USB drive to retrieve data | Low | Seconds | Disable USB ports / use software controls |
| USB drive to add malware to operating system | Low | Seconds | Disable USB ports / use software controls |
| USB drive to change boot loader | Medium | Minutes | Change BIOS settings |
| Attacks on Thunderbolt ports[2] | Medium | Minutes | Firmware updates; turn off machine when unattended |
| Probes on buses and RAM | High | Hours | Physical protection of machine |
| Cold boot attack[3] | High | Minutes | Physical protection of machine / TPM integration |
| Chip scraping attacks | High | Days | Physical protection of machine |
| Electron microscope probes | High | Days | Physical protection of machine |
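
As a footnote to the “software controls” defence in the table, here’s a naive, Linux-only sketch (illustrative only – a production deployment would use udev rules or a tool such as USBGuard rather than polling) that watches sysfs for newly attached USB devices:

```python
import os, time

USB_SYSFS = "/sys/bus/usb/devices"  # Linux sysfs view of attached USB devices

def snapshot():
    return set(os.listdir(USB_SYSFS))

baseline = snapshot()
while True:
    added = snapshot() - baseline
    for dev in sorted(added):
        print(f"ALERT: new USB device attached: {dev}")
    baseline |= added
    time.sleep(1)
```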

The extent to which systems are vulnerable to these attacks varies enormously, and it is particularly notable that systems which are deployed at the Edge are particularly vulnerable to some of them, compared to systems in an on-premises data centre or run by a Cloud Service Provider in one of theirs. This is typically either because it is difficult to apply sufficient physical protections to such systems, or because attackers may be able to achieve long-term physical access with little likelihood that their attacks will be discovered, or, if they are, with little danger of attribution to the attackers.

Another interesting point about the majority of the attacks noted above is that they do not involve physical damage to the system, and are therefore unlikely to show evidence of tampering unless specific measures are in place to reveal them. Providing as much physical protection as possible against some of the more sophisticated and long-term attacks, alongside visual checks for tampering, is the best defence against techniques which can lead to major, low-level compromise of the Trusted Computing Base.


1 – An Evil Maid attack assumes that an attacker has fairly brief access to a hotel room where computer equipment such as a laptop is stored: whilst there, they have unfettered access, but are expected to leave the system looking and behaving in the same way it was before they arrived. This places some bounds on the sorts of attacks available to them, but such attacks are notoriously difficult to defend against.

2 – I wrote an article on one of these: Thunderspy – should I care?

3 – A cold boot attack allows an attacker with access to RAM to access data recently held in memory.