This is an edited excerpt from my forthcoming book on Trust in Computing and the Cloud for Wiley.
We spend a lot of our time – certainly I do – worrying about how malicious attackers might break into the systems we are running, architecting, designing, implementing or testing. Most of that time is spent thinking about logical – that is, software-based – attacks. But these systems don’t exist solely in the logical realm: they have physical embodiments, and are things we can actually touch. We need to worry about that more than we typically do – or, more accurately, more of us need to worry about it.
If a system is compromised in software, you may be able to recover it and return it to a clean state, but there is a general principle in IT security that if an attacker has physical access to a system, then that system should be considered compromised. In trust terms (something I care about a lot, given my professional interests): “if an attacker has physical access to a system, then any assurances around expected actions should be considered reduced or void, thereby requiring a re-evaluation of any related trust relationships”. Tampering (see my previous article on this, Not quantum-safe, not tamper-proof, not secure) is the typical concern when considering physical access, but exactly what an attacker will be able to achieve will depend on a number of factors: not least their skill, the resources available to them, and the length of time for which they have physical access. Scenarios range from an unskilled person attaching a USB drive in a short-duration Evil Maid attack to long-term access by national intelligence services. But it’s not just running (or provisioned, but not currently running) systems that we need to worry about: we should extend our scope to those which have yet to be provisioned or even assembled, and to those which have been decommissioned.
Many discussions of supply chain in the IT security realm concentrate mainly on software, though there are some high-profile concerns that some governments (and organisations with nationally sensitive functions) have around hardware sourced from countries with which they do not share entirely friendly diplomatic, military or commercial relations. Even this scope is too narrow: there are many opportunities for other types of attackers to attack systems at various points in their life-cycles. Dumpster diving, where attackers look for old computers and hardware which have been thrown out by organisations but not sufficiently cleansed of data, is an old and well-established technique. At the other end of the scale, an attacker who was able to get a job at a debit or credit card personalisation company, and was then able to gain information about the cryptographic keys inserted in bank cards’ magnetic stripes or, better yet, chips, might be able to commit fraud which was both extensive and very difficult to track down. None of these attacks requires damage to the systems, but they do require physical access to the systems themselves, or to the manufacturing systems and processes which form part of their supply chain.
An exhaustive list and description of physical attacks on systems is beyond the scope of this article (readers are recommended to refer to Ross Anderson’s excellent Security Engineering: A Guide to Building Dependable Distributed Systems for more information on this and many other topics relevant to this blog), but some examples across the range of threats may give an idea of the sort of issues that may be of concern.
| Attack | Level of sophistication | Time required | Defences |
| --- | --- | --- | --- |
| USB drive to retrieve data | Low | Seconds | Disable USB ports/use software controls |
| USB drive to add malware to operating system | Low | Seconds | Disable USB ports/use software controls |
| USB drive to change boot loader | Medium | Minutes | Change BIOS settings |
| Attacks on Thunderbolt ports | Medium | Minutes | Firmware updates; turn off machine when unattended |
| Probes on buses and RAM | High | Hours | Physical protection of machine |
| Cold boot attack | High | Minutes | Physical protection of machine/TPM integration |
| Chip scraping attacks | High | Days | Physical protection of machine |
| Electron microscope probes | High | Days | Physical protection of machine |
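As a concrete illustration of the “software controls” defence listed above, on a Linux system the kernel’s USB mass-storage driver can be prevented from loading at all. This is a sketch only: the file location and module name are standard on most distributions, but your system may differ, and this does not block malicious USB devices of other classes (such as devices masquerading as keyboards).

```
# /etc/modprobe.d/block-usb-storage.conf
# Stop the USB mass-storage driver from loading automatically
# when a drive is plugged in
blacklist usb-storage
# Make explicit attempts to load the module fail as well
install usb-storage /bin/false
```

A defence like this raises the bar against the casual “plug in a USB drive” attacks in the table, but a skilled attacker with longer access can simply boot from other media or remove the disk, which is why the more sophisticated attacks need physical protection instead.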
The extent to which systems are vulnerable to these attacks varies enormously, and it is notable that systems deployed at the Edge are particularly vulnerable to some of them, compared with systems in an on-premises data centre or those run by a Cloud Service Provider in one of its data centres. This is typically either because it is difficult to apply sufficient physical protections to such systems, or because attackers may be able to achieve long-term physical access with little likelihood that their attacks will be discovered – or, if they are discovered, with little danger of attribution.
Another interesting point about the majority of the attacks noted above is that they do not involve physical damage to the system, and are therefore unlikely to show evidence of tampering unless specific tamper-evidence measures are in place. Providing as much physical protection as possible against the more sophisticated and long-term attacks, alongside visual checks for tampering, is the best defence against techniques which can lead to major, low-level compromise of the Trusted Computing Base.
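The TPM integration mentioned in the table is one way to detect such low-level compromise. The following Python sketch models how a TPM-style measured boot chain works: each boot component is measured (hashed) into an accumulating register, so a change to any component – for instance, a modified boot loader – produces a different final value. This is a simplified illustration, not a real TPM interface, and the component names are invented for the example.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR value = H(old PCR || H(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot(components) -> bytes:
    """Fold each boot component, in order, into a single accumulated value."""
    pcr = b"\x00" * 32  # PCRs start zeroed at power-on
    for component in components:
        pcr = pcr_extend(pcr, component)
    return pcr

# A known-good boot chain (component names are illustrative only)
good = measure_boot([b"firmware", b"bootloader", b"kernel"])

# The same chain with a tampered boot loader yields a different value
evil = measure_boot([b"firmware", b"evil-bootloader", b"kernel"])

assert good != evil
```

Because the extend operation is one-way and order-dependent, an attacker who changes the boot loader cannot force the register back to its known-good value, which is what makes checks against a reference measurement (for example, via remote attestation) meaningful.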
1 – An Evil Maid attack assumes that an attacker has fairly brief access to a hotel room where computer equipment such as a laptop is stored: whilst there, they have unfettered access, but are expected to leave the system looking and behaving the same way as before they arrived. This places some bounds on the sorts of attacks available to them, but such attacks are notoriously difficult to defend against.
2 – I wrote an article on one of these: Thunderspy – should I care?
3 – A cold boot attack allows an attacker with physical access to a machine’s RAM to recover data recently held in memory, since the contents of RAM can persist briefly after power is removed.
2 thoughts on “Why physical compromise is game over”
Very nice article and summary, Mike! There’s a low-tech option for trying to detect physical tampering that I’m quite fond of, suggested by researchers at a CCC congress a few years back: put some glitter nail polish on your laptop’s screws. If the machine is ever opened behind your back, the nail-polish seal will be broken, and even if the attackers are resourceful enough to carry nail polish with them to recreate the tamper-seals, the glitter will never settle in the same position. You can compare against a previously taken picture and conclude that the machine was tampered with (and get an idea of the level of sophistication of your attackers for free!).
There’s more fun stuff in Eric Michaud and Ryan Lackey’s presentation: https://media.ccc.de/v/30C3_-_5600_-_en_-_saal_1_-_201312301245_-_thwarting_evil_maid_attacks_-_eric_michaud_-_ryan_lackey