SVB & finance stress: remember the (other) type of security

Now is the time for more vigilance, not less.

This is the week in which the start-up world has been reeling after the collapse of Silicon Valley Bank. There have been lots of articles about it, and about how the larger ecosystem (lawyers, VCs, other banks and beyond) has rallied to support those affected, written (on the whole, at least!) by people much better qualified than me to do so. But there’s another point that could get lost in the noise, and that’s the opportunity presented to bad actors by all of this.

When humans are tired, stressed, confused or have too many inputs, they (we – I’ve not succumbed to the lure of ChatGPT yet…) are prone to make poor decisions, or to take less time over decisions – even important decisions – than they ought to. Sadly, bad people know this, and that means that they will be going out of their way to exploit us (I’m very aware that I’m as vulnerable to this type of exploitation as anybody else). The problem is that when banks start looking dodgy, or when money is at stake, people need to do risky things – and these are often things which involve an awful lot of money, things like:

  • withdrawing large amounts of money
  • moving large amounts of money between accounts
  • opening new accounts
  • changing administrative access permissions and privileges on accounts
  • adding new people as administrators on accounts.

All of the above are actions (or involve actions) which we would normally be very careful about, and take very seriously (though that doesn’t stop us making the occasional mistake). The problem (and the opportunity for bad actors) is that when we’re stressed or in a hurry (as we’re likely to be in the current situation), we may pay less attention to important steps than we otherwise would. We might not enable multi-factor authentication, we might not check website certificates, we might click through on seemingly helpful offers in emails, or we might not check the email addresses to which we’re sending invitations. Any of these could let bad folks get at our money. They know this, and they’ll be going out of their way to find ways to encourage us to make mistakes, be less careful or hurry our way through vital processes.
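
To make that last point concrete, even a few lines of automation can catch a mistyped or lookalike recipient domain before an invitation goes out. This is just a toy sketch – the allow-list and the addresses are entirely hypothetical:

```python
# Toy sketch: refuse to send account invitations to addresses outside an
# allow-list of expected domains. All names here are hypothetical.

ALLOWED_DOMAINS = {"example.com", "example.org"}

def invitation_domain_ok(address: str) -> bool:
    """Return True only if the address has exactly one '@' and an allowed domain."""
    parts = address.split("@")
    if len(parts) != 2 or not parts[0]:
        return False
    return parts[1].lower() in ALLOWED_DOMAINS

print(invitation_domain_ok("alice@example.com"))   # expected: True
print(invitation_domain_ok("alice@examp1e.com"))   # lookalike domain: False
```

It won’t stop a determined attacker, but it does take one stressed-human mistake off the table.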

My plea, then, is simple: don’t drop your guard because of the stress of the current situation. Now is the time for more vigilance, not less.

Open source and cyberwar

If cyberattacks happen to the open source community, the impact may be greater than you expect.

There are some things that it’s more comfortable not to think about, and one of them is war. For many of us, direct, physical violence is a long way away, and that’s something for which we can be very thankful. As the threat of physical violence recedes, however, it’s clear that the spectre of cyberattacks as part of a response to aggression – physical or virtual – is becoming more and more likely.

It’s well attested that many countries have “cyber-response capabilities”, and those will include aggressive as well as protective measures. And some nation states have made it clear not only that they consider cyberwarfare part of any conflict, but that they would be entirely comfortable initiating a conflict with cyberattacks.

What, you should probably be asking, has that to do with us? And by “us”, I mean the open source software community. I think that the answer, I’m afraid, is “a great deal”. I should make it clear that I’m not speaking from a place of privileged knowledge here, but rather from thoughtful and fairly informed opinion. But it occurs to me that the “old style” of cyberattacks, against standard “critical infrastructure” like military installations, power plants and the telephone service, was clearly obsolete when the Twin Towers collapsed (if not in 1992, when the film Sneakers hypothesised attacks against targets like civil aviation). Which means that any type of infrastructure or economic system is a target, and I think that open source is up there. Let me explore two ways in which open source may be a target.

Active targets

If we had been able to pretend that open source wasn’t a core part of the infrastructure of nations all over the globe, that self-delusion was finally wiped away by the log4j vulnerabilities and attacks. Open source is everywhere now, and whether or not your applications are running any open source, the chances are that you deploy applications to public clouds running open source, at least some of your employees use an open source operating system on their phones, and that the servers running your chat channels, email providers, Internet providers and beyond make use – extensive use – of open source software: think Apache, think BIND, think Kubernetes. At one level, this is great, because it means that it’s possible for bugs to be found and fixed before they can be turned into vulnerabilities, but that’s only true if enough attention is being paid to the code in the first place. We know that attackers will have been stockpiling exploits, and many of them will be against proprietary software, but given the amount of open source deployed out there, they’d be foolish not to be collecting exploits against that as well.

Passive targets

I hate to say it, but there also are what I’d call “passive targets”, those which aren’t necessarily first tier targets, but whose operation is important to the safe, continued working of our societies and economies, and which are intimately related to open source and open source communities. Two of the more obvious ones are GitHub and GitLab, which hold huge amounts of our core commonwealth, but long-term attacks on foundations such as the Apache Foundation and the Linux Foundation, let alone kernel.org, could also have impact on how we, as a community, work. Things are maybe slightly better in terms of infrastructure like chat services (as there’s a choice of more than one, and it’s easier to host your own instance), but there aren’t that many public servers, and a major attack on either them or the underlying cloud services on which many of them rely could be crippling.

Of course, the impact on your community, business or organisation will depend on your usage of different pieces of infrastructure, how reliant you are on them for your day-to-day operation, and what mitigations you have available to you. Let’s quickly touch on that.

What can I do?

The Internet was famously designed to route around issues – attacks, in fact – and that helps. But, particularly where there’s a pretty homogeneous software stack, attacks on infrastructure could still have a very significant impact. Start thinking now:

  • how would I support my customers if my main chat server went down?
  • could I continue to develop if my main git provider became unavailable?
  • would we be able to offer at least reduced services if a cloud provider lost connectivity for more than an hour or two?

By doing an analysis of what your business dependencies are, you have the opportunity to plan for at least some of the contingencies (although, as I note in my book, Trust in Computer Systems and the Cloud, the chances of your being able to analyse the entire stack, or discover all of the dependencies, are lower than you might think).
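
That analysis can start very simply. Here’s a toy sketch (the service and dependency names are entirely invented) which flags any piece of infrastructure whose failure would take out more than one business service:

```python
from collections import defaultdict

# Hypothetical map of business services to the infrastructure they rely on.
DEPENDENCIES = {
    "customer-chat": ["chat-server", "cloud-provider-a"],
    "development":   ["git-provider", "ci-service"],
    "ci-service":    ["git-provider", "cloud-provider-a"],
}

def single_points_of_failure(deps: dict) -> dict:
    """Return {dependency: [affected services]} for anything shared by 2+ services."""
    affected = defaultdict(list)
    for service, needs in deps.items():
        for need in needs:
            affected[need].append(service)
    return {dep: svcs for dep, svcs in affected.items() if len(svcs) > 1}

for dep, services in single_points_of_failure(DEPENDENCIES).items():
    print(f"{dep} outage would affect: {', '.join(sorted(services))}")
```

Even this crude view makes it obvious where to spend your contingency-planning effort first.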

What else can you do? Patch and upgrade – make sure that whatever you’re running is the highest (supported!) version. Make back-ups of anything which is business critical. This should include not just your code but issues and bug-tracking, documentation and sales information. Finally, consider having backup services available for time-critical services like a customer support chat line.
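
As a sketch of what the back-up step might look like (the directory names are placeholders for wherever your code, issue exports and documentation actually live), something like this, run regularly, is a reasonable start:

```python
import tarfile
import time
from pathlib import Path

# Hypothetical directories holding code, issue exports and documentation.
CRITICAL_PATHS = ["repo", "issues-export", "docs"]

def make_backup(paths, dest_dir="backups") -> Path:
    """Create a timestamped .tar.gz of every path that exists; return the archive path."""
    Path(dest_dir).mkdir(parents=True, exist_ok=True)
    archive = Path(dest_dir) / f"backup-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for p in paths:
            if Path(p).exists():
                tar.add(p)
    return archive
```

In reality, of course, you’d copy the archive off the machine (and off the cloud provider) it came from: a back-up that shares the fate of the original doesn’t help with the scenarios above.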

Cyberattacks may not happen to your business or organisation directly, but if they happen to the open source community, the impact may be greater than you expect. Analyse. Plan. Mitigate.

Masks, vaccinations, social distancing – and cybersecurity

The mapping between the realms of cybersecurity and epidemiology is not perfect, but metaphors can be useful.

Waaaaay back in 2018 (which seems a couple of decades ago now), I wrote an article called Security patching and vaccinations: a surprising link. In those days, Spectre and Meltdown were still at the front of our minds, and the impact of something like Covid-19 was, if not unimaginable, then far from what most of us considered in our day-to-day lives. In the article, I argued that patching of software and vaccination (of humans or, I guess, animals[1]) had some interesting parallels (the clue’s in the title, if I’m honest). As we explore the impact of new variants of the Covid-19 virus, the note that “a particular patch may provide resistance to multiple types of attack of the same family, as do some vaccinations” seems particularly relevant. I also pointed out that although there are typically some individuals in a human population for whom vaccination is too risky, a broad effort to vaccinate the rest of the population has a positive impact for them, in a similar way to how patching most systems in a deployment can restrict the number of “jumping off points” for attack.

I thought it might be interesting to explore other similarities between disease management in the human sphere and how we do things in cybersecurity, not because they are exact matches, but because they can be useful metaphors to explain to colleagues, family and friends what we do.

Vaccinations

We’ve looked at vaccinations a bit above: the key point here is that once a vulnerability is discovered, software vendors can release patches which, if applied correctly, protect the system from those attacks. This is not dissimilar in effect to the protection provided by vaccinations in human settings, though the mechanism is very different. Computer systems don’t really have an equivalent to antibodies or immune systems, but patches may – like vaccines – provide protection from more than one specific attack (think virus strain) if others exploit the same type of weakness.

Infection testing

As we have discovered since the rise of Covid-19, testing of the population is a vital measure to understand what other mechanisms need to be put in place to control infection. The same goes for cybersecurity. Testing the “health” of a set of systems, monitoring their behaviour and understanding which may be compromised, by which attacks, leveraging which vulnerabilities, is a key part of any cybersecurity strategy, and easily overlooked when everything seems to be OK.
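
Part of that “health” testing can be as simple as comparing the versions you have deployed against the versions in which known vulnerabilities were fixed. Here’s a toy sketch – the inventory and the fix levels are invented for illustration, not taken from real advisories:

```python
# Toy sketch: flag packages deployed at a version older than the one in
# which a known vulnerability was fixed. Data here is illustrative only.

def parse(version: str) -> tuple:
    """Turn '2.14.1' into (2, 14, 1) so versions compare numerically."""
    return tuple(int(x) for x in version.split("."))

DEPLOYED = {"log4j": "2.14.1", "openssl": "3.0.12"}
FIXED_IN = {"log4j": "2.17.1", "openssl": "3.0.7"}

def needs_patching(deployed: dict, fixed_in: dict) -> list:
    """Return the packages running a version older than the known fix."""
    return [pkg for pkg, ver in deployed.items()
            if pkg in fixed_in and parse(ver) < parse(fixed_in[pkg])]

print(needs_patching(DEPLOYED, FIXED_IN))   # expected: ['log4j']
```

Real tooling (vulnerability scanners, SBOM checkers) does this at scale, but the principle is exactly this comparison.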

Masks

I think of masks as acting a little like firewalls, or mechanisms like SELinux which act to prevent malicious programs from accessing parts of the system which you want to protect. Like masks, these mechanisms reduce the attack surface available to bad actors by stopping up certain “holes” in the system, making it more difficult to get in. In the case of firewalls, it’s network ports, and in the case of SELinux, it’s activities like preventing unauthorised system calls (syscalls). We know that masks are not wholly effective in preventing transmission of Covid-19 – they don’t give 100% protection from oral transmission, and if someone sneezes into your eye, for instance, that could lead to infection – but we also know that if two people are meeting, and they both wear masks, the chance of transmission from an infected to an uninfected person is reduced. Most cybersecurity controls aim mainly to protect the systems on which they reside, but a well thought-out deployment may also aim to put in controls to prevent attackers from jumping from one system to another.
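
To illustrate the “stopping up holes” point, here’s a toy default-deny filter: nothing is accepted unless its port is explicitly allowed, which is exactly the attack-surface reduction a firewall provides (the port choices below are just examples):

```python
# Toy default-deny filter: a connection is accepted only if its destination
# port is explicitly on the allow-list. Port choices are illustrative.

ALLOWED_TCP_PORTS = {22, 443}   # e.g. SSH and HTTPS only

def accept(port: int) -> bool:
    """Default-deny: accept only explicitly allowed ports."""
    return port in ALLOWED_TCP_PORTS

print([p for p in (22, 80, 443, 3389) if accept(p)])   # expected: [22, 443]
```

The design choice that matters is the default: like a mask, it blocks everything it can by default, rather than trying to enumerate what to block.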

Social distancing

This last point leads us to our final metaphor: social distancing. Here, we put in place controls to try to make it difficult for an attacker (or virus) to jump from one system to another (or human to another). While the rise of zero trust architectures has led to something of a down-playing of some of these techniques within cybersecurity, mechanisms such as DMZs, policies such as no USB drives and, at the extreme end, air-gapping of systems (where there is no direct network connection between them) all aim to create physical or logical barriers to attacks or transmission.

Conclusion

The mapping between controls in the realms of cybersecurity and epidemiology is not perfect, but metaphors can be useful in explaining the mechanisms we use and also in considering differences (is there an equivalent of “virus load” in computer systems, for instance?). If there are lessons we can learn from the world of disease management, then we should be keen to do so.


1 – it turns out that you can actually vaccinate plants, too: neat.

Why physical compromise is game over

Systems have physical embodiments and are actually things we can touch.

This is an edited excerpt from my forthcoming book on Trust in Computing and the Cloud for Wiley.

We spend a lot of our time – certainly I do – worrying about how malicious attackers might break into systems that I’m running, architecting, designing, implementing or testing. And most of that time is spent thinking about logical – that is software-based – attacks. But these systems don’t exist solely in the logical realm: they have physical embodiments, and are actually things we can touch. And we need to worry about that more than we typically do – or, more accurately, more of us need to worry about that.

If a system is compromised in software, it’s possible that you may be able to recover it and clean it up, but there is a general principle in IT security that if an attacker has physical access to a system, then that system should be considered compromised. In trust terms (something I care about a lot, given my professional interests), “if an attacker has physical access to a system, then any assurances around expected actions should be considered reduced or void, thereby requiring a re-evaluation of any related trust relationships”. Tampering (see my previous article on this, Not quantum-safe, not tamper-proof, not secure) is the typical concern when considering physical access, but exactly what an attacker will be able to achieve given physical access will depend on a number of factors, not least the skill of the attacker, the resources available to them and the amount of time for which they have physical access. Scenarios range from an unskilled person attaching a USB drive to a system in a short-duration Evil Maid[1] attack to long-term access by national intelligence services. But it’s not just running (or provisioned, but not currently running) systems that we need to be worried about: we should extend our scope to those which have yet to be provisioned, or even assembled, and to those which have been decommissioned.

Many discussions in the IT security realm around the supply chain concentrate mainly on software, though there are some very high-profile concerns that some governments (and organisations with nationally sensitive functions) have around hardware sourced from countries with whom they do not share entirely friendly diplomatic, military or commercial relations. Even this scope is too narrow: there are many opportunities for other types of attackers to attack systems at various points in their life-cycles. Dumpster diving, where attackers look for old computers and hardware which has been thrown out by organisations but not sufficiently cleansed of data, is an old and well-established technique. At the other end of the scale, an attacker who was able to get a job at a debit or credit card personalisation company, and was then able to gain information about the cryptographic keys inserted in the bank cards’ magnetic stripes or, better yet, chips, might be able to commit fraud which was both extensive and very difficult to track down. None of these attacks requires damage to systems, but they do require physical access to systems or to the manufacturing systems and processes which are part of the systems’ supply chain.

An exhaustive list and description of physical attacks on systems is beyond the scope of this article (readers are recommended to refer to Ross Anderson’s excellent Security Engineering: A Guide to Building Dependable Distributed Systems for more information on this and many other topics relevant to this blog), but some examples across the range of threats may serve to give an idea of the sorts of issues that may be of concern.

Attack | Level of sophistication | Time required | Defences
USB drive to retrieve data | Low | Seconds | Disable USB ports / use software controls
USB drive to add malware to operating system | Low | Seconds | Disable USB ports / use software controls
USB drive to change boot loader | Medium | Minutes | Change BIOS settings
Attacks on Thunderbolt ports[2] | Medium | Minutes | Firmware updates; turn off machine when unattended
Probes on buses and RAM | High | Hours | Physical protection of machine
Cold boot attack[3] | High | Minutes | Physical protection of machine / TPM integration
Chip scraping attacks | High | Days | Physical protection of machine
Electron microscope probes | High | Days | Physical protection of machine

The extent to which systems are vulnerable to these attacks varies enormously, and it is particularly notable that systems which are deployed at the Edge are particularly vulnerable to some of them, compared to systems in an on-premises data centre or run by a Cloud Service Provider in one of theirs. This is typically either because it is difficult to apply sufficient physical protections to such systems, or because attackers may be able to achieve long-term physical access with little likelihood that their attacks will be discovered, or, if they are, with little danger of attribution to the attackers.

Another interesting point about the majority of the attacks noted above is that they do not involve physical damage to the system, and are therefore unlikely to show tampering unless specific measures are in place to betray them. Providing as much physical protection as possible against some of the more sophisticated and long-term attacks, alongside visual checks for tampering, is the best defence for techniques which can lead to major, low-level compromise of the Trusted Computing Base.


1 – An Evil Maid attack assumes that an attacker has fairly brief access to a hotel room where computer equipment such as a laptop is stored: whilst there, they have unfettered access, but are expected to leave the system looking and behaving in the same way it was before they arrived. This places some bounds on the sorts of attacks available to them, but such attacks are notoriously difficult to defend against.

2 – I wrote an article on one of these: Thunderspy – should I care?

3 – A cold boot attack allows an attacker with access to RAM to access data recently held in memory.

Thunderspy – should I care?

Thunderspy is a nasty attack, but easily prevented.

There’s a new attack out there which is getting quite a lot of attention this week. It’s called Thunderspy, and it uses the Thunderbolt port which is on many modern laptops and other computers to suck data from your machine. I thought that it might be a good issue to cover this week, as although it’s a nasty attack, there are easy ways to defend yourself, some of which I’ve already covered in previous articles, as they’re generally good security practice to follow.

What is Thunderspy?

Thunderspy is an attack on your computer which allows an attacker with moderate resources to get at your data under certain circumstances. The attacker needs:

  • physical access to your machine – not for long (maybe five minutes), but they do need it. This type of attack is sometimes called an “evil maid” attack, as it can be carried out by hotel staff with access to your room;
  • the ability to take your computer apart (a bit) – all we’re talking here is a screwdriver;
  • a little bit of hardware – around $400 worth, according to one source;
  • access to some freely available software;
  • access to another computer at the same time.

There’s one more thing that the attacker needs, and that’s for you to leave your computer on, or in suspend mode. I’ve discussed different power modes before (in 3 laptop power mode options), and mentioned, as well, that leaving your machine in suspend mode is generally a bad idea (in 7 security tips for travelling with your laptop). It turns out I was right.

What’s the bad news?

Well, there’s quite a lot of bad news:

  • lots of machines have Thunderbolt ports (you can find pictures of both the port and connectors on Wikipedia’s Thunderbolt page, in case you’re not sure whether your machine is affected);
  • machines are vulnerable even if you have full disk encryption;
  • Windows machines are vulnerable;
  • Linux machines are vulnerable;
  • Macintosh machines are vulnerable;
  • most machines with a Thunderbolt port from 2011 onwards are vulnerable;
  • although protection is available on some newer machines (from around 2019)
    • the extent of its efficacy is unclear;
    • lots of manufacturers don’t implement it;
  • some protections that you can turn on break USB and other functionality;
  • one variant of the attack breaks Thunderbolt security permanently, meaning that the attacker won’t need to take your computer apart at all for subsequent attacks: they just need physical access to the port whilst your machine is turned on (or in suspend mode).

The worst thing to note is that full disk encryption does not help you if your computer is turned on or in suspend mode.

Note – I’ve been unable to find out whether any Chromebooks have Thunderbolt support. Please check your model’s specifications or datasheet to be certain.

What’s the good news?

The good news is short and sweet: if you turn your computer completely off, or ensure that it’s in Hibernate mode, then it’s not vulnerable. Thunderspy is a nasty attack, but it’s easily prevented.

What should I do?

  1. Turn your computer off when you leave it unattended, even for short amounts of time.

That was easy, wasn’t it? This is best practice anyway, and it turns out that putting your computer into hibernate mode is also OK. What the attacker is looking for is a powered-up, logged-on computer with Thunderbolt: if you can stop them finding a computer that meets those criteria, then you’re fine.

Avoid: a) contact; b) phishing.

It is impossible to ascertain at first look whether a phishing email is genuine or not.

I was thinking about not posting this week, as many, many of us have rather a lot on our minds at the moment. Like the rest of the UK, our household is in lock-down, and I’m trying to juggle work with the (actually very limited) demands of my two children, alongside my wife. So far, our broadband is holding out, and I’ve worked from home long enough that I’ve managed to get the rest of the family’s remote access issues sorted so far.

But I decided that I wanted to post, because I wanted to issue a warning, in case you’ve missed it elsewhere: watch out for phishing emails.

I try to keep these articles relevant to people who aren’t IT professionals: “technically credible, but something you could show to your parents or your manager”. Given how many parents (not to mention grandparents and, scarily, managers) seem to be going online for pretty much the first time, here’s my first definition of phishing emails[1].

“A phishing email is one which pretends to be from a person or company you trust, trying to get personal details such as logins or bank information, or to install malicious software on your device.”

Here’s my second definition, particularly relevant now.

“A phishing email is one sent by low-lives who are attempting to scam vulnerable, scared people for their own personal gain.”

Many phishing emails look exactly the same as a normal email from the relevant party. To be clear, it is impossible for anyone, even an expert, to ascertain at first look whether a polished and sophisticated phishing email is genuine or not. There are ways to tell, if you’re an expert, by looking in more detail at the actual details of the email, but most people will not be able to. I have nearly been caught over the past week, as have one of my kids and my wife. Two that have come round recently were particularly impressive: one from Netflix, and one from the TV licensing authority in the UK. Luckily, I’ve trained my family well, and they knew what to do, which is this:

NEVER CLICK ON ANY LINKS IN AN EMAIL.

There. That’s all you need to do. If you get an email which is asking you to click on a link, a graphic or a picture, don’t do it. Instead, go to the website of the actual company or organisation, or contact the individual who allegedly sent it to you. You should easily be able to work out whether your credit card has been declined, your email account has been suspended, you need to pay extra tax within the next 24 hours (hint: you don’t), your friend is stuck in Tenerife or you have used up all of your phone data. If in doubt, stay calm, don’t panic, and contact a more expert friend[2], and get them to help.
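
If you’re curious why clicking is so dangerous, here’s a small sketch showing how easily the visible text of a link can differ from where it actually points. It pulls each link’s destination and display text out of an HTML email body – the markup below is an invented example:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAudit(HTMLParser):
    """Collect (href, visible text) pairs for every <a> tag in an HTML body."""
    def __init__(self):
        super().__init__()
        self.links, self._href, self._text = [], None, ""
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = ""
    def handle_data(self, data):
        if self._href is not None:
            self._text += data
    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, self._text.strip()))
            self._href = None

# Invented example: the text claims Netflix, the href goes somewhere else.
body = '<p>Update your card: <a href="https://evil.example/nf">www.netflix.com/account</a></p>'
audit = LinkAudit()
audit.feed(body)
for href, text in audit.links:
    print(f"says '{text}' but goes to {urlparse(href).netloc}")
```

The email client shows you only the friendly text; the browser follows the href. Going directly to the real website sidesteps the whole trick.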

To be clear, it’s easy to get it wrong. I work in IT security, and I’ve been caught in the past: see my article I got phished this week: what did I do? This will also tell you (or your designated expert) what to do in the event that you do click on the link.

And, of course, keep safe.


1 – the word “phishing” is derived from “fishing”, as the emails are “fishing” for your details. The “ph” is a standard geek affectation.

2 – often, but not always, son or daughter, grandson or granddaughter[3].

3 – or one of the people whose manager you are, of course.

Learn to hack online – h4x0rz and pros

Removing these videos hinders defenders much more significantly than it impairs the attackers.

Over the past week, there has been a minor furore[1] over YouTube’s decision to block certain “hacking” videos.  According to The Register, the policy first appeared on the 5th April 2019:

“Instructional hacking and phishing: Showing users how to bypass secure computer systems or steal user credentials and personal data.”

Now, I can see why they’ve done this: it’s basic backside-covering.  YouTube – and many of the other social media outlets – come under lots of pressure from governments and other groups for failing to regulate certain content.  The sort of content to which such groups most typically object is fake news, certain pornography or child abuse material: and quite rightly.  I sympathise, sometimes, with the social media giants as they try to regulate a tidal wave of this sort of material – and I have great respect for those employees who have to view some of it.  Having written policies to ban this sort of thing may not deter many people from posting it, but it does mean that the social media companies have a cast-iron excuse for excising it when they come across it.

Having a similar policy to ban these types of video feels, at first blush, like the same sort of thing: you can point to your policy when groups complain that you’re hosting material that they don’t like – “those dangerous hacking videos”.

Hacking[3] videos are different, though.  The most important point is that they have a  legitimate didactic function: in other words, they’re useful for teaching.  Nor do I think that there’s a public outcry from groups wanting them banned.  In fact, they’re vital for teaching and learning about IT security, and most IT security professionals and organisations get that.  Many cybersecurity techniques are difficult to understand properly when presented as theoretical attacks and, more importantly, they are difficult to defend against without detailed explanation and knowledge.  This is the point: these instructional videos are indispensable tools to allow people not just to understand, but to invent, apply and maintain defences and mitigations against known attacks.  IT security is hard, and we need access to knowledge to help us defeat the Bad Folks[tm] who we know are out there.

“But these same Bad Folks[tm] will see these videos online and use them against us!” certain people will protest.  Well, yes, some will.  But if we ban and wipe them from widely available social media platforms, where they are available for legitimate users to study, they will be pushed underground, and although fewer people may find them, the nature of our digital infrastructure means that the reach of those few people is still enormous.

And there is an imbalance between attackers and defenders: this move exacerbates it.  Most defenders look after small numbers of systems, but most serious attackers have the ability to go after many, many systems.  By pushing these videos away from places where many defenders can learn from them, we remove the opportunity from those who most need access to this information, whilst, at most, only slightly raising the bar for those against whom we are trying to protect ourselves.

I’m sure there are a number of “script-kiddy” type attackers who may be deterred or have their access to these videos denied, but they are significantly less of a worry than motivated, resourced attackers: the ones that haunt many IT security folks’ worst dreams.  We shouldn’t let a mitigation against (relatively) low-risk attackers remove our ability to defend against higher-risk attackers.

We know that sharing of information can be risky, but this is one of those cases in which the risks can be understood and measured against others, and it seems like a pretty simple calculation this time round.  To be clear: the (good) many need access to these videos to protect against the (malicious) few.  Removing these videos hinders the good much more significantly than it impairs the malicious, and we, as a community, should push back against this trend.


1 – it’s pronounced “few-ROAR-ray”.  And “NEEsh”.  And “CLEEK”[2].

2 – yes, I should probably calm down.

3 – I’d much rather refer to these as “cracking” videos, but I feel that we probably lost that particular battle about 20 years ago now.

6 types of attack: learning from Supermicro, State Actors and silicon

… it could have happened, and it could be happening now.

Last week, Bloomberg published a story detailing how Chinese state actors had allegedly forced employees of Supermicro (or companies subcontracting to them) to insert a small chip – the silicon in the title – into motherboards destined for Apple and Amazon.  The article talked about how an investigation into these boards had uncovered this chip and the steps that Apple, Amazon and others had taken.  The story was vigorously denied by Supermicro, Apple and Amazon, but that didn’t stop Supermicro’s stock price from tumbling by over 50%.

I have heard strong views expressed by people with expertise in the topic on both sides of the argument: that it probably didn’t happen, and that it probably did.  One side argues that the denials by Apple and Amazon, for instance, might have been impacted by legal “gagging orders” from the US government.  An opposing argument suggests that the Bloomberg reporters might have confused this story with a similar one that occurred a few months ago.  Whether this particular story is correct in every detail, or a fabrication – intentional or unintentional – is not what I’m interested in at this point.  What I’m interested in is not whether it did happen in this instance: the clear message is that it could have happened, and it could be happening now.

I’ve written before about State Actors, and whether you should worry about them.  There’s another question which this story brings up, which is possibly even more germane: what can you do about it if you are worried about them?  This breaks down further into two questions:

  • how can I tell if my systems have been compromised?
  • what can I do if I discover that they have?

The first of these is easily enough to keep us occupied for now[1], so let’s spend some time on that.  Let’s first define six types of compromise, think about how they might be carried out, and then consider the questions above for each:

  • supply-chain hardware compromise;
  • supply-chain firmware compromise;
  • supply-chain software compromise;
  • post-provisioning hardware compromise;
  • post-provisioning firmware compromise;
  • post-provisioning software compromise.

There isn’t space in this article to go into detail on each of these types of attack, so instead I’ll provide an overview of each[2].

Terms

  • Supply-chain – all of the steps up to when you start actually running a system.  From manufacture through installation, including vendors of all hardware components and all software, OEMs, integrators and even shipping firms that have physical access to any pieces of the system.  For all supply-chain compromises, the key question is the extent to which you, the owner of a system, can trust every single member of the supply chain[3].
  • Post-provisioning – any point after which you have installed the hardware, put all of the software you want on it, and started running it: the time during which you might consider the system “under your control”.
  • Hardware – the physical components of a system.
  • Software – software that you have installed on the system and over which you have some control: typically the Operating System and application software.  The amount of control depends on factors such as whether you use proprietary or open source software, and how much of it is produced, compiled or checked by you.
  • Firmware – special software that controls how the hardware interacts with the standard software on the machine, with the other hardware that comprises the system, and with external systems.  It is typically provided by hardware vendors and its operation is opaque to owners and operators of the system.

Compromise types

See the table at the bottom of this article for a short summary of the points below.

  1. Supply-chain hardware – there are multiple opportunities in the supply chain to compromise hardware, but the harder an attack is to detect, the more difficult it is to perform.  The attack described in the Bloomberg story would be extremely difficult to detect, but the addition of a keyboard logger to a keyboard just before delivery (for instance) would be correspondingly simpler.
  2. Supply-chain firmware – of all the options, this has the best return on investment for an attacker.  Assuming good access to an appropriate part of the supply chain, inserting firmware that (for instance) degrades network performance or leaks data over a wifi connection is relatively simple.  The difficulty in detection comes from the fact that although the owner of the system can check that the firmware is what they think it is, such a measurement only confirms that the firmware matches what the vendor says was supplied.  So the “medium” rating relates only to firmware implanted by members of the supply chain who did not source the original firmware: otherwise, it’s “high”.
  3. Supply-chain software – by this, I mean software that comes installed on a system when it is delivered.  Some organisations will insist on “clean” systems being delivered to them[4], and will install everything from the Operating System upwards themselves.  This means that they basically now have to trust their Operating System vendor[5], which is maybe better than trusting other members of the supply chain to have installed the software correctly.  I’d say that it’s not too simple to mess with this in the supply chain, if only because checking isn’t too hard for the legitimate members of the chain.
  4. Post-provisioning hardware – this is where somebody with physical access to your hardware – after it’s been set up and is running – inserts or attaches hardware to it.  I nearly gave this a “high” rating for difficulty below, assuming that we’re talking about servers rather than laptops or desktop systems, as one would hope that your servers are well-protected.  However, attackers have repeatedly shown that they can get physical access to systems using techniques like social engineering, so I’ve downgraded it to “medium”.  Detection, on the other hand, should be fairly simple given sufficient resources (hence the “medium” rating): although I don’t believe anybody who says that a system is “tamper-proof”, tamper-evidence is a much simpler property to achieve.
  5. Post-provisioning firmware – when you patch your Operating System, it will often also patch firmware on the rest of your system.  This is generally a good thing to do, as patches may provide security, resilience or performance improvements, but you’re stuck with the same problem as with supply-chain firmware that you need to trust the vendor: in fact, you need to trust both your Operating System vendor and their relationship with the firmware vendor.
  6. Post-provisioning software – is it easy to compromise systems via their Operating System and/or application software?  Yes: this we know.  Luckily – though depending on the sophistication of the attack – there are generally good tools and mechanisms for detecting such compromises, including behavioural monitoring.
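The firmware-measurement limitation in point 2 can be illustrated with a short sketch.  This is a hedged example in Python: the manifest format is hypothetical, and a real implementation would use signed measurements anchored in something like a TPM, but the underlying point holds – a matching hash only tells you that the image matches what the vendor published, not that the vendor’s own copy is clean.

```python
import hashlib

def measure_firmware(firmware_bytes: bytes) -> str:
    """Return a SHA-256 digest ("measurement") of a firmware image."""
    return hashlib.sha256(firmware_bytes).hexdigest()

def check_against_manifest(firmware_bytes: bytes, manifest: dict) -> bool:
    """Compare a measured digest against a (hypothetical) vendor manifest.

    Note the limitation discussed above: a match confirms only that the
    image is the one the vendor *says* was supplied.  It cannot detect a
    compromise introduced before the manifest was published.
    """
    return manifest.get("sha256") == measure_firmware(firmware_bytes)
```

This is why the supply-chain firmware case is rated “medium” only for attackers downstream of the original firmware source: they change the image without being able to change the published measurement.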

Table

Compromise type              Attacker difficulty    Detection difficulty
Supply-chain hardware        High                   High
Supply-chain firmware        Low                    Medium
Supply-chain software        Medium                 Medium
Post-provisioning hardware   Medium                 Medium
Post-provisioning firmware   Medium                 Medium
Post-provisioning software   Low                    Low

Conclusion

What are your chances of spotting a compromise on your system?  I would argue that they are generally pretty much in line with the difficulty of performing the attack in the first place: with the glaring exception of supply-chain firmware.  We’ve seen attacks of this type, and they’re very difficult to detect.  The good news is that there is some good work going on to help detection of these types of attacks, particularly in the world of Linux[7] and open source.  In the meantime, I would argue our best forms of defence are currently:

  • for supply-chain: build close relationships, use known and trusted suppliers.  You may want to restrict as much as possible of your supply chain to “friendly” regimes if you’re worried about State Actor attacks, but this is very hard in the global economy.
  • for post-provisioning: lock down your systems as much as possible – both physically and logically – and use behavioural monitoring to try to detect anomalies in what you expect them to be doing.
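One very simple form of the monitoring suggested above is file-integrity checking, in the spirit of tools like Tripwire or AIDE.  Here is a minimal sketch in Python – very much a toy, and an assumption-laden one: a real deployment would store the baseline off-host and use signed measurements, since an attacker with root access can tamper with files and baseline alike.

```python
import hashlib
import os

def build_baseline(root: str) -> dict:
    """Walk a directory tree and record a SHA-256 digest per file."""
    baseline = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            baseline[os.path.relpath(path, root)] = digest
    return baseline

def detect_changes(root: str, baseline: dict) -> set:
    """Return relative paths whose content differs from the baseline,
    including files added or removed since the baseline was taken."""
    current = build_baseline(root)
    return {p for p in set(baseline) | set(current)
            if baseline.get(p) != current.get(p)}
```

The idea is tamper-evidence rather than tamper-proofing: you won’t stop a change, but you stand a chance of noticing it.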

1 – I’ll try to write something on this other topic in a different article.

2 – depending on interest, I’ll also consider a series of articles to go into more detail on each.

3 – how certain are you, for instance, that your delivery company won’t give your own government’s security services access to the boxes containing your equipment before they deliver them to you?

4 – though see above: what about the firmware?

5 – though you can always compile your own Operating System if you use open source software[6].

6 – oh, you didn’t compile your compiler yourself?  All bets off, then…

7 – yes, “GNU Linux”.

3 laptop power mode options

Don’t suspend your laptop.

I wrote a post a couple of weeks ago called 7 security tips for travelling with your laptop.  The seventh tip was “Don’t suspend”: in other words, when you’re finished doing what you’re doing, either turn your laptop off, or put it into “hibernate” mode.  I thought it might be worth revisiting this piece of advice, partly to explain the difference between these different states, and partly to explain exactly why it’s a bad idea to use the suspend mode.  A very bad idea indeed.  In fact, I’d almost go as far as saying “don’t suspend your laptop”.

So, what are the three power modes usually available to us on a laptop?  Let’s look at them one at a time.  I’m going to assume that you have disk encryption enabled (the second of the seven tips in my earlier article), because you really, really should.

Power down

This is what you think it is: your laptop has powered down, and in order to start it up again, you’ve got to go through an entire boot process.  Any applications that you had running before will need to be restarted[1], and won’t come back in the same state that they were before[2].  If somebody has access to your laptop when you’re not there, there’s no immediate way that they can get at your data, as it’s encrypted[3].  See the conclusion for a couple of provisos, but powering down your laptop when you’re not using it is pretty safe, and rebooting a modern laptop with a decent operating system on it is usually pretty quick these days.

It’s worth noting that for some operating systems – Microsoft Windows, at least – when you tell your laptop to power down, it doesn’t.  It actually performs a hibernate without telling you, in order to speed up the boot process.  There are (I believe – as a proud open source user, I don’t run Windows, so I couldn’t say for sure) ways around this, but most of the time you probably don’t care: see below on why hibernate mode is pretty good for many requirements and use cases.

Hibernate

Confusingly, hibernate is sometimes referred to as “suspend to disk”.  What actually happens when you hibernate your machine is that the contents of RAM (your working memory) are copied and saved to your hard disk.  The machine is then powered down, leaving the state of the machine ready to be reloaded when you reboot.  When you do this, the laptop notices that it was hibernated, looks for saved state, and loads it into RAM[4].  Your session should come back pretty much as it was before – though if you’ve moved to a different wifi network or a session on a website has expired, for instance, your machine may have to do some clever bits and pieces in the background to make things as nice as possible as you resume working.

The key thing about hibernating your laptop is that while you’ve saved state to the hard drive, it’s encrypted[3], so anyone who manages to get at your laptop while you’re not there will have a hard time getting any data from it.  You’ll need to unlock your hard drive before your session can be resumed, and given that your attacker won’t have your password, you’re good to go.

Suspend

The key difference between suspend and the other two power modes we’ve examined above is that when you choose to suspend your laptop, it’s still powered on.  The various components are put into low-power mode, and it should wake up pretty quickly when you need it, but, crucially, all of the applications that you were running beforehand are still running, and are still in RAM.  I mentioned in my previous post that this increases the attack surface significantly, and although there are some protections in place to improve the security of your laptop when it’s in suspend mode, they’re not always successful, as was demonstrated a few days ago by an attack described by The Register.  Even if your laptop is not at risk from this particular attack, my advice is just not to use suspend.

There are two usages of suspend that are difficult to avoid.  The first is when you have your machine set to suspend after a long period of inactivity.  Typically, you’ll set the screen to lock after a certain period of time, and then the system will suspend.  Normally, this is only set for when you’re on battery – in other words, when you’re not sat at your desk with the power plugged in.  My advice would be to change this setting so that your laptop hibernates instead.  Booting up takes a little longer, but if you’re leaving your laptop unused for a while, and it’s not plugged in, then it’s most likely that you’re travelling, and you need to be careful.
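On Linux distributions that use systemd, this change of default behaviour can be sketched as a configuration fragment for systemd-logind.  The values below are illustrative assumptions, not recommendations for every machine; adjust the idle timeout to taste, and note that hibernation requires suitable (ideally encrypted) swap space.

```ini
# /etc/systemd/logind.conf – hibernate instead of suspend
[Login]
# Hibernate when the lid is closed, whether on battery or mains power
HandleLidSwitch=hibernate
HandleLidSwitchExternalPower=hibernate
# After a period of inactivity, hibernate rather than suspend
IdleAction=hibernate
IdleActionSec=30min
```

Changes take effect after restarting the systemd-logind service (or rebooting).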

The second is when you get up and close the lid to move elsewhere.  If you’re moving around within your office or home, then that’s probably OK, but for anything else, try training yourself to hibernate or power down your laptop instead.

Conclusion

There are two important provisos here.

The first I’ve already mentioned: if you don’t have disk encryption turned on, then someone with access to your laptop, even for a fairly short while, is likely to have quite an easy time getting at your data.  It’s worth pointing out that you want full disk encryption turned on, and not just “home directory” encryption.  That’s because if someone has access to your laptop for a while, they may well be able to make changes to the boot-up mechanism in such a way that they can wait until you log in and either collect your password for later use or have the data sent to them over the network.  This is much less easy with full disk encryption.
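If you’re on Linux and want to confirm that your root filesystem really is sitting on an encrypted (dm-crypt) device, the output of `lsblk` can tell you.  The sketch below is deliberately simplified and full of assumptions: it parses the output of `lsblk -r -o NAME,TYPE,MOUNTPOINT` and covers the common LUKS-on-partition layout; a setup such as LVM-on-LUKS would need a walk of the whole device tree.

```python
def root_is_encrypted(lsblk_output: str) -> bool:
    """Given the output of `lsblk -r -o NAME,TYPE,MOUNTPOINT` (Linux),
    report whether the filesystem mounted at / is a dm-crypt ("crypt")
    device - i.e. whether full disk encryption appears to be active.

    Simplification: only detects a crypt device mounted directly at /.
    """
    for line in lsblk_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 3 and fields[1] == "crypt" and fields[2] == "/":
            return True
    return False
```

You would feed it the command’s output, e.g. via `subprocess.run(["lsblk", "-r", "-o", "NAME,TYPE,MOUNTPOINT"], capture_output=True, text=True).stdout`.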

The second is that there are definitely techniques available to mount hardware and firmware attacks on your machine that may be successful even with full disk encryption.  Some of these are easy to spot – don’t turn on your machine if there’s anything in the USB port that you don’t recognise[5] – but others, where hardware may be attached or even soldered to the motherboard, or firmware changed, are very difficult to spot.  We’re getting into some fairly sophisticated attacks here, and if you’re worried about them, then consider my first security tip “Don’t take a laptop”.


1 – some of them automatically, either as system processes (you rarely have to remember to have to turn networking back on, for instance), or as “start-up” applications which most operating systems will allow you to specify as auto-starting when you log in.

2 – this isn’t actually quite true for all applications: it might have been more accurate to say “unless they’re set up this way”.  Some applications (web browsers are typical examples) will notice if they weren’t shut down “nicely”, and will attempt to get back into the state they were in beforehand.

3 – you did enable disk encryption, right?

4 – assuming it’s there, and hasn’t been corrupted in some way, in which case the laptop will just run a normal boot sequence.

5 – and don’t just use random USB sticks from strangers or that you pick up in the car park, but you knew that, right?

7 security tips for travelling with your laptop

Our laptop is a key tool that we tend to keep with us.

I do quite a lot of travel, and although I had a quiet month or two over the summer, I’ve got several trips booked over the next few months.  For many of us, our laptop is a key tool that we tend to keep with us, and most of us will have sensitive material of some type on our laptops, whether it’s internal emails, customer, partner or competitive information, patent information, details of internal processes, strategic documents or keys and tools for accessing our internal network.  I decided to provide a few tips around security and your laptop[1]. Of course, a laptop presents lots of opportunities for attackers – of various types.  Before we go any further, let’s think about some of the types of attacker you might be worrying about.  The extent to which you need to be paranoid will depend somewhat on what attackers you’re most concerned about.

Attackers

Here are some types of attackers that spring to my mind: bear in mind that there may be overlap, and that different individuals may take on different roles in different situations.

  • opportunistic thieves – people who will just steal your hardware.
  • opportunistic viewers – people who will have a good look at information on your screen.
  • opportunistic probers – people who will try to get information from your laptop if they get access to it.
  • customers, partners, competitors – it can be interesting and useful for any of these types to gain information from your laptop.  The steps they are willing to take to get that information may vary based on a variety of factors.
  • hackers/crackers – whether opportunistic or targeted, you need to be aware of where you – and your laptop – are most vulnerable.
  • state actors – these are people with lots and lots of resources, for whom access to your laptop, even for a short amount of time, gives them lots of chances to do short-term and long-term damage to your data and organisation.


7 concrete suggestions

  1. Don’t take a laptop.  Do you really need one with you?  There may be occasions when it’s safer not to travel with a laptop: leave it in the office, at home, in your bag or in your hotel room.  There are risks associated even with your hotel room (see below), but maybe a bluetooth keyboard with your phone, a USB stick or an emailed presentation will be all you need.  Not to suggest that any of those are necessarily safe, but you are at least reducing your attack surface.  Oh, and if you do travel with your laptop, make sure you keep it with you, or at least secured at all times.
  2. Ensure that you have disk encryption enabled.  If you have disk encryption, then if somebody steals your laptop, it’s very difficult for them to get at your data.  If you don’t, it’s really, really easy.  Turn on disk encryption: just do.
  3. Think about your screen. When your screen is on, people can see it.  Many laptop screens have a very broad viewing angle, so people to either side of you can easily see what’s on it.  The availability of high resolution cameras on mobile phones means that people don’t need long to capture what’s on your screen, so this is a key issue to consider.  What are your options?  The most common is to use a privacy screen, which fits over your laptop screen and typically reduces the angle from which it can be viewed.  These don’t stop people viewing what’s on it, but they do mean that viewers need to be almost directly behind you.  This may sound like a good thing, but in fact that’s the place you’re least likely to notice a surreptitious viewer, so employ caution.  I worry that these screens can give you a false sense of security, so I don’t use one.  Instead, I make a conscious decision never to have anything sensitive on my screen in situations where non-trusted people might see it.  If I really need to do some work, I’ll find a private place where nobody can look at my screen – and even try to be aware of the possibility of reflections in windows.
  4. Lock your screen.  Even if you’re stepping away for just a few seconds, always, always lock your screen – even if it’s just colleagues around.  Colleagues sometimes find it “funny” to mess with your laptop, or send emails from your account.  What’s more, there can be a certain kudos to having messed with “the security guy/gal’s” laptop.  Locking the screen is always a good habit to get into, and rather than thinking “oh, I’ll only be 20 seconds”, think how often you get called over to chat to someone, or decide that you want a cup of tea/coffee, or just forget what you were doing, and wander off.
  5. Put your laptop into airplane mode.  There are a multitude of attacks which can piggy-back on the wifi and bluetooth capabilities of your laptop (and your phone).  If you don’t need them, then turn them off.  In fact, turn off bluetooth anyway: there’s rarely a reason to leave it on.  There may be times to turn on wifi, but be careful about the networks you connect to: there are lots of attacks which pretend to be well-known wifi APs such as “Starbucks” which will let your laptop connect and then attempt to do Bad Things to it.  One alternative – if you have sufficient data on your mobile phone plan and you trust the provider you’re using – is to set your mobile (cell) phone up as a mobile access point and to connect to that instead.
  6. Don’t forget to take upgrades.  Just because you’re on the road, don’t forget to take software upgrades.  Obviously, you can’t do that with wifi off – unless you have Ethernet access – but when you are out on the road, you’re often more vulnerable than when you’re sitting behind the corporate firewall, so keeping your software patched and updated is a sensible precaution.
  7. Don’t suspend.  Yes, the suspend mode[2] makes it easy to get back to what you were doing, and doesn’t take much battery, but leaving your laptop in suspend increases the attack surface available to somebody who steals your laptop, or just has access to it for a short while (the classic “evil maid” attack of someone who has access to your hotel room, for instance).  If you turn off your laptop, and you’ve turned on disk encryption (see above), then you’re in much better shape.
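The rogue access point problem in tip 5 is worth a small illustration.  One reason untrusted wifi is survivable at all is TLS certificate verification: a fake “Starbucks” AP can intercept your traffic, but it cannot impersonate a server you connect to unless it holds a CA-signed certificate for that name.  The sketch below uses Python’s standard library to show what “verification enforced” means; most well-written applications already do this, so the practical advice is simply never to click through certificate warnings while on an untrusted network.

```python
import socket
import ssl

def connect_with_verified_tls(host: str, port: int = 443) -> str:
    """Open a TLS connection that refuses to proceed unless the server
    presents a certificate that is valid for `host` and chains to a
    trusted CA.  Returns the negotiated protocol version on success."""
    # create_default_context() enables CERT_REQUIRED and hostname checking
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()
```

A rogue AP performing a man-in-the-middle attack on such a connection causes the handshake to fail with a certificate error, rather than silently exposing your traffic.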

Are there more things you can do?  Yes, of course.  But all of the above are simple ways to reduce the chance that you or your laptop will fall victim to the types of attacker described above.


1 – After a recent blog post, a colleague emailed me with a criticism.  It was well-intentioned, and I took it as such.  The comment he made was that although he enjoys my articles, he would prefer it if there were more suggestions on how to act, or things to do.  I had a think about it, and decided that this was entirely apt, so this week I’m going to provide some thoughts and some suggestions.  I can’t promise to be consistent in meeting this aim, but this is at least a start.

2 – edited: I did have “hibernate” mode in here as well, but a colleague pointed out that hibernate should force disk encryption, so should be safer than suspend.  I never use either, as booting from cold is usually so quick these days.