“All systems nominal” – borrowing a useful phrase

Wouldn’t it be lovely if everything were functioning exactly as it should?

Wouldn’t it be lovely if everything were functioning exactly as it should?  All the time.  Alas, that is not to be our lot: we in IT security know that there’s always something that needs attention, something that’s not quite right, something that’s limping along and needs to be fixed.

The question I want to address in this article is whether that’s actually OK.  I’ve written before about managed degradation (the idea that planning for when things go wrong is a useful and important practice[1]) in Service degradation: actually a good thing.  The subject of this article, however, is living in a world where everything is running almost normally – or, more specifically, “nominally”.  Most definitions of the word “nominal” focus on its meaning of “theoretical” or “token” value.  A quick search of online dictionaries provided two definitions which were more relevant to the usage I’m going to be looking at today:

  • informal (chiefly in the context of space travel) functioning normally or acceptably[2].
  • being according to plan: satisfactory[3].

I’d like to offer a slightly different one:

  • within acceptable parameters for normal system operation.

I’ve seen “tolerances” used instead of “parameters”, and that works, too, but it’s not a word that I think we use much within IT security, so I lean towards “parameters”[4].

Utility

Why do I think that this is a useful concept?  Because, as I noted above, we all know that it’s a rare day when everything works perfectly.  But we find ways to muddle through, or we find enough bandwidth to make the backups happen without significant impact on database performance, or we only lose 1% of the credit card details collected that day[5].  All of these (except the last one) are fine, and if we are wise, we might start actually defining what counts as acceptable operation – nominal operation – for all of our users.  This is what we should be striving for, and exactly how far off perfect operation we are will give us clues as to how much effort we need to expend to sort things out, and how quickly we need to perform mitigations.

In fact, many organisations which provide services do this already: that’s where SLAs (Service Level Agreements) come from.  I remember, at school, doing some maths[6] around how food companies ensure that they are in little danger of getting into trouble for under-filling containers: they look at standard deviations to understand what the likely amount in each container will be.  This is similar, and the likelihood of hardware failures, based on similar calculations, is often factored into uptime planning.
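To make that concrete, here’s a minimal sketch in Python of the kind of calculation involved – all the figures are invented for illustration:

```python
from statistics import NormalDist

# A filling machine dispenses a mean of 510 ml into "500 ml" containers,
# with a standard deviation of 5 ml.
fill = NormalDist(mu=510, sigma=5)
print(f"Chance a given container is under-filled: {fill.cdf(500):.4%}")

# The same logic applies to an SLA: if monthly downtime averages
# 20 minutes with a standard deviation of 8, how often do we breach a
# 43-minute (roughly 99.9% uptime) allowance?
downtime = NormalDist(mu=20, sigma=8)
print(f"Chance of breaching the SLA in a month: {1 - downtime.cdf(43):.4%}")
```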

So far, much of the above seems to be about resilience: where does security come in?  Well, your security components, features and functionality are subject to the same resiliency issues as any other part of your system.  The problem is that if a security piece fails, it may well be a single point of failure, which means that although the rest of the system is operating at 99% performance, your security just hit zero.

These are the reasons that we perform failure analysis, and why we consider defence in depth.  But when we’re looking at defence in depth, do we remember to perform second-order analysis?  For instance, if my back-up LDAP server for user authentication is running on older hardware, what are the chances that it will fail when put under load?
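As a toy illustration of that second-order question (the probabilities are made up):

```python
# First-order: how likely is the primary authentication path to fail?
p_primary_fails = 0.02          # chance per year the primary LDAP server fails

# Second-order: given that failure, how likely is the ageing backup to
# fail as well, once the full authentication load lands on it?
p_backup_fails_under_load = 0.30

# Chance of a complete authentication outage:
print(f"{p_primary_fails * p_backup_fails_under_load:.1%} chance per year "
      "of losing authentication entirely")
```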

Broader usage

It should come as no surprise to regular readers that I want to expand the scope of the discussion beyond just hardware and software components in systems to the people who are involved, and beyond that to processes in general.

Train companies are all too aware of the impact on their services if a bad flu epidemic hits their drivers – or if the weather is good, so their staff prefer to enjoy the sunshine with their families, rather than take voluntary overtime[7].  You may have considered the impact of a staff member or two being sick, but have you gone as far as modelling what would happen if four members of your team were sick, and, just as important, how likely that is?  Equally vital to consider may be issues of team dynamics, or even terrorist attacks or union disputes.  What about external factors, like staff not being able to get into work because of train cancellations?  What are the chances of broadband failures occurring at the same time, scuppering your fall-back plan of allowing key staff to work from home?
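Here’s a rough, back-of-the-envelope sketch of that “four members of your team” question – note the (unrealistic) assumption that absences are independent, which a flu epidemic would break:

```python
from math import comb

def p_at_least(k, n, p):
    """Probability that at least k of n people are absent, if each is
    independently absent with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A team of 8, each with a 5% chance of being off sick on a given day:
print(f"{p_at_least(4, 8, 0.05):.4%} chance that 4 or more are out at once")
```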

We can go deeper and deeper into this, and at some point it becomes pointless to go any further.  But I believe that it’s useful to consider how far to go with it, and also to spend some time considering exactly what you consider “nominal” operation, particularly for your security systems.

 


1 – I nearly wrote “art”.

2 – Oxford Dictionaries: https://en.oxforddictionaries.com/definition/nominal.

3 – Merriam-Webster: https://www.merriam-webster.com/dictionary/nominal.

4 – this article was inspired by the Public Service Broadcasting song Go!.  Listen to it: it rocks.  And they’re great live, too.

5 – Note: this is a joke, and not a very funny one.  You’ve probably just committed a GDPR breach, and you need to tell someone about it.  Now.

6 – in the UK, and most countries speaking versions of Commonwealth English, we do more than one math.  Because “mathematics”.

7 – this happened: I saw it on TV[8].

8 – so it must be true.

6 types of attack: learning from Supermicro, State Actors and silicon

… it could have happened, and it could be happening now.

Last week, Bloomberg published a story detailing how Chinese state actors had allegedly forced employees of Supermicro (or companies subcontracting to them) to insert a small chip – the silicon in the title – into motherboards destined for Apple and Amazon.  The article talked about how an investigation into these boards had uncovered this chip, and about the steps that Apple, Amazon and others had taken.  The story was vigorously denied by Supermicro, Apple and Amazon, but that didn’t stop Supermicro’s stock price from tumbling by over 50%.

I have heard strong views expressed by people with expertise in the topic on both sides of the argument: that it probably didn’t happen, and that it probably did.  One side argues that the denials by Apple and Amazon, for instance, might have been impacted by legal “gagging orders” from the US government.  An opposing argument suggests that the Bloomberg reporters might have confused this story with a similar one that occurred a few months ago.  Whether this particular story is correct in every detail, or a fabrication – intentional or unintentional – is not what I’m interested in at this point: the clear message is that it could have happened, and it could be happening now.

I’ve written before about State Actors, and whether you should worry about them.  There’s another question which this story brings up, which is possibly even more germane: what can you do about it if you are worried about them?  This breaks down further into two questions:

  • how can I tell if my systems have been compromised?
  • what can I do if I discover that they have?

The first of these is easily enough to keep us occupied for now[1], so let’s spend some time on that.  Let’s first define six types of compromise, think about how they might be carried out, and then consider the questions above for each:

  • supply-chain hardware compromise;
  • supply-chain firmware compromise;
  • supply-chain software compromise;
  • post-provisioning hardware compromise;
  • post-provisioning firmware compromise;
  • post-provisioning software compromise.

There isn’t space in this article to go into detail on each of these types of attack, so instead I provide an overview of each[2].

Terms

  • Supply-chain – all of the steps up to when you start actually running a system.  From manufacture through installation, including vendors of all hardware components and all software, OEMs, integrators and even shipping firms that have physical access to any pieces of the system.  For all supply-chain compromises, the key question is the extent to which you, the owner of a system, can trust every single member of the supply chain[3].
  • Post-provisioning – any point after which you have installed the hardware, put all of the software you want on it, and started running it: the time during which you might consider the system “under your control”.
  • Hardware – the physical components of a system.
  • Software – software that you have installed on the system and over which you have some control: typically the Operating System and application software.  The amount of control depends on factors such as whether you use proprietary or open source software, and how much of it is produced, compiled or checked by you.
  • Firmware – special software that controls how the hardware interacts with the standard software on the machine, the hardware that comprises the system, and external systems.  It is typically provided by hardware vendors and its operation is opaque to owners and operators of the system.

Compromise types

See the table at the bottom of this article for a short summary of the points below.

  1. Supply-chain hardware – there are multiple opportunities in the supply chain to compromise hardware, but the harder such compromises are to detect, the more difficult they are to perform.  The attack described in the Bloomberg story would be extremely difficult to detect, but the addition of a keyboard logger to a keyboard just before delivery (for instance) would be correspondingly simpler.
  2. Supply-chain firmware – of all the options, this has the best return on investment for an attacker.  Assuming good access to an appropriate part of the supply chain, inserting firmware that (for instance) impacts network performance or leaks data over a wifi connection is relatively simple.  The difficulty in detection comes from the fact that although it is possible for the owner of the system to check that the firmware is what they think it is (see the sketch after this list), what that measurement confirms is only that the vendor has told them what they have supplied.  So the “medium” rating relates only to firmware that was implanted by members of the supply chain who did not source the original firmware: otherwise, it’s “high”.
  3. Supply-chain software – by this, I mean software that comes installed on a system when it is delivered.  Some organisations will insist on “clean” systems being delivered to them[4], and will install everything from the Operating System upwards themselves.  This means that they basically now have to trust their Operating System vendor[5], which is maybe better than trusting other members of the supply chain to have installed the software correctly.  I’d say that it’s not too simple to mess with this in the supply chain, if only because checking isn’t too hard for the legitimate members of the chain.
  4. Post-provisioning hardware – this is where somebody with physical access to your hardware – after it’s been set up and is running – inserts or attaches hardware to it.  I nearly gave this a “high” difficulty rating below, on the assumption that we’re talking about servers, rather than laptops or desktop systems, and that your servers are well-protected; but the ease with which attackers have shown that they can typically get physical access to systems, using techniques like social engineering, means that I’ve downgraded this to “medium”.  Detection, on the other hand, should be fairly simple given sufficient resources (hence the “medium” rating), and although I don’t believe anybody who says that a system is “tamper-proof”, tamper-evidence is a much simpler property to achieve.
  5. Post-provisioning firmware – when you patch your Operating System, it will often also patch firmware on the rest of your system.  This is generally a good thing to do, as patches may provide security, resilience or performance improvements, but you’re stuck with the same problem as with supply-chain firmware: you need to trust the vendor – in fact, you need to trust both your Operating System vendor and their relationship with the firmware vendor.
  6. Post-provisioning software – is it easy to compromise systems via their Operating System and/or application software?  Yes: this we know.  Luckily – though depending on the sophistication of the attack – there are generally good tools and mechanisms for detecting such compromises, including behavioural monitoring.
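As an aside on point 2, here’s a minimal sketch in Python of what “checking that the firmware is what you think it is” typically amounts to – the file path and hash are placeholders, and real verification usually relies on vendor signatures rather than a bare hash:

```python
import hashlib

def firmware_matches(image_path: str, vendor_published_sha256: str) -> bool:
    """Compare the SHA-256 of a firmware image against the hash
    published by the vendor for that release."""
    h = hashlib.sha256()
    with open(image_path, "rb") as f:
        # Read in chunks so large images don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == vendor_published_sha256.lower()

# Hypothetical usage:
# firmware_matches("/tmp/bmc-firmware.bin", "9f86d08...")
```

Note the limitation described above: a match only proves that the image is the one the vendor says they shipped.  If the compromise was introduced by whoever sourced the original firmware, the published hash and the implant will agree with each other perfectly.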

Table

 

Compromise type              Attacker difficulty   Detection difficulty
Supply-chain hardware        High                  High
Supply-chain firmware        Low                   Medium
Supply-chain software        Medium                Medium
Post-provisioning hardware   Medium                Medium
Post-provisioning firmware   Medium                Medium
Post-provisioning software   Low                   Low

Conclusion

What are your chances of spotting a compromise on your system?  I would argue that they are generally pretty much in line with the difficulty of performing the attack in the first place: with the glaring exception of supply-chain firmware.  We’ve seen attacks of this type, and they’re very difficult to detect.  The good news is that there is some good work going on to help detection of these types of attacks, particularly in the world of Linux[7] and open source.  In the meantime, I would argue our best forms of defence are currently:

  • for supply-chain: build close relationships, and use known and trusted suppliers.  You may want to restrict as much of your supply chain as possible to “friendly” regimes if you’re worried about State Actor attacks, but this is very hard in the global economy.
  • for post-provisioning: lock down your systems as much as possible – both physically and logically – and use behavioural monitoring to try to detect deviations from what you expect them to be doing (a sketch of the idea follows below).
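Here’s a deliberately simplistic sketch of the idea behind behavioural monitoring: establish a baseline of normal behaviour, then flag observations that deviate too far from it.  The metric and figures are invented, and real products are far more sophisticated:

```python
from statistics import mean, stdev

def is_anomalous(baseline, observed, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(observed - mu) > threshold * sigma

# Outbound traffic per minute (KB) observed while we trusted the system:
baseline = [220.0, 240.0, 210.0, 260.0, 230.0, 250.0, 225.0]
print(is_anomalous(baseline, 245.0))   # False: within normal variation
print(is_anomalous(baseline, 900.0))   # True: worth investigating
```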

1 – I’ll try to write something on this other topic in a different article.

2 – depending on interest, I’ll also consider a series of articles to go into more detail on each.

3 – how certain are you, for instance, that your delivery company won’t give your own government’s security services access to the boxes containing your equipment before they deliver them to you?

4 – though see above: what about the firmware?

5 – though you can always compile your own Operating System if you use open source software[6].

6 – oh, you didn’t compile your compiler yourself?  All bets off, then…

7 – yes, “GNU Linux”.

Change, refuse, report

I’m busy over the next couple of days, and wasn’t going to post, but the issue is important, so I’m taking a few minutes to do so.

There are some nasty extortion/blackmail emails out there at the moment.  People are being emailed, with a subject line that includes a real password, and told to send fairly large amounts of bitcoin in order to stop incriminating or embarrassing material being spread to friends, family and the public.  Here’s what you should do.

Change

Change your passwords.  Particularly if the one quoted in the subject line or email body is current.  Use a password manager, and follow the advice here: The gift that keeps on giving: passwords.

Refuse

Refuse to pay.  Don’t even contact the sender.  Even if you’re worried that the material may exist or the threat is real.

Report

Report it to your local law enforcement agency: particularly if you’re concerned that this may be a real threat to you.  There are steps that law enforcement can take, and they can help you.

 

That’s it: be safe, and let’s shut down these criminals by not playing their game.

Knowing me, knowing you: on Russian spies and identity

Who are you, and who tells me so?

Who are you, and who tells me so?  These are questions which are really important for almost any IT-related system in use today.  I’ve previously discussed the difference between identification and authentication (and three other oft-confused terms) in Explained: five misused security words, and what I want to look at in this post is the shady hinterland between identification and authentication.

There has been a lot in the news recently about the poisoning in the UK of two Russian nationals and two British nationals, leading to the tragic death of Dawn Sturgess.  I’m not going to talk about that, or even about the alleged perpetrators, but about the problem of identity – their identity – and how that relates to IT.  The two men who travelled to Salisbury, and were named by British police as the perpetrators, travelled under Russian passports.  These documents provided their identities, as far as the British authorities were aware, and led to UK Border Control allowing them into the country.

When we set up a new user in an IT system or allow them physical access to a building, for instance, we often ask for “Government-issued ID” as the basis for authenticating the identity that they have presented, in preparation for deciding whether to authorise them to perform whatever action they have requested.  There are two issues here – one obvious, and one less so.  The first, obvious one, is that it’s not always easy to tell whether a document has actually been issued by the authority by which it appears to have been issued – document forgers have been making a prosperous living for hundreds, if not thousands, of years.  The problem, of course, is that the more tell-tale signs of authenticity you reveal to those whose job it is to check a document, the more clues you give to would-be forgers for how to improve the quality of the false versions that they create.

The second, and less obvious, problem is that just because a document has been issued by a government authority doesn’t mean that it is real.  Well, actually, it does, and there’s the issue.  Its issuance by the approved authority makes it real – that is to say “authentic” – but it doesn’t mean that it is correct.  Correctness is a different property to authenticity.  Any authority may decide to issue identification documents that are authentic, but do not truly represent the identity of the person carrying them.  When we realise that a claim of identity is backed up by an authority which is issuing documents that we do not believe to be correct, we should change our trust relationship with that authority.  For most entities, IDs which have been authentically issued by a government authority are quite sufficient, but it is quite plausible, for instance, that the UK Border Force (and other equivalent entities around the world) may choose to view passports issued by certain government authorities as suspect in terms of their correctness.

What impact does this have on the wider IT security community?  Well, there are times when we accept government-issued ID but might want to check with the relevant home nation authorities as to whether we should trust it[1].  More broadly than that, we must remember that every time we authenticate a user, we are making a decision to trust the authority that represented that user’s identity to us.  The level of trust we place in that authority may safely be reduced as we grow to know that user, but it may not – either because our interactions are infrequent, or maybe because we need to consider that they are playing “the long game”, and are acting as so-called “sleepers”.

What does this continuous trust mean?  What it means is that if we are relying on an external supplier to provide contractors for us, we need to keep remembering that this is a trust relationship, and one which can change.  If one of those contractors turns out to have faked educational qualifications, then we need to doubt the authenticity of the educational qualifications of all of the other contractors – and possibly other aspects of the identity which the external supplier has presented to us.  This is because we have placed transitive trust in the external supplier, which we must now re-evaluate.  What other examples might there be?  The problem is that the particular aspects of identity that we care about are so varied and differ between different roles that we perform.  Sometimes, we might not care about education qualifications, but credit score, or criminal background, or blood type.  In the film Gattaca[2], identity is tied to physical and mental ability to perform a particular task.
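One crude way to model that transitive trust in code (the names and structure here are entirely invented, purely for illustration): every claim about an identity carries the authority that issued it, and withdrawing trust from an authority invalidates every claim that rests on it, not just the one we caught:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    subject: str     # who the claim is about
    attribute: str   # e.g. "degree", "passport", "background-check"
    issuer: str      # the authority we trusted for this claim

trusted_issuers = {"AcmeContractors", "SomeUniversity"}

claims = [
    Claim("alice", "degree", "AcmeContractors"),
    Claim("bob", "degree", "AcmeContractors"),
    Claim("carol", "degree", "SomeUniversity"),
]

def valid_claims(claims, trusted):
    return [c for c in claims if c.issuer in trusted]

# One of AcmeContractors' claims turns out to be faked, so we withdraw
# our trust in the issuer - and every claim it vouched for falls away:
trusted_issuers.discard("AcmeContractors")
print(valid_claims(claims, trusted_issuers))  # only Carol's degree survives
```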

There are various techniques available to allow us to tie a particular individual to a set of pieces of information: DNA, iris scans and fingerprints are pretty good at telling us that the person in front of us now is the person who was in front of us a year ago.  But tying that person to other information relies on trust relationships with external entities, and apart from a typically small set of inferences that we can draw from our direct experiences of this person, it’s difficult to know exactly what is truly correct unless we can trust those external entities.


1 – That assumes, of course, that we trust our home nation authorities…

2 – I’m not going to put a spoiler in here, but it’s a great film, and really makes you think about identity: go and watch it!

3 laptop power mode options

Don’t suspend your laptop.

I wrote a post a couple of weeks ago called 7 security tips for travelling with your laptop.  The seventh tip was “Don’t suspend”: in other words, when you’re finished doing what you’re doing, either turn your laptop off, or put it into “hibernate” mode.  I thought it might be worth revisiting this piece of advice, partly to explain the difference between these different states, and partly to explain exactly why it’s a bad idea to use the suspend mode.  A very bad idea indeed.  In fact, I’d almost go as far as saying “don’t suspend your laptop”.

So, what are the three power modes usually available to us on a laptop?  Let’s look at them one at a time.  I’m going to assume that you have disk encryption enabled (the second of the seven tips in my earlier article), because you really, really should.

Power down

This is what you think it is: your laptop has powered down, and in order to start it up again, you’ve got to go through an entire boot process.  Any applications that you had running before will need to be restarted[1], and won’t come back in the same state that they were in before[2].  If somebody has access to your laptop when you’re not there, there’s no immediate way that they can get at your data, as it’s encrypted[3].  See the conclusion for a couple of provisos, but powering down your laptop when you’re not using it is pretty safe, and a modern laptop with a decent operating system will usually reboot pretty quickly these days.

It’s worth noting that for some operating systems – Microsoft Windows, at least – when you tell your laptop to power down, it doesn’t.  It actually performs a hibernate without telling you, in order to speed up the boot process.  There are (I believe – as a proud open source user, I don’t run Windows, so I couldn’t say for sure) ways around this, but most of the time you probably don’t care: see below on why hibernate mode is pretty good for many requirements and use cases.

Hibernate

Confusingly, hibernate is sometimes referred to as “suspend to disk”.  What actually happens when you hibernate your machine is that the contents of RAM (your working memory) are copied and saved to your hard disk.  The machine is then powered down, leaving the state of the machine ready to be reloaded when you reboot.  When you do this, the laptop notices that it was hibernated, looks for saved state, and loads it into RAM[4].  Your session should come back pretty much as it was before – though if you’ve moved to a different wifi network or a session on a website has expired, for instance, your machine may have to do some clever bits and pieces in the background to make things as nice as possible as you resume working.

The key thing about hibernating your laptop is that while you’ve saved state to the hard drive, it’s encrypted[3], so anyone who manages to get at your laptop while you’re not there will have a hard time getting any data from it.  You’ll need to unlock your hard drive before your session can be resumed, and given that your attacker won’t have your password, you’re good to go.
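If you’re on Linux and curious whether your machine can hibernate at all – and whether there’s enough swap to hold the contents of RAM – something like the following sketch will tell you (the sysfs and procfs paths are standard on Linux, but your distribution’s own tooling may offer a friendlier view):

```python
def read_first_line(path):
    with open(path) as f:
        return f.readline().strip()

# /sys/power/state lists the supported sleep states: "disk" means
# hibernate (suspend to disk), "mem" means suspend to RAM.
states = read_first_line("/sys/power/state").split()
print("Hibernate supported:", "disk" in states)
print("Suspend supported:  ", "mem" in states)

# Hibernation writes RAM contents to swap, so swap should be roughly
# as large as RAM (compression helps, so this check is conservative).
meminfo = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, value = line.split(":", 1)
        meminfo[key] = int(value.split()[0])  # values are in kB

print("Swap large enough for RAM:",
      meminfo["SwapTotal"] >= meminfo["MemTotal"])
```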

Suspend

The key difference between suspend and the other two power modes we’ve examined above is that when you choose to suspend your laptop, it’s still powered on.  The various components are put into low-power mode, and it should wake up pretty quickly when you need it, but, crucially, all of the applications that you were running beforehand are still running, and are still in RAM.  I mentioned in my previous post that this increases the attack surface significantly, and although there are some protections in place to improve the security of your laptop when it’s in suspend mode, they’re not always successful, as was demonstrated a few days ago by an attack described by The Register.  Even if your laptop is not at risk from this particular attack, my advice is just not to use suspend.

There are two usages of suspend that are difficult to manage.  The first is when you have your machine set to suspend after a long period of inactivity.  Typically, you’ll set the screen to lock after a certain period of time, and then the system will suspend.  Normally, this is only set for when you’re on battery – in other words, when you’re not sat at your desk with the power plugged in.  My advice would be to change this setting so that your laptop goes to hibernate instead.  It takes a bit more time to boot up, but if you’re leaving your laptop unused for a while, and it’s not plugged in, then it’s most likely that you’re travelling, and you need to be careful.

The second is when you get up and close the lid to move elsewhere.  If you’re moving around within your office or home, then that’s probably OK, but for anything else, try training yourself to hibernate or power down your laptop instead.

Conclusion

There are two important provisos here.

The first I’ve already mentioned: if you don’t have disk encryption turned on, then someone with access to your laptop, even for a fairly short while, is likely to have quite an easy time getting at your data.  It’s worth pointing out that you want full disk encryption turned on, and not just “home directory” encryption.  That’s because if someone has access to your laptop for a while, they may well be able to make changes to the boot-up mechanism in such a way that they can wait until you log in and either collect your password for later use or have the data sent to them over the network.  This is much less easy with full disk encryption.

The second is that there are hardware and firmware attacks on your machine that may be successful even with full disk encryption.  Some of these are easy to spot – don’t turn on your machine if there’s anything in the USB port that you don’t recognise[5] – but others, where hardware may be attached or even soldered to the motherboard, or firmware changed, are very difficult to spot.  We’re getting into some fairly sophisticated attacks here, and if you’re worried about them, then consider my first security tip: “Don’t take a laptop”.


1 – some of them automatically, either as system processes (you rarely have to remember to turn networking back on, for instance), or as “start-up” applications which most operating systems will allow you to specify as auto-starting when you log in.

2 – this isn’t actually quite true for all applications: it might have been more accurate to say “unless they’re set up this way”.  Some applications (web browsers are typical examples) will notice if they weren’t shut down “nicely”, and will attempt to get back into the state they were in beforehand.

3 – you did enable disk encryption, right?

4 – assuming it’s there, and hasn’t been corrupted in some way, in which case the laptop will just run a normal boot sequence.

5 – and don’t just use random USB sticks from strangers or that you pick up in the carpark, but you knew that, right?

On conversation and the benefits of boasting

On Monday and Tuesday this week I’m attending DevSecCon in Boston – a city which is much more pleasant when it’s not raining or snowing, which it often seems to be doing while I’m here.  There are a bunch of interesting talks[1] and workshops, and I was asked, at the last minute, to facilitate an “Open Space Discussion” at the end of the first day (as two people hadn’t arrived as expected).  Facilitating discussions is about not talking all the time, but encouraging other people to talk[2]: my approach to this is to tell a story, and then encourage them to share stories.

People enjoy listening to stories, and people enjoy telling stories, and there is a type of story that is particularly useful and important in the world of work: “war-stories”.  Within the IT industry, at least, this refers to stories about experiences – usually bad experiences – from our day-to-day working lives.  They are often used to illustrate a point or lend experiential weight to an opinion being put forward. But they are also great learning experiences.

What I learned yesterday – or re-learned – is the immense value of conversation with our peers in a neutral setting, with no formal bounds or difference in “rank”.  We had at least one participant who was only two years out of college, participants with 25-30 years of experience, a CISO of a major healthcare provider, a CEO, DevOps engineers, customer-facing people, security people, non-security people, people with Humanities[4] degrees, people with Computer Science degrees.  We were about twelve people, and everybody contributed, to greater or lesser degrees.  I hope that we managed to maintain a conversation where age and numbers of years in the industry were unimportant, but the experiences shared were.

And I learned about other people’s opinions, their viewpoints, their experiences, their tips for what works – and doesn’t work – and made, I hope, some new friends.  Certainly some new peers.  What we talked about isn’t vitally important to this article[5]: the important thing was the conversation, and the stories people told that brought their shared wisdom to the table.  I felt, by the end of the session, that we had added something to the commonwealth of knowledge within the industry.

I was looking for a way to close the session as we were moving to the end, and hit upon something which seemed to work: I encouraged everybody to spend 30 seconds or so telling the group about an incident in their career that they are proud of.  We got some great stories, and not only did we learn from them, but I think it’s really important that we get the chance to express our pride in the things that we’ve done.  We rarely get the chance to boast, or to let people outside our general circle know why we think we should be valued.  There’s nothing wrong with being proud of the things we’ve done, but we’re often – usually – discouraged from doing so.  It was great to have people share their various experiences of personal expertise, and to think about how they would use them to further their career.  I didn’t force everybody to speak – and was thanked by one of the silent participants later – and it’s important to realise that not everybody will be happy doing so.  But I think that the rapport that we’d built as a group meant that more people were happy to contribute something than would have considered it at the beginning of the session.  I left with a respect for all of the participants, and a realisation of the importance of shared experience.

 


1 – I gave a talk based on my blog article Why I love technical debt.  I found it interesting…

2 – based on this definition, it may surprise regular readers – and people who know me IRL[3] – that I’d even consider participating, let alone facilitating.

3 – does anybody use this term anymore?

4 – Liberal Arts/Social Sciences.

5 – but included:

  • the impact of different open source licences
  • how legal teams engage with open source questions
  • how to encourage more conversation between technical and legal folks
  • the importance of systems engineering
  • how to talk to customers and vendors
  • how to build teams through social participation[6]
  • the NIST 800 series and other models to consider security
  • risk: how to talk about it, measure it, discuss it with other functions within the organisation.

6 – the word “beer” came up.  From somebody else, on this occasion.

 

7 security tips for travelling with your laptop

Our laptop is a key tool that we tend to keep with us.

I do quite a lot of travel, and although I had a quiet month or two over the summer, I’ve got several trips booked over the next few months.  For many of us, our laptop is a key tool that we tend to keep with us, and most of us will have sensitive material of some type on our laptops, whether it’s internal emails, customer, partner or competitive information, patent information, details of internal processes, strategic documents or keys and tools for accessing our internal network.  I decided to provide a few tips around security and your laptop[1]. Of course, a laptop presents lots of opportunities for attackers – of various types.  Before we go any further, let’s think about some of the types of attacker you might be worrying about.  The extent to which you need to be paranoid will depend somewhat on what attackers you’re most concerned about.

Attackers

Here are some types of attackers that spring to my mind: bear in mind that there may be overlap, and that different individuals may take on different roles in different situations.

  • opportunistic thieves – people who will just steal your hardware.
  • opportunistic viewers – people who will have a good look at information on your screen.
  • opportunistic probers – people who will try to get information from your laptop if they get access to it.
  • customers, partners, competitors – it can be interesting and useful for any of these types to gain information from your laptop.  The steps they are willing to take to get that information may vary based on a variety of factors.
  • hackers/crackers – whether opportunistic or targeted, you need to be aware of where you – and your laptop – are most vulnerable.
  • state actors – these are people with lots and lots of resources, for whom access to your laptop, even for a short amount of time, provides lots of chances to do short-term and long-term damage to your data and organisation.

 

7 concrete suggestions

  1. Don’t take a laptop.  Do you really need one with you?  There may be occasions when it’s safer not to travel with a laptop: leave it in the office, at home, in your bag or in your hotel room.  There are risks associated even with your hotel room (see below), but maybe a bluetooth keyboard with your phone, a USB stick or an emailed presentation will be all you need.  Not to suggest that any of those are necessarily safe, but you are at least reducing your attack surface.  Oh, and if you do travel with your laptop, make sure you keep it with you, or at least secured at all times.
  2. Ensure that you have disk encryption enabled.  If you have disk encryption, then if somebody steals your laptop, it’s very difficult for them to get at your data.  If you don’t, it’s really, really easy.  Turn on disk encryption: just do it.
  3. Think about your screen.  When your screen is on, people can see it.  Many laptop screens have a very broad viewing angle, so people to either side of you can easily see what’s on it.  The availability of high resolution cameras on mobile phones means that people don’t need long to capture what’s on your screen, so this is a key issue to consider.  What are your options?  The most common is to use a privacy screen, which fits over your laptop screen, typically reducing the angle from which it can be viewed.  These don’t stop people viewing what’s on your screen, but they do mean that viewers need to be almost directly behind you.  This may sound like a good thing, but in fact that’s the place you’re least likely to notice a surreptitious viewer, so employ caution.  I worry that these screens can give you a false sense of security, so I don’t use one.  Instead, I make a conscious decision never to have anything sensitive on my screen in situations where non-trusted people might see it.  If I really need to do some work, I’ll find a private place where nobody can look at my screen – and even try to be aware of the possibility of reflections in windows.
  4. Lock your screen.  Even if you’re stepping away for just a few seconds, always, always lock your screen.  Even if it’s just colleagues around.  Colleagues sometimes find it “funny” to mess with your laptop, or send emails from your account.  What’s more, there can be a certain kudos to having messed with “the security guy/gal’s” laptop.  Locking the screen is always a good habit to get into, and rather than thinking “oh, I’ll only be 20 seconds”, think how often you get called over to chat to someone, or decide that you want a cup of tea/coffee, or just forget what you were doing, and wander off.
  5. Put your laptop into airplane mode.  There are a multitude of attacks which can piggy-back on the wifi and bluetooth capabilities of your laptop (and your phone).  If you don’t need them, then turn them off.  In fact, turn off bluetooth anyway: there’s rarely a reason to leave it on (there’s a sketch after this list for checking that your radios really are off).  There may be times to turn on wifi, but be careful about the networks you connect to: there are lots of attacks which pretend to be well-known wifi APs such as “Starbucks” which will let your laptop connect and then attempt to do Bad Things to it.  One alternative – if you have sufficient data on your mobile phone plan and you trust the provider you’re using – is to set your mobile (cell) phone up as a mobile access point and to connect to that instead.
  6. Don’t forget to take upgrades.  Just because you’re on the road, don’t forget to take software upgrades.  Obviously, you can’t do that with wifi off – unless you have Ethernet access – but when you are out on the road, you’re often more vulnerable than when you’re sitting behind the corporate firewall, so keeping your software patched and updated is a sensible precaution.
  7. Don’t suspend.  Yes, the suspend mode[2] makes it easy to get back to what you were doing, and doesn’t take much battery, but leaving your laptop in suspend increases the attack surface available to somebody who steals your laptop, or just has access to it for a short while (the classic “evil maid” attack of someone who has access to your hotel room, for instance).  If you turn off your laptop, and you’ve turned on disk encryption (see above), then you’re in much better shape.
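On the subject of tip 5: if you run Linux, you can quickly check whether your radios really are off by reading the rfkill entries in sysfs.  This is a sketch – the exact set of devices varies between machines, and it only checks the software block, not whether a hardware switch is also engaged:

```python
import glob

# Each radio (wlan, bluetooth, etc.) appears as a directory under
# /sys/class/rfkill/.  The "soft" attribute is 1 when the radio is
# software-blocked (off), 0 when it is live.
for dev in sorted(glob.glob("/sys/class/rfkill/rfkill*")):
    with open(f"{dev}/type") as f:
        radio = f.read().strip()
    with open(f"{dev}/soft") as f:
        blocked = f.read().strip() == "1"
    print(f"{radio}: {'off' if blocked else 'ON - consider turning it off'}")
```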

Are there more things you can do?  Yes, of course.  But all of the above are simple ways to reduce the chance that you or your laptop are at risk from the types of attacker I described above.


1 – After a recent blog post, a colleague emailed me with a criticism.  It was well-intentioned, and I took it as such.  The comment he made was that although he enjoys my articles, he would prefer it if there were more suggestions on how to act, or things to do.  I had a think about it, and decided that this was entirely apt, so this week I’m going to provide some thoughts and some suggestions.  I can’t promise to be consistent in meeting this aim, but this is at least a start.

2 – edited: I did have “hibernate” mode in here as well, but a colleague pointed out that hibernate should force disk encryption, so should be safer than suspend.  I never use either, as booting from cold is usually so quick these days.