Learn to hack online – h4x0rz and pros

Removing these videos hinders defenders much more significantly than it impairs the attackers.

Over the past week, there has been a minor furore[1] over YouTube’s decision to block certain “hacking” videos.  According to The Register, the policy first appeared on the 5th April 2019:

“Instructional hacking and phishing: Showing users how to bypass secure computer systems or steal user credentials and personal data.”

Now, I can see why they’ve done this: it’s basic backside-covering.  YouTube – and many of the other social media outlets – come under lots of pressure from governments and other groups for failing to regulate certain content.  The sort of content to which such groups most typically object is fake news, certain pornography or child abuse material: and quite rightly.  I sympathise, sometimes, with the social media giants as they try to regulate a tidal wave of this sort of material – and I have great respect for those employees who have to view some of it.  Having written policies to ban this sort of thing may not deter many people from posting it, but it does mean that the social media companies have a cast-iron excuse for excising it when they come across it.

Having a similar policy to ban these types of video feels, at first blush, like the same sort of thing: you can point to your policy when groups complain that you’re hosting material that they don’t like – “those dangerous hacking videos”.

Hacking[3] videos are different, though.  The most important point is that they have a legitimate didactic function: in other words, they’re useful for teaching.  Nor do I think that there’s a public outcry from groups wanting them banned.  In fact, they’re vital for teaching and learning about IT security, and most IT security professionals and organisations get that.  Many cybersecurity techniques are difficult to understand properly when presented as theoretical attacks and, more importantly, they are difficult to defend against without detailed explanation and knowledge.  This is the point: these instructional videos are indispensable tools to allow people not just to understand, but to invent, apply and maintain defences and mitigations against known attacks.  IT security is hard, and we need access to knowledge to help us defeat the Bad Folks[tm] who we know are out there.

“But these same Bad Folks[tm] will see these videos online and use them against us!” certain people will protest.  Well, yes, some will.  But if we ban and wipe them from widely available social media platforms, where they are available for legitimate users to study, they will be pushed underground, and although fewer people may find them, the nature of our digital infrastructure means that the reach of those few people is still enormous.

And there is an imbalance between attackers and defenders: this move exacerbates it.  Most defenders look after small numbers of systems, but most serious attackers have the ability to go after many, many systems.  By pushing these videos away from the places where many defenders can learn from them, we have removed the opportunity from those who most need access to this information, whilst, at most, slightly raising the bar for those against whom we are trying to protect ourselves.

I’m sure there are a number of “script-kiddie” type attackers who may be deterred or have their access to these videos denied, but they are significantly less of a worry than motivated, resourced attackers: the ones that haunt many IT security folks’ worst dreams.  We shouldn’t let a mitigation against (relatively) low-risk attackers remove our ability to defend against higher-risk attackers.

We know that sharing of information can be risky, but this is one of those cases in which the risks can be understood and measured against others, and it seems like a pretty simple calculation this time round.  To be clear: the (good) many need access to these videos to protect against the (malicious) few.  Removing these videos hinders the good much more significantly than it impairs the malicious, and we, as a community, should push back against this trend.


1 – it’s pronounced “few-ROAR-ray”.  And “NEEsh”.  And “CLEEK”[2].

2 – yes, I should probably calm down.

3 – I’d much rather refer to these as “cracking” videos, but I feel that we probably lost that particular battle about 20 years ago now.

Are my messages safe? No, but…

“Are any of these messaging services secure?”

Today brought another story about the insecurity of a messenger app, and by a brilliant coincidence, I’m listening to E.L.O.’s “Secret Messages” as I start to compose this post. This article isn’t, however, about my closet 70s musical tastes[1], but about the messages you send from your mobile phone, tablet or computer to friends, families and colleagues, and how secure they are.

There are loads of options out there for messaging services, with some of the better-known including WhatsApp, Facebook Messenger, Google Chat, Signal and Telegram. Then there’s good old SMS. First question first: do I use any of these myself? Absolutely. I also indulge in Facebook, LinkedIn and Twitter. Do I trust these services? Let’s get back to this question later.

A more pressing question might be: “are any of these messaging services secure?” It turns out that this is a really simple question to answer: of course they’re not. No service is “secure”: it’s a key principle of IT security that there is no “secure”. This may sound like a glib – and frankly unhelpful – answer, but it’s not supposed to be. Once you accept that there is no perfectly secure system, you’re forced to consider what you are trying to achieve, and what risks you’re willing to take. This is a recurring theme of this blog, so regular readers shouldn’t be surprised.

Most of the popular messaging services can be thought of as consisting of at least seven components. Let’s assume that Alice is sending a message from her phone to Bob’s phone. Here’s what the various components might look like:

  1. Alice’s messenger app
  2. Alice’s phone
  3. Communications channel Alice -> server
  4. Server
  5. Communications channel server -> Bob
  6. Bob’s phone
  7. Bob’s messenger app

Each of these is a possible attack surface: combined, they make up the attack surface for what we can think of as the Alice <-> Bob messaging system.
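
To make this concrete, here’s a toy sketch (in Python, with purely illustrative threat labels – they are not an exhaustive catalogue) of how the individual attack surfaces combine into one:

```python
# Toy model of the seven components listed above. The threat labels
# are illustrative only, not an exhaustive catalogue.
components = {
    "Alice's messenger app":   ["malicious update", "implementation bug"],
    "Alice's phone":           ["malware", "physical tampering"],
    "channel Alice -> server": ["interception", "weak crypto configuration"],
    "server":                  ["breach", "insider", "legal warrant"],
    "channel server -> Bob":   ["interception", "weak crypto configuration"],
    "Bob's phone":             ["malware", "physical tampering"],
    "Bob's messenger app":     ["malicious update", "implementation bug"],
}

# Any threat to any component is a threat to the system as a whole.
attack_surface = {threat for threats in components.values() for threat in threats}
print(f"{len(components)} components, {len(attack_surface)} distinct threat types")
```

The union at the end is the point: Alice and Bob’s security is bounded by the weakest of the seven components, not the strongest.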

Let’s start in the middle, with the server. For Alice and Bob to be happy with the security of the system for their purposes, they must be happy that this server is sufficiently secure to cope with whatever risks they need to address. So, it may be that they trust that the server (which will be run, ultimately, by fallible and possibly subornable humans who are also subject to legal jurisdiction(s)) is not vulnerable. Not vulnerable to whom? Hacktivists? Criminal gangs? Commercial competitors? State actors? Legal warrants from the server’s jurisdiction? Legal warrants from Alice or Bob’s jurisdiction(s)? The likelihood of successful defence against each of these varies, and the risk posed to Alice and Bob by each is also different, and needs to be assessed, even if that assessment is “we can ignore this”.

Each of the other components is subject to similar questions. For the communication channels, we will assume that they’re encrypted, but we have to be sure that the cryptography and cryptographic protocols have been correctly implemented, and that all keys are appropriately protected by all parties. The messaging apps must be up to date, well designed and well implemented. Obviously, if they’re open source, you have a much, much better chance of being sure of the security of the software (never, ever use cryptography or protocols which have not been open sourced and peer reviewed: just don’t). The phones on which the software is running must also be uncompromised – not to mention protected by Alice and Bob from physical tampering and delivered new to them from the manufacturer with no vulnerabilities[2].
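
On the “never roll your own” point: if you need encryption in your own code, use a well-reviewed open source library rather than implementing the primitives yourself. A minimal sketch, assuming the Python cryptography package (one such peer-reviewed library) is installed:

```python
# Authenticated symmetric encryption via a peer-reviewed library,
# rather than hand-rolled primitives.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # protecting this key is still YOUR problem
f = Fernet(key)

token = f.encrypt(b"meet at the usual place")  # encrypts AND authenticates
print(f.decrypt(token))                        # b'meet at the usual place'
```

Note that even a good library only covers one of the seven components: the key still has to be protected, and the endpoints still have to be trustworthy.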

How sure are Alice and Bob of all of the answers to all of these questions? The answer, I would submit, is pretty much always going to be “not completely”. Does this mean that Alice and Bob should not use messaging services? Not necessarily. But it does mean that they should consider what messages they should exchange via any particular messaging service. They might be happy to arrange a surprise birthday party for a colleague, but not to exchange financial details of a business deal. They might be happy to schedule a trip to visit a Non-Governmental Organisation to discuss human rights, but not to talk about specific cases over the messaging service.

This is the view that I take: I consider what information I’m happy to transfer over or store on messaging services and social media platforms. There are occasions where I may be happy to pass sensitive data across messaging services, but break the data up between different services (using “different channels” in the relevant parlance): using one service for a username and another for the associated password, for instance. I still need to be careful about shared components: the two phones in the example above might qualify, but I’ve reduced the shared attack surface, and therefore the risk. I’m actually more likely to require that the password is exchanged over a phone call, and if I’m feeling particularly paranoid, I’ll use a different phone to receive that call.

My advice, therefore, is this:

  1. Keep your devices and apps up to date;
  2. Evaluate the security of your various messaging service options;
  3. Consider the types of information that you’ll be transferring and/or storing;
  4. Think about the risks you’re willing to accept;
  5. Select the appropriate option on a case-by-case basis;
  6. Consider using separate channels where particularly sensitive data can be split for added security (a sketch of one approach follows below).
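
For that last point, here’s a sketch of one way to split data between channels: an XOR split, where neither share on its own reveals anything about the secret, so an attacker needs to compromise both channels. This is illustrative, not an endorsement of any particular scheme:

```python
# One-time-pad style split of a secret into two shares for sending
# over two different channels. Each share alone is just random noise.
import secrets

def split(secret: bytes) -> tuple[bytes, bytes]:
    share1 = secrets.token_bytes(len(secret))              # random pad
    share2 = bytes(a ^ b for a, b in zip(secret, share1))  # secret XOR pad
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

s1, s2 = split(b"correct horse battery staple")  # send s1 and s2 separately
assert combine(s1, s2) == b"correct horse battery staple"
```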

1 – I’m also partial to 1920s Jazz and a bit of Bluegrass, as it happens.

2 – yeah, right.

I’m turning off your security.

“Don’t worry, I know what I’m doing.”

Today’s security story is people turning security off.  For me, the fact that it’s even a story is the story.  This particular story is covered in The Register, who explain (to nobody’s surprise) that some of the patches to fix issues identified in CPUs (think Spectre, Meltdown, etc.) can actually slow down the applications running on them.  The problem is that, in some cases, they don’t slow them down a little bit, but rather a lot.  By which I mean up to 50%.  And if you’ve bought expensive hardware – or rented it[1] – then you’d generally prefer it if it runs your applications/programs/workloads quickly, rather than just half as fast as they might run.

And so you turn off the security patches.  Your decision: fine.

No, stop: this isn’t what has happened.

The mythical “you”, the person running the workload, isn’t the person who makes the decision, in most cases, because it’s been made for you.  This is the real story.

Linus Torvalds, and a bunch of other experts in the Linux kernel[2], have decided that although the patch that could make your workloads secure is available, the functionality that does it should be “off” by default.  They reason – quite correctly, in my opinion – that the vast majority of people running workloads won’t easily be able to turn this functionality on themselves.

They also reason – again, correctly, in my opinion – that most people will care more about how quickly their workloads run than about how secure they are.  I’m not happy about this, but that’s the way it is.

What I worry about is the final step in the logic to making the decision.  I’m going to quote Linus:

“Have you seen any actual realistic attacks for normal human users?” he asked. “Things where the kernel should actually care? The JavaScript thing is for the browser to fix up, not for the kernel to say ‘now everything should run up to 50 per cent slower.'”

I get the reasoning behind this, but I don’t like it.  To give some context, somebody came up with an example attack which could compromise certain workloads, and Linus points out that there are better ways to fix this attack than fixing it in the kernel. My concerns are two-fold:

  1. although there may be better places to fix that particular attack, a kernel-level fix is likely to fix an entire class of attacks, meaning better protection for users who are using any application which might include an attack vector.
  2. pointing out that there haven’t been any attacks yet not only ignores the fact that there is a future out there[3] but also points malicious actors in the direction of a likely attack vector.

Now, I know that the more dedicated malicious actors are already looking for these things, but do we really need to advertise?

What’s my fix?

I don’t have one, or at least not an easy one.

Somebody, somewhere, needs to decide whether security is turned on or off.  What I’d honestly like to see is an easier set of controls to allow people to turn security on or off, and to understand the trade-offs when they do that.  The problems with that are:

  • the trade-offs are often much more complex than just “fast and insecure” or “slow and secure”, and are really difficult to explain.
  • in order to make a sensible decision about trade-offs, people need to understand risk.  And people are awful at understanding risk.

And there’s a “chicken and egg problem”[7] here: people won’t understand risk until they are offered the chance to make decisions, but there’s little incentive to offer them complex decisions unless they understand risk.

My plea?  Where possible, expose risk, and explain what it is.  And if you’re turning off security-related functionality, make it easy to turn back on for those who need it.
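
On that last point, Linux at least makes it possible to see which mitigations your kernel believes are active.  A minimal sketch, using the sysfs files that reasonably recent kernels expose:

```python
# List the CPU vulnerabilities the kernel knows about and whether a
# mitigation is currently active (Linux only; needs a recent-ish kernel).
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
if vuln_dir.is_dir():
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name:<20} {entry.read_text().strip()}")
else:
    print("No vulnerabilities directory: kernel too old, or not Linux")
```

Output lines such as “Mitigation: PTI” or, worse, “Vulnerable” are exactly the kind of exposed, explained risk I’d like to see more of.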


1 – a quick heads-up: this is what “deploying to the cloud” actually is.

2 – what sits at the bottom of many of the workloads that are running in servers.

3 – hopefully.  If the Three Minute Warning[4] sounds while you’re reading this, you may wish to duck and cover.  You can come back to it later[6].

4 – “… sounds like this …”[5].

5 – 80s reference.

6 – or not.  See [3].

7 – for non-native English readers, this means “a problem where the solution requires two pieces, both of which are dependent on each other”.

3 laptop power mode options

Don’t suspend your laptop.

I wrote a post a couple of weeks ago called 7 security tips for travelling with your laptop.  The seventh tip was “Don’t suspend”: in other words, when you’re finished doing what you’re doing, either turn your laptop off, or put it into “hibernate” mode.  I thought it might be worth revisiting this piece of advice, partly to explain the difference between these different states, and partly to explain exactly why it’s a bad idea to use the suspend mode.  A very bad idea indeed.  In fact, I’d almost go as far as saying “don’t suspend your laptop”.

So, what are the three power modes usually available to us on a laptop?  Let’s look at them one at a time.  I’m going to assume that you have disk encryption enabled (the second of the seven tips in my earlier article), because you really, really should.
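
As an aside: if you’re running Linux and want to check which low-power states your machine actually supports, the kernel will tell you.  A small sketch:

```python
# /sys/power/state lists the sleep states the kernel supports, e.g.
# ['freeze', 'mem', 'disk']: 'mem' is suspend(-to-RAM) and 'disk' is
# hibernate (suspend-to-disk). Linux only.
from pathlib import Path

state_file = Path("/sys/power/state")
if state_file.exists():
    print("Supported power states:", state_file.read_text().split())
```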

Power down

This is what you think it is: your laptop has powered down, and in order to start it up again, you’ve got to go through an entire boot process.  Any applications that you had running before will need to be restarted[1], and won’t come back in the same state that they were in before[2].  If somebody has access to your laptop when you’re not there, then there’s no immediate way that they can get at your data, as it’s encrypted[3].  See the conclusion for a couple of provisos, but powering down your laptop when you’re not using it is pretty safe, and a modern laptop with a decent operating system will usually reboot pretty quickly these days.

It’s worth noting that for some operating systems – Microsoft Windows, at least – when you tell your laptop to power down, it doesn’t.  It actually performs a hibernate without telling you, in order to speed up the boot process.  There are (I believe – as a proud open source user, I don’t run Windows, so I couldn’t say for sure) ways around this, but most of the time you probably don’t care: see below on why hibernate mode is pretty good for many requirements and use cases.

Hibernate

Confusingly, hibernate is sometimes referred to as “suspend to disk”.  What actually happens when you hibernate your machine is that the contents of RAM (your working memory) are copied and saved to your hard disk.  The machine is then powered down, leaving the state of the machine ready to be reloaded when you reboot.  When you do this, the laptop notices that it was hibernated, looks for saved state, and loads it into RAM[4].  Your session should come back pretty much as it was before – though if you’ve moved to a different wifi network or a session on a website has expired, for instance, your machine may have to do some clever bits and pieces in the background to make things as nice as possible as you resume working.

The key thing about hibernating your laptop is that while you’ve saved state to the hard drive, it’s encrypted[3], so anyone who manages to get at your laptop while you’re not there will have a hard time getting any data from it.  You’ll need to unlock your hard drive before your session can be resumed, and given that your attacker won’t have your password, you’re good to go.

Suspend

The key difference between suspend and the other two power modes we’ve examined above is that when you choose to suspend your laptop, it’s still powered on.  The various components are put into low-power mode, and it should wake up pretty quickly when you need it, but, crucially, all of the applications that you were running beforehand are still running, and are still in RAM.  I mentioned in my previous post that this increases the attack surface significantly, but there are some protections in place to improve the security of your laptop when it’s in suspend mode.  Unfortunately, they’re not always successful, as was demonstrated a few days ago by an attack described by The Register.  Even if your laptop is not at risk from this particular attack, my advice is just not to use suspend.

There are two usages of suspend that are difficult to manage.  The first is when you have your machine set to suspend after a long period of inactivity.  Typically, you’ll set the screen to lock after a certain period of time, and then the system will suspend.  Normally, this is only set for when you’re on battery – in other words, when you’re not sat at your desk with the power plugged in.  My advice would be to change this setting so that your laptop goes to hibernate instead.  It takes a bit more time to boot up, but if you’re leaving your laptop unused for a while, and it’s not plugged in, then it’s most likely that you’re travelling, and you need to be careful.

The second is when you get up and close the lid to move elsewhere.  If you’re moving around within your office or home, then that’s probably OK, but for anything else, try training yourself to hibernate or power down your laptop instead.
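
If you’re on a systemd-based Linux distribution, both behaviours can be changed in one place, via systemd-logind.  An illustrative snippet – the key names are as documented for logind.conf, but do check your own distribution’s documentation before relying on it:

```
# /etc/systemd/logind.conf (illustrative; systemd-based distributions)
[Login]
# Hibernate, rather than suspend, when the lid is closed...
HandleLidSwitch=hibernate
# ...and after a period of inactivity.
IdleAction=hibernate
IdleActionSec=30min
```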

Conclusion

There are two important provisos here.

The first I’ve already mentioned: if you don’t have disk encryption turned on, then someone with access to your laptop, even for a fairly short while, is likely to have quite an easy time getting at your data.  It’s worth pointing out that you want full disk encryption turned on, and not just “home directory” encryption.  That’s because if someone has access to your laptop for a while, they may well be able to make changes to the boot-up mechanism in such a way that they can wait until you log in and either collect your password for later use or have the data sent to them over the network.  This is much less easy with full disk encryption.

The second is that there are definitely techniques available to use hardware and firmware attacks on your machine that may be successful even with full disk encryption.  Some of these are easy to spot – don’t turn on your machine if there’s anything in the USB port that you don’t recognise[5] – but others, where hardware may be attached or even soldered to the motherboard, or firmware changed, are very difficult to spot.  We’re getting into some fairly sophisticated attacks here, and if you’re worried about them, then consider my first security tip “Don’t take a laptop”.


1 – some of them automatically, either as system processes (you rarely have to remember to turn networking back on, for instance), or as “start-up” applications which most operating systems will allow you to specify as auto-starting when you log in.

2 – this isn’t actually quite true for all applications: it might have been more accurate to say “unless they’re set up this way”.  Some applications (web browsers are typical examples) will notice if they weren’t shut down “nicely”, and will attempt to get back into the state they were in beforehand.

3 – you did enable disk encryption, right?

4 – assuming it’s there, and hasn’t been corrupted in some way, in which case the laptop will just run a normal boot sequence.

5 – and don’t use random USB sticks from strangers or ones that you pick up in the car park, but you knew that, right?

What are they attacking me for?

There are three main types of motivations: advantages to them; disadvantages to us; resources.

I wrote an article a few weeks ago called What’s a State Actor, and should I care?, and a number of readers asked if I could pull apart a number of the pieces that I presented there into separate discussions[1].  One of those pieces was the question of who is actually likely to attack me.

I presented a brief list thus:

  • insiders
  • script-kiddies
  • competitors
  • trouble-makers
  • hacktivists
  • … and more.

One specific “more” that I mentioned was State Actors.  If you look around, you’ll find all manner of lists.  Other attacker types that I didn’t mention in my initial list include:

  • members of organised crime groups
  • terrorists
  • “mercenary” hackers.

I suspect that you could come up with more supersets or subsets if you tried hard enough.

This is all very well, but what’s the value in knowing who’s likely to attack you in the first place[3]?  There’s a useful dictum: “No system is secure against a sufficiently resourced and motivated attacker.”[5]  This gives us a starting point, because it causes us to ask the question

  • what motivates the attacker?

In other words: what do they want to achieve?  What, in fact, are they trying to do or get when they attack us?  This is the core theme of this article.

There are three main types of motivations:

  1. advantages to them
  2. disadvantages to us
  3. resources.

There is overlap between the three, but I think that they are sufficiently separate to warrant separate discussion.

Advantages to them

Any successful attack is arguably a disadvantage to us, the attacked, but that does not mean that the primary motivation of an attacker is necessarily to cause harm.  There are a number of other common motivations, including:

  • reputation or “bragging rights” – a successful attack may well be used to prove the skills of an attacker to other parties.
  • information to share – sometimes attackers wish to gain information about our systems to share with others, whether for gain or to enhance their reputation (see above).  Such attacks may be painted as security research, but if they occur outside an ethical framework (such as that provided by academic institutions) and without consent, it is difficult to consider them anything other than hostile.
  • information to keep – attackers may gain information and keep it for themselves for later use, either against our systems or against similarly configured systems elsewhere.
  • practice/challenge – there are attacks which are undertaken solely to practice techniques or as a personal challenge (where an external challenge is made, I would categorise them under “reputation”).  Harmless as this motivation may seem to some parts of the community, such attacks still cause damage and require mitigation, and should be considered hostile.
  • for money – some attacks are undertaken at the request of others, with the primary motivation of the attacker being money or other material recompense (though the motivation of the party commissioning the attack is likely to be one of the others listed here)[6].

Disadvantages to us

Attacks which focus on causing negative impact to the individual or organisation attacked can be listed in the following categories:

  • business impact – impact to the normal functioning of the organisation or individual attacked: causing orders to be disrupted, processes to be slowed, etc.
  • financial impact – direct impact to the financial functioning of the attacked party: fraud, for instance.
  • reputational impact – there have been many attacks where the intention has clearly been to damage the reputation of the attacked party.  Whether it is leaking information about someone’s use of a dating website, disseminating customer information or simply replacing text or images on a corporate website, the intention is the same: to damage the standing of those being attacked.  Such damage may be indirect – for instance if an attacker were to cause the failure of an oil pipeline, affecting the reputation of the owner or operator of that pipeline.
  • personal impact – subtly different from reputational or business impact, this is where the attack intends to damage the self-esteem of an individual, or their ability to function professionally, physically, personally or emotionally.  This could cover a wide range of attacks such as “doxxing” or use of vulnerabilities in insulin pumps.
  • ecosystem impact – this type of motivation is less about affecting the ability of the individual or organisation to function normally, and more about affecting the ecosystem that exists around it.  Impacting the quality control checks of a company that made batteries might impact the ability of a mobile phone company to function, for instance, or attacking a water supply might impact the ability of a fire service to respond to incidents.

Resources

The motivations for some attacks may be partly or solely to get access to resources.  These resources might include:

  • financial resources – by getting access to company accounts, attackers might be able to purchase items for themselves or others or otherwise defraud the company.
  • compute resources – access to compute resources can lead to further attacks or be used for purposes such as cryptocurrency mining.
  • storage resources – attackers may wish to store illegal or compromising material on others’ systems.
  • network resources – access to network resources allows attackers to launch attacks elsewhere or to stream information with little traceability.
  • human resources – access to some systems may allow human resources to be deployed in ways unintended by the party being attacked: deploying police officers to a scene a long distance away from a planned physical attack, for instance.
  • physical resources – access to some systems may also allow physical resources to be deployed in ways unintended by the party being attacked: sending ammunition to the wrong front in a war might, for example, lead to a military force becoming weakened.
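
As a toy example of how this feeds into defence, here’s a sketch mapping the three motivation categories to the sorts of assets they threaten – the entries are illustrative, not exhaustive:

```python
# Map motivation categories to example assets they threaten, as a
# starting point for deciding what to defend first. Illustrative only.
motivations = {
    "advantages to them":  ["system information", "credentials", "your data"],
    "disadvantages to us": ["operations", "finances", "reputation", "people"],
    "resources":           ["money", "compute", "storage", "network", "staff"],
}

for category, assets in motivations.items():
    print(f"{category}: consider defending {', '.join(assets)}")
```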

Conclusion

It may seem unimportant to consider the motivations of those attacking us, but if we can understand what it is that they are looking for, we can decide what we should defend, and sometimes what types of defence we should put in place.  As always, I welcome comments on this article: I’m sure that I’ve missed out some points, or misrepresented others, so please do get in touch and let me know your thoughts.


1 – I considered this a kind and polite way of saying “you stuffed too much into a single article: what were you thinking?”[2]

2 – and I don’t necessarily disagree.

3 – unless you’re just trying to scare senior management[4].

4 – which may be enjoyable, but is ultimately likely to backfire if you’re doing it without evidence or a good reason.

5 – I made a (brief) attempt to track the origins of this phrase: I’m happy to attribute if someone can find the original.

6 – hat-tip to Reddit user poopin for spotting that I’d missed this one out.