Demonising children (with help from law enforcement)

Let’s just not teach children to read: we’ll definitely be safe then.

Oh, dear: it’s happened again. Ill-informed law enforcement folks are demonising people for getting interested in security. As The Register reports, West Midlands police in the UK have put out a poster aimed at teachers, parents and guardians which advises them to get in touch if they find any of the following on a child’s computer:

  • Tor browser
  • Virtual machines
  • Kali Linux
  • Wifi Pineapple
  • Discord
  • Metasploit

“If you see any of these on their computer, or have a child you think is hacking, please let us know so we can give advice and engage them into positive diversions.”

Leaving aside the grammar of that sentence, let’s have a look at those tools. Actually, first, let’s address the use of the word “hacking”. It’s not the first time that I’ve had a go at the misuse of this word, but on the whole, I think that we’ve lost the battle in the popular media to keep the positive use of the term. In this context, however, if I ask a teenager or young person who’s in possession of a few of the above if they’re hacking, their answer will probably be “yes”, which is good. And not because they’re doing dodgy stuff – cracking – but because they’ve got into the culture of a community where hacking is still a positive word: it means trying stuff out, messing around and coding. This is a world I – and the vast majority of my colleagues – inhabit and work in on a day-to-day basis.

So – those tools. Tor, as they point out, can be used to access the dark web. More likely, it’s being used by a savvy teenager to hide their access to embarrassing material. VMs can apparently be used to hide OSes such as Kali Linux. Well, yes, but “hide”? And there is a huge number of other, positive and creative uses to which VMs are put every day.

Oh, and Kali Linux is an OS “often used for hacking”. Let’s pull that statement apart. It could mean:

  1. many uses of Kali Linux are for illegal or unethical activities;
  2. many illegal or unethical activities use Kali Linux.

In the same way that you might say “knives are often used for violent attacks”, such phrasing is downright misleading, because you know (and any well-informed law-enforcement officer should know) that 2 is more true than 1.

Next is the Wifi Pineapple: this is maybe a little more borderline. Although there are legitimate uses for one of these, I can see that they might raise some eyebrows if you started going around your local area with one.

Metasploit: well, it’s the tool to get to know if you want to get involved in security. There are so many things you can do with it – like Kali Linux – that are positive, including improving your own security, learning how to protect your systems and adopting good coding practice. If I wanted to get an interested party knowledgeable about how computers really work, how security is so often poor, and how to design better, more secure systems, Metasploit would be the tool I’d point them at.

You may have noticed that I left one out: Discord. Dear, oh dear, oh dear. Discord is, first and foremost, a free gaming chat server. If a child is using Discord, they’re probably playing – wait for it – a computer game.

This poster isn’t just depressing – it’s short-sighted, and misleading. It’s going to get children mislabelled and put upon by people who don’t know better, and assume that information put out by their local police service will be helpful and straightforward. It’s all very well for West Midlands police to state that “[t]he software mentioned is legal and, in the vast majority of cases is used legitimately, giving great benefit to those interested in developing their digital skills”, and that they’re trying to encourage those with parental responsibility to “start up a conversation”, but this is just crass.

I have two children, both around teenage age. I can tell you now that any conversation starting with “what’s that on your computer? It’s a hacking tool! Are you involved in something you shouldn’t be?” is not going to end well, and it’s not going to end well for a number of reasons, including:

  • it makes me look like an idiot, particularly if what I’m reacting to is something completely innocuous like Discord;
  • it doesn’t treat the young person with any level of respect;
  • it’s a negative starting point of engagement, which means that they’ll go into combative, denial mode;
  • it will make them feel that I suspect them of something, leading them to be more secretive from now on.

And, do you know what? I don’t blame them: if someone said something like that to me, that would be precisely my reaction, too. What’s the alternative suggested in the poster? Oh, yes: contact the police. That’s going to go well: “I saw this on your computer, and I got in touch with the police, and they suggested I have a chat with you…” Young people love that sort of conversation, too. Oh, and exactly how sure are you that the police haven’t taken the details of the child and put them on a list somewhere? Yes, I’m exactly that sure, as well.

Now, don’t get me wrong: there are tools out there that are dangerous and can be misused, and some of them will be. By teenagers, children and young adults. People of this age aren’t always good at making choices, they’re sensitive to peer pressure, and they will make mistakes. But this poster is not the way to go about addressing the problem. We need to build trust, treat young people with respect and discuss choices, while encouraging careful research and learning. Hacking – the good type – can lead to great opportunities.

Alternatively, we can start constraining these budding security professionals early, and stop them in their tracks by refusing to let them use the Internet. Or phone. Or computers. Or read books. Actually, let’s start there. Let’s just not teach children to read: we’ll definitely be safe then (and there’s no way they’ll teach themselves, resent our control and turn against us: oh, no).

Isolationism – not a 4 letter word (in the cloud)

Things are looking up if you’re interested in protecting your workloads.

In the world of international relations, economics and fiscal policy, isolationism doesn’t have a great reputation. I could go on, I suppose, if I did some research, but this is a security blog[1], and international relations, fascinating area of study though it is, isn’t my area of expertise: what I’d like to do is borrow the word and apply it to a different field: computing, and specifically cloud computing.

In computing, isolation is a set of techniques to protect a process, application or component from another (or a set of the former from a set of the latter). This is pretty much always a good thing – you don’t want another process interfering with the correct workings of your one, whether that’s by design (it’s malicious) or in error (because it’s badly designed or implemented). Isolationism, therefore, however unpopular it may be on the world stage, is a policy that you generally want to adopt for your applications, wherever they’re running.

This is particularly important in the “cloud”. Cloud computing is where you run your applications or processes on shared infrastructure. If you own that infrastructure, then you might call that a “private cloud”, and infrastructure owned by other people a “public cloud”, but when people say “cloud” on its own, they generally mean public clouds, such as those operated by Amazon, Microsoft, IBM, Alibaba or others.

There’s a useful adage around cloud computing: “Remember that the cloud is just somebody else’s computer”. In other words, it’s still just hardware and software running somewhere – it’s just not being run by you. Another important thing to remember about cloud computing is that when you run your applications – let’s call them “workloads” from here on in – on somebody else’s cloud (computer), they’re unlikely to be running on their own. They’re likely to be running on the same physical hardware as workloads from other users (or “tenants”) of that provider’s services. These two realisations – that your workload is on somebody else’s computer, and that it’s sharing that computer with workloads from other people – are where isolation comes into the picture.

Workload from workload isolation

Let’s start with the sharing problem. You want to ensure that your workloads run as you expect them to do, which means that you don’t want other workloads impacting on how yours run. You want them to be protected from interference, and that’s where isolation comes in. A workload running in a Linux container or a Virtual Machine (VM) is isolated from other workloads by hardware and/or software controls, which try to ensure (generally very successfully!) that your workload receives the amount of computing time it should have, that it can send and receive network packets, write to storage and the rest without interruption from another workload. Equally important, the confidentiality and integrity of its resources should be protected, so that another workload can’t look into its memory and/or change it.

The means to do this are well known and fairly mature, and the building blocks of containers and VMs, for instance, are augmented by software like KVM or Xen (both open source hypervisors) or like SELinux (an open source mandatory access control framework). The cloud service providers are definitely keen to ensure that you get a fair allocation of resources and that your workloads are protected from those of other tenants, so providing workload from workload isolation is in their best interests.
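
To make those building blocks slightly more concrete, here’s a minimal sketch of my own (assuming a Linux system with /proc mounted) which lists the kernel namespaces that the current process sits inside – namespaces being one of the mechanisms from which containers are assembled:

  # Minimal sketch: list the kernel namespaces isolating this process.
  # Containers are largely built from these. Assumes Linux with /proc.
  import os

  NS_DIR = "/proc/self/ns"

  for entry in sorted(os.listdir(NS_DIR)):
      # Each entry is a symlink such as "pid:[4026531836]"; the number
      # identifies the namespace instance this process belongs to.
      target = os.readlink(os.path.join(NS_DIR, entry))
      print(f"{entry:8} -> {target}")

Run this inside and outside a container and you’ll see different identifiers for (say) the pid and net namespaces: that difference is part of what stops one workload from peering into another.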

Host from workload isolation

Next is isolating the host from the workload. Cloud service providers absolutely do not want workloads “breaking out” of their isolation and doing bad things – again, whether by accident or design. If one of a cloud service provider’s host machines is compromised by a workload, not only can that workload possibly impact other workloads on that host, but also the host itself, other hosts and the more general infrastructure that allows the cloud service provider to run workloads for their tenants and, in the final analysis, make money.

Luckily, again, there are well-known and mature ways to provide host from workload isolation using many of the same tools noted above. As with workload from workload isolation, cloud service providers absolutely do not want their own infrastructure compromised, so they are, of course, going to make sure that this is well implemented.
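
As a tiny illustrative sketch (mine, and assuming a Linux host – the sysfs path below is the common selinuxfs mount point, not a universal guarantee), here’s how you might check whether one of those host-protecting controls is actually switched on:

  # Minimal sketch: is SELinux present and enforcing on this host?
  # The path is the usual selinuxfs mount point - an assumption, not
  # something every distribution provides.
  ENFORCE_PATH = "/sys/fs/selinux/enforce"

  try:
      with open(ENFORCE_PATH) as f:
          mode = "enforcing" if f.read().strip() == "1" else "permissive"
  except FileNotFoundError:
      mode = "not available (SELinux disabled or not installed)"

  print(f"SELinux: {mode}")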

Workload from host isolation

Workload from host isolation is more tricky. A lot more tricky. This is protecting your workload from the cloud service provider, who controls the computer – the host – on which your workload is running. The way that workloads run – execute – means that such isolation is almost impossible with standard techniques (containers, VMs, etc.) on their own, so providing ways to ensure and prove that the cloud service provider – or their sysadmins, or any compromised hosts on their network – cannot interfere with your workload is difficult.

You might expect me to say that providing this sort of isolation is something that cloud service providers don’t care about, as they feel that their tenants should trust them to run their workloads and just get on with it. Until sometime last year, that might have been my view, but it turns out to be wrong. Cloud service providers care about protecting your workloads from the host because it allows them to make more money. Currently, there are lots of workloads which are considered too sensitive to be run on public clouds – think financial, health, government, legal, … – often due to industry regulation. If cloud service providers could provide sufficient isolation of workloads from the host to convince tenants – and industry regulators – that such workloads can be safely run in the public cloud, then they get more business. And they can probably charge more for these protections as well! That doesn’t mean that isolating your workloads from their hosts is easy, though.

There is good news, however, for both cloud service providers and their tenants, which is that there’s a new set of hardware techniques called TEEs – Trusted Execution Environments – which can provide exactly this sort of protection[2]. This is rapidly maturing technology, and TEEs are not easy to use – in that it can be difficult not only to run your workload in a TEE, but also to ensure that it’s running in a TEE – but when done right, they do provide the sorts of isolation from the host that a workload wants in order to maintain its integrity and confidentiality[3].
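
To illustrate the “could this host even do it?” part of that problem, here’s a rough sketch of mine – the paths are platform-specific assumptions, and their presence says nothing about attestation, only that the host may be able to run TEE workloads:

  # Rough sketch: look for the interfaces Linux commonly exposes for
  # the best-known TEEs (Intel SGX and AMD SEV). The paths below are
  # platform-specific assumptions; finding them is no substitute for
  # cryptographic attestation that your workload really is in a TEE.
  import os

  CANDIDATES = {
      "Intel SGX device": "/dev/sgx_enclave",
      "AMD SEV device": "/dev/sev",
      "KVM SEV support": "/sys/module/kvm_amd/parameters/sev",
  }

  for name, path in CANDIDATES.items():
      state = "found" if os.path.exists(path) else "not found"
      print(f"{name:17}: {state} ({path})")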

There are a number of projects looking to make using TEEs easier – I’d point to Enarx in particular – and even an industry consortium to promote open TEE adoption, the Confidential Computing Consortium. Things are looking up if you’re interested in protecting your workloads, and the cloud service providers are on board, too.


1 – sorry if you came here expecting something different, but do stick around and have a read: hopefully there’s something of interest.

2 – the best known are Intel’s SGX and AMD’s SEV.

3 – availability – ensuring that it runs fairly – is more difficult, but as this is a property that is also generally in the cloud service provider’s best interest, and something that they can control, it’s not usually too much of a concern[4].

4 – yes, there are definitely times when it is, but that’s a story for another article.

A cybersecurity tip from Hazzard County

Don’t place that bet in Boss Hogg’s betting saloon: you know he’s up to no good!

It’s a slightly guilty secret, but I used to love watching The Dukes of Hazzard in the early 80s (the first series started in late 1979, but I suspect that it didn’t make it to the UK until the next year at the earliest).  It all seemed very glamorous, and there were lots of fast car chases.  And a basset hound, which was an extra win.  To say this was early days for cybersecurity would be an understatement, and though there are references in the Wikipedia plot summaries to computers, I can’t honestly say that I remember any of those particular episodes.

One episode has stuck with me, however, for reasons that I can’t fathom.  It’s called “Hazzard Hustle” and (*SPOILER ALERT*) in it, Boss Hogg sets up a crooked betting saloon.  The swindle (if I remember it correctly) is that he controls and delays the supposedly live feeds to the TVs in the saloon, which means that he has access to results before they come in.  Needless to say, the Duke boys (probably aided and abetted by Daisy Duke) get the better of him in the end, and everything turns out OK (for them, not Boss Hogg).

“What can this have to do with cybersecurity?” you have every right to ask.  Well, the answer is reporting and monitoring channels.  Monitoring is important because without it, there is no way for us to check that what we believe should be happening actually is.  The opportunities for direct sensory monitoring of actions in computer-based systems are limited: if I request via a web browser that a banking application transfers funds between one account and another, the only visible effect that I am likely to see is an acknowledgement on the screen. Until I actually try to spend or withdraw that money, I realistically have no way to be assured that the transaction has taken place.

Let’s take an example from the human realm.  It is as if I have a trust relationship with somebody around the corner of a street, out of view, that she will raise a red flag at the stroke of noon, and I have a friend, standing on the corner, who will watch her, and tell me when and if she raises the flag. I may be happy with this arrangement, but only because I have a trust relationship to the friend: that friend is acting as a trusted channel for information.

The word “friend” was chosen carefully above, because there is a trust relationship already implicit in the term. The same is not true for the word “somebody”, which I used to describe the person who was to raise the flag. The situation as described above is likely to make our minds presume that there is a fairly high probability that the trust relationship I have to the friend is sufficient to assure me that he will pass the information correctly. But what if my friend is actually a business partner of the flag-waver? Given our human understanding of the trust relationships typically involved with business partnerships, we may immediately begin to assume that my friend’s motivations in respect to correct reporting are not neutral.

The channels for reporting on actions – monitoring them – are vitally important within cybersecurity, and it is both easy and dangerous to fall into the trap of assuming that they are neutral, and that the only important one is between me and the acting party. In reality, the trust relationship that I have to a set of channels is vital to the maintenance of the trust relationships that I have to the entity that they monitor. In trust relationships involving computer systems, there are often multiple entities or components involved in actions, and these form a “chain of trust”, where each link depends on the other, and the chain is typically only as strong as the weakest of its links.  Don’t forget that.  Oh, and don’t place that bet in Boss Hogg’s betting saloon: you know he’s up to no good!

Should I back up to iCloud?

Don’t walk into this with your eyes closed.

This is a fairly easy one to answer. If the response to either of the following questions is “yes”, then you probably shouldn’t.

1. Do you have any sensitive data that you would be embarrassed to be seen by any agent of the US Government?
2. Are you a non-US citizen?

This may seem somewhat inflammatory, so let’s look into what’s going on here.

It was widely reported last week that Apple has decided not to implement end-to-end encryption for back-ups from devices to Apple’s iCloud.  Apparently, the decision was made after Apple came under pressure from the FBI, who are concerned that their ability to access data from suspects will be reduced.  This article is not intended to make any judgments about either Apple or any law enforcement agencies, but I had a request from a friend (you know who you are!) for my thoughts on this.

The main problem is that Apple (I understand – I’m not an Apple device user) do a great job of integrating all the services that they offer, and making them easy to use across all of your Apple products.  iCloud is one of these services, and it’s very easy to use.  Apple users have got used to simplicity of use, and are likely to use this service by default.  I understand this, but there’s a classic three-way tug of war for pretty much all applications or services, and it goes like this: of the following three properties of a system, application or service, you get to choose two – but only two.

  1. security
  2. ease of use
  3. cost

Apple make things easy to use, and quite often pretty secure, and you pay for this, but the specific cost (in inconvenience, legal fees, political pressure, etc.) of making iCloud more secure seems to have outweighed the security benefits in this situation, and led them to decide not to enable end-to-end encryption.

So, it won’t be as secure as you might like.  Do you care?  Well, if you have anything you’d be embarrassed for US government agents to know about – and beyond embarrassed, if you have anything which isn’t necessarily entirely, shall we say, legal – then you should know that it’s not going to be difficult for US government agents such as the FBI to access it.  This is all very well, but there’s a catch for those who aren’t in such a position.

The catch is that the protections offered for the privacy of individuals, though fairly robust within the US, are aimed almost exclusively at US citizens.  I am in no sense a lawyer, but as a non-US citizen, I would have zero confidence that it would be particularly difficult for any US government agent to access any information that I had stored on any iCloud account that I held.  Think about that for a moment.  The US has different standards from some other countries on, for instance, drug use, alcohol use, sexual practices and a variety of other issues.  Even if those things are legal in your country, who’s to say that they might not be used, now or in the future, to decide whether you should be granted a visa to the US, or even allowed entry at all?  There’s already talk of immigration officials checking your social media for questionable material – extending this to unencrypted data held in the US is far from out of the question.

So this is another of those issues where you need to make a considered decision.  But you do need to make a decision: don’t walk into this with your eyes closed, because once Apple has the data, there’s realistically no taking it back from the US government.


Coming to you in Japanese

We are now multi-lingual.

I have an exciting announcement, which is that starting this week, some of the articles on this blog will also be in Japanese.  My very talented Red Hat colleague Yuki Kubota showed an interest in translating some which she thought might be of interest to Japanese readers, and I jumped at the chance.  I’m very thrilled and humbled.

We’re still ironing out the process, but hopefully (if you already read Japanese), you’ll be able to read the following articles.

We’ll try to add the tag “Japanese” to each of these, as well.

So, a huge thank you to Yuki: we’d love comments – in English or Japanese!

Are you positive?

What do pregnancy tests and the Ukrainian aircraft missile strike have in common?

Not everything in life is nicely binary, much as we[1] might like it to be. There are shades of grey[2] in many aspects of life, and though humans can often cope with uncertainty, computer systems are less good at it: they generally want a “yes” or “no” answer. This means that decisions sometimes need to be made on incomplete evidence, and, well, that means that the answers aren’t always correct. There’s a whole area of computer science related to this: fuzzy logic.

Let’s look into what the options are, assuming that we’re looking at two of them: “yes” (a positive) and “no” (a negative). That means that there are two ways in which the answer can be incorrect:

  1. a “yes” answer was incorrectly chosen (false positive);
  2. a “no” answer was incorrectly chosen (false negative).

An example to allow us to explore this is pregnancy. It’s generally agreed that you can’t be a little bit pregnant: if you take a test, any result it gives you needs to be either positive or negative. If you are pregnant, and a test result comes back negative, then that’s a false negative. If you are not pregnant, and a test comes back positive, that’s a false positive. The implications of a false positive or a false negative can both be pretty major – as anybody who has received one will tell you. I spent a little time online trying to find expected false positive and false negative rates for pregnancy tests, but it turns out that the rates are so dependent on a variety of factors that it was difficult to find a sensible answer[3].

A tragic recent example of a false positive took place on Wednesday, 8th January 2020, when a Ukrainian International Airlines flight was shot down by an Iranian missile, killing all 176 people on board. It appears that an air defence radar system misidentified the aircraft as a cruise missile. As the radar system was looking for a positive identification of a threat, this can be counted as a false positive.

What might have been the alternative in this case? If the aircraft actually had been a cruise missile, but was identified as a civilian aircraft, this would have been a false negative, and the impact might well have been significant damage to an Iranian military installation.

Which is the most damaging? Well, in the case of the aircraft, it would seem pretty clear to most observers that the false positive would be worse, but from a military point of view, that might not be the case. The impact of a missile strike on a major military installation might be considered worse than the civilian loss of life in the other case. In this case, as in many others, a decision needs to be made as to which is most important to reduce: the chance of a false negative or the chance of a false positive? In a perfect world, of course, there would be no false results, negative or positive. The problem with many systems that take analogue[4] inputs and turn them into digital outputs in this way is that avoiding false results is very costly, and sometimes impossible. Even worse news is that reducing the probability of one of the two types of false result tends to increase the probability of the other.
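
Here’s a small sketch of mine, using entirely made-up scores, which shows why this happens: a detector rates events from 0 (benign) to 1 (threat), and we have to pick a threshold to turn those scores into a yes/no answer.

  # Small sketch with made-up data: moving the threshold on a noisy
  # detector trades false positives against false negatives.

  # (score, actually_a_threat) pairs - hypothetical detector output.
  events = [
      (0.10, False), (0.25, False), (0.40, False), (0.55, False),
      (0.45, True),  (0.60, True),  (0.80, True),  (0.95, True),
  ]

  def rates(threshold):
      fp = sum(1 for s, threat in events if s >= threshold and not threat)
      fn = sum(1 for s, threat in events if s < threshold and threat)
      return fp, fn

  for threshold in (0.3, 0.5, 0.7):
      fp, fn = rates(threshold)
      print(f"threshold {threshold}: {fp} false positives, {fn} false negatives")

Lower the threshold and false negatives fall while false positives rise; raise it and the reverse happens. With noisy data, there is no setting that gets both to zero.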

A classic example of this is in the use of biometrics for user identification. Fingerprints, facial recognition, iris scanning and similar techniques have to balance the likelihood of a false positive with a false negative. Which is worse: the chance that the CEO will not be able to update the payroll details, or that a rogue employee will update her details to improve her salary package?[5]

One good piece of news is that AI/ML (Artificial Intelligence/Machine Learning) is improving the performance of biometric systems and, in fact, other areas of computing where “fuzzy logic” is required. In most cases, humans are still better at reducing messy sets of information to yes/no results, but that is changing, and where multiple automated decisions need to be made, then AI/ML is worth considering.

Whenever you are dealing with “messy” data[6] which needs to be reduced to a “yes/no” or “positive/negative” binary result, you need to consider the likelihood of false positives or negatives. Not only do you need to consider the likelihood of each, but also the impact of each. Once you have understood these, you can then decide which you want to try to minimise, and what techniques you should use to do so.
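
Continuing the hypothetical sketch above, once you attach an impact – a cost – to each kind of false result, “deciding which to minimise” becomes a matter of picking whichever threshold carries the lowest total cost:

  # Second hypothetical sketch: weight each kind of false result by
  # its impact, then pick the cheapest threshold. The (FP, FN) counts
  # are the outputs of the thresholds in the previous sketch.
  COST_FP = 1.0   # e.g. an innocent case flagged for manual review
  COST_FN = 5.0   # e.g. a real threat that slips through

  outcomes = {0.3: (2, 0), 0.5: (1, 1), 0.7: (0, 2)}  # threshold: (FP, FN)

  def total_cost(fp, fn):
      return fp * COST_FP + fn * COST_FN

  best = min(outcomes, key=lambda t: total_cost(*outcomes[t]))
  print(f"lowest-cost threshold: {best}")

Swap the two costs around and the preferred threshold moves the other way – which is precisely the aircraft-versus-installation judgement described above.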

We may be stuck with false results, but we need to understand what our choices are, and how we can get the best outcomes available from messy data.


1 – in talking security, but I’m sure this goes for lots of other people, too.

2 – “gray” for our non-Commonwealth readers.

3 – good advice seems to be to test several times over several days.

4 – “analog”, I suppose – see [2].

5 – this is one of the reasons that authentication systems generally use two factors from the three “something you are”, “something you know”, “something you have”.

6 – most real-world data, to be honest.

5 resolutions for travellers in 2020

Enjoy the time when you’re not travelling

I’m not a big one for New Year[1] resolutions.  To give you an example, my resolution for 2019 was “not to be mocked by my wife or daughters”.  Given that one of them (my daughters, that is) is a teenager, and the other nearly so, this went about as well as you might expect.  At the beginning of 2018, I wrote a blog post with the top 5 resolutions for security folks.  However, if I re-use the same ones this time round, somebody’s bound to notice[2], so I’m going to come up with some different ones[3].  I do quite a lot of travel, so I thought I’d provide my top 5 resolutions for this year, which I hope will be useful not only for me, but also others.

(I’ve written another article that covers in more depth some of the self-care aspects of this topic which you may find helpful: Of headphones, caffeine and self-care.)

1. Travel lighter

For business trips, I’ve tended to pack a big, heavy laptop, with a big, heavy power “brick” and cable, and then lots of other charging-type cables of different sizes and lengths, and a number of different plugs to fit everything into.  Honestly, there’s just no need for much of it, so this year, I suggest that we all first take stock, and go through all of those cables and see which ones we actually need.  Maybe take one spare for each USB type, but no more.  And we only need the one plug – that nice multi-socket one with a couple of USB sockets will do fine.  And if we lose it or forget it, the hotel will probably have one we can borrow, or we can get one as we go through the next airport.

And the laptop?  Well, I’ve just got a little Chromebook.  There are a variety of these: I managed to pick up a Pixelbook second-hand, with warranty, for about 40% off, and I love it.  I’m pretty sure that I can use it for all the day-to-day tasks I need to perform while travelling, and, as a bonus, the power connection is smaller and lighter than the one for my laptop.  I’ve picked up a port extender (2 x USB C, 1 x USB A, 1 x Ethernet, 1 x HDMI), and I think I’m sorted.  I’m going to try leaving the big laptop at home, and see what happens.

2. Take time

I’m not just talking about leaving early to get to the airport – though that is my standard practice – but also about just, well, taking more time about things.  It’s easy to rush here and there, and work yourself into a state[4], or feel that you need to fill every second of every day with something work-related, when you wouldn’t do that if you were at home.  It may be stepping aside to let other people off the plane, and strolling to the ground transportation exit, rather than hurrying there, or maybe stopping for a few minutes to look at some street art or enjoy the local architecture – whatever it is, give yourself permission not to hurry and not to rush, but just breathe and let the rest of the world slip by, even if it’s just for a few seconds.

3. Look after yourself

Headphones are a key tool to help me look after myself – and one of the things I won’t be discarding as part of my “travel lighter” resolution.  Sometimes I need to take myself away from the hubbub and to chill.  But they are just a tool: I need to remember that I need to stop, and put them on, and listen to some music.  It’s really easy to get caught up in the day, and the self-importance of being the Business Traveller, and forget that I’m not superhuman (and that my colleagues don’t expect me to be).  Taking time is the starting point – and sometimes all you have time for – but at some point you need to stop completely and do something for yourself.

4. Remember you’re tired

Most of us get grumpy when we’re tired[4].  And travelling is tiring, so when you’re at the end of a long trip, or just at the beginning of one, after a long day in cars and airports and planes, remember that you’re tired, and try to act accordingly.  Smile.  Don’t be rude.  Realise that the hotel receptionist is doing their best to sort your room out, or that the person in front of you in the queue for a taxi is just as frustrated with their four children as you are (well, maybe not quite as much).  When you get home, your partner or spouse has probably been picking up the slack of all the things that you’d normally do at home, so don’t snap at them: be nice, show you care.  Whatever you’re doing, expect things to take longer: you’re not at the top of your game.  Oh, and restrict alcohol intake, and go to bed early instead.  Booze may feel like it’s going to help, but it’s really, really not.

5. Enjoy not travelling

My final resolution was going to be “take exercise”, and this still matters, but I decided that even more important is the advice to enjoy the time when you’re not travelling.  Without “down-time”, travelling becomes – for most of us at least – a heavier and heavier burden.  It’s so easy, on returning from a work trip, to head straight back into the world of emails and documents and meetings, maybe catching up over the weekend on those items that you didn’t get done because you were away.  Don’t do this – or do it very sparingly, and if you can, claw back the time over the next few days, maybe taking a little longer over a cup of tea or coffee, or stopping yourself from checking work emails one evening.  Spend time with the family[5], hang out with some friends, run a 5k, go to see a film/movie, play some video games, complete that model railway set-up you’ve been working on[7].  Whatever it is that you’re doing, let your mind and your body know that you’re not “on-the-go”, and that it’s time to recover some of that energy and be ready when the next trip starts.  And you know it will, so be refreshed, and be ready.


1 – I’m using the Western (Gregorian) calendar, so this is timely.  If you’re using a different calendar, feel free to adjust.

2 – the list is literally right there if you follow the link.

3 – I considered reversing the order, but the middle one would just stay the same.

4 – I wondered if this is just me, but then remembered the stressed faces of those on aircraft, in airports and checking into hotels, and thought, “no, it’s not”.  And I am informed (frequently) by my family that this is definitely the case for me.

5 – if you have one[6].

6 – and if that’s actually a relaxing activity…

7 – don’t mock: it takes all kinds.