Lots of people in the InfoSec world are at Black Hat and Def Con in Las Vegas this week, and there are more stories out there than you can shake a stick at. I’m on holiday, and although it’s not as if I’m disinterested, I’ve decided to take the whole “not working” thing seriously, and I’m not going to blog about any of them this week.
Your router is your first point of contact with the Internet: how insecure is it?
I’ve always had a problem with the t-shirt that reads “There’s no place like 127.0.0.1”. I know you’re supposed to read it “home”, but to me, it says “There’s no place like localhost”, which just doesn’t have the same ring to it. And in this post, I want to talk about something broader: the entry-point to your home network, which for most people will be a cable or broadband router. The UK and US governments just published advice that “Russia” is attacking routers. This attack will be aimed mostly, I suspect, at organisations (see my previous post What’s a State Actor, and should I care?), rather than homes, but it’s a useful wake-up call for all of us.
What do routers do?
Routers are important: they provide the link between one network (in this case, our home network) and another one (in this case, the Internet, via our ISP’s network). In fact, for most of us, the box we think of as “the router” is doing a lot more than that. The “routing” bit is what it sounds like: it helps computers on your network to find routes to send data to computers outside the network – and vice versa, for when you’re getting data back. But most routers will actually be doing more than that. The other function that many perform is that of a modem. Most of us connect to the Internet via a phone line – whether cable or standard landline – though there is a growing trend for mobile Internet to the home. Where you’re connecting via a phone line, there’s a need to convert the signals that we use for the Internet to something else and then (at the other end) back again. For those of us old enough to remember the old “dial-up” days, that’s what the screechy box next to your computer used to do.
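That route-finding can be sketched as a toy longest-prefix-match lookup – the table, addresses and next-hop names below are all invented for illustration, and real routers do this rather more efficiently in hardware:

```python
import ipaddress

# Toy routing table: destination prefix -> next hop.
# 0.0.0.0/0 is the default route, i.e. "send it out to the ISP".
ROUTES = {
    ipaddress.ip_network("192.168.1.0/24"): "local (deliver directly)",
    ipaddress.ip_network("0.0.0.0/0"): "ISP gateway",
}

def next_hop(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTES if addr in net]
    # Longest-prefix match: prefer the most specific route.
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("192.168.1.42"))   # stays on the home network
print(next_hop("93.184.216.34"))  # heads out via the ISP
```

The same principle – most specific route wins – is what lets your router keep local traffic local while forwarding everything else upstream.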
But routers often do more things as well. Sometimes many more things, including traffic logging, acting as a WiFi access point, providing a VPN for external access to your internal network, parental controls, firewalling and all the rest.
Routers are complex things these days, and although state actors may not be trying to get into them, other people may.
Does this matter, you ask? Well, if other people can get into your system, they have easy access to attacking your laptops, phones, network drives and the rest. They can access and delete unprotected personal data. They can plausibly pretend to be you. They can use your network to host illegal data or launch attacks on others. Basically, all the bad things.
Luckily, routers tend to come set up by your ISP, with the implication being that you can leave them, and they’ll be nice and safe.
So we’re safe, then?
Unluckily, we’re really not.
The first problem is that the ISPs are working on a budget, and it’s in their best interests to provide cheap kit which just does the job: the quality of ISP-provided routers tends to be pretty terrible. Such routers are also high on the list of targets for malicious actors: if they know that a particular router model will be installed in several million homes, there’s a great incentive to find an attack, as an attack on that model will be very valuable to them.
Other problems that arise include:
- slowness to fix known bugs or vulnerabilities – updating firmware can be costly to your ISP, so updates may be slow to arrive (if they arrive at all);
- easily-derived or default admin passwords, meaning that attackers don’t even need to find a real vulnerability – they can just log in.
Measures to take
Here’s a quick list of steps you can take to try to improve the security of your first hop to the Internet. I’ve tried to order them in terms of ease – simplest first. Before you do any of these, however, save the configuration data so that you can bring it back if you need it.
- Passwords – always, always, always change the admin password for your router. It’s probably going to be one that you rarely use, so you’ll want to record it somewhere. This is one of the few times where you might want to consider taping it to the router itself, as long as the router is in a secure place where only authorised people (you and your family) have access.
- Internal admin access only – unless you have very good reasons, and you know what you’re doing, don’t allow machines to administer the router unless they’re on your home network. There should be a setting on your router for this.
- Wifi passwords – once you’ve restricted admin access to your internal network, you need to ensure that the wifi passwords on your network – whether set on your router or elsewhere – are strong. It’s tempting to set a “friendly” password so that it’s easy for visitors to connect to your network, but if it’s guessed by a malicious person who happens to be nearby, the first thing they’ll do will be to look for routers on the network, and as they’re on the internal network they’ll have access to yours (hence why changing the admin password is important).
- Only turn on functions that you understand and need – as I noted above, modern routers have all sorts of cool options. Disregard them. Unless you really need them, and you actually understand what they do, and what the dangers of turning them on are, then leave them off. You’re just increasing your attack surface.
- Buy your own router – replace your ISP-supplied router with a better one. Go to your local computer store and ask for suggestions. You can pay an awful lot, but you can conversely get something fairly cheap that does the job and is more robust, more performant and easier to secure than the one you have at the moment. You may also want to buy a separate modem. Generally setting up your own modem or router is simple, and you can copy the settings from the ISP-supplied one and it will “just work”.
- Firmware updates – I’d love to have this further up the list, but it’s not always easy. From time to time, firmware updates appear for your router. Most routers will check automatically, and may prompt you to update when you next log in. The problem is that failure to update correctly can cause catastrophic results, or lose configuration data that you’ll need to re-enter. But you really do need to consider doing this, and keeping a look-out for firmware updates which fix severe security issues.
- Go open source – there are some great open source router projects out there which allow you to take an existing router and replace all of the firmware/software on it with an open source alternative. You can find a list of at least some of them on Wikipedia – https://en.wikipedia.org/wiki/List_of_router_firmware_projects, and a search on “router” on Opensource.com will open your eyes to a set of fascinating opportunities. This isn’t a step for the faint-hearted, as you’ll definitely void the warranty on your existing router, but if you want to have real control, open source is always the way to go.
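As an aside on the first step above, the “change the admin password” advice can be sanity-checked in a few lines – the list of factory defaults here is purely illustrative, and real attack dictionaries are far, far larger:

```python
# A few credentials of the sort shipped as defaults on consumer routers.
# Illustrative only - attackers work from lists of thousands.
COMMON_DEFAULTS = {"admin", "password", "1234", "letmein", ""}

def is_weak_admin_password(password: str) -> bool:
    """Flag passwords an attacker would try first."""
    return (
        password.lower() in COMMON_DEFAULTS
        or len(password) < 12
        or password.isdigit()
    )

print(is_weak_admin_password("admin"))                         # True - factory default
print(is_weak_admin_password("correct horse battery staple"))  # False
```

If your router’s admin password would fail a check this simple, an attacker doesn’t need a vulnerability at all – they can just log in.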
I’d love to pretend that once you’ve improved the security of your router, all’s well and good, but it’s not: there’s plenty more to worry about on your home network. What about IoT devices in your home (Alexa, Nest, Ring doorbells, smart lightbulbs, etc.)? What about VPNs to other networks? Malicious hosts via Wifi, malicious apps on your children’s phones…?
No – you won’t be safe. But, as we’ve discussed before, although there is no “secure”, that doesn’t mean that we shouldn’t raise the bar and make it harder for the Bad Folks[tm].
1 – I’m simplifying – but read on, we’ll get there.
2 -“Russian State-Sponsored Cyber Actors”
3 – or, in my parents’ case, “the Internet box”, I suspect.
4 – this is one of these cases where I don’t want comments telling me how you have a direct 1 Terabit/s connection to your local backbone, thank you very much.
5 – maybe not the entire family.
6 – your router is now a brick, and you have no access to the Internet.
Any type of even vaguely useful system will hold, manipulate or use data.
I spend a lot of my time on this blog talking about systems, because unless you understand how your systems work as a set of components, you’re never going to be able to protect and manage them. Today, however, I want to talk about security of data – the data in the systems. Any type of even vaguely useful system will hold, manipulate or use data in some way or another, and as I’m interested in security, I think it’s useful to talk about data and data security. I’ve touched on this question in previous articles, but one recent one, What’s a State Actor, and should I care? had a number of people asking me for more detail on some of the points I raised, and as one of them was the classic “C.I.A.” model around data security, I thought I’d start there.
The first point I should make is that the “CIA triad” is sometimes over-used. You really can’t reduce all of information security to confidentiality, integrity and availability – there are a number of other important principles to consider. These three arguably don’t even cover all the issues you’d want to consider around data security – what, for instance, about data correctness and consistency? – but I’ve always found them to be a useful starting point, so as long as we don’t kid ourselves into believing that they’re all we need, they are useful to hold in mind. They are, to use a helpful phrase, “necessary but not sufficient”.
We should also bear in mind that for any particular system, you’re likely to have various types and sets of data, and these types and sets may have different requirements. For instance, a database may store not only key data about, say, museum exhibits, but also data about who can update the key data, and even metadata about that – this might include information about a set of role-based access controls (RBAC), and the security requirements for this will be different to the security requirements for the key data. So, when we’re considering the data security requirements of a system, don’t assume that they will be uniform across all data sets.
Confidentiality is quite an easy one to explain. If you don’t want everybody to be able to see a set of data, then you wish it to be confidential with regards to at least some entities – whether they be people or systems entities, internal or external. There are a number of ways to implement confidentiality, the most obvious being encryption of data, but there are other approaches, of which the easiest is just denying access to data through physical, geographical or other authorisation controls.
When you might care that data is confidentiality-protected: health records, legal documents, credit card details, financial information, firewall rules, system administrator rights, passwords.
When you might not care that data is confidentiality-protected: sports records, examination results, open source code, museum exhibit information, published company financial results.
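Where confidentiality is implemented via authorisation controls rather than encryption, the core check can be as simple as this sketch – the roles and data sets are invented for illustration, in the spirit of the RBAC example above:

```python
# Minimal role-based check: deny read access unless the caller's role
# is explicitly allowed for that data set. Roles and data sets are
# illustrative, not a real schema.
PERMISSIONS = {
    "health_records": {"doctor"},
    "exhibit_info": {"curator", "visitor"},
}

def can_read(role: str, data_set: str) -> bool:
    # Default-deny: an unknown data set grants access to nobody.
    return role in PERMISSIONS.get(data_set, set())

print(can_read("visitor", "exhibit_info"))    # True - public-ish data
print(can_read("visitor", "health_records"))  # False - confidential
```

Note the default-deny stance: data sets not explicitly listed are confidential with respect to everybody, which is usually the safer failure mode.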
Integrity, when used as a term in this context, is slightly different to its standard usage. The property we’re looking for is not the same integrity that we expect from our politicians, but is that data has not been changed from what it should be. Data is often useless unless it can be changed – we want to update information about our museum exhibits from time to time, for instance – but who can change it, and when, are the sort of things we want to control. Equally important may be the type of changes that can be made to it: if I have a careful classification scheme for my Tudor music manuscripts, I don’t want somebody putting in binary data which means nothing to me or our visitors.
I struggled to think of any examples when you wouldn’t want to protect the integrity of your data from at least some entities, as if data can be changed willy-nilly, it seems to be worthless. It did occur to me, however, that as long as you have integrity-protected records of what has been changed, you’re probably OK. That’s the model for some open source projects or collaborative writing endeavours, for example.
[Discursion – Open source projects don’t generally allow you to scribble directly onto the main “approved” store – whose integrity is actually very important. That’s why software projects – proprietary or open source – have for decades used source control systems or versioning systems. One of the success criteria for scaling an open source project is a consensus on integrity management.]
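As a sketch of integrity protection in practice, here’s how you might tag records with an HMAC so that unauthorised changes are detectable – the key handling is deliberately naive, and the manuscript record is invented for illustration:

```python
import hashlib
import hmac

SECRET = b"shared-key"  # illustrative only; manage real keys properly

def tag(record: bytes) -> str:
    """Compute a keyed integrity tag over a record."""
    return hmac.new(SECRET, record, hashlib.sha256).hexdigest()

record = b"Tudor music manuscript, catalogue no. 17"
record_tag = tag(record)

# Later: verify the record hasn't been changed since it was tagged.
print(hmac.compare_digest(tag(record), record_tag))         # True - intact
print(hmac.compare_digest(tag(b"binary junk"), record_tag))  # False - tampered
```

Without the key, an attacker who changes the record can’t produce a matching tag, so the binary junk in my manuscript catalogue gets spotted.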
Availability is the easiest of the triad to ignore when you’re designing a system. When you create data, it’s generally useless unless the entities that need it can get to it when they need it. That doesn’t mean that all systems need to have 100% up-time, or even that particular data sets need to be available for 100% of the up-time of the system, but when you’re designing a system, you need to decide what’s going to be appropriate, and how to cope with degradation. Why degradation? Because one of the easiest ways to affect the availability of data is to slow down access to it – as described in another recent post, What’s your availability? DoS attacks and more. If I’m using a mobile app to view information about museum exhibits in real-time, and it takes five minutes for me to access the description, then that’s no good. On the other hand, if there’s some degradation of the service, and I can only access the first paragraph of the description, with an apology for the inconvenience and a link to other information, that might be acceptable. From a different point of view, if I notice that somebody is attacking my museum system, but I can’t get into it with administrative access to lock it down or remove their access, that’s definitely bad.
As with integrity protection, it’s difficult to think of examples when availability protection isn’t important, but availability isn’t necessarily a binary condition: it may vary from time to time.
Although they’re not perfect descriptions of all the properties you need to consider when designing in data security, confidentiality, integrity and availability should at least cause you to start thinking about what your data is for, how it should be accessed, how it should be changed, and by whom.
1 – I just know that somebody’s going to come up with a counter-example.
2 – And therefore assume that you, the reader, are interested.
3 – as a nested example, which is quite nice, as we’re talking about metadata.
4 – And far too rarely get, it seems.
5 – Not a rude phrase, even if it sounds like it should be. Look it up if you don’t believe me.
Resolutions for this New Year:
- DNS (preferably DNSSEC)
- 1600dpi (mouse)
- WUXGA (1920×1200)
- I was planning to add an audio resolution, but I’m dithering a bit on this one
I’ll get my coat.
Happy New Year!
… what’s the fun in having an Internet if you can’t, well, “net” on it?
Sometimes – and I hope this doesn’t come as too much of a surprise to my readers – sometimes, there are bad people, and they do bad things with computers. These bad things are often about stopping the good things that computers are supposed to be doing* from happening properly. This is generally considered not to be what you want to happen**.
For this reason, when we architect and design systems, we often try to enforce isolation between components. I’ve had a couple of very interesting discussions over the past week about how to isolate various processes from each other, using different types of isolation, so I thought it might be interesting to go through some of the different types of isolation that we see out there. For the record, I’m not an expert on all different types of system, so I’m going to talk some history****, and then I’m going to concentrate on Linux*****, because that’s what I know best.
In the beginning
In the beginning, computers didn’t talk to one another. It was relatively difficult, therefore, for the bad people to do their bad things unless they physically had access to the computers themselves, and even if they did the bad things, the repercussions weren’t very widespread because there was no easy way for them to spread to other computers. This was good.
Much of the conversation below will focus on how individual computers act as hosts for a variety of different processes, so I’m going to refer to individual computers as “hosts” for the purposes of this post. Isolation at this level – host isolation – is still arguably the strongest type available to us. We typically talk about “air-gapping”, where there is literally an air gap – no physical network connection – between one host and another, but we also mean no wireless connection either. You might think that this is irrelevant in the modern networking world, but there are classes of usage where it is still very useful, the most obvious being for Certificate Authorities, where the root certificate is so rarely accessed – and so sensitive – that there is good reason not to connect the host on which it is stored to any other computer, and to use other means, such as smart-cards, a printer, or good old pen and paper to transfer information from it.
And then came networks. These allow hosts to talk to each other. In fact, by dint of the Internet, pretty much any host can talk to any other host, given a gateway or two. So along came network isolation to try to stop that. Network isolation is basically trying to re-apply host isolation, after people messed it up by allowing hosts to talk to each other******.
Later, some smart alec came up with the idea of allowing multiple processes to be on the same host at the same time. The OS and kernel were trusted to keep these separate, but sometimes that wasn’t enough, so then virtualisation came along, to try to convince these different processes that they weren’t actually executing alongside anything else, but had their own environment to do their own thing. Sadly, the bad processes realised this wasn’t always true and found ways to get around this, so hardware virtualisation came along, where the actual chips running the hosts were recruited to try to convince the bad processes that they were all alone in the world. This should work, only a) people don’t always program the chips – or the software running on them – properly, and b) people decided that despite wanting to let these processes run as if they were on separate hosts, they also wanted them to be able to talk to processes which really were on other hosts. This meant that networking isolation needed to be applied not just at the host level, but at the virtual host level, as well*******.
A step backwards?
Now, in a move which may seem retrograde, it occurred to some people that although hardware virtualisation seemed like a great plan, it was also somewhat of a pain to administer, and introduced inefficiencies that they didn’t like: e.g. using up lots of RAM and lots of compute cycles. These were often the same people who were of the opinion that processes ought to be able to talk to each other – what’s the fun in having an Internet if you can’t, well, “net” on it? Now we, as security folks, realise how foolish this sounds – allowing processes to talk to each other just encourages the bad people, right? – but they won the day, and containers came along. Containers allow lots of processes to be run on a host in a lightweight way, and rely on kernel controls – mainly namespaces – to ensure isolation********. In fact, there’s more you can do: you can use techniques like system call trapping to intercept the things that processes are attempting and stop them if they look like the sort of things they shouldn’t be attempting*********.
And, of course, you can write frameworks at the application layer to try to control what the different components of an application system can do – that’s basically the highest layer, and you’re just layering applications on applications at this point.
So here’s where I get to the chance to mention one of my favourite topics: systems. As I’ve said before, by “system” here I don’t mean an individual computer (hence my definition of host, above), but a set of components that work together. The thing about isolation is that it works best when applied to a system.
Let me explain. A system, at least as I’d define it for the purposes of this post, is a set of components that work together but don’t have knowledge of external pieces. Most importantly, they don’t have knowledge of the layers below them. Systems may impose isolation on applications at higher layers, because they provide abstractions which allow those higher layers to ignore them, but by virtue of that, systems aren’t – or shouldn’t be – aware of the layers below them.
A simple description of the layers – and it doesn’t always hold, partly because networks are tricky things, and partly because there are various ways to assemble the stack – may look like this.
- Application (top layer)
- Container
- System trapping
- Kernel
- Hardware virtualisation
- Networking
- Host (bottom layer)
As I intimated above, this is a (gross) simplification, but the point holds that the basic rule is that you can enforce isolation upwards in the layers of the stack, but you can’t enforce it downwards. Lower layer isolation is therefore generally stronger than higher layer isolation. This shouldn’t come as a huge surprise to anyone who’s used to considering network stacks – the principle is the same – but it’s helpful to lay out and explain the principles from time to time, and the implications for when you’re designing and architecting.
Because if you are considering trust models and are defining trust domains, you need to be very, very careful about defining whether – and how – these domains spread across the layer boundaries. If you miss a boundary out when considering trust domains, you’ve almost certainly messed up, and need to start again. Trust domains are important in this sort of conversation because the boundaries between trust domains are typically where you want to be able to enforce and police isolation.
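The “isolation enforces upwards only” rule can be captured in a few lines – this is a toy model of the stack above, not a real policy engine:

```python
# The layer stack from the post, top (index 0) to bottom.
LAYERS = [
    "application",
    "container",
    "system trapping",
    "kernel",
    "hardware virtualisation",
    "networking",
    "host",
]

def can_enforce_isolation(enforcer: str, target: str) -> bool:
    """A layer can police only layers stacked above it (lower index)."""
    return LAYERS.index(enforcer) > LAYERS.index(target)

print(can_enforce_isolation("kernel", "container"))    # True - enforcing upwards
print(can_enforce_isolation("application", "kernel"))  # False - no reaching down
```

Modelling trust domains this explicitly is one way to catch the mistake described below: expecting a higher layer to reach into the control plane of a layer beneath it.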
The conversations I’ve had recently basically ran into problems because what people really wanted to do was apply lower layer isolation from layers above which had no knowledge of the bottom layers, and no way to reach into the control plane for those layers. We had to remodel, and I think that we came up with some sensible approaches. It was as I was discussing these approaches that it occurred to me that it would have been a whole lot easier to discuss them if we’d started out with a discussion of layers: hence this blog post. I hope it’s useful.
*although they may well not be, because, as I’m pretty sure I’ve mentioned before on this blog, the people trying to make the computers do the good things quite often get it wrong.
**unless you’re one of the bad people. But I’m pretty sure they don’t read this blog, so we’re OK***.
***if you are a bad person, and you read this blog, would you please mind pretending, just for now, that you’re a good person? Thank you. It’ll help us all sleep much better in our beds.
****which I’m absolutely going to present in an order that suits me, and generally neglect to check properly. Tough.
*****s/Linux/GNU Linux/g; Natch.
******for some reason, this seemed like a good idea at the time.
*******for those of you who are paying attention, we’ve got to techniques like VXLAN and SR-IOV.
********kernel purists will try to convince you that there’s no mention of containers in the Linux kernel, and that they “don’t really exist” as a concept. Try downloading the kernel source and doing a search for “container” if you want some ammunition to counter such arguments.
*********this is how SELinux works, for instance.
Don’t increase the technical complexity of a process just because you’ve got a cool technology that you could throw at it.
I’m attending Open Source Summit 2017* this week in L.A., and went to an interesting “fireside chat” on blockchain moderated by Brian Behlendorf of Hyperledger, with Jairo*** of Wipro and Siva Kannan of Gem. It was a good discussion – fairly basic in terms of the technical side, and some discussion of identity in blockchain – but there was one particular part of the session that I found interesting and which I thought was worth some further thought. As in my previous post on this topic, I’m going to conflate blockchain with Distributed Ledger Technologies (DLTs) for simplicity.
Siva presented three questions to ask when considering whether a process is a good candidate for moving to the blockchain. There’s far too much bandwagon-jumping around blockchain: people assume that all processes should be blockchained. I was therefore very pleased to see this come up as a topic. I think it’s important to spend some time looking at when it makes sense to use blockchains, and when it doesn’t. To paraphrase Siva’s points:
- is the process time-consuming?
- is the process multi-partite, made up of multiple steps?
- is there a trust problem within the process?
I liked these as a starting point, and I was glad that there was a good conversation around what a trust problem might mean. I’m not quite sure it went far enough, but there was time pressure, and it wasn’t the main thrust of the conversation. Let’s spend some time looking at why I think the points above are helpful as tests, and then I’m going to add another.
Is the process time-consuming?
The examples that were presented were two of the classic ones used when we’re talking about blockchain: inter-bank transfer reconciliation and healthcare payments. In both cases, there are multiple parties involved, and the time it takes for completion seems completely insane for those of us used to automated processes: in the order of days. This is largely because the processes are run by central authorities when, from the point of view of the process itself, the transactions are actually between specific parties, and don’t need to be executed by those authorities, as long as everybody trusts that the transactions have been performed fairly. More about the trust part below.
Is the process multi-partite?
If the process is simple, and requires a single step or transaction, there’s very little point in applying blockchain technologies to it. The general expectation for multi-partite processes is that they involve multiple parties, as well as multiple parts. If there are only a few steps in a transaction, or very few parties involved, then there are probably easier technological solutions for it. Don’t increase the technical complexity of a process just because you’ve got a cool technology that you can throw at it*****.
Is there a trust problem within the process?
Above, I used the phrase “as long as everybody trusts that the transactions have been performed fairly”******. There are three interesting words in this phrase*******: “everybody”, “trusts” and “fairly”. I’m going to go through them one by one:
- everybody: this might imply full transparency of all transactions to all actors in the system, but we don’t need to assume that – that’s part of the point of permissioned blockchains. It may be that only the actors involved in the particular process can see the details, whereas all other actors are happy that they have been completed correctly. In fact, we don’t even need to assume that the actors involved can see all the details: secure multi-party computation means that only restricted amounts of information need to be exposed********.
- trusts: I’ve posted on the topic of trust before, and this usage is a little less tight than I’d usually like. However, the main point is that the parties have sufficient belief that the process meets their expectations to be willing to accept its results.
- fair: as anyone with children knows, this is a loaded word. In this context, I mean “according to the rules agreed by the parties involved – which may include parties not included in the transaction, such as a regulatory body – and encoded into the process”.
This point about encoding rules into a process is a really, really major one, to which I intend to return at a later date, but for now let’s assume (somewhat naively, admittedly) that this is doable and risk-free.
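To make the “encoded into the process” idea concrete, here’s a minimal hash-chain sketch – not a real DLT, but it shows the core property: each record is linked to the hash of the one before it, so any retrospective change breaks every later link and is detectable by all parties. The transactions are invented for illustration:

```python
import hashlib
import json

def chain(transactions):
    """Build a toy ledger where each entry commits to its predecessor."""
    ledger, prev = [], "0" * 64
    for tx in transactions:
        entry = {"tx": tx, "prev": prev}
        prev = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = prev
        ledger.append(entry)
    return ledger

def verify(ledger):
    """Recompute every link; any tampering breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        expect = hashlib.sha256(
            json.dumps({"tx": entry["tx"], "prev": prev},
                       sort_keys=True).encode()).hexdigest()
        if expect != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

ledger = chain(["A pays B 10", "B pays C 4"])
print(verify(ledger))                # True - untampered
ledger[0]["tx"] = "A pays B 1000"    # a party tries to rewrite history
print(verify(ledger))                # False - the rewrite is detected
```

A real blockchain adds consensus, signatures and distribution on top, but this is the integrity mechanism that lets everybody trust that transactions haven’t been quietly altered after the fact.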
One more rule: is there benefit to all the stakeholders?
This was a rule that I suggested, and which caused some discussion. It seems to me that there are some processes where a move to a blockchain may benefit certain parties, but not others. For example, the amount of additional work required by a small supplier of parts to a large automotive manufacturer might be such that there’s no obvious benefit to the supplier, however much benefit is manifestly applicable to the manufacturer. At least one of the panellists was strongly of the view that there will always be benefit to all parties, but I’m not convinced that the impact of implementation will always outweigh such benefit.
Conclusion: blockchain is cool, but…
… it’s not a perfect fit for every process. Organisations – and collections of organisations – should always carefully consider how good a fit blockchain or DLT may be before jumping to a decision which may be more costly and less effective than they might expect from the hype.
*was “LinuxCon and ContainerCon”**.
***he has a surname, but I didn’t capture it in my notes****.
****yes, I write notes!
*****this is sometimes referred to as the “hammer problem” (if all you’ve got is a hammer, then everything looks like a nail).
******actually, I didn’t: I missed out the second “as”, and so had to correct it in the first appearance of the phrase.
*******in the context of this post. They’re all interesting in their own way, I know.
********this may sound like magic. Given my grasp of the mathematics involved, I have to admit that it might as well be*********.
*********thank you Arthur C. Clarke.
“I think I’ve just been got by a phishing email…”
I attended Black Hat USA a few weeks ago in Las Vegas*. I also spent some time at B-Sides LV and DEFCON. These were my first visits to all of them, and they were interesting experiences. There were some seriously clever people talking about some seriously complex things, some of which were way beyond my level of knowledge. There were also some seriously odd things and odd people**. There was one particular speaker who did a great job, and whose keynote made me think: Alex Stamos, CSO of Facebook.
The reason that I want to talk about Stamos’ talk is that I got a phone call a few minutes back from a member of my family. It was about his iCloud account, which he was having problems accessing. Now: I don’t use Apple products***, so I wasn’t able to help. But the background was the interesting point. I’d had a call last week from the same family member. He’s not … techno-savvy. You know the one I’m talking about: that family member. He was phoning me in something of a fluster.
“I think I’ve just been got by a phishing email,” he started.
Now: this is a win. Somebody – whether me or the media – has got him to understand what a phishing email is. I’m not saying he could spell it correctly, mind you – or that he’s not going to get hit by one – but at least he knows.
“OK,” I said.
“It said that it was from Apple, and if I didn’t change my password within 72 hours, I’d lose all of my data,” he explained.
Ah, I thought, one of those.
“So I clicked on the link and changed my password. But I realised after about 5 minutes and changed it again,” he continued.
“Where did you change it that time?” I asked.
“On the Apple site.”
“Then you’re probably OK.” I gave him some advice on things to check, and suggested ringing Apple and maybe his bank to let them know. I also gave him the Stern Talk[tm] that we’ve all given users – the one about never clicking through a link on an email, and always entering it by hand.***** He called me back a few hours later to tell me that the guy he’d spoken to at Apple had reassured him that his bank details weren’t in danger, and that a subsequent notification he’d got that someone was trying to use his account from an unidentified device was a good sign, because it meant that the extra layers of security that Apple had put in place were doing their job. He was significantly (and rightly) relieved.
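Incidentally, part of the Stern Talk[tm] – checking where a link really points before trusting it – is mechanical enough to sketch. The expected domain and the example URLs below are purely illustrative:

```python
from urllib.parse import urlparse

def looks_suspicious(link: str, expected_domain: str = "apple.com") -> bool:
    """Flag links whose real host isn't the domain the email claims."""
    host = urlparse(link).hostname or ""
    # Genuine: the host is the expected domain or a subdomain of it.
    return not (host == expected_domain
                or host.endswith("." + expected_domain))

print(looks_suspicious("https://appleid.apple.com/reset"))
# False - genuine subdomain

print(looks_suspicious("https://apple.com.account-verify.example/reset"))
# True - "apple.com" is just the start of an attacker's hostname
```

The second example is the classic trick: the familiar name appears at the front of the hostname, but the part that matters – the registered domain at the end – belongs to someone else entirely. Which is exactly why the advice remains: don’t click, type it in by hand.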
“So what has this to do with Stamos’ keynote?” you’re probably asking. Well, Stamos talked about how many of the attacks and vulnerabilities that we worry about much of the time – zero days, privilege escalations, network segment isolation – make up the tiniest tip of the huge pyramid of security issues that affect our users. Most of the problems are around misuse of accounts or services. And most of the users in the world aren’t uberhackers or even script kiddies – they’re not even people like those in the audience****** – but people with sub-$100******* smartphones.
And he’s right. We need to think about these people, too. I’m not saying that we shouldn’t worry about all the complex and scary technical issues that we’re paid to understand, fix and mitigate. They are important, and if we don’t fix them, we’re in for a world of pain. But our jobs – what we get paid for – should also include thinking about the other people: people who Facebook and Apple make a great deal of money from, and who they quite rightly care about. The question is: do we, the rest of the industry? And how are we going to know that we’re thinking like a 68 year old woman in India or a 15 year old boy in Brazil? (Hint: part of the answer is around diversity in our industry.)
Apple didn’t do too bad a job, I think – though my family member is still struggling with the impact of the password reset. And the organisation I talked about in my previous post on the simple things we should do absolutely didn’t. So, from now on, I’m going to try to think a little harder about what impact the recommendations, architectures and designs I come up with might have on the “hidden users” – not the sysadmins, not the developers, not the expert users, but people like my family members. We need to think about security for them just as much as about security for people like us.
*weird place, right? And hot. Too hot.
**I walked out of one session at DEFCON after six minutes as it was getting more and more difficult to resist the temptation to approach the speaker at the podium and punch him on the nose.
***no, I’m not going to explain. I just don’t: let’s leave it at that for now, OK? I’m not judging you if you do.****
****of course I’m judging you. But you’ll be fine.
*****clearly whoever had explained about phishing attacks hadn’t done quite as good a job as I’d hoped.
******who, he seemed to assume, were mainly Good Guys & Gals[tm].
*******approximately sub-€85 or sub-£80 at time of going to press********: please substitute your favoured currency here and convert as required.
********I’m guessing around 0.0000000000000001 bitcoins. I don’t follow the conversion rate, to be brutally honest.