Defending our homes

Your router is your first point of contact with the Internet: how insecure is it?

I’ve always had a problem with the t-shirt that reads “There’s no place like 127.0.0.1”. I know you’re supposed to read it “home”, but to me, it says “There’s no place like localhost”, which just doesn’t have the same ring to it. And in this post, I want to talk about something broader: the entry-point to your home network, which for most people will be a cable or broadband router[1].  The UK and US governments just published advice that “Russia”[2] is attacking routers.  This attack will be aimed mostly, I suspect, at organisations (see my previous post What’s a State Actor, and should I care?), rather than homes, but it’s a useful wake-up call for all of us.

What do routers do?

Routers are important: they provide the link between one network (in this case, our home network) and another one (in this case, the Internet, via our ISP’s network).  In fact, for most of us, the box we think of as “the router”[3] is doing a lot more than that.  The “routing” bit is what it sounds like: it helps computers on your network to find routes to send data to computers outside the network – and vice-versa, for when you’re getting data back.  But most routers will actually be doing more than that.  The other function that many will be performing is that of a modem.  Most of us[4] connect to the Internet via a phone line – whether cable or standard landline – though there is a growing trend for mobile Internet to the home.  Where you’re connecting via a phone line, there’s a need to convert the signals that we use for the Internet to something else and then (at the other end) back again.  For those of us old enough to remember the old “dial-up” days, that’s what the screechy box next to your computer used to do.

But routers often do more things as well.  Sometimes many more things, including traffic logging, acting as a WiFi access point, providing a VPN for external access to your internal network, parental controls, firewalling and all the rest.

Routers are complex things these days, and although state actors may not be trying to get into them, other people may.

Does this matter, you ask?  Well, if other people can get into your system, they have easy access to attacking your laptops, phones, network drives and the rest.  They can access and delete unprotected personal data.  They can plausibly pretend to be you.  They can use your network to host illegal data or launch attacks on others.  Basically, all the bad things.

Luckily, routers tend to come set up by your ISP, with the implication being that you can leave them, and they’ll be nice and safe.

So we’re safe, then?

Unluckily, we’re really not.

The first problem is that the ISPs are working on a budget, and it’s in their best interests to provide cheap kit which just does the job.  The quality of ISP-provided routers tends to be pretty terrible.  Such routers are also high on the list of targets for malicious actors: if they know that a particular router model will be installed in several million homes, there’s a great incentive to find an attack, as an attack on that model will be very valuable to them.

Other problems that arise include:

  • slowness to fix known bugs or vulnerabilities – updating firmware can be costly to your ISP, so they may be slow to arrive (if they do at all);
  • easily-derived or default admin passwords, meaning that attackers don’t even need to find a real vulnerability – they can just log in.

 

Measures to take

Here’s a quick list of steps you can take to try to improve the security of your first hop to the Internet.  I’ve tried to order them in terms of ease – simplest first.  Before you do any of these, however, save the configuration data so that you can bring it back if you need it.

  1. Passwords – always, always, always change the admin password for your router.  It’s probably going to be one that you rarely use, so you’ll want to record it somewhere.  This is one of the few times where you might want to consider taping it to the router itself, as long as the router is in a secure place where only authorised people (you and your family[5]) have access.
  2. Internal admin access only – unless you have very good reasons, and you know what you’re doing, don’t allow machines to administer the router unless they’re on your home network.  There should be a setting on your router for this.
  3. Wifi passwords – once you’ve done 2., you need to ensure that the wifi passwords on your network – whether set on your router or elsewhere – are strong.  It’s tempting to set a “friendly” password so that it’s easy for visitors to connect to your network, but if it’s guessed by a malicious person who happens to be nearby, the first thing they’ll do is look for the router – and, since they’re now on the internal network, they’ll have access to it (which is why 1. is important).  There’s a small sketch for generating strong passwords after this list.
  4. Only turn on functions that you understand and need – as I noted above, modern routers have all sorts of cool options.  Disregard them.  Unless you really need them, and you actually understand what they do, and what the dangers of turning them on are, then leave them off.  You’re just increasing your attack surface.
  5. Buy your own router – replace your ISP-supplied router with a better one.  Go to your local computer store and ask for suggestions.  You can pay an awful lot, but you can conversely get something fairly cheap that does the job and is more robust, performant and easier to secure than the one you have at the moment.  You may also want to buy a separate modem.  Generally, setting up your own modem or router is simple: you can copy the settings from the ISP-supplied one and it will “just work”.
  6. Firmware updates – I’d love to have this further up the list, but it’s not always easy.  From time to time, firmware updates appear for your router.  Most routers will check automatically, and may prompt you to update when you next log in.  The problem is that a failed update can have catastrophic results[6], or lose configuration data that you’ll need to re-enter.  But you really do need to consider doing this, and to keep a look-out for firmware updates which fix severe security issues.
  7. Go open source – there are some great open source router projects out there which allow you to take an existing router and replace all of the firmware/software on it with an open source alternative.  You can find a list of at least some of them on Wikipedia – https://en.wikipedia.org/wiki/List_of_router_firmware_projects, and a search on “router” on Opensource.com will open your eyes to a set of fascinating opportunities.  This isn’t a step for the faint-hearted, as you’ll definitely void the warranty on your existing router, but if you want to have real control, open source is always the way to go.
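
As a tiny illustration of points 1. and 3., here’s a minimal sketch – plain Python, standard library only, nothing router-specific assumed – of generating strong passwords and passphrases rather than picking something “friendly”:

```python
# Minimal sketch: generate a strong router admin password and a WiFi passphrase.
# Standard library only; adjust lengths and wordlist to taste.
import secrets
import string

def random_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

def random_passphrase(words: list[str], count: int = 5) -> str:
    """Return a passphrase of `count` words, easier to type on a phone."""
    return "-".join(secrets.choice(words) for _ in range(count))

if __name__ == "__main__":
    print("Router admin password:", random_password())
    # A real wordlist (e.g. the EFF diceware list) would be better than this stub.
    sample_words = ["correct", "horse", "battery", "staple", "teapot", "orbit"]
    print("WiFi passphrase:      ", random_passphrase(sample_words))
```

A wordlist-based passphrase is often the better trade-off for WiFi, since visitors will be typing it into phones.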

Other issues…

I’d love to pretend that once you’ve improved the security of your router, all’s well and good on your home network, but it’s not.  What about the IoT devices in your home (Alexa, Nest, Ring doorbells, smart lightbulbs, etc.)?  What about VPNs to other networks?  Malicious hosts via Wifi, malicious apps on your children’s phones…?

No – you won’t be safe.  But, as we’ve discussed before, although there is no “secure”, that doesn’t mean that we shouldn’t raise the bar and make it harder for the Bad Folks[tm].

 


1 – I’m simplifying – but read on, we’ll get there.

2 – “Russian State-Sponsored Cyber Actors”

3 – or, in my parents’ case, “the Internet box”, I suspect.

4 – this is one of these cases where I don’t want comments telling me how you have a direct 1 Terabit/s connection to your local backbone, thank you very much.

5 – maybe not the entire family.

6 – your router is now a brick, and you have no access to the Internet.

Do you know what’s lurking on your system?

Every utility, every library, every executable … increases your attack surface.

I had a job once which involved designing hardening procedures for systems that we were going to be using for security-related projects.  It was fascinating.  This was probably 15 years ago, and not only were guides a little thin on the ground for the Linux distribution I was using, but what we were doing was quite niche.  At first, I think I’d assumed that I could just write a script to close down a few holes that originated from daemons[1] that had been left running for no good reason: httpd, sendmail, stuff like that.  It did involve that, of course, but then I realised there was more to do, and started to dive down the rabbit hole.
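
(If I were doing that first pass today, on a modern systemd-based distribution, a rough sketch of the starting point might look something like this – which services you then disable is, of course, entirely dependent on what the system is for.)

```python
# Rough sketch: list the services enabled to start at boot on a systemd-based
# Linux system, so you can review which ones you actually need.
import subprocess

def enabled_services() -> list[str]:
    out = subprocess.run(
        ["systemctl", "list-unit-files", "--type=service",
         "--state=enabled", "--no-legend", "--no-pager"],
        capture_output=True, text=True, check=True,
    ).stdout
    # First column of each row is the unit name, e.g. "sshd.service".
    return [line.split()[0] for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    for service in enabled_services():
        print(service)
```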

So, after those daemons, I looked at users and groups.  And then at file systems, networking, storage.  I left the two scariest pieces to last, for different reasons.  The first was the kernel.  I ended up hand-crafting a kernel, removing anything that I thought it was unlikely we’d need, and then restarting several times when I discovered that the system wouldn’t boot because the things I thought I understood were more … esoteric than I’d realised.  I’m not a kernel developer, and this was a salutary lesson in quite how skilled those folks are.  At least, at the time I was doing it, there was less code, and fewer options, than there are today.  On the other hand, I was having to hack back to a required state, and there are more cut-down kernels and systems to start with than there were back then.

The other piece that I left for last was pruning the installed Operating System applications and associated utilities.  Again, there are cut-down options that are easier to use now than then, but I also had some odd requirements – I believe that we needed Java, for instance, which has, or had… well, let’s say a lot of dependencies.  Most modern Linux distributions[3] start off by installing lots of pieces so that you can get started quickly, without having to worry about trying to work out dependencies for every piece of external software you want to run.

This is understandable, but we need to realise when we do this that we’re making a usability/security trade-off[5].  Every utility, every library, every executable that you add to a system increases your attack surface, and increases the likelihood of vulnerabilities.

The problem isn’t just that you’re introducing vulnerabilities, but that once they’re there, they tend to stay there.  Not just in code that you need, but, even worse, in code that you don’t need.  It’s a rare, but praiseworthy hacker[6] who spends time going over old code removing dependencies that are no longer required.  It’s a boring, complex task, and it’s usually just easier to leave the cruft[7] where it is and ship a slightly bigger package for the next release.

Sometimes, code is refactored and stripped: most frequently, security-related code. This is a very Good Thing[tm], but it turns out that it’s far from sufficient.  The reason I’m writing this post is because of a recent story in The Register about the “beep” command.  This command used the little speaker that was installed on most PC-compatible motherboards to make a little noise.  It was a useful little utility back in the day, but is pretty irrelevant now that most motherboards don’t ship with the relevant hardware.  The problem[8] is that installing and using the beep command on a system allows information to be leaked to users who lack the relevant permissions.  This can be a very bad thing.  There’s a good, brief overview here.

Now, “beep” isn’t installed by default on the distribution I’m using on the system on which I’m writing this post (Fedora 27), though it’s easily installable from one of the standard repositories that I have enabled.  Something of a relief, though it’s not a likely attack vector for this machine anyway.

What, though, do I have installed on this system that is vulnerable, and which I’d never have thought to check?  Forget all of the kernel parameters which I don’t need turned on, but which have been enabled by the distribution for ease of use across multiple systems.  Forget the apps that I’ve installed and use every day.  Forget, even, the apps that I installed once to try, and then neglected to remove.  What about the apps that I didn’t even know were there, and which I never realised might have an impact on the security posture of my system?  I don’t know, and have little way to find out.

This system doesn’t run business-critical operations.  When I first got it and installed the Operating System, I decided to err towards usability, and to be ready to trash[9] it and start again if I had problems with compromise.  But that’s not the case for millions of other systems out there.  I urge you to consider what you’re running on a system, what’s actually installed on it, and what you really need.  Patch what you need, remove what you don’t.   It’s time for a Spring clean[10].
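
If you want somewhere to start that Spring clean, here’s a rough sketch – assuming an RPM-based distribution such as the Fedora system mentioned above – which lists what’s installed, newest first, so you can at least see what has accumulated:

```python
# Sketch: list installed packages, most recently installed first, on an
# RPM-based distribution (e.g. Fedora) - a starting point for a clean-out.
import subprocess
from datetime import datetime

def installed_packages() -> list[tuple[datetime, str]]:
    out = subprocess.run(
        ["rpm", "-qa", "--queryformat", "%{INSTALLTIME} %{NAME}\n"],
        capture_output=True, text=True, check=True,
    ).stdout
    packages = []
    for line in out.splitlines():
        timestamp, name = line.split(maxsplit=1)
        packages.append((datetime.fromtimestamp(int(timestamp)), name))
    return sorted(packages, reverse=True)

if __name__ == "__main__":
    for when, name in installed_packages():
        print(f"{when:%Y-%m-%d}  {name}")
```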


1 – I so want to spell this word dæmons, but I think that might be taking my Middle English obsession too far[2].

2 – I mentioned that I studied Middle English, right?

3 – I’m most interested in (GNU) Linux here, as I’m a strong open source advocate and because it’s the Operating System that I know best[4].

4 – oh, and I should disclose that I work for Red Hat, of course.

5 – as with so many things in our business.

6 – the good type, or I’d have said “cracker”, probably.

7 – there’s even a word for it, see?

8 – beyond a second order problem that a suggested fix seems to have made the initial problem worse…

9 – physically, if needs be.

10 – In the Northern Hemisphere, at least.

Confessions of an auditor

Moving to a positive view of security auditing.

So, right up front, I need to admit that I’ve been an auditor.  A security auditor.  Professionally.  It’s time to step up and be proud, not ashamed.  I am, however, dialled into a two-day workshop on compliance today and tomorrow, so this is front and centre of my thinking right at the moment.  Hence this post.

Now, I know that not everybody has an entirely … positive view of auditors.  And particularly security auditors[1].  They are, for many people, a necessary evil[2].  The intention of this post is to try to convince you that auditing[3] is, or can and should be, a Force for Good[tm].

The first thing to address is that most people have auditors in because they’re told that they have to, and not because they want to.  This is often – but not always – associated with regulatory requirements.  If you’re in the telco space, the government space or the financial services space, for instance, there are typically very clear requirements for you to adhere to particular governance frameworks.  In order to be compliant with the regulations, you’re going to need to show that you meet the frameworks, and in order to do that, you’re going to need to be audited.  For pretty much any sensible auditing regime, that’s going to require an external auditor who’s not employed by or otherwise accountable to the organisation being audited.

The other reason that audit may happen is when a C-level person (e.g. the CISO) decides that they don’t have a good enough idea of exactly what the security posture of their organisation – or specific parts of it – is like, so they put in place an auditing regime.  If you’re lucky, they choose an industry-standard regime, and/or one which is actually relevant to your organisation and what it does.  If you’re unlucky, … well, let’s not go there.

I think that both of the reasons above – for compliance and to better understand a security posture – are fairly good ones.  But they’re just the first order reasons, and the second order reasons – or the outcomes – are where it gets interesting.

Sadly, one of the key outcomes of auditing seems to be blame.  It’s always nice to have somebody else to blame when things go wrong, and if you can point to an audited system which has been compromised, or a system which has failed an audit, then you can start pointing fingers.  I’d like to move away from this.

Some positive suggestions

What I’d like to see would be a change in attitude towards auditing.  I’d like more people and organisations to see security auditing as a net benefit to their organisations, their people, their systems and their processes[4].  This requires some changes to thinking – changes which many organisations have, of course, already made, but which more could make.  These are the second order reasons that we should be considering.

  1. Stop tick-box[5] auditing.  Too many audits – particularly security audits – seem to be solely about ticking boxes.  “Does this product or system have this feature?”  Yes or no.  That may help you pass your audit, but it doesn’t give you a chance to go further, and think about really improving what’s going on.  In order to do this, you’re going to need to find auditors who actually understand the systems they’re looking at, and who are trained to do something beyond tick-box auditing.  I was lucky enough to be encouraged to audit in this broader way, alongside a number of tick-boxes, and I know that the people I was auditing always preferred this approach – they told me that they had to do the other type too, and hated it.
  2. Employ internal auditors.  You may not be able to get approval for actual audit sign-off if you use internal auditors, so they may have to operate alongside – or before – your external auditors.  But if you find and train people who can do this job internally, you get people with deep knowledge (or deeper knowledge, at least, than the external auditors) of your systems, people and processes looking at what they all do, and how they can be improved.
  3. Look to be proactive, not just reactive.  Don’t just pick or develop products, applications and systems to meet audit, and don’t wait until the time of the audit to see whether they’re going to pass.  Auditing should be about measuring and improving your security posture, so think about posture, and how you can improve it instead.
  4. Use auditing for risk-based management.  Last, but not least – far from least – think about auditing as part of your risk-based management planning.  An audit shouldn’t be something you do, pass[6] and then put away in a drawer.  You should be pushing the results back into your governance and security policy model, monitoring, validation and enforcement mechanisms.  Auditing should be about building, rather than destroying – however often it feels like it.

 


1 – you may hear phrases like “a special circle of Hell is reserved for…”.

2 – in fact, many other people might say that they’re an unnecessary evil.

3 – if not auditors.

4 – I’ve been on holiday for a few days: I’ve maybe got a little over-optimistic while I’ve been away.

5 – British usage alert: for US readers, you’ll probably call this a “check-box”.

6 – You always pass every security audit first time, right?

The Other CIA: Confidentiality, Integrity and Availability

Any type of even vaguely useful system will hold, manipulate or use data.

I spend a lot of my time on this blog talking about systems, because unless you understand how your systems work as a set of components, you’re never going to be able to protect and manage them.  Today, however, I want to talk about security of data – the data in the systems.  Any type of even vaguely useful system will hold, manipulate or use data in some way or another[1], and as I’m interested[2] in security, I think it’s useful to talk about data and data security.  I’ve touched on this question in previous articles, but one recent one, What’s a State Actor, and should I care? had a number of people asking me for more detail on some of the points I raised, and as one of them was the classic “C.I.A.” model around data security, I thought I’d start there.

The first point I should make is that the “CIA triad” is sometimes over-used.  You really can’t reduce all of information security to confidentiality, integrity and availability – there are a number of other important principles to consider.  These three arguably don’t even cover all the issues you’d want to consider around data security – what, for instance, about data correctness and consistency? – but I’ve always found them to be a useful starting point, so as long as we don’t kid ourselves into believing that they’re all we need, they are useful to hold in mind.  They are, to use a helpful phrase, “necessary but not sufficient”.

We should also bear in mind that for any particular system, you’re likely to have various types and sets of data, and these types and sets may have different requirements.  For instance, a database may store not only key data about, say, museum exhibits, but also data about who can update the key data, and even metadata about that – this might[3] include information about a set of role-based access controls (RBAC), and the security requirements for this will be different to the security requirements for the key data.  So, when we’re considering the data security requirements of a system, don’t assume that they will be uniform across all data sets.

Confidentiality

Confidentiality is quite an easy one to explain.  If you don’t want everybody to be able to see a set of data, then you wish it to be confidential with regards to at least some entities – whether they be people or systems entities, internal or external.  There are a number of ways to implement confidentiality, the most obvious being encryption of data, but there are other approaches, of which the easiest is just denying access to data through physical, geographical or other authorisation controls.

When you might care that data is confidentiality-protected: health records, legal documents, credit card details, financial information, firewall rules, system administrator rights, passwords.

When you might not care that data is confidentiality-protected: sports records, examination results, open source code, museum exhibit information, published company financial results.
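
To make the “most obvious” approach a little more concrete, here’s a minimal sketch of encrypting a record using the Fernet construction from the Python cryptography package – a real library, but the data is made up, and the sketch conveniently ignores key management, which is where the hard work actually lives:

```python
# Minimal sketch: symmetric encryption for confidentiality, using the
# `cryptography` package (pip install cryptography). Data is invented.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this secret: key management is the hard part
f = Fernet(key)

record = b"Patient: A. Nonymous, blood group O+"
token = f.encrypt(record)     # safe(r) to store or transmit
print(token)

print(f.decrypt(token))       # only holders of the key can recover the data
```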

Integrity

Integrity, when used as a term in this context, is slightly different to its standard usage. The property we’re looking for is not the same integrity that we expect[4] from our politicians, but is that data has not been changed from what it should be.  Data is often useless unless it can be changed – we want to update information about our museum exhibits from time to time, for instance – but who can change it, and when, are the sort of things we want to control.  Equally important may be the type of changes that can be made to it: if I have a careful classification scheme for my Tudor music manuscripts, I don’t want somebody putting in binary data which means nothing to me or our visitors.

I struggled to think of any examples where you wouldn’t want to protect the integrity of your data from at least some entities: if data can be changed willy-nilly[5], it seems to be worthless.  It did occur to me, however, that as long as you have integrity-protected records of what has been changed, you’re probably OK.  That’s the model for some open source projects or collaborative writing endeavours, for example.

[Discursion – Open source projects don’t generally allow you to scribble directly onto the main “approved” store – whose integrity is actually very important.  That’s why software projects – proprietary or open source – have for decades used source control systems or versioning systems.  One of the success criteria for scaling an open source project is a consensus on integrity management.]
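
For a flavour of what integrity protection can look like in practice, here’s a minimal sketch using an HMAC from the Python standard library – the key and the exhibit record are, of course, invented:

```python
# Sketch: integrity protection with an HMAC - any change to the data by someone
# who doesn't hold the key is detectable. Standard library only; data invented.
import hashlib
import hmac

key = b"a-secret-key-shared-with-authorised-updaters"
record = b"Exhibit 42: Tudor music manuscript, c. 1540"

tag = hmac.new(key, record, hashlib.sha256).hexdigest()

def verify(data: bytes, expected_tag: str) -> bool:
    candidate = hmac.new(key, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(candidate, expected_tag)

print(verify(record, tag))                        # True
print(verify(b"Exhibit 42: binary junk", tag))    # False - integrity violated
```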

Availability

Availability is the easiest of the triad to ignore when you’re designing a system.  When you create data, it’s generally useless unless the entities that need it can get to it when they need it.  That doesn’t mean that all systems need to have 100% up-time, or even that particular data sets need to be available for 100% of the up-time of the system, but when you’re designing a system, you need to decide what’s going to be appropriate, and how to manage degradation.  Why degradation?  Because one of the easiest ways to affect the availability of data is to slow down access to it – as described in another recent post, What’s your availability? DoS attacks and more.  If I’m using a mobile app to view information about museum exhibits in real-time, and it takes five minutes for me to access a description, then that’s no good.  On the other hand, if there’s some degradation of the service, and I can only access the first paragraph of the description, with an apology for the inconvenience and a link to other information, that might be acceptable.  From a different point of view, if I notice that somebody is attacking my museum system, but I can’t get in with administrative access to lock it down or remove their access, that’s definitely bad.

As with integrity protection, it’s difficult to think of examples when availability protection isn’t important, but availability isn’t necessarily a binary condition: it may vary from time to time.
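
Here’s a small sketch of the museum app example: bound how long you’re prepared to wait for the full description, and degrade to a cached summary rather than leaving the visitor staring at a spinner.  The URL and the summary are invented; requests is a real Python library.

```python
# Sketch: bound the wait for the full exhibit description and degrade to a
# cached summary if the backend is slow or unreachable. URL and summary invented.
import requests

CACHED_SUMMARY = "Exhibit 42: Tudor music manuscript. (Full details unavailable.)"

def exhibit_description(exhibit_id: int) -> str:
    try:
        resp = requests.get(
            f"https://museum.example/api/exhibits/{exhibit_id}",
            timeout=2.0,             # don't make the visitor wait five minutes
        )
        resp.raise_for_status()
        return resp.json()["description"]
    except requests.RequestException:
        return CACHED_SUMMARY        # degraded, but still available

print(exhibit_description(42))
```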

Conclusion

Although they’re not perfect descriptions of all the properties you need to consider when designing in data security, confidentiality, integrity and availability should at least cause you to start thinking about what your data is for, how it should be accessed, how it should be changed, and by whom.


1 – I just know that somebody’s going to come up with a counter-example.

2 – And therefore assume that you, the reader, are interested.

3 – as a nested example, which is quite nice, as we’re talking about metadata.

4 – And far too rarely get, it seems.

5 – Not a rude phrase, even if it sounds like it should be.  Look it up if you don’t believe me.

What’s your availability? DoS attacks and more

In security we talk about intentional degradation of availability

A colleague of mine recently asked me about protection from DoS attacks[1] for a project with which he’s involved – Denial of Service attacks.  The first thing that sprung to mind, of course, was DDoS: Distributed Denial of Service attacks, where hundreds or thousands[2] of hosts are used to send vast amounts of network traffic to – or maybe more accurately “at” – servers in the hopes of bringing the servers to their knees and stopping them providing the service for which they’re designed.  These are the attacks that get into the news, and with good reason.

There are other types of DoS however, and the more I thought about it, the more I wondered whether he – and I – should be worrying about these other DoS attacks and also considering other related types of issue which could cause problems to systems.  And because I realised it was an interesting topic, I decided to write about it[3].

I’m going to return to the classic “C.I.A.” model of computer security: Confidentiality, Integrity and Availability.  The attacks we’re talking about here are those most often overlooked: attempts to degrade the availability of a service.  There’s an overlap with the related discipline of resilience here, but I think that the key differentiator is that in security we’re generally talking about intentional degradation of availability, whereas resilience also covers (and maybe focuses on) unintentional degradation.

So, what types of availability attacks might we want to consider?

Denial of service attacks

I think it’s worth linking to Wikipedia’s pretty awesome entry “Denial of service attack” – not something I often do, but I thought it was excellent.  Although they’re not mutually exclusive at all, here are some of the key types as I’d define them:

  • Distributed DoS – where you have lots of different hosts attacking at the same time, flooding the target with traffic.  These days, this can be easily automated, and it’s possible to rent compromised machines to perform a coordinated attack.
  • Application layer – where the attack is aimed at the service, rather than at the host beneath.  This may seem like an academic distinction, but it’s not: what it really means is that the attack is performed with knowledge of the application layer.  So, for instance, if you’re attacking a web server, you might initiate lots of HTTP sessions, or if you were attacking a Kerberos server, you might request lots of authentication tickets.  These types of attacks may be quite costly to perform, but they’re also difficult to protect against: each request looks like a “legal” interaction with the service, and unless you’re monitoring for this sort of pattern – something which is typically not automated at this level – they’re hard to spot and block.  (There’s a small rate-limiting sketch after this list.)
  • Host level – this is a family of attacks which go for the host and/or associated Operating System, rather than the service itself.  A classic example is the SYN flood, which misuses the TCP protocol to tie up resources on the host, thereby stopping any associated services from being able to respond.  Host-level attacks may be somewhat simpler to defend against, as it’s easier to invest in logic to detect them at this level (or maybe “set of layers”, if we adopt the OSI model), and to correlate responses across different hosts.  Firewalls and similar defences are also more likely to be configurable to help defend hosts which may be targeted.
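
As promised above, here’s a minimal sketch of one common application-layer mitigation – per-client rate limiting using a token bucket.  The numbers are arbitrary, and this is a building block, not a substitute for proper DDoS protection upstream:

```python
# Minimal sketch: a per-client token bucket - one building block for pushing
# back on application-layer floods. Rates are arbitrary examples.
import time
from collections import defaultdict

RATE = 5.0     # tokens added per second, per client
BURST = 10.0   # maximum bucket size

_buckets = defaultdict(lambda: (BURST, time.monotonic()))  # ip -> (tokens, last seen)

def allow(client_ip: str) -> bool:
    tokens, last = _buckets[client_ip]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)
    if tokens >= 1.0:
        _buckets[client_ip] = (tokens - 1.0, now)
        return True
    _buckets[client_ip] = (tokens, now)
    return False

if __name__ == "__main__":
    # A burst from one address: the first ten requests pass, the rest are refused.
    print([allow("198.51.100.7") for _ in range(12)])
```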

Resource starvation

The term “resource starvation” most accurately refers[4] to situations where a process (or application) is denied sufficient CPU allocation to perform correctly.  How could this occur?  Well, it’s going to be rarer than in the DoS case, because in order to do it, you’re going to need some way to impact the underlying scheduling of the Operating System and/or virtualisation management (think hypervisor, typically).  That would normally mean that you’d need pretty low-level access to the machine, but there is a family of attacks known as “noisy neighbour”[5] where workloads – VMs or containers, typically – use up so many resources that other workloads are starved.

However, partly because of this case, I’d argue that resource starvation can usefully be associated with other types of availability attacks which occur locally to the machine hosting the targeted service, which might be related to CPU, file descriptor, network or other resources.

Generally, noisy neighbour attacks can be fairly easily mitigated by controls in the Operating System or virtualisation manager, though, of course, compromised or malicious components at this layer are very difficult to manage.
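
To illustrate the sort of Operating System control I mean, here’s a small sketch using Python’s resource module to cap the number of file descriptors and the CPU time available to a child process – Linux/Unix only, and the limits are just examples:

```python
# Sketch: cap resources for a child process so a misbehaving (or malicious)
# workload can't starve its neighbours. Linux/Unix only; limits are examples.
import resource
import subprocess

def limit_resources() -> None:
    # At most 64 open file descriptors and 5 seconds of CPU time for the child.
    resource.setrlimit(resource.RLIMIT_NOFILE, (64, 64))
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))

# preexec_fn runs in the child just before exec, so the limits apply only there.
subprocess.run(
    ["python3", "-c", "print('hello from a resource-limited child')"],
    preexec_fn=limit_resources,
)
```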

 

Dependency blocking

I’m not sure what the best term for this type of attack is, but what I’m thinking of is attacks which impact a service by reducing or removing access to external services on which it depends – remote components, if you will.  If, for instance, my web application requires access to a database, then an attack on that database – however performed – will impact my service.  As almost any kind of service will have external dependencies these days[6], this can be a very effective attack, as it allows knowledgeable attackers to target the weakest link in the “chain” of components that make up your service.

There are mitigations against some of these attacks – caching and later reconciliation/synching being one – but identifying and defending against these sorts of attacks depends largely on considering your service as a system, and realising the types of impact degradation of the different parts might have.
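
Here’s a rough sketch of the caching mitigation: remember the last good answer from a dependency, and serve it – stale – if the dependency stops responding.  The fetch function is a stand-in for whatever database or remote call your service actually makes:

```python
# Rough sketch: cache results from an external dependency and serve the last
# good value if that dependency becomes unreachable. The fetch is a stand-in.
import time

TTL = 60.0   # seconds for which a cached value is considered fresh
_cache: dict[str, tuple[float, str]] = {}

def fetch_from_database(key: str) -> str:
    """Stand-in for a real database or remote call that may fail."""
    raise ConnectionError("database unreachable")

def get(key: str) -> str:
    cached = _cache.get(key)
    if cached and time.monotonic() - cached[0] < TTL:
        return cached[1]                      # fresh enough
    try:
        value = fetch_from_database(key)
        _cache[key] = (time.monotonic(), value)
        return value
    except ConnectionError:
        if cached:
            return cached[1]                  # stale, but better than nothing
        raise                                 # nothing cached: degradation is visible

if __name__ == "__main__":
    # Simulate an earlier successful call, then show we still answer while "down".
    _cache["exhibit:42"] = (time.monotonic() - 120, "Tudor music manuscript")
    print(get("exhibit:42"))
```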

 

Conclusion – managed degradation

Which leads me to a final point: when considering availability attacks, understanding and planning for degradation (see Service degradation: actually a good thing) is going to be invaluable – and when you’ve done that, you’re definitely going to need to test it, too (If it isn’t tested, it doesn’t work).

 


1 – yes, I checked the capitalisation – he wasn’t worried about DRDOS, MS-DOS or any of those lovely 80s era command line Operating Systems.

2 – or millions or more, these days.

3 – here, for the avoidance of doubt.

4 – I believe.

5 – you know my policy on spellings by now.  I’m British, and we’ll keep it that way.

6 – unless you’re still using green-screen standalone machines to run your business, in which case either a) yikes or b) well done.

What’s a State Actor, and should I care?

The bad thing about State Actors is that they rarely adhere to an ethical code.

How do you know what security measures to put in place for your organisation and the systems that you run?  As we’ve seen previously, There are no absolutes in security, so there’s no way that you’re ever going to make everything perfectly safe.  But how much effort should you put in?

There are a number of parameters to take into account.  One thing to remember is that there are always trade-offs with security.  My favourite one is a three-way: security, cost and usability.  You can, the saying goes, have two of the three, but not the third: choose security, cost or usability.  If you want security and usability, it’s going to cost you.  If you want a cheaper solution with usability, security will be reduced.  And if you want security cheaply, it’s not going to be easily usable.  Of course, cost stands in for time spent as well: you might be able to make things more secure and usable by spending more time on the solution, but that time is costly.

But how do you know what’s an appropriate level of security?  Well, you need to think about what you’re protecting, the impact of it being:

  • exposed (confidentiality);
  • changed (integrity);
  • inaccessible (availability).

These are the three classic security properties, often recorded as C.I.A.[1].  Who would be impacted?  What mitigations might you put in place?  What vulnerabilities might exist in the system, and how might they be exploited?

One of the big questions to ask alongside all of these is “who exactly might be wanting to attack my systems?”  There’s a classic adage within security that “no system is secure against a sufficiently motivated and resourced attacker”.  Luckily for us, there are actually very few attackers who fall into this category.  Some examples of attackers might be:

  • insiders[3]
  • script-kiddies
  • competitors
  • trouble-makers
  • hacktivists[4]
  • … and more.

Most of these will either not be that motivated, or not particularly well-resourced.  There are two types of attackers for whom that is not the case: academics and State Actors.  The good thing about academics is that they generally adhere to an ethical code, and shouldn’t be trying anything against your systems without your permission.  The bad thing about State Actors is that they rarely adhere to an ethical code.

State Actors have the resources of a nation state behind them, and are therefore well-resourced.  They are also generally considered to be well-motivated – if only because they have many people available to perform attacks, and those people are well-protected from legal process.

One thing that State Actors may not be, however, is government departments or parts of the military.  While some nations may choose to attack their competitors or enemies (or even, sometimes, partners) with “official” parts of the state apparatus, others may choose a “softer” approach, directing attacks in a more hands-off manner.  This support may take a variety of forms, from encouragement and logistical support to tools, money or even staff.  The intent, here, is to combine direction with plausible deniability: “it wasn’t us, it was this group of people/these individuals/this criminal gang working against you.  We’ll certainly stop them from performing any further attacks[5].”

Why should this matter to you, a private organisation or public company?  Why should a nation state have any interest in you if you’re not part of the military or government?

The answer is that there are many reasons why a State Actor may consider attacking you.  These may include:

  • providing a launch point for further attacks
  • to compromise electoral processes
  • to gain Intellectual Property information
  • to gain competitive information for local companies
  • to gain competitive information for government-level trade talks
  • to compromise national infrastructure, e.g.
    • financial
    • power
    • water
    • transport
    • telecoms
  • to compromise national supply chains
  • to gain customer information
  • as revenge for perceived slights against the nation by your company – or just by your government
  • and, I’m sure, many others.

All of these examples may be reasons to consider State Actors when you’re performing your attacker analysis, but I don’t want to alarm you.  Most organisations won’t be a target, and for those that are, there are few measures that are likely to protect you from a true State Actor beyond the measures that you should be taking anyway: frequent patching, following industry practice on encryption, and so on.  Equally important is monitoring for possible compromise, which, again, you should be doing anyway.  The good news is that if you might be on the list of possible State Actor targets, most countries provide good advice and support, before and after an attack, for organisations which are based or operate within their jurisdiction.


1 – I’d like to think that they tried to find a set of initials for G.C.H.Q. or M.I.5., but I suspect that they didn’t[2].

2 – who’s “they” in this context?  Don’t ask.  Just don’t.

3 – always more common than we might think: malicious, bored, incompetent, bankrupt – the reasons for insider-related security issues are many and varied.

4 – one of those portmanteau words that I want to dislike, but find rather useful.

5 – yuh-huh, right.

Why I should have cared more about lifecycle

Every deployment is messy.

I’ve always been on the development and architecture side of the house, rather than on the operations side. In the old days, this distinction was a useful and acceptable one, and wasn’t too difficult to maintain. From time to time, I’d get involved with discussions with people who were actually running the software that I had written, but on the whole, they were a fairly remote bunch.

This changed as I got into more senior architectural roles, and particularly as I moved through some pre-sales roles which involved more conversations with users.  These conversations started to throw up[1] an uncomfortable truth: not only were people actually running the software that I had helped to design and write[3], but they didn’t just set it up the way we did in our clean test install rig, run it with well-behaved, well-structured data input by well-meaning, generally accurate users in a clean deployment environment, and then turn it off when they were done with it.

This should all seem very obvious, and I had, of course, been on the receiving end of requests from support people which revealed that there were odd things that users did to my software, but that’s usually all it felt like: odd things.

The problem is that odd is normal.  There is no perfect deployment, no clean installation, no well-structured data, and certainly very few generally accurate users.  Every deployment is messy, and nobody just turns off the software when they’re done with it.  If it’s become useful, it will be upgraded, patched, left to run with no maintenance, ignored or a combination of all of those.  And at some point, it’s likely to become “legacy” software, and somebody’s going to need to work out how to transition to a new version or a completely different system.  This all has major implications for security.

I was involved in an effort a few years ago to describe the functionality and lifecycle of a proposed new project.  I was on the security team, which, for all the usual reasons[4], didn’t always interact very closely with some of the other groups.  When the group working on error and failure modes came up with their state machine model and presented it at a meeting, we all looked on with interest.  And then with horror.  All the modes were “natural” failures: not one reflected what might happen if somebody intentionally caused a failure.  “Ah,” they responded, when called on it by the first member of the security team able to form a coherent sentence, “those aren’t errors, those are attacks.”  “But,” one of us blurted out, “don’t you need to recover from them?”  “Well, yes,” they conceded, “but you can’t plan for that.  It’ll need to be on a case-by-case basis.”

This is thinking that we need to stamp out.  We need to design our systems so that, wherever possible, we consider not only what attacks might be brought to bear on them, but also how users – real users – can recover from them.

One way of doing this is to consider security as part of your resilience planning, and bake it into your thinking about lifecycle[5].  Failure happens for lots of reasons, and some of those will be because of bad people doing bad things.  It’s likely, however, that as you analyse the sorts of conditions that these attacks can lead to, a number of them will be similar to “natural” errors.  Maybe you could lose network connectivity to your database because of a loose cable, or maybe because somebody is performing a denial of service attack on it.  In both these cases, you may well start off with similar mitigations, though the steps to fix it are likely to be very different.  But considering all of these side by side means that you can help the people who are actually going to be operating those systems plan and be ready to manage their deployments.
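
By way of illustration, here’s a small sketch of a mitigation that looks much the same whether the database disappeared because of a loose cable or because of a denial of service attack: retry with capped, jittered backoff, and then degrade (or alert) rather than hang.  The connect function is a placeholder for whatever client library you actually use.

```python
# Sketch: retry with capped, jittered backoff - the first-line mitigation looks
# the same whether the outage is accidental or an attack. `connect_to_database`
# is a placeholder for whatever client library you actually use.
import random
import time

def connect_to_database():
    """Placeholder: raises ConnectionError while the database is unreachable."""
    raise ConnectionError("no route to database")

def connect_with_backoff(max_attempts: int = 5):
    delay = 0.5
    for attempt in range(1, max_attempts + 1):
        try:
            return connect_to_database()
        except ConnectionError:
            if attempt == max_attempts:
                raise                        # give up: time to degrade and alert
            time.sleep(delay + random.uniform(0, delay))
            delay = min(delay * 2, 10.0)     # exponential backoff, capped

if __name__ == "__main__":
    try:
        connect_with_backoff(max_attempts=3)
    except ConnectionError:
        print("still down after retries - switching to degraded mode")
```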

So the lesson from today is the same as it so often is: make sure that your security folks are involved from the beginning of a project, in all parts of it.  And an extra one: if you’re a security person, try to think not just about the attackers, but also about all those poor people who will be operating your software.  They’ll thank you for it[6].


1 – not literally, thankfully[2].

2 – though there was that memorable trip to Singapore with food poisoning… I’ll stop there.

3 – a fact of which I actually was aware.

4 – some due entirely to our own navel-gazing, I’m pretty sure.

5 – exactly what we singularly failed to do in the project I’ve just described.

6 – though probably not in person.  Or with an actual gift.  But at least they’ll complain less, and that’s got to be worth something.