Security at conferences – a semi-humorous view

Next week, I’ll be attending and speaking at Red Hat Summit in San Francisco.  I’ve written before about how annoying I find it when people don’t stay on topic at conferences, so rest assured that I won’t be making any product pitches: in fact, I plan to hold a vote during the session to determine some of what I talk about, so if you’re attending, too, please come along and help choose.

In anticipation of the event and associated travel, I thought I’d compile a semi-humorous list of tangentially-security-related advice for anyone planning on attending a conference or associated exhibition/expo in the near future.  I’ve been to way too many in my *cough* 20+ years in the industry: here are some tips.

Oh, and before we start, if you’re at DEFCON, be more paranoid even than this, or even more paranoid than you think you need to be.  At most conferences, you don’t need to worry too much that someone might be spoofing the cell towers, for instance.  At DEFCON, well…

  • wifi – if you’re going to use wifi, use official hotel/conference access points, rather than random ones with names like “useme” or “theNSA” or “notRussianSpies” or “dataCollectionforFB”.  And even when you’re using the official ones, don’t trust them: use HTTPS and/or a VPN.  You know this: don’t forget it just because you’re at a conference.
  • what happens in Vegas makes it back to your boss – maybe not your family members, but definitely your boss.  I’ve been to conferences in Vegas.  I’ve seen … things.
  • bluetooth – your safest option?  Turn off bluetooth, particularly on your phone.  If you must leave it on (so that you can use your watch/headphones/other cool accessories), then never accept unsolicited pairing requests.
  • conversations – do you want to be talked to by random strangers?  Some people prefer to be left alone, and a growing number of conferences allow you to put a sticker onto your badge which will tell other attendees whether or not you’re happy to be addressed.  These are typically:
    • green: I’m so gregarious I’m probably not in a technical job, and am more likely to be in marketing
    • red: please, please don’t talk to me, or even glance in my direction
    • yellow: I’m in two minds about it.  If you’re going to offer me a job or make a pass at me, or if we’ve already met, then it’s probably OK.
    • (I have a serious question about this, by the way: what if you’re red/green colourblind and either very shy or very gregarious?  This approach seems seriously flawed.)
  • don’t leave your phone on the booth table – unless you want it to be stolen.  I’m always astonished by this, but see it all the time.
  • decide whether you’re going to give out your email address – for most shows, you give your email address out whenever you have your badge scanned.  So you need to decide whether you want to be scanned.  There are lots of other ways of giving out your email address, of course, and one is to drop your business card into those little glass bowls in the hopes of winning a prize.  That you never win[1].
  • getting pwned by booth staff – how do you get enough information about a company to decide whether actually to visit the booth and maybe talk to the staff?  Well, you’re going to need to approach it, and you may have to slow down in order to read the marketing messages.  There’s a set of rules that you need to be aware of around this behaviour: staff on the booth can engage you in conversation if they catch you doing any of the following:
    • stepping on the coloured carpet tiles around the booth;
    • making eye contact[2];
    • dawdling[3].
  • languages – if you’re attending a conference in a foreign environment, you may wish to include a sticker on your badge to let people know in which languages you’re conversant.  US English is standard, but other favourites include Java, Python, UML and, in some circles, COBOL[4].
  • beware too much swag – I’ve only done it once, but I did buy an extra case to take swag back in.  This was foolish.  There really is such a thing as too much swag, and as we all know, once you have more than three vaguely humorous techie t-shirts, you can rotate them through the washing[6] until you get the chance to visit another conference and pick up some new ones.
  • useful phrases – not even vaguely security-related, and this really relates to the languages point, but I was told a long time ago by a wise person[7] that you only need five phrases in the language of any foreign country[8] that you’re visiting:
    • yes;
    • no;
    • please;
    • thank you;
    • I’ll have five beers, and my colleague’s paying.

1 – except once, when I won a large drone which was really, really difficult to get home from the US and then turned out to be almost impossible to control in the windy part of the UK in which I live.

2 – do you know nothing?

3 – this is the tricky one: I reckon anything over half a second is fair game, but exact timing is culturally-specific, based on my observations.

4 – if you find yourself at a conference where lots of people are going around with stickers saying “COBOL” on them, or, more dangerous still, wearing t-shirts with “I know COBOL, and I’m not ashamed”, you have two options: a) run, fast; b) stick around, learn to converse with the natives and end up with a job for life making shockingly large amounts of money maintaining legacy banking code[5].

5 – but getting invited to a vanishingly small number of dinner parties or other social engagements.

6 – if you don’t wash your t-shirts, you’re not going to need to worry too much about [5] becoming a problem for you.

7 – I can’t remember when, exactly, or by whom, in fact, but they must have been pretty wise: it’s good advice.

8 – I include the North of England in the “foreign countries” category.

What’s an attack surface?

“Reduce your attack surface,” they say. But what is it?

“Reduce your attack surface,” they[1] say.  But what is it?  The instruction to reduce your attack surface is one of the principles of IT security, so it must be a Good Thing[tm].  The problem is that it’s not always clear what an attack surface actually is.

I’m going to go for the broadest possible description I can think of, or nearly, because I’m pretty paranoid, and because I’m not convinced that the Wikipedia definition[2] is sufficient[3].  Although I’ll throw in a few examples of how to reduce attack surfaces, the purpose of this post is really to explain what one is, rather than to help protect you – but a good understanding really is required before you start with anything else, so hopefully this will be useful.

So, here’s my start at a definition:

  • The attack surface of a system is the sum of areas where attacks could be launched against it.

That feels a little bit circular – let’s define some terms.  First of all, what’s an “area” in this definition?  Well, I’d say that any particular component of a system may have many points of possible vulnerability – and therefore attack.  The sum of those points is an area – and the sum of the areas of the different components of a system gives us our system’s attack surface.

To understand better, we’re going to have to talk about systems – one of my favourite topics[4] – because I think it’s important to clarify a key difference between the attack surface of a component considered alone, and the area that a component adds when part of a system.  They will not generally be the same.

Here’s an example: you’re deploying an Operating System.  Let’s look at two options for deployment, and compare the attack surfaces.  In both cases, I’m going to take a fairly restricted look at points of vulnerability, excluding, for instance, human factors, as I don’t want to get bogged down in the details.

Deployment one – bare metal

You install your Operating System onto a physical machine, and plug it into the network.  What are some of the attack points?

  • your network connection
  • the physical hardware
  • services which are listening on the network connection
  • connections via USB – keyboard and mouse, for example.

There are more, but this should give us enough to do some comparisons.  I’d generally think of the attack surface as being associated with the physical bounds of the hardware, with the addition of the network port and USB connections.

How can we reduce the attack surface?  Well, we could unplug the network connection – though that might significantly reduce the efficacy of the system! – or we might take steps to reduce the number of services listening on the connection, to reduce the privilege level at which they run, or increase the authentication requirements for connecting to them.  We could reduce our surface area by using a utility such as “usbguard” to restrict USB connections, and, if we’re worried about physical access to the machine, we could put it in a locked cabinet somewhere.  These are all useful and appropriate ways to reduce our system’s attack surface.
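
To make the “services listening” point concrete, here’s a minimal sketch in Python – Linux-only, and only covering IPv4 TCP – which enumerates listening ports by parsing /proc/net/tcp, roughly what tools like ss or netstat would tell you.  Every port it prints is a point on your attack surface that you should be able to justify:

```python
# A minimal sketch (Linux-only): enumerate listening TCP ports by parsing
# /proc/net/tcp - roughly what "ss -tln" reports.  IPv6 sockets live in
# /proc/net/tcp6, which I'm ignoring here for brevity.

def listening_ports(path="/proc/net/tcp"):
    ports = set()
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            local_address, state = fields[1], fields[3]
            if state == "0A":  # 0A is TCP_LISTEN
                ports.add(int(local_address.split(":")[1], 16))
    return sorted(ports)

if __name__ == "__main__":
    for port in listening_ports():
        print(f"Listening on port {port} - do I really need this?")
```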

Deployment two – a Virtual Machine

In this deployment scenario, we’re going to install the Operating System onto a Virtual Machine (VM), running on a physical host.  What does my attack surface look like now?  Well, that rather depends on how you define your system.  You could, of course, look at the wider system – the VM and the physical host – but for the purposes of this discussion, I’m going to consider that the operation of the Operating System is what we’re interested in, rather than the broader system[6].  So, what does our attack surface look like this time?  Here’s a quick list.

  • your network connection
  • the hypervisor
  • services which are listening on the network connection
  • connections via USB – keyboard and mouse, for example.

You’ll notice that “the physical hardware” is missing from this list, and that’s because it’s been replaced with “the hypervisor”.  This is a little simplistic, for a few reasons, including that the hypervisor is arguably implemented via a combination of software and hardware controls.  It’s certainly different from the entire physical hardware we were talking about before, though, and in fact there’s not much you can do from the point of view of the Virtual Machine to secure it, other than recognise its restrictions, so we might want to remove it from our list at this level.

The other entries are also somewhat different from our first scenario, although you might not realise it at first glance.  First, it’s quite likely (though not certain) that your network connection is in fact a virtual network connection provided by the hosting system, which means that some of the burden of defending it falls to the hosting system.  The same goes for the connections via USB – the hypervisor generally provides “virtual hardware” (via something like qemu, for example), which can be attached to – or removed from – virtual machines.

So, you still have the services which are listening on the network connection, but it’s definitely a different attack surface from the first deployment scenario.

Now, if you take the wider view, then there’s definitely an attack surface at the physical machine level as well, and that needs to be considered – but it’s quite likely that this will be under the control of somebody completely different (such as a Cloud Service Provider – CSP).

Another quick example

When I deploy a webserver (using, for instance, Apache), I’ll need to consider a variety of attack vectors, from authentication to denial of service to storage attacks: these are part of our attack surface.  If I deploy it with a database (e.g. PostgreSQL or MySQL), the attack surface looks different, assuming that I care about the data in the database.  Whereas I might previously have been concerned to ensure that an HTTP “PUT” command didn’t overwrite or scramble a file on my filesystem, a malformed command to my database server could delete or corrupt multiple tables.  On the other hand, I might now be able to lock down some of the functions of my webserver so that I no longer need to worry about filesystem attacks.  The attack surface of my webserver is different when it’s combined in a system with other components[7].
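
To make the database example concrete, here’s a minimal sketch – using Python’s built-in sqlite3 purely as a stand-in for PostgreSQL or MySQL, with a made-up table – of the difference between pasting input into SQL and using a parameterised query.  The former hands part of your attack surface to whoever controls the input; the latter keeps the input as data:

```python
import sqlite3

# A minimal sketch of why the database changes the attack surface: input
# pasted into SQL becomes part of the command; a parameterised query keeps
# it as data.  The table and input are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE exhibits (id INTEGER PRIMARY KEY, name TEXT)")

user_input = "x'); DROP TABLE exhibits; --"  # hostile input

# BAD (don't do this): string interpolation hands the attacker your SQL.
# conn.executescript(f"INSERT INTO exhibits (name) VALUES ('{user_input}')")

# BETTER: the placeholder treats the whole string as a value, not a command.
conn.execute("INSERT INTO exhibits (name) VALUES (?)", (user_input,))
print(conn.execute("SELECT name FROM exhibits").fetchall())
```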

Why do I want to reduce my attack surface?

Well, this is quite an easy one.  By looking back at my earlier definition, you’ll see that the smaller a system’s attack surface, the fewer points of attack there are available to malicious actors.  That’s got to be a piece of good news.

You will, of course, never be able to reduce your attack surface to zero (see There are no absolutes in security), but the more you reduce it (and document, always document!), the better position you’ll be in.  It’s always about raising the bar to make it more difficult for malicious actors to affect you.


1 – the mythical IT Security Community, that’s who.

2 – to give one example.

3 – it only talks about data, and only about software: that’s not broad enough for me.

4 – as long-standing[5] readers of this blog will know.

5 – and long-suffering.

6 – yes, I know we can’t ignore that, but we’ll come back to it, honest.

7 – there are considerations around the attack surface of the database as well, of course.

Defending our homes

Your router is your first point of contact with the Internet: how insecure is it?

I’ve always had a problem with the t-shirt that reads “There’s no place like 127.0.0.1”. I know you’re supposed to read it “home”, but to me, it says “There’s no place like localhost”, which just doesn’t have the same ring to it. And in this post, I want to talk about something broader: the entry-point to your home network, which for most people will be a cable or broadband router[1].  The UK and US governments just published advice that “Russia”[2] is attacking routers.  This attack will be aimed mostly, I suspect, at organisations (see my previous post What’s a State Actor, and should I care?), rather than homes, but it’s a useful wake-up call for all of us.

What do routers do?

Routers are important: they provide the link between one network (in this case, our home network) and another one (in this case, the Internet, via our ISP’s network).  In fact, for most of us, the box we think of as “the router”[3] is doing a lot more than that.  The “routing” bit is what it sounds like: it helps computers on your network to find routes to send data to computers outside the network – and vice-versa, for when you’re getting data back.  But most routers will actually be doing more than that.  The other function that many perform is that of a modem.  Most of us[4] connect to the Internet via a phone line – whether cable or standard landline – though there is a growing trend for mobile Internet to the home.  Where you’re connecting via a phone line, there’s a need to convert the signals that we use for the Internet to something else and then (at the other end) back again.  For those of us old enough to remember the old “dial-up” days, that’s what the screechy box next to your computer used to do.

But routers often do more things as well.  Sometimes many more things, including traffic logging, being a WiFi access point, providing a VPN for external access to your internal network, parental controls, firewalling and all the rest.

Routers are complex things these days, and although state actors may not be trying to get into them, other people may.

Does this matter, you ask?  Well, if other people can get into your system, they have an easy route to attacking your laptops, phones, network drives and the rest.  They can access and delete unprotected personal data.  They can plausibly pretend to be you.  They can use your network to host illegal data or launch attacks on others.  Basically, all the bad things.

Luckily, routers tend to come set up by your ISP, with the implication being that you can leave them, and they’ll be nice and safe.

So we’re safe, then?

Unluckily, we’re really not.

The first problem is that the ISPs are working on a budget, and it’s in their best interests to provide cheap kit which just does the job.  The quality of ISP-provided routers tends to be pretty terrible.  These routers are also high on malicious actors’ lists of things to attack: if they know that a particular router model will be installed in several million homes, there’s a great incentive to find an attack, as an attack on that model will be very valuable to them.

Other problems that arise include:

  • slowness to fix known bugs or vulnerabilities – updating firmware can be costly to your ISP, so they may be slow to arrive (if they do at all);
  • easily-derived or default admin passwords, meaning that attackers don’t even need to find a real vulnerability – they can just log in.


Measures to take

Here’s a quick list of steps you can take to try to improve the security of your first hop to the Internet.  I’ve tried to order them in terms of ease – simplest first.  Before you do any of these, however, save the configuration data so that you can bring it back if you need it.

  1. Passwords – always, always, always change the admin password for your router.  It’s probably going to be one that you rarely use, so you’ll want to record it somewhere.  This is one of the few times where you might want to consider taping it to the router itself, as long as the router is in a secure place where only authorised people (you and your family[5]) have access.
  2. Internal admin access only – unless you have very good reasons, and you know what you’re doing, don’t allow machines to administer the router unless they’re on your home network.  There should be a setting on your router for this.  (For a quick way to check what your router actually exposes, see the sketch after this list.)
  3. Wifi passwords – once you’ve done 2., you need to ensure that wifi passwords on your network – whether set on your router or elsewhere – are strong.  It’s easy to set a “friendly” password so that it’s easy for visitors to connect to your network, but if it’s guessed by a malicious person who happens to be nearby, the first thing they’ll do will be to look for routers on the network.  As they’re on the internal network, they’ll have access to your router (hence why 1. is important).
  4. Only turn on functions that you understand and need – as I noted above, modern routers have all sorts of cool options.  Disregard them.  Unless you really need them, and you actually understand what they do, and what the dangers of turning them on are, then leave them off.  You’re just increasing your attack surface.
  5. Buy your own router – replace your ISP-supplied router with a better one.  Go to your local computer store and ask for suggestions.  You can pay an awful lot, but you can conversely get something fairly cheap that does the job and is more robust and performant, and easier to secure, than the one you have at the moment.  You may also want to buy a separate modem.  Generally, setting up your own modem or router is simple, and you can copy the settings from the ISP-supplied one and it will “just work”.
  6. Firmware updates – I’d love to have this further up the list, but it’s not always easy.  From time to time, firmware updates appear for your router.  Most routers will check automatically, and may prompt you to update when you next log in.  The problem is that failure to update correctly can cause catastrophic results[6], or lose configuration data that you’ll need to re-enter.  But you really do need to consider doing this, and keeping a look-out for firmware updates which fix severe security issues.
  7. Go open source – there are some great open source router projects out there which allow you to take an existing router and replace all of the firmware/software on it with an open source alternative.  You can find a list of at least some of them on Wikipedia – https://en.wikipedia.org/wiki/List_of_router_firmware_projects, and a search on “router” on Opensource.com will open your eyes to a set of fascinating opportunities.  This isn’t a step for the faint-hearted, as you’ll definitely void the warranty on your existing router, but if you want to have real control, open source is always the way to go.
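
As promised in point 2., here’s a minimal sketch – Python, standard library only – for checking what your router exposes on the LAN side.  The router address and the port list are assumptions: substitute your own gateway IP.  Anything which shows as open is something you should be able to explain:

```python
import socket

# A minimal sketch: check which common admin ports your router exposes on
# the LAN side.  ROUTER_IP is an assumption - substitute your own gateway.
ROUTER_IP = "192.168.1.1"
COMMON_PORTS = {22: "ssh", 23: "telnet", 80: "http", 443: "https", 8080: "http-alt"}

for port, name in COMMON_PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        reachable = s.connect_ex((ROUTER_IP, port)) == 0
    print(f"{port:5} ({name}): {'OPEN' if reachable else 'closed'}")
```

If telnet, in particular, shows as open, it’s time to worry: that’s a plaintext admin interface.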

Other issues…

I’d love to pretend that once you’ve improved the security of your router, all’s well and good on your home network, but it’s not.  What about IoT devices in your home (Alexa, Nest, Ring doorbells, smart lightbulbs, etc.)?  What about VPNs to other networks?  Malicious hosts via Wifi, malicious apps on your children’s phones…?

No – you won’t be safe.  But, as we’ve discussed before, although there is no “secure”, that doesn’t mean that we shouldn’t raise the bar and make it harder for the Bad Folks[tm].



1 – I’m simplifying – but read on, we’ll get there.

2 -“Russian State-Sponsored Cyber Actors”

3 – or, in my parents’ case, “the Internet box”, I suspect.

4 – this is one of these cases where I don’t want comments telling me how you have a direct 1 Terabit/s connection to your local backbone, thank you very much.

5 – maybe not the entire family.

6 – your router is now a brick, and you have no access to the Internet.

Do you know what’s lurking on your system?

Every utility, every library, every executable … increases your attack surface.

I had a job once which involved designing hardening procedures for systems that we were going to be using for security-related projects.  It was fascinating.  This was probably 15 years ago, and not only were guides a little thin on the ground for the Linux distribution I was using, but what we were doing was quite niche.  At first, I think I’d assumed that I could just write a script to close down a few holes that originated from daemons[1] that had been left running for no reason: httpd, sendmail, stuff like that.  It did involve that, of course, but then I realised there was more to do, and started to dive down the rabbit hole.

So, after those daemons, I looked at users and groups.  And then at file systems, networking, storage.  I left the two scariest pieces to last, for different reasons.  The first was the kernel.  I ended up hand-crafting a kernel, removing anything that I thought it was unlikely we’d need, and then restarting several times when I discovered that the system wouldn’t boot because the things I thought I understood were more … esoteric than I’d realised.  I’m not a kernel developer, and this was a salutary lesson in quite how skilled those folks are.  At least, at the time I was doing it, there was less code, and fewer options, than there are today.  On the other hand, I was having to hack back to a required state, and there are more cut-down kernels and systems to start with than there were back then.

The other piece that I left for last was just pruning the installed Operating System applications and associated utilities.  Again, there are cut-down options that are easier to use now than then, but I also had some odd requirements – I believe that we needed Java, for instance, which has, or had … well, let’s say a lot of dependencies.  Most modern Linux distributions[3] start off by installing lots of pieces so that you can get started quickly, without having to worry about trying to work out dependencies for every piece of external software you want to run.

This is understandable, but we need to realise when we do this that we’re making a usability/security trade-off[5].  Every utility, every library, every executable that you add to a system increases your attack surface, and increases the likelihood of vulnerabilities.

The problem isn’t just that you’re introducing vulnerabilities, but that once they’re there, they tend to stay there.  Not just in code that you need, but, even worse, in code that you don’t need.  It’s a rare but praiseworthy hacker[6] who spends time going over old code removing dependencies that are no longer required.  It’s a boring, complex task, and it’s usually just easier to leave the cruft[7] where it is and ship a slightly bigger package for the next release.

Sometimes, code is refactored and stripped: most frequently, security-related code. This is a very Good Thing[tm], but it turns out that it’s far from sufficient.  The reason I’m writing this post is because of a recent story in The Register about the “beep” command.  This command used the little speaker that was installed on most PC-compatible motherboards to make a little noise.  It was a useful little utility back in the day, but is pretty irrelevant now that most motherboards don’t ship with the relevant hardware.  The problem[8] is that installing and using the beep command on a system allows information to be leaked to users who lack the relevant permissions.  This can be a very bad thing.  There’s a good, brief overview here.

Now, “beep” isn’t installed by default on the distribution I’m using on the system on which I’m writing this post (Fedora 27), though it’s easily installable from one of the standard repositories that I have enabled.  Something of a relief, though it’s not a likely attack vector for this machine anyway.

What, though, do I have installed on this system that is vulnerable, and which I’d never have thought to check?  Forget all of the kernel parameters which I don’t need turned on, but which have been enabled by the distribution for ease of use across multiple systems.  Forget the apps that I’ve installed and use every day.  Forget, even, the apps that I installed once to try, and then neglected to remove.  What about the apps that I didn’t even know were there, and which I never realised might have an impact on the security posture of my system?  I don’t know, and have little way to find out.
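
Well, here’s at least a starting point – a minimal sketch, assuming an RPM-based distribution like the Fedora system I mentioned above, which lists installed packages with the most recent first, so that you can start asking of each one: “do I actually need you?”

```python
import subprocess

# A minimal sketch, assuming an RPM-based distribution (Fedora, RHEL, etc.):
# list installed packages, most recently installed first, as a starting
# point for a "do I really need this?" review.
output = subprocess.run(
    ["rpm", "-qa", "--last"],
    capture_output=True, text=True, check=True,
).stdout

for line in output.splitlines()[:20]:  # show the 20 most recent installs
    print(line)
```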

This system doesn’t run business-critical operations.  When I first got it and installed the Operating System, I decided to err towards usability, and to be ready to trash[9] it and start again if I had problems with compromise.  But that’s not the case for millions of other systems out there.  I urge you to consider what you’re running on a system, what’s actually installed on it, and what you really need.  Patch what you need, remove what you don’t.  It’s time for a Spring clean[10].


1 – I so want to spell this word dæmons, but I think that might be taking my Middle English obsession too far[2].

2 – I mentioned that I studied Middle English, right?

3 – I’m most interested in (GNU) Linux here, as I’m a strong open source advocate and because it’s the Operating System that I know best[4].

4 – oh, and I should disclose that I work for Red Hat, of course.

5 – as with so many things in our business.

6 – the good type, or I’d have said “cracker”, probably.

7 – there’s even a word for it, see?

8 – beyond a second order problem that a suggested fix seems to have made the initial problem worse…

9 – physically, if needs be.

10 – In the Northern Hemisphere, at least.

Confessions of an auditor

Moving to a positive view of security auditing.

So, right up front, I need to admit that I’ve been an auditor.  A security auditor.  Professionally.  It’s time to step up and be proud, not ashamed.  I am, however, dialled into a two-day workshop on compliance today and tomorrow, so this is front and centre of my thinking right at the moment.  Hence this post.

Now, I know that not everybody has an entirely … positive view of auditors.  And particularly security auditors[1].  They are, for many people, a necessary evil[2].  The intention of this post is to try to convince you that auditing[3] is, or can and should be, a Force for Good[tm].

The first thing to address is that most people have auditors in because they’re told that they have to, and not because they want to.  This is often – but not always – associated with regulatory requirements.  If you’re in the telco space, the government space or the financial services space, for instance, there are typically very clear requirements for you to adhere to particular governance frameworks.  In order to be compliant with the regulations, you’re going to need to show that you meet the frameworks, and in order to do that, you’re going to need to be audited.  For pretty much any sensible auditing regime, that’s going to require an external auditor who’s not employed by or otherwise accountable to the organisation being audited.

The other reason that audit may happen is when a C-level person (e.g. the CISO) decides that they don’t have a good enough idea of exactly what the security posture of their organisation – or specific parts of it – is like, so they put in place an auditing regime.  If you’re lucky, they choose an industry-standard regime, and/or one which is actually relevant to your organisation and what it does.  If you’re unlucky, … well, let’s not go there.

I think that both of the reasons above – for compliance and to better understand a security posture – are fairly good ones.  But they’re just the first order reasons, and the second order reasons – or the outcomes – are where it gets interesting.

Sadly, one of the key outcomes of auditing seems to be blame.  It’s always nice to have somebody else to blame when things go wrong, and if you can point to an audited system which has been compromised, or a system which has failed an audit, then you can start pointing fingers.  I’d like to move away from this.

Some positive suggestions

What I’d like to see would be a change in attitude towards auditing.  I’d like more people and organisations to see security auditing as a net benefit to their organisations, their people, their systems and their processes[4].  This requires some changes to thinking – changes which many organisations have, of course, already made, but which more could make.  These are the second order reasons that we should be considering.

  1. Stop tick-box[5] auditing.  Too many audits – particularly security audits – seem to be solely about ticking boxes.  “Does this product or system have this feature?”  Yes or no.  That may help you pass your audit, but it doesn’t give you a chance to go further, and think about really improving what’s going on.  In order to do this, you’re going to need to find auditors who actually understand the systems they’re looking at, and are trained to do something beyond tick-box auditing.  I was lucky enough to be encouraged to audit in this broader way, alongside a number of tick-boxes, and I know that the people I was auditing always preferred this approach, because they told me that they had to do the other type, too – and hated it.
  2. Employ internal auditors.  You may not be able to get approval for actual audit sign-off if you use internal auditors, so they may have to operate alongside – or before – your external auditors.  But if you find and train people who can do this job internally, then you get people with deep knowledge (or deeper knowledge, at least, than the external auditors) of your systems, people and processes looking at what they all do, and how they can be improved.
  3. Look to be proactive, not just reactive.  Don’t just pick or develop products, applications and systems to meet audit, and don’t wait until the time of the audit to see whether they’re going to pass.  Auditing should be about measuring and improving your security posture, so think about posture, and how you can improve it instead.
  4. Use auditing for risk-based management.  Last, but not least – far from least – think about auditing as part of your risk-based management planning.  An audit shouldn’t be something you do, pass[6] and then put away in a drawer.  You should be pushing the results back into your governance and security policy model, monitoring, validation and enforcement mechanisms.  Auditing should be about building, rather than destroying – however often it feels like it.



1 – you may hear phrases like “a special circle of Hell is reserved for…”.

2 – in fact, many other people might say that they’re an unnecessary evil.

3 – if not auditors.

4 – I’ve been on holiday for a few days: I’ve maybe got a little over-optimistic while I’ve been away.

5 – British usage alert: for US readers, you’ll probably call this a “check-box”.

6 – You always pass every security audit first time, right?

The Other CIA: Confidentiality, Integrity and Availability

Any type of even vaguely useful system will hold, manipulate or use data.

I spend a lot of my time on this blog talking about systems, because unless you understand how your systems work as a set of components, you’re never going to be able to protect and manage them.  Today, however, I want to talk about security of data – the data in the systems.  Any type of even vaguely useful system will hold, manipulate or use data in some way or another[1], and as I’m interested[2] in security, I think it’s useful to talk about data and data security.  I’ve touched on this question in previous articles, but one recent one, What’s a State Actor, and should I care? had a number of people asking me for more detail on some of the points I raised, and as one of them was the classic “C.I.A.” model around data security, I thought I’d start there.

The first point I should make is that the “CIA triad” is sometimes over-used.  You really can’t reduce all of information security to confidentiality, integrity and availability – there are a number of other important principles to consider.  These three arguably don’t even cover all the issues you’d want to consider around data security – what, for instance, about data correctness and consistency? – but I’ve always found them to be a useful starting point, so as long as we don’t kid ourselves into believing that they’re all we need, they are useful to hold in mind.  They are, to use a helpful phrase, “necessary but not sufficient”.

We should also bear in mind that for any particular system, you’re likely to have various types and sets of data, and these types and sets may have different requirements.  For instance, a database may store not only key data about, say, museum exhibits, but will also store data about who can update the key data, and even metadata about that – this might[3] include information about a set of role-based access controls (RBAC), and the security requirements for this will be different to the security requirements for the key data.  So, when we’re considering the data security requirements of a system, don’t assume that they will be uniform across all data sets.

Confidentiality

Confidentiality is quite an easy one to explain.  If you don’t want everybody to be able to see a set of data, then you wish it to be confidential with regard to at least some entities – whether they be people or system entities, internal or external.  There are a number of ways to implement confidentiality, the most obvious being encryption of data, but there are other approaches, of which the easiest is just denying access to data through physical, geographical or other authorisation controls.

When you might care that data is confidentiality-protected: health records, legal documents, credit card details, financial information, firewall rules, system administrator rights, passwords.

When you might not care that data is confidentiality-protected: sports records, examination results, open source code, museum exhibit information, published company financial results.
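
At the code level, the encryption approach can be very simple indeed – the hard part is managing the keys, not calling the library.  Here’s a minimal sketch using the third-party Python cryptography package (an assumption on my part: you’ll need to install it), with a made-up record:

```python
from cryptography.fernet import Fernet  # assumption: pip install cryptography

# A minimal sketch: symmetric encryption for confidentiality.  Anyone
# without the key sees only ciphertext; key management is the hard part.
key = Fernet.generate_key()  # in real life: generate once, store very carefully
f = Fernet(key)

token = f.encrypt(b"patient: A. Jones; diagnosis: ...")  # made-up record
print(token)             # unreadable without the key
print(f.decrypt(token))  # readable only by key-holders
```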

Integrity

Integrity, when used as a term in this context, is slightly different to its standard usage.  The property we’re looking for is not the same integrity that we expect[4] from our politicians, but rather that data has not been changed from what it should be.  Data is often useless unless it can be changed – we want to update information about our museum exhibits from time to time, for instance – but who can change it, and when, are the sort of things we want to control.  Equally important may be the type of changes that can be made to it: if I have a careful classification scheme for my Tudor music manuscripts, I don’t want somebody putting in binary data which means nothing to me or our visitors.

I struggled to think of any examples when you wouldn’t want to protect the integrity of your data from at least some entities, as if data can be changed willy-nilly[5], it seems to be worthless.  It did occur to me, however, that as long as you have integrity-protected records of what has been changed, you’re probably OK.  That’s the model for some open source projects or collaborative writing endeavours, for example.

[Discursion – Open source projects don’t generally allow you to scribble directly onto the main “approved” store – whose integrity is actually very important.  That’s why software projects – proprietary or open source – have for decades used source control systems or versioning systems.  One of the success criteria for scaling an open source project is a consensus on integrity management.]
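
At the code level, a common building block for integrity protection is a keyed hash (HMAC): anyone without the key can’t produce a valid tag, so unauthorised changes are detectable.  A minimal sketch using Python’s standard library – the key and the record are, of course, made up:

```python
import hashlib
import hmac

# A minimal sketch: a keyed hash (HMAC) makes unauthorised changes to data
# detectable by anyone holding the key.  Key and record are made up.
KEY = b"not-a-real-key"  # in practice, manage keys properly

record = b"Tudor music manuscript, catalogue no. 17"
tag = hmac.new(KEY, record, hashlib.sha256).hexdigest()

def unchanged(data):
    # Recompute the tag and compare in constant time.
    return hmac.compare_digest(tag, hmac.new(KEY, data, hashlib.sha256).hexdigest())

print(unchanged(record))                                       # True
print(unchanged(b"Tudor music manuscript, catalogue no. 18"))  # False: tampered
```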

Availability

Availability is the easiest of the triad to ignore when you’re designing a system.  When you create data, it’s generally useless unless the entities that need it can get to it when they need it.  That doesn’t mean that all systems need to have 100% up-time, or even that particular data sets need to be available for 100% of the up-time of the system, but when you’re designing a system, you need to decide what’s going to be appropriate, and how to manage degradation.  Why degradation?  Because one of the easiest ways to affect the availability of data is to slow down access to it – as described in another recent post, What’s your availability? DoS attacks and more.  If I’m using a mobile app to view information about museum exhibits in real-time, and it takes five minutes for me to access the description, then things aren’t any good.  On the other hand, if there’s some degradation of the service, and I can only access the first paragraph of the description, with an apology for the inconvenience and a link to other information, that might be acceptable.  From a different point of view, if I notice that somebody is attacking my museum system, but I can’t get into it with administrative access to lock it down or remove their access, that’s definitely bad.
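
The museum example translates into code quite directly.  Here’s a minimal sketch of that sort of managed degradation – the URL and the cached summary are hypothetical stand-ins – where a slow or unavailable service gets a short timeout and a fallback, rather than leaving the user hanging:

```python
import urllib.request

# A minimal sketch of managed degradation: a short timeout plus a cached
# fallback, rather than an indefinite hang.  The URL and cached summary
# are hypothetical stand-ins.
CACHED_SUMMARY = "Exhibit 17: a Tudor music manuscript.  (Full details unavailable.)"

def exhibit_description(url="https://museum.example/exhibits/17"):
    try:
        with urllib.request.urlopen(url, timeout=2) as response:
            return response.read().decode()
    except OSError:  # covers URLError, connection failures and timeouts
        return CACHED_SUMMARY

print(exhibit_description())
```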

As with integrity protection, it’s difficult to think of examples when availability protection isn’t important, but availability isn’t necessarily a binary condition: it may vary from time to time.

Conclusion

Although they’re not perfect descriptions of all the properties you need to consider when designing in data security, confidentiality, integrity and availability should at least cause you to start thinking about what your data is for, how it should be accessed, how it should be changed, and by whom.


1 – I just know that somebody’s going to come up with a counter-example.

2 – And therefore assume that you, the reader, are interested.

3 – as a nested example, which is quite nice, as we’re talking about metadata.

4 – And far too rarely get, it seems.

5 – Not a rude phrase, even if it sounds like it should be.  Look it up if you don’t believe me.

What’s your availability? DoS attacks and more

In security we talk about intentional degradation of availability

A colleague of mine recently asked me about protection from DoS – Denial of Service – attacks[1] for a project with which he’s involved.  The first thing that sprang to mind, of course, was DDoS: Distributed Denial of Service attacks, where hundreds or thousands[2] of hosts are used to send vast amounts of network traffic to – or maybe more accurately “at” – servers in the hopes of bringing the servers to their knees and stopping them providing the service for which they’re designed.  These are the attacks that get into the news, and with good reason.

There are other types of DoS however, and the more I thought about it, the more I wondered whether he – and I – should be worrying about these other DoS attacks and also considering other related types of issue which could cause problems to systems.  And because I realised it was an interesting topic, I decided to write about it[3].

I’m going to return to the classic “C.I.A.” model of computer security: Confidentiality, Integrity and Availability.  The attacks we’re talking about here are those most often overlooked: attempts to degrade the availability of a service.  There’s an overlap with the related discipline of resilience here, but I think that the key differentiator is that in security we’re generally talking about intentional degradation of availability, whereas resilience also covers (and maybe focuses on) unintentional degradation.

So, what types of availability attacks might we want to consider?

Denial of service attacks

I think it’s worth linking to Wikipedia’s pretty awesome entry “Denial of service attack” – not something I often do, but I thought it was excellent.  Although they’re not mutually exclusive at all, here are some of the key types as I’d define them:

  • Distributed DoS – where you have lots of different hosts attacking at the same time, flooding the target with traffic.  These days, this can be easily automated, and it’s possible to rent compromised machines to perform a coordinated attack.
  • Application layer – where the attack is aimed at the service, rather than at the host beneath.  This may seem like an academic distinction, but it’s not: what it really means is that the attack is performed with knowledge of the application layer.  So, for instance, if you’re attacking a web server, you might initiate lots of HTTP sessions, or if you were attacking a Kerberos server, you might request lots of authentication tickets.  These types of attacks may be quite costly to perform, but they’re also difficult to protect against, as each attack looks like a “legal” interaction with the service, and spotting them requires monitoring which is typically not automated at this level.  Rate limiting is one common defence – see the sketch after this list.
  • Host level – this is a family of attacks which go for the host and/or associated Operating System, rather than the service itself.  A classic attack would be the SYN flood, which misused the TCP protocol to use up resources on the host, thereby stopping any associated services from being able to respond.  Host attacks may be somewhat simpler to defend against, as it’s easier to invest in logic to detect them at this level (or maybe “set of layers”, if we adopt the OSI model), and to correlate responses across different hosts.  Firewalls and similar defences are also more likely to be able to be configured to help defend hosts which may be targeted.
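
As for the rate limiting I mentioned above, here’s a minimal sketch of a token bucket, one of the classic mechanisms: requests are served while tokens remain, and tokens refill at a fixed rate, so bursts are tolerated but sustained flooding is rejected.  The rates are arbitrary, and a real deployment would keep a bucket per client:

```python
import time

# A minimal sketch: a token-bucket rate limiter, one simple application-layer
# defence against clients making too many "legal"-looking requests.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # ~5 requests/second, bursts of 10
for i in range(15):
    print(i, "served" if bucket.allow() else "rejected (rate limit)")
```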

Resource starvation

The term “resource starvation” most accurately refers[4] to situations where a process (or application) is denied sufficient CPU allocation to perform correctly.  How could this occur?  Well, it’s going to be rarer than in the DoS case, because in order to do it, you’re going to need some way to impact the underlying scheduling of the Operating System and/or virtualisation management (think hypervisor, typically).  That would normally mean that you’d need pretty low-level access to the machine, but there is a family of attacks known as “noisy neighbour”[5] where workloads – VMs or containers, typically – use up so many resources that other workloads are starved.

However, partly because of this case, I’d argue that resource starvation can usefully be associated with other types of availability attacks which occur locally to the machine hosting the targeted service – attacks which might relate to CPU, file descriptors, network or other resources.

Generally, noisy neighbour attacks can be fairly easily mitigated by controls in the Operating System or virtualisation manager, though, of course, compromised or malicious components at this layer are very difficult to manage.
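
Those Operating System controls can be as simple as resource limits.  Here’s a minimal sketch – Unix/Linux only, using Python’s standard resource module, with arbitrary numbers – of the kind of cap that stops one process starving its neighbours:

```python
import resource  # Unix/Linux only

# A minimal sketch: rlimits are one of the simple OS-level controls that
# stop a single process starving its neighbours.  The numbers are arbitrary.
resource.setrlimit(resource.RLIMIT_CPU, (10, 10))  # 10 CPU-seconds, then signals
resource.setrlimit(resource.RLIMIT_AS, (512 * 1024 * 1024,) * 2)  # ~512 MiB address space

soft, hard = resource.getrlimit(resource.RLIMIT_CPU)
print(f"CPU limit: {soft}s soft / {hard}s hard")
```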


Dependency blocking

I’m not sure what the best term for this type of attack is, but what I’m thinking of is attacks which impact a service by reducing or removing access to external services on which they depend – remote components, if you will.  If, for instance, my web application requires access to a database, then an attack on that database – however performed – will impact my service.  As almost any kind of service will have external dependencies these days[6], this can be a very effective attack, as it allows knowledgeable attackers to target the weakest link in the “chain” of components that make up your service.

There are mitigations against some of these attacks – caching and later reconciliation/synching being one – but identifying and defending against these sorts of attacks depends largely on considering your service as a system, and realising the types of impact degradation of the different parts might have.
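
Here’s what the caching mitigation can look like in miniature: a sketch in which fetch_from_database is a hypothetical stand-in for the remote dependency, and where we’re willing to serve an answer up to five minutes stale rather than no answer at all:

```python
import time

_cache = {}  # key -> (timestamp, value)

def fetch_from_database(key):
    # Hypothetical stand-in: pretend the dependency is being attacked.
    raise ConnectionError("database unreachable")

def lookup(key, max_stale=300):
    """Return a fresh value if we can, a stale cached one if we must."""
    try:
        value = fetch_from_database(key)
        _cache[key] = (time.monotonic(), value)
        return value
    except ConnectionError:
        if key in _cache and time.monotonic() - _cache[key][0] < max_stale:
            return _cache[key][1]  # degraded, but available
        raise  # no usable cache: the outage is now visible to callers

_cache["exhibit-17"] = (time.monotonic(), "A Tudor music manuscript")
print(lookup("exhibit-17"))  # served from cache despite the outage
```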


Conclusion – managed degradation

Which leads me to a final point: when considering availability attacks, understanding and planning for degradation (see Service degradation: actually a good thing) is going to be invaluable – and when you’ve done that, you’re definitely going to need to test it, too (If it isn’t tested, it doesn’t work).



1 – yes, I checked the capitalisation – he wasn’t worried about DRDOS, MS-DOS or any of those lovely 80s era command line Operating Systems.

2 – or millions or more, these days.

3 – here, for the avoidance of doubt.

4 – I believe.

5 – you know my policy on spellings by now.  I’m British, and we’ll keep it that way.

6 – unless you’re still using green-screen standalone machines to run your business, in which case either a) yikes or b) well done.