Do you know what’s lurking on your system?

Every utility, every library, every executable … increases your attack surface.

I had a job once which involved designing hardening procedures for systems that we were going to use for security-related projects.  It was fascinating.  This was probably 15 years ago, and not only were guides a little thin on the ground for the Linux distribution I was using, but what we were doing was quite niche.  At first, I think I’d assumed that I could just write a script to close down a few holes that originated from daemons[1] that had been left running for no reason: httpd, sendmail, stuff like that.  It did involve that, of course, but then I realised there was more to do, and started to dive down the rabbit hole.

So, after those daemons, I looked at users and groups.  And then at file systems, networking, storage.  I left the two scariest pieces to last, for different reasons.  The first was the kernel.  I ended up hand-crafting a kernel, removing anything that I thought it was unlikely we’d need, and then restarting several times when I discovered that the system wouldn’t boot because the things I thought I understood were more … esoteric than I’d realised.  I’m not a kernel developer, and this was a salutary lesson in quite how skilled those folks are.  At least, at the time I was doing it, there was less code, and fewer options, than there are today.  On the other hand, I was having to hack back to a required state, and there are more cut-down kernels and systems to start with than there were back then.

The other piece that I left for last was just pruning the installed Operating System applications and associated utilities.  Again, there are cut-down options that are easier to use now than then, but I also had some odd requirements – I believe that we needed Java, for instance, which has, or had … well, let’s say a lot of dependencies.  Most modern Linux distributions[3] start off by installing lots of pieces so that you can get started quickly, without having to worry about trying to work out dependencies for every piece of external software you want to run.

This is understandable, but we need to realise when we do this that we’re making a usability/security trade-off[5].  Every utility, every library, every executable that you add to a system increases your attack surface, and increases the likelihood of vulnerabilities.

The problem isn’t just that you’re introducing vulnerabilities, but that once they’re there, they tend to stay there.  Not just in code that you need, but, even worse, in code that you don’t need.  It’s a rare, but praiseworthy hacker[6] who spends time going over old code removing dependencies that are no longer required.  It’s a boring, complex task, and it’s usually just easier to leave the cruft[7] where it is and ship a slightly bigger package for the next release.

Sometimes, code is refactored and stripped: most frequently, security-related code. This is a very Good Thing[tm], but it turns out that it’s far from sufficient.  The reason I’m writing this post is because of a recent story in The Register about the “beep” command.  This command used the little speaker that was installed on most PC-compatible motherboards to make a little noise.  It was a useful little utility back in the day, but is pretty irrelevant now that most motherboards don’t ship with the relevant hardware.  The problem[8] is that installing and using the beep command on a system allows information to be leaked to users who lack the relevant permissions.  This can be a very bad thing.  There’s a good, brief overview here.

Now, “beep” isn’t installed by default on the distribution I’m using on the system on which I’m writing this post (Fedora 27), though it’s easily installable from one of the standard repositories that I have enabled.  Something of a relief, though it’s not a likely attack vector for this machine anyway.

What, though, do I have installed on this system that is vulnerable, and which I’d never have thought to check?  Forget all of the kernel parameters which I don’t need turned on, but which have been enabled by the distribution for ease of use across multiple systems.  Forget the apps that I’ve installed and use every day.  Forget, even, the apps that I installed once to try, and then neglected to remove.  What about the apps that I didn’t even know were there, and which I never realised might have an impact on the security posture of my system?  I don’t know, and have little way to find out.
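
As a very minimal sketch of how you might at least start to answer that question on an RPM-based distribution such as Fedora, here’s a short Python script that lists every installed package, oldest install first.  Treat it as an inventory aid rather than a hardening tool – it only tells you what’s there; deciding what you actually need is still up to you.

    #!/usr/bin/env python3
    """Rough sketch: enumerate installed RPM packages for review.
    Assumes an RPM-based distribution (e.g. Fedora) with the rpm binary available."""
    import subprocess

    def installed_packages():
        # Ask rpm for the install time and name/version of every installed package.
        out = subprocess.run(
            ["rpm", "-qa", "--queryformat", "%{INSTALLTIME}\t%{NAME}-%{VERSION}\n"],
            capture_output=True, text=True, check=True,
        ).stdout
        packages = []
        for line in out.splitlines():
            timestamp, name = line.split("\t", 1)
            packages.append((int(timestamp), name))
        return sorted(packages)   # oldest first: often the stuff you never chose

    if __name__ == "__main__":
        for timestamp, name in installed_packages():
            print(timestamp, name)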

This system doesn’t run business-critical operations.  When I first got it and installed the Operating System, I decided to err towards usability, and to be ready to trash[9] it and start again if I had problems with compromise.  But that’s not the case for millions of other systems out there.  I urge you to consider what you’re running on a system, what’s actually installed on it, and what you really need.  Patch what you need, remove what you don’t.   It’s time for a Spring clean[10].


1 – I so want to spell this word dæmons, but I think that might be taking my Middle English obsession too far[2].

2 – I mentioned that I studied Middle English, right?

3 – I’m most interested in (GNU) Linux here, as I’m a strong open source advocate and because it’s the Operating System that I know best[4].

4 – oh, and I should disclose that I work for Red Hat, of course.

5 – as with so many things in our business.

6 – the good type, or I’d have said “cracker”, probably.

7 – there’s even a word for it, see?

8 – beyond a second order problem that a suggested fix seems to have made the initial problem worse…

9 – physically, if needs be.

10 – In the Northern Hemisphere, at least.

What’s your availability? DoS attacks and more

In security we talk about intentional degradation of availability

A colleague of mine recently asked me about protection from DoS attacks[1] for a project with which he’s involved – Denial of Service attacks.  The first thing that sprang to mind, of course, was DDoS: Distributed Denial of Service attacks, where hundreds or thousands[2] of hosts are used to send vast amounts of network traffic to – or maybe more accurately “at” – servers in the hopes of bringing the servers to their knees and stopping them from providing the service for which they’re designed.  These are the attacks that get into the news, and with good reason.

There are other types of DoS, however, and the more I thought about it, the more I wondered whether he – and I – should be worrying about these other DoS attacks, and also considering other related types of issue which could cause problems for systems.  And because I realised it was an interesting topic, I decided to write about it[3].

I’m going to return to the classic “C.I.A.” model of computer security: Confidentiality, Integrity and Availability.  The attacks we’re talking about here are those most often overlooked: attempts to degrade the availability of a service.  There’s an overlap with the related discipline of resilience here, but I think that the key differentiator is that in security we’re generally talking about intentional degradation of availability, whereas resilience also covers (and maybe focuses on) unintentional degradation.

So, what types of availability attacks might we want to consider?

Denial of service attacks

I think it’s worth linking to Wikipedia’s pretty awesome entry “Denial of service attack” – not something I often do, but I thought it was excellent.  Although they’re not mutually exclusive at all, here are some of the key types as I’d define them:

  • Distributed DoS – where you have lots of different hosts attacking at the same time, flooding the target with traffic.  These days, this can be easily automated, and it’s possible to rent compromised machines to perform a coordinated attack.
  • Application layer – where the attack is aimed at the service, rather than at the host beneath.  This may seem like an academic distinction, but it’s not: what it really means is that the attack is performed with knowledge of the application layer.  So, for instance, if you’re attacking a web server, you might initiate lots of HTTP sessions, or if you were attacking a Kerberos server, you might request lots of authentication tickets.  These types of attacks may be quite costly to perform, but they’re also difficult to protect against, as each attack looks like a “legal” interaction with the service, and unless you’re on the look-out – in ways which are typically not automated at this level – they’re difficult to avoid.  There’s a small rate-limiting sketch after this list as an example of one partial, application-aware mitigation.
  • Host level – this is a family of attacks which go for the host and/or associated Operating System, rather than the service itself.  A classic attack would be the SYN flood, which misuses the TCP protocol to use up resources on the host, thereby stopping any associated services from being able to respond.  Host attacks may be somewhat simpler to defend against, as it’s easier to invest in logic to detect them at this level (or maybe “set of layers”, if we adopt the OSI model), and to correlate responses across different hosts.  Firewalls and similar defences can also more readily be configured to help defend hosts which may be targeted.
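
To make the application-layer point a little more concrete, here’s a minimal token-bucket rate limiter in Python.  It’s a sketch of one partial mitigation – throttling each client identity before the expensive work happens – not a recommendation or a complete defence: real deployments would typically do this at a proxy or load balancer, would need to evict old per-client state, and a well-distributed attack can still spread its load under the per-client limit.

    import time

    class TokenBucket:
        """Allow short bursts, but cap the sustained request rate per client."""
        def __init__(self, rate_per_sec: float, burst: int):
            self.rate = rate_per_sec
            self.capacity = burst
            self.tokens = float(burst)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill in proportion to elapsed time, capped at the burst size.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    # One bucket per client identifier (IP address, session, API key...).
    buckets: dict[str, TokenBucket] = {}

    def should_serve(client_id: str) -> bool:
        bucket = buckets.setdefault(client_id,
                                    TokenBucket(rate_per_sec=5.0, burst=20))
        return bucket.allow()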

Resource starvation

The term “resource starvation” most accurately refers[4] to situations where a process (or application) is denied sufficient CPU allocation to perform correctly.  How could this occur?  Well, it’s going to be rarer than in the DoS case, because in order to do it, you’re going to need some way to impact the underlying scheduling of the Operating System and/or virtualisation management (think hypervisor, typically).  That would normally mean that you’d need pretty low-level access to the machine, but there is a family of attacks known as “noisy neighbour”[5] where workloads – VMs or containers, typically – use up so many resources that other workloads are starved.

However, partly because of this case, I’d argue that resource starvation can usefully be associated with other types of availability attacks which occur locally to the machine hosting the targeted service, which might be related to CPU, file descriptor, network or other resources.

Generally, noisy neighbour attacks can be fairly easily mitigated by controls in the Operating System or virtualisation manager, though, of course, compromised or malicious components at this layer are very difficult to manage.
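
As one concrete (and heavily simplified) example of those Operating System controls, here’s a Python sketch that caps a workload’s CPU share using the cgroup v2 interface.  It assumes a unified cgroup v2 hierarchy mounted at /sys/fs/cgroup, root privileges, and that nothing else is managing the group; the group name, quota and PID are illustrative, and container runtimes and hypervisors expose equivalent knobs of their own.

    from pathlib import Path

    CGROUP_ROOT = Path("/sys/fs/cgroup")

    def limit_cpu(group: str, pid: int, quota_us: int = 20_000, period_us: int = 100_000):
        """Allow the process at most quota_us of CPU time per period_us (20% here)."""
        cgroup = CGROUP_ROOT / group
        cgroup.mkdir(exist_ok=True)
        # Ensure the cpu controller is enabled for children of the root group.
        (CGROUP_ROOT / "cgroup.subtree_control").write_text("+cpu")
        # "quota period": the group may use quota_us microseconds of CPU per period_us.
        (cgroup / "cpu.max").write_text(f"{quota_us} {period_us}")
        # Move the noisy process into the constrained group.
        (cgroup / "cgroup.procs").write_text(str(pid))

    # Example (hypothetical PID): limit_cpu("noisy-neighbour", pid=12345)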

Dependency blocking

I’m not sure what the best term for this type of attack is, but what I’m thinking of is attacks which impact a service by reducing or removing access to external services on which it depends – remote components, if you will.  If, for instance, my web application requires access to a database, then an attack on that database – however performed – will impact my service.  As almost any kind of service will have external dependencies these days[6], this can be a very effective attack, as it allows knowledgeable attackers to target the weakest link in the “chain” of components that make up your service.

There are mitigations against some of these attacks – caching and later reconciliation/synching being one – but identifying and defending against these sorts of attacks depends largely on considering your service as a system, and realising the types of impact degradation of the different parts might have.
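
As an illustration of the caching-and-later-reconciliation mitigation mentioned above, here’s a small Python sketch: if the remote dependency (a hypothetical pricing service) is unavailable – whether through failure or attack – we serve the last known-good value and flag it as stale so it can be reconciled later.  The names, timeout and structure are invented for the example.

    import time

    _cache: dict[str, tuple[float, object]] = {}    # item_id -> (timestamp, value)
    MAX_STALENESS = 300                              # seconds of staleness we will tolerate

    def get_price(item_id: str, fetch_remote):
        """Return (value, is_stale); fetch_remote is the call to the external service."""
        try:
            value = fetch_remote(item_id)
            _cache[item_id] = (time.monotonic(), value)
            return value, False
        except Exception:
            # Remote dependency degraded: fall back to the cache if it is fresh enough.
            if item_id in _cache:
                timestamp, value = _cache[item_id]
                if time.monotonic() - timestamp < MAX_STALENESS:
                    return value, True
            raise    # no usable fallback - degrade visibly rather than silently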

Conclusion – managed degradation

Which leads me to a final point, which is that when considering availability attacks, understanding and planning service degradation (Service degradation: actually a good thing) is going to be invaluable – and when you’ve done that, you’re definitely going to need to test it, too (If it isn’t tested, it doesn’t work).


1 – yes, I checked the capitalisation – he wasn’t worried about DRDOS, MS-DOS or any of those lovely 80s era command line Operating Systems.

2 – or millions or more, these days.

3 – here, for the avoidance of doubt.

4 – I believe.

5 – you know my policy on spellings by now.  I’m British, and we’ll keep it that way.

6 – unless you’re still using green-screen standalone machines to run your business, in which case either a) yikes or b) well done.

Why I should have cared more about lifecycle

Every deployment is messy.

I’ve always been on the development and architecture side of the house, rather than on the operations side. In the old days, this distinction was a useful and acceptable one, and wasn’t too difficult to maintain. From time to time, I’d get involved with discussions with people who were actually running the software that I had written, but on the whole, they were a fairly remote bunch.

This changed as I got into more senior architectural roles, and particularly as I moved through some pre-sales roles which involved more conversations with users. These conversations started to throw up[1] an uncomfortable truth: not only were people running the software that I helped to design and write[3], but they didn’t just set it up the way we did in our clean test install rig, run it with well-behaved, well-structured data input by well-meaning, generally accurate users in a clean deployment environment, and then turn it off when they were done with it.

This should all seem very obvious, and I had, of course, been on the receiving end of requests from support people who exposed that there were odd things that users did to my software, but that’s usually all it felt like: odd things.

The problem is that odd is normal.  There is no perfect deployment, no clean installation, no well-structured data, and certainly very few generally accurate users.  Every deployment is messy, and nobody just turns off the software when they’re done with it.  If it’s become useful, it will be upgraded, patched, left to run with no maintenance, ignored or a combination of all of those.  And at some point, it’s likely to become “legacy” software, and somebody’s going to need to work out how to transition to a new version or a completely different system.  This all has major implications for security.

I was involved in an effort a few years ago to describe the functionality and lifecycle for a proposed new project.  I was on the security team, which, for all the usual reasons[4], didn’t always interact very closely with some of the other groups.  When the group working on error and failure modes came up with their state machine model and presented it at a meeting, we all looked on with interest.  And then with horror.  All the modes were “natural” failures: not one reflected what might happen if somebody intentionally caused a failure.  “Ah,” they responded, when called on it by the first of the security team able to form a coherent sentence, “those aren’t errors, those are attacks.”  “But,” one of us blurted out, “don’t you need to recover from them?”  “Well, yes,” they conceded, “but you can’t plan for that.  It’ll need to be on a case-by-case basis.”

This is thinking that we need to stamp out.  We need to design our systems so that, wherever possible, we consider not only what attacks might be brought to bear on them, but also how users – real users – can recover from them.

One way of doing this is to consider security as part of your resilience planning, and bake it into your thinking about lifecycle[5].  Failure happens for lots of reasons, and some of those will be because of bad people doing bad things.  It’s likely, however, that as you analyse the sorts of conditions that these attacks can lead to, a number of them will be similar to “natural” errors.  Maybe you could lose network connectivity to your database because of a loose cable, or maybe because somebody is performing a denial of service attack on it.  In both these cases, you may well start off with similar mitigations, though the steps to fix it are likely to be very different.  But considering all of these side by side means that you can help the people who are actually going to be operating those systems plan and be ready to manage their deployments.
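
As a toy illustration of considering these conditions side by side, here’s a sketch (the conditions, causes and actions are all made up): the observable condition is often the same whether the cause is “natural” or an attack, so the immediate mitigation can be shared, while the follow-up steps differ by cause.

    from dataclasses import dataclass

    @dataclass
    class FailureMode:
        condition: str               # what operations actually observes
        likely_causes: list[str]     # "natural" failures and attacks, side by side
        mitigation: str              # immediate, shared response
        follow_up: dict[str, str]    # cause -> longer-term remediation

    MODES = [
        FailureMode(
            condition="database unreachable",
            likely_causes=["loose cable / switch fault", "denial of service attack"],
            mitigation="fail over to the read replica; serve cached data where possible",
            follow_up={
                "loose cable / switch fault": "dispatch ops to fix the link",
                "denial of service attack": "engage upstream filtering; notify the security team",
            },
        ),
    ]

    def runbook(condition: str):
        """Look up the planned response for an observed condition, if we have one."""
        return next((m for m in MODES if m.condition == condition), None)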

So the lesson from today is the same as it so often is: make sure that your security folks are involved from the beginning of a project, in all parts of it.  And an extra one: if you’re a security person, try to think not just about the attackers, but also about all those poor people who will be operating your software.  They’ll thank you for it[6].


1 – not literally, thankfully[2].

2 – though there was that memorable trip to Singapore with food poisoning… I’ll stop there.

3 – a fact of which I actually was aware.

4 – some due entirely to our own navel-gazing, I’m pretty sure.

5 – exactly what we singularly failed to do in the project I’ve just described.

6 – though probably not in person.  Or with an actual gift.  But at least they’ll complain less, and that’s got to be worth something.

There are no absolutes in security

There is no “secure”.

Let’s stop using the word “secure”. There is no “secure” in IT.

I know that sounds crazy, but it’s true.

Sometimes, when I speak to colleagues and customers, there will be non-technical or non-security people there, and they ask how to get a secure system. So I explain how I’d make a system secure. It goes a bit like this.

  1. Remove any non-critical USB connections: in particular external or “thumb” drives.
  2. Turn off all bluetooth.
  3. Turn off all wifi.
  4. Remove any network cables.
  5. Remove any other USB connections, including mouse or keyboard.
  6. Disconnect any monitors.
  7. Disconnect any other cables that are connected to the system.
  8. Yes, that includes the power cable.
  9. Now take out any hard drives – SSD, HDD or other.
  10. Destroy them. My preferred method is to gouge tracks in all spinning media, break the heads, bash all pieces with a hammer and then throw them into Mount Doom, but any other volcano[1] will do. Thermite lances are probably acceptable. You should do the same with all other components that you removed in earlier steps.
  11. Destroy the motherboard, including all chips and RAM.
  12. Tip all remaining pieces down a well.
  13. Pour concrete down the well.[2]
  14. You probably now have a system which is about as secure as you’re going to get.

Yes, it’s a bit extreme, but the point is that all of the components there are possible threat vectors or information leakage channels.

Can we design and operate a system where we manage and mitigate the risks of threats and information leakage? Yes. That’s where we improve the security of a system. Is that a secure system? No, it’s not. What we’ve done is raise the bar, but we’ve not made it absolutely secure.

Part of the problem is that there’s just no way, these days[4], that any single person can be certain of the security of all parts of a system: there are just too many of them, and they are too complex.  You may understand the application layer, but what about the virtualisation layer, for instance?  I presented a simplified layer diagram in my post Isolationism a few months back, in which I listed the host as the bottom layer, but that was, of course, just asking for trouble.  Along came Meltdown and Spectre, and now it’s clear (as if we didn’t know it already) that you should never ignore the fact that you can’t even trust the silicon you’re running on to do the thing you think it ought.

None of this, however, stops people and companies telling you that they’ll “secure your perimeter”, or provide you with “secure systems”. And it annoys me[5]. “We’ll help you secure your perimeter” isn’t too bad, but anything that suggests that you can have “secure systems” smacks to me of marketing – bad marketing.

So here you go: please stop using the word “secure” as an unqualified adjective or verb. We’re grown-ups, now, and we know it’s not real. So let’s not pretend.

Now – where was that well-cover? I need to deal with little Tommy.


1 – terrestrial/Middle Earth. I’m not sure about volcano temperatures on other planets or in the Undying Lands across the Western Sea.

2 – it should probably therefore be a disused well. Check there are no animals down there first[3]. In fact, before you throw anything down there.

3 – what’s that, Lassie? Little Tommy’s down the well? Well, I wonder whether little Tommy is waiting for us to throw the components down there so that he can do bad things. Bad Tommy.

4 – I’d like to think that maybe there was, once, in the distant past, but I’m probably kidding myself.

5 – you might be surprised at the number of things that annoy me[6].

6 – unless you’re my wife, in which case you probably won’t be[7].

7 – surprised. Or, in fact, reading this article.

3 tests for NOT moving to blockchain 

How to tell when you can avoid the hype.

So, there’s this thing called “blockchain” which is quite popular…

You know that already, of course.  I keep wondering if we’ve hit “peak hype” for blockchain and related technologies yet, but so far there’s no sign of it.  As usual for this blog, when I’m talking about blockchain, I’m going to include DLTs – Distributed Ledger Technologies – which are, by some tight definitions of the term, not really blockchains at all.  I’m particularly interested, from a professional point of view, in permissioned blockchains.  You can read more about how that’s defined in my previous post Is blockchain a security topic? – the key point here is that I’m interested in business applications of blockchain beyond cryptocurrency[1].

And, if the hype is to be believed – and some of it probably should be[2] – then there is an almost infinite set of applications for blockchain.  That’s probably correct, but that doesn’t mean that they’re all good applications for blockchain.  Some, in fact, are likely to be very bad applications for blockchain.

The hype associated with blockchain, however, means that businesses are rushing to embrace this new technology[3] without really understanding what they’re doing.  The drivers towards this move are arguably three-fold:

  1. you can, if you try, make almost any application with multiple users which stores data into a blockchain-enabled application;
  2. there are lots of conferences and “gurus” telling people that if they don’t embrace blockchain now, they’ll go out of business within six months[4];
  3. it’s not easy technology to understand fully, and lots of the proponents “on-the-ground” within organisations are techies.

I want to unpack that last statement before I get a hail of trolls flaming me[5].  I have nothing against techies – I’m one myself – but one of our characteristics tends to be enormous enthusiasm about new things (“shinies”) that we understand, but whose impact on the business we don’t always fully grok[6]. That’s not always a positive for business leaders.

The danger, then, is that the confluence of those three drivers may lead to businesses deciding to start moving to blockchain applications without fully understanding whether that’s a good idea.  I wrote in another previous post (Blockchain: should we all play?) about some tests that you can apply to decide whether a process is a good fit for blockchain and when it’s not.  They were useful, but the more I think about it, the more I’m convinced that we need some simple tests to tell us when we should definitely not move a process or an application to a blockchain.  I present my three tests.  If your answer to any of these questions is “yes”, then you almost certainly don’t need a blockchain.

Test 1 – does it have a centralised controller or authority?

If the answer is “yes”, then you don’t need a blockchain.

If, for instance, you’re selling, I don’t know, futons, and you have a single ordering system, then you have a single authority for deciding when to send out a futon.  You almost certainly don’t need to make this a blockchain.  If you are a purveyor of content that has to pass through a single editorial and publishing process, then you almost certainly don’t need to make this a blockchain.

The lesson is: blockchains really don’t make sense unless the tasks required in the process execution – and the trust associated with those tasks – are distributed between multiple entities.

Test 2 – could it work fine with a standard database?

If the answer to this question is “yes”, then you don’t need a blockchain.

This question and the previous one are somewhat intertwined, but don’t need to be.  There are applications where you have distributed processes but need to store information centrally, or centralised authorities but distributed data, where the answer to one question may be “yes”, but to the other “no”.  But if the answer to this question is “yes”, then use a standard database.

Databases are good at what they do, they are cheaper in terms of design and operation than running a blockchain or distributed ledger, and we know how to make them work.  Blockchains are about letting everybody[8] see and hold data, but the overheads can be high, and the implications costly.

Test 3 – is adoption going to be costly, or annoying, to some stakeholders?

If the answer to this question is “yes”, then you don’t need a blockchain.

I’ve heard assertions that blockchains always benefit all users.  This is patently false.  If you are creating an application for a process, and changing the way that your stakeholders interact with you and it, you need to consider whether that change is in their best interests.  It’s very easy to create and introduce an application, blockchain or not, which reduces business friction for the owner of the process, but increases it for other stakeholders.

If I make engine parts for the automotive industry, it may benefit me immensely to be able to track and manage the parts on a blockchain.  I may be able to see at a glance who’s supplied what, when, and the quality of the steel used in the ball-bearings.  On the other hand, if I’m a ball-bearing producer, and I have an established process which works for the forty companies to whom I sell ball-bearings, then adopting a new process for just one of them, with associated changes to my method of work, new systems and new storage and security requirements is unlikely to be in my best interests: it’s going to be both costly and annoying.

Conclusion

Tests are guidelines: they’re not fixed in stone.  One of these tests looks like a technical test (the database one), but is really as much about business roles and responsibilities as the other two.  All of them, hopefully, can be used as a counter-balance to the three drivers I mentioned.


1 – which, don’t get me wrong, is definitely interesting and a business application – it’s just not what I’m going to talk about in this post.

2 – the trick is knowing which bits.  Let me know if you work out how, OK?

3 – it’s actually quite a large set of technologies, to be honest.

4 – which is patently untrue, unless the word “they” refers there to the conferences and gurus, in which case it’s probably correct.

5 – which may happen anyway due to my egregious mixing of metaphors.

6 – there’s a word to love.  I’ve put it in to exhibit my techie credentials[7].

7 – and before you doubt them, yes, I’ve read the book, in both cut and uncut versions.

8 – within reason.

Moving to DevOps, what’s most important? 

Technology, process or culture? (Clue: it’s not the first two)

You’ve been appointed the DevOps champion in your organisation: congratulations.  So, what’s the most important issue that you need to address?

It’s the technology – tools and the toolchain – right?  Everybody knows that unless you get the right tools for the job, you’re never going to make things work.  You need integration with your existing stack – though whether you go with tight or loose integration will be an interesting question – a support plan (vendor, 3rd party or internal), and a bug-tracking system to go with your source code management system.  And that’s just the start.

No!  Don’t be ridiculous: it’s clearly the process that’s most important.  If the team doesn’t agree on how stand-ups are run, who participates, the frequency and length of the meetings, and how many people are required for a quorum, then you’ll never be able to institute a consistent, repeatable working pattern.

In fact, although both the technology and the process are important, there’s a third component which is equally important, but typically even harder to get right: culture.  Yup, it’s that touchy-feely thing that we techies tend to struggle with[1].

Culture

I was visiting a medium-sized government institution a few months ago (not in the UK, as it happens), and we arrived a little early to meet the CEO and CTO.  We were ushered into the CEO’s office and waited for a while as the two of them finished participating in the daily stand-up.  They apologised for being a minute or two late, but far from being offended, I was impressed.  Here was an organisation where the culture of participation was clearly infused all the way up to the top.

Not that culture can be imposed from the top – nor can you rely on it percolating up from the bottom[3] – but these two C-level execs were not only modelling the behaviour they expected from the rest of their team, but also seemed, from the brief discussion we had about the process afterwards, to be truly invested in it.  If you can get management to buy into the process – and to be seen to buy in – you are at least less likely to have problems with other groups finding plausible excuses to keep their distance and get away with it.

So let’s say that management believes that you should give DevOps a go.  Where do you start?

Developers, tick?[5]

Developers may well be your easiest target group.  Developers are often keen to try new things, and to find ways to move things along faster, so they are often the group that can be expected to adopt new technologies and methodologies.  DevOps has arguably been mainly driven by the development community. But you shouldn’t assume that all developers will be keen to embrace this change.  For some, the way things have always been done – your Rick Parfitts of dev, if you will[7] – is fine.  Finding ways to help them work efficiently in the new world is part of your job, not just theirs.  If you have superstar developers who aren’t happy with change, you risk alienating them and losing them if you try to force them into your brave new world.  What’s worse, if they dig their heels in, you risk the adoption of your DevSecOps vision being compromised when they explain to their managers that things aren’t going to change if it makes their lives more difficult and reduces their productivity.

Maybe you’re not going to be able to move all the systems and people to DevOps immediately.  Maybe you’re going to need to choose which apps to start with, and who will be your first DevOps champions.  Maybe it’s time to move slowly.

Not maybe: definitely

No – I lied.  You’re definitely going to need to move slowly.  Trying to change everything at once is a recipe for disaster.

This goes for all elements of the change – which people to choose, which technologies to choose, which applications to choose, which user base to choose, which use cases to choose – bar one.  For all of those elements, if you try to move everything in one go, you will fail.  You’ll fail for a number of reasons.  You’ll fail for reasons I can’t imagine, and, more importantly, for reasons you can’t imagine, but some of the reasons will include:

  • people – most people – don’t like change;
  • technologies don’t like change (you can’t just switch and expect everything to work still);
  • applications don’t like change (things worked before, or at least failed in known ways: you want to change everything in one go?  Well, they’ll all fail in new and exciting[9] ways);
  • users don’t like change;
  • use cases don’t like change.

The one exception

You noticed that, above, I wrote “bar one”, when discussing which elements you shouldn’t choose to change all in one go?  Well done.

What’s that exception?  It’s the initial team.  When you choose your initial application to change, and you’re thinking about choosing the team to make that change, select the members carefully, and select a complete set.  This is important.  If you choose just developers, just test folks, or just security folks, or just ops folks, or just management, then you won’t actually have proved anything at all.  If you leave out one functional group from your list, you won’t actually have proved anything at all.  Well, you might have proved to a small section of your community that it kind of works, but you’ll have missed out on a trick.  And that trick is that if you choose keen people from across your functional groups, it’s much harder to fail.

Say that your first attempt goes brilliantly.  How are you going to convince other people to replicate your success and adopt DevOps?  Well, the company newsletter, of course.  And that will convince how many people, exactly?  Yes, that number[12].  If, on the other hand, you have team members from across the functional parts of the organisation, then when you succeed, they’ll tell their colleagues, and you’ll get more buy-in next time.

If, conversely, it fails, well, if you’ve chosen your team wisely, and they’re all enthusiastic, and know that “fail often, fail fast” is good, then they’ll be ready to go again.

So you need to choose enthusiasts from across your functional groups.  They can work on the technologies and the process, and once that’s working, it’s the people who will create that cultural change.  You can just sit back and enjoy.  Until the next crisis, of course.


1 – OK, you’re right.  It should be “with which we techies tend to struggle”[2]

2 – you thought I was going to qualify that bit about techies struggling with touchy-feely stuff, didn’t you?  Read it again: I put “tend to”.  That’s the best you’re getting.

3 – is percolating a bottom-up process?  I don’t drink coffee[4], so I wouldn’t know.

4 – do people even use percolators to make coffee anymore?  Feel free to let me know in the comments. I may pretend interest if you’re lucky.

5 – for US readers (and some other countries, maybe?), please substitute “check” for “tick” here[6].

6 – for US techie readers, feel free to perform “s/tick/check/;”.

7 – this is a Status Quo[8] reference for which I’m extremely sorry.

8 – for Millennial readers, please consult your favourite online reference engine or just roll your eyes and move on.

9 – for people who say, “but I love excitement”, try being on call at 2am on a Sunday morning at end of quarter when your Chief Financial Officer calls you up to ask why all of last month’s sales figures have been corrupted with the letters “DEADBEEF”[10].

10 – for people not in the know, this is a string often used by techies as test data because a) it’s non-numerical; b) it’s numerical (in hexadecimal); c) it’s easy to search for in debug files and d) it’s funny[11].

11 – though see [9].

12 – it’s a low number, is all I’m saying.

If it isn’t tested, it doesn’t work

Testing isn’t just coming up with tests for desired use cases.

Huh.  Shouldn’t that title be “If it isn’t tested, it’s not going to work”?

No.

I’m asserting something slightly different here – in fact, two things.  The first can be stated thus:

“In order for a system to ‘work’ correctly, and to defined parameters, test cases for all plausible conditions must be documented, crafted – and passed – before the system is considered to ‘work’.”

The second is a slightly more philosophical take on the question of what a “working system” is:

“An instantiated system – including software, hardware, data and wetware[1] components – may be considered to be ‘working’ if both its current state, and all known plausible future states from the working state have been anticipated, documented and appropriately tested.”

Let’s deal with these one by one, starting with the first[3].

Case 1 – a complete test suite

I may have given away the basis for my thinking by the phrasing in the subtitle above.  What I think we need to be looking for, when we’re designing a system, is to ensure that we have a test case for every plausible condition.  I considered “possible” here, but I think that may be going too far: for most systems, for instance, you don’t need to worry too much about meteor strikes.  This is an extension of the Agile methodology dictum: “a feature is not ‘done’ until it has a test case, and that test case has been passed.”  Each feature should be based on a use case, and a feature is considered correctly implemented when the test cases that are designed to test that feature are all correctly passed.

It’s too easy, however, to leave it there.  Defining features is, well, not easy, but something we know how to do.  “When a user enters a valid username/password combination, the splash-screen should appear.”  “When a file has completed writing, a tick should appear on the relevant icon.”  “If a user cancels the transaction, no money should be transferred between accounts.”  The last is a good one, in that it deals with an error condition.  In fact, that’s the next step beyond considering test cases for features that implement functionality to support actions that are desired: considering test cases to manage conditions that arise from actions that are undesired.

The problem is that many people, when designing systems, only consider one particular type of undesired action: accidental, non-malicious action.  This is the reason that you need to get security folks[4] in when you’re designing your system, and the related test cases.  In order to ensure that you’re reaching all plausible conditions, you need to consider intentional, malicious actions.  A system which has not considered and tested for these cannot, in my opinion, be said truly to be “working”.
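
Here’s a minimal, self-contained sketch of what that looks like in practice, using pytest-style tests.  The tiny login() implementation is a stand-in so that the example runs on its own; the point is the shape of the test suite – the desired use case, the accidental error, and the intentional, malicious actions are all written down and automated together.

    import pytest

    _FAILURES: dict[str, int] = {}
    _PASSWORDS = {"alice": "correct horse battery staple"}
    LOCKOUT_THRESHOLD = 5

    class AccountLockedError(Exception):
        pass

    def login(username: str, password: str) -> bool:
        """Toy stand-in for the system under test."""
        if _FAILURES.get(username, 0) >= LOCKOUT_THRESHOLD:
            raise AccountLockedError(username)
        if _PASSWORDS.get(username) == password:
            _FAILURES[username] = 0
            return True
        _FAILURES[username] = _FAILURES.get(username, 0) + 1
        return False

    # Desired use case.
    def test_valid_credentials_succeed():
        assert login("alice", "correct horse battery staple")

    # Accidental, non-malicious error.
    def test_wrong_password_rejected():
        assert not login("bob", "oops-typo")

    # Intentional, malicious actions - these need test cases too.
    def test_hostile_username_is_not_interpreted():
        assert not login("alice'; DROP TABLE users; --", "anything")

    def test_repeated_failures_lock_the_account():
        for _ in range(LOCKOUT_THRESHOLD):
            login("mallory", "guess")
        with pytest.raises(AccountLockedError):
            login("mallory", "guess")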

Case 2 – the bigger systems picture

I write fairly frequently[5] about the importance of systems and systems thinking, and one of the interesting things about a system, from my point of view, is that it’s arguably not really a system until it’s up and running: “instantiated”, in the language I used in my definition above.

Case 1 dealt, basically, with test cases and the development cycle.  That, by definition, is before you get to a fully instantiated system: one which is operating in the environment for which it was designed – you really, really hope – and is in situ.  Part of it may be quiescent, and that is hopefully as designed, but it is instantiated.

A system has a current state; it has a set of defined (if not known[7]) past states; and a set of possible future states that it can reach from there.  Again, I’m not going to insist that all possible states should be considered, for the same reasons I gave above, but I think that we do need to talk about all known plausible future states.

These types of conditions won’t all be security-related.  Many of them may be more appropriately thought of as to do with assurance or resilience.  But if you don’t get the security folks in, and early in the planning process, then you’re likely to miss some.

Here’s how it works.  If I am a business owner, and I am relying on a system to perform the tasks for which it was designed, then I’m likely to be annoyed if some IT person comes to me and says “the system isn’t working”.  However, if, in response to my question, “and did it fail due to something we had considered in our design and deployment of the system?”, the answer is “yes”, then I’m quite likely to move beyond annoyed to a state which, if we’re honest, the IT person could easily have considered, nay predicted, and which is closer to “incandescent” than “contented”[8].

Because if we’d considered a particular problem  – it was “known”, and “plausible” – then we should have put in place measures to deal with it. Some of those will be preventative measures, to stop the bad thing happening in the first place, and others will be mitigations, to deal with the effects of the bad thing that happened.  And there may also be known, plausible states for which we may consciously decide not to prepare.  If I’m a small business owner in Weston-super-mare[9], then I may be less worried about industrial espionage than if I’m a multi-national[10].  Some risks aren’t worth the bother, and that’s fine.

To be clear: the mitigations that we prepare won’t always be technical.  Let’s say that we come up with a scenario where an employee takes data from the system on a USB stick and gives it to a competitor.  It may be that we can’t restrict all employees from using USB sticks with the system, so we have to rely on legal recourse if that happens.  If, in that case, we call in the relevant law enforcement agency, then the system is working as designed if that was our plan to deal with this scenario.

Another point is that not all future conditions can be reached from the current working state, and if they can’t, then it’s fair to decide not to deal with them.  Once a TPM is initialised, for instance, taking it back to its factory state basically requires resetting it, so any system which relies on it will have been reset as well.

What about the last bit of my definition?  “…[A]nticipated, documented and appropriately tested.”  Well, you can’t test everything fully.  Consider that the following scenarios are all known and plausible for your system:

  • a full power-down for your entire data centre;
  • all of your workers are incapacitated by a ‘flu virus;
  • your main sysadmin is kidnapped;
  • an employee takes data from the system on a USB stick and gives it to a competitor.

You’re really not going to want to test all of these.  But you can at least perform paper exercises to consider what steps you should take, and also document them.  You might ensure that you know which law enforcement agency to call, and what the number is, for instance, instead of actually convincing an employee to leak information to a competitor and then having them arrested[11].

Conclusion

Testing isn’t just coming up with tests for desired use cases.  It’s not even good enough just to prepare for accidental undesired use cases on top of that.  We need to consider malicious use cases, too.   And testing in development isn’t good enough either: we need to test with live systems, in situ.  Because if we don’t, something, somewhere, is going to go wrong.

And you really don’t want to be the person telling your boss that, “well, we thought it might, but we never tested it.”


1 – “wetware” is usually defined as human components of a system (as here), but you might have non-human inputs (from animals or aliens), or even from flora[2], I suppose.

2 – “woodware”?

3 – because I, for one, need a bit of a mental run-up to the second one.

4 – preferably the cynical, suspicious types.

5 – if not necessarily regularly: people often confuse the two words.  A regular customer may only visit once a year, but always does it on the same day, whereas a frequent customer may visit on average once a week, but may choose a different day each week.[6]

6 – how is this relevant?  It’s not.

7 – yes, I know: Schrödinger’s cat, quantum effects, blah, blah.

8 – Short version: if the IT person says “it broke, and it did it in a way we had thought of before”, then I’m going to be mighty angry.

9 – I grew up nearby.  Windy, muddy, donkeys.

10 – which might, plausibly, also be based in Weston-super-mare, though I’m not aware of any.

11 – this is, I think, probably at least bordering on the unethical, and might get you in some hot water with your legal department, and possibly some other interested parties[12].

12 – your competitor might be pleased, though, so there is that.