I’m not going to write a post today. Please remember those who have lost people they love, the injured and those who care for and about them. And if you struggle to be positive, remember the helpers: all those who came, and are coming, to their aid.
…”am I safe from this ransomware thing?”
As you may have noticed*, there was something of a commotion over the past week when the WannaCrypt ransomware infection spread across the world, infecting all manner of systems**, most notably, from my point of view, many NHS systems. This is relevant to me because I’m UK-based, and also because I volunteer for the local ambulance service as a CFR (Community First Responder). And because I’m a security professional.
I’m not going to go into the whys and wherefores of the attack, the importance of keeping systems up to date, the morality of those who spread ransomware***, how to fund IT security, or the politics of patch release. All of these issues have been dealt with very well elsewhere. Instead, I’m going to discuss talking to people.
I’m slightly hopeful that this most recent attack is going to have some positive side effects. Now, in computing, we’re generally against side effects, as they usually have negative unintended consequences, but on Monday, I got a call from my Dad. I’m aware that this is the second post in a row to mention my family, but it turns out that my Dad trusts me to help him with his computing needs. This is somewhat laughable, since he uses a Mac, which employs an OS of which I have almost no knowledge****, but I was pleased that he even called to ask a question about it. The question was “am I safe from this ransomware thing?” The answer, as he’d already pretty much worked out, was “yes”, and he was also able to explain that he was unsurprised, because he knew that Macs weren’t affected, because he keeps his machine up to date, and because he keeps backups.
Somebody, somewhere (and it wasn’t me on this occasion) had done something right: they had explained, in terms that my father could understand, not only the impact of an attack, but also what to do to keep yourself safe (patching), what systems were most likely to be affected (not my Dad’s Mac), and what to do in mitigation (store backups). The message had come through the media, but the media, for a change, seemed to have got it correct.
I’ve talked before about the importance of informing our users, and allowing them to make choices. I think we need to be honest, as well, about when things aren’t going well, when we (individually, or communally) have made a mistake. We need to help them to take steps to protect themselves, and when that fails, to help them clear things up.
And who was it that made the mistake? The NSA, for researching vulnerabilities, or for letting them leak? Whoever it was that leaked them? Microsoft, for not providing patches? The sysadmins, for not patching? The suits, for not providing money for upgrades? The security group, for not putting sufficient controls in place to catch and contain the problem? The training organisation, for not training the users enough? The users, for ignoring training and performing actions which allowed the attack to happen?
Probably all of the above. But, in most of those cases, talking about the problem, explaining what to do, and admitting when we make a mistake, is going to help improve things, not bring the whole world crashing down around us. Talking, in other words, to “real” people (not just ourselves and each other*****): getting out there and having discussions.
Sometimes a lubricant can help: tea, beer, biscuits******. Sometimes you’ll even find that “real” people are quite friendly. Talk to them. In words they understand. But remember that even the best of them will nod off after 45 minutes or so of our explaining our passion to them. They’re only human, after all.
*unless you live under a rock.
**well, Windows systems, anyway.
****this is entirely intentional: the less I know about their computing usage, the easier it is for me to avoid providing lengthy and painful (not to mention unpaid) support services to my close family.
*****and our machines. Let’s not pretend we don’t do that.
******probably not coffee: as a community, we almost certainly drink enough of that as it is.
I trust my brother and my sister with my life.
Academic discussions about trust abound*. Particularly in the political and philosophical spheres, the issue of how people trust in institutions, and when and where they don’t, is an important topic of discussion, especially in the current political climate. Trust is also a concept which is very important within security, however, and not always well-defined or understood. It’s central to my understanding of what security means and how I discuss it, so I’m going to spend this post trying to explain what I mean by “trust”.
Here’s my definition of trust, and three corollaries.
- “Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation.”
- My first corollary**: “Trust is always contextual.”
- My second corollary: “One of the contexts for trust is always time.”
- My third corollary: “Trust relationships are not symmetrical.”
Why do we need this set of definitions? Surely we all know what trust is?
The problem is that whilst humans are very good at establishing trust with other humans (and sometimes betraying it), we tend to do so in a very intuitive – and therefore imprecise – way. “I trust my brother” is all very well as a statement, and may well be true, but such a statement is always made contextually, and that context is usually implicit. Let me provide an example.
I trust my brother and my sister with my life. This is literally true for me, and you’ll notice that I’ve already contextualised the statement: “with my life”. Let’s be a little more precise. My brother is a doctor, and my sister a trained scuba diving professional. I would trust my brother to provide me with emergency medical aid, and I would trust my sister to service my diving gear****. But I wouldn’t trust my brother to service my diving gear, nor my sister to provide me with emergency medical aid. In fact, I need to be even more explicit, because there are times when I would trust my sister in the context of emergency medical aid: I’m sure she’d be more than capable of performing CPR, for example. On the other hand, my brother is a paediatrician, not a surgeon, so I’d not be very confident about allowing him to perform an appendectomy on me.
Let’s look at what we’ve addressed. First, we dealt with my definition:
- the entities are me and my siblings;
- the actions ranged from performing an emergency appendectomy to servicing my scuba gear;
- the expectation was actually fairly complex, even in this simple example: it turns out that trusting someone “with my life” can mean a variety of things, from performing specific actions to remedy an emergency medical condition to performing actions which, if neglected or incorrectly carried out, could cause death in the future.
We also addressed the first corollary:
- the contexts included my having a cardiac arrest, requiring an appendectomy, and planning to go scuba diving.
Let’s add time – the second corollary:
- my sister has not recently renewed her diving instructor training, so I might feel that I have less trust in her to service my diving gear than I might have done five years ago.
The third corollary is so obvious in human trust relationships that we often ignore it, but it’s very clear in our examples:
- I’m neither a doctor nor a trained scuba diving instructor, so my brother and my sister trust me neither to provide emergency medical care nor to service their scuba gear.******
What does this mean to us in the world of IT security? It means that we need to be a lot more precise about trust, because humans come to this arena with a great many assumptions. When we talk about a “trusted platform”, what does that mean? It must surely mean that the platform is trusted by an entity (the workload?) to perform particular actions (provide processing time and memory?) whilst meeting particular expectations (not inspecting program memory? maintaining the integrity of data?). The context of what we mean by a “trusted platform” is likely to be very different between a mobile phone, a military installation and an IoT gateway. And that trust may erode over time (are patches applied? is there a higher likelihood that an attacker may have compromised the platform a day, a month or a year after the workload was provisioned to it?).
We should also never simply say, following the third corollary, that “these entities trust each other”. A web server and a browser may have established trust relationships, for example, but these are not symmetrical. The browser has probably established – with sufficient assurance for the person operating it to give up credit card details – that the web server represents the provider of particular products and services. The web server has probably established that the browser currently has permission to access the account of the user operating it.
Of course, we don’t need to be so explicit every time we make such a statement. We can explain these relationships in definitions or documents, but we must be careful to clarify what the entities, the expectations, the actions, the contexts and the possible changes in context are. Without this, we risk making dangerous assumptions about how these entities operate and what breakdowns in trust mean and could entail.
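If it helps to see that explicitness written down, here’s a tiny sketch – in Python, with entirely made-up entities and structure, not any real library or API – of what recording a trust relationship might look like: the truster and trustee are distinct (third corollary), the actions, expectation and context are spelled out (the definition and first corollary), and the relationship carries an expiry (second corollary).

```python
# A minimal, hypothetical sketch of the definition and corollaries above.
# Nothing here is a real library: the point is only that every element of
# the trust relationship is made explicit rather than left implied.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass(frozen=True)
class TrustRelationship:
    truster: str                # the entity holding the assurance
    trustee: str                # the entity expected to perform the actions
    actions: frozenset[str]     # what the trustee is trusted to do
    expectation: str            # the specific expectation attached to those actions
    context: str                # corollary 1: trust is always contextual
    expires: datetime           # corollary 2: one of the contexts is always time

    def holds(self, action: str, context: str, at: datetime) -> bool:
        """True only for the named action, in the named context, before expiry."""
        return action in self.actions and context == self.context and at < self.expires


now = datetime.now(timezone.utc)

# Corollary 3: trust relationships are not symmetrical, so each direction is
# modelled separately, with its own actions, expectation and context.
browser_trusts_server = TrustRelationship(
    truster="browser", trustee="web_server",
    actions=frozenset({"receive_card_details"}),
    expectation="represents the genuine provider of the products and services",
    context="TLS session established with a valid certificate",
    expires=now + timedelta(hours=1),
)
server_trusts_browser = TrustRelationship(
    truster="web_server", trustee="browser",
    actions=frozenset({"access_account"}),
    expectation="is currently operated by the authenticated account holder",
    context="valid session token presented",
    expires=now + timedelta(minutes=15),
)

print(browser_trusts_server.holds(
    "receive_card_details", "TLS session established with a valid certificate", now))  # True
print(server_trusts_browser.holds(
    "receive_card_details", "TLS session established with a valid certificate", now))  # False: not symmetrical
```

The exact fields don’t matter: what matters is that none of them is left implicit.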
*Which makes me think of rabbits.
**I’m hoping that we can all agree on these – otherwise we may need to agree on a corollary bypass.***
****I’m a scuba diver, too. At least in theory.*****
*****Bringing up children is expensive and time-consuming, it turns out.
******I am, however, a trained CFR, so I hope they’d trust me to perform CPR on them.
… given that I probably wouldn’t be writing this blog if I weren’t paid by my employer, I don’t feel too bad about mentioning our Summit…
I get paid*. By Red Hat. Not to write these blog posts, but to do a job of work. The musings which make up this blog aren’t necessarily directly connected to the views of my employer (see standard disclaimer), but given that I try to conduct my work with integrity and to blog about stuff I believe in, there’s a pretty good correlation**.
Anyway, given that I probably wouldn’t be writing this blog if I weren’t paid by my employer, I don’t feel too bad about mentioning our Summit this week. It’s my first one, and I’m involved in two sessions: one on moving to a hybrid (or multi-) cloud model, and one about the importance of systems security and what impact Open Source has on it (and vice versa). There’s going to be a lot of security-related content, presented by some fantastic colleagues, partners and customers, and, depending on quite how busy things are, I’m hoping to post snippets here.
In the meantime, I’d like to invite anybody who reads this blog and will be attending the Red Hat Summit to get in touch – it would be great to meet you.
*which is, in my view, a Good Thing[tm].
**see one of my favourite xkcd cartoons.
…here’s the interesting distinction between the classic IT security mindset and that of “the business”: the business generally want things to keep running.
Well, not all the time, obviously*. But bear with me: we spend most of our time ensuring that all of our systems are up and secure and working as expected, because that’s what we hope for, but there’s a real argument for not only finding out what happens when they don’t, and not just planning for when they don’t, but also planning for how they shouldn’t. Let’s start by examining some techniques for how we might do that.
Part 1 – planning
There’s a story** that the oil company Shell, in the 1970s, did some scenario planning that examined what were considered, at the time, very unlikely events, and which allowed it to react when OPEC’s strategy surprised most of the rest of the industry a few years later. Sensitivity modelling is another technique that organisations use at the financial level to understand what impact various changes – in order fulfilment, currency exchange or interest rates, for instance – make to the various parts of their business. Yet another is war gaming, which the military use to try to understand what will happen when failures occur: putting real people and their associated systems into situations and watching them react. And Netflix are famous for taking this a step further in the context of the IT world, with a virtual Chaos Monkey (a set of processes and scripts) which they use to bring down parts of their systems in real time to allow them to understand how resilient the wider system is.
So that gives us four approaches that are applicable, with various options for automation:
- scenario planning – trying to understand what impact large scale events might have on your systems;
- sensitivity planning – modelling the impact on your systems of specific changes to the operating environment;
- wargaming – putting your people and systems through simulated events to see what happens;
- real outages – testing your people and systems with actual events and failures.
Going out of your way to sabotage your own systems might seem like insane behaviour, but it’s actually a work of genius. If you don’t plan for failure, what are you going to do when it happens?
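To give a flavour of that fourth approach, here’s a deliberately tiny sketch in the spirit of Netflix’s Chaos Monkey – it isn’t their tool, and the service names are entirely hypothetical. It simply picks one service at random and stops it (via systemd, in this sketch), defaulting to a dry run; only point something like this at systems you’re explicitly allowed to break.

```python
#!/usr/bin/env python3
# A toy, Chaos-Monkey-style fault injector: pick one service at random and stop it.
# This is an illustrative sketch, not Netflix's Chaos Monkey. The service names are
# hypothetical, and it assumes systemd; adapt to whatever runs services in your world.
import random
import subprocess
import sys

CANDIDATE_SERVICES = ["web-frontend", "order-service", "report-generator"]  # hypothetical


def stop_random_service(dry_run: bool = True) -> None:
    victim = random.choice(CANDIDATE_SERVICES)
    if dry_run:
        print(f"[dry run] would have stopped: {victim}")
        return
    print(f"Stopping {victim} to see how the wider system copes...")
    subprocess.run(["systemctl", "stop", victim], check=True)


if __name__ == "__main__":
    # Pass --really to actually stop something; otherwise it only reports.
    stop_random_service(dry_run="--really" not in sys.argv)
```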
So let’s say that you’ve adopted all of these practices****: what are you going to do with the information? Well, there are some obvious things you can do, such as:
- removing discovered weaknesses;
- improving resilience;
- getting rid of single points of failure;
- ensuring that you have adequately trained staff;
- making sure that your backups are protected, but available to authorised entities.
I won’t try to compile an exhaustive list, because there are loads of books and articles and training courses about this sort of thing, but there’s another, maybe less obvious, course of action which I believe we must take, and that’s to plan for managed degradation.
Part 2 – managed degradation
What do I mean by that? Well, it’s simple. We***** are trained and indoctrinated to take the view that if something fails, it must always “fail to safe” or “fail to secure”. If something stops working right, it should stop working altogether.
There’s value in this approach, of course there is, and we’re paid****** to ensure everything is secure, right? Wrong. We’re actually paid to help keep the business running, and here’s the interesting distinction between the classic IT security mindset and that of “the business”: the business generally want things to keep running. Crazy, right? “The business” want to keep making money and servicing customers even if things aren’t perfectly secure! Don’t they know the risks?
And the answer to that question is “no”. They don’t know the risks. And that’s our real job: we need to explain the risks and the mitigations, and allow a balancing act to take place. In fact, we’re always making those trade-offs and managing that balance – after all, the only truly secure computer is one with no network connection, no keyboard, no mouse and no power connection*******. But most of the time, we don’t need to explain the decisions we make around risk: we just take them, following best industry practice, regulatory requirements and the rest. Nor are the trade-offs usually so stark – until failure strikes, whether through an attack, an accident or misfortune, and it becomes a pretty simple choice between maintaining a particular security posture and keeping the lights on. So we need to think about and plan for some degradation, and realise that on occasion, we may need to adopt a different security posture to the perfect (or at least preferred) one in which we normally operate.
How would we do that? Well, the approach I’m advocating is best described as “managed degradation”. We allow our systems – including, where necessary, our security systems – to degrade to a managed (and preferably planned) state, where we know that they’re not operating at peak efficiency, but where they are operating. Key, however, is that we know the conditions under which they’re working, so we understand their operational parameters, and can explain and manage the risks associated with this new posture. That posture may change in response to ongoing events, and to our handling of those events, so we need to plan ahead (using the techniques I discussed above) so that we can be flexible enough to provide real resiliency.
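To give a flavour of what “planned” might mean in practice, here’s a small, entirely hypothetical sketch: a handful of named postures, each stating which controls stay in force, which business operations keep running and what risk we’re knowingly accepting, plus a rule for choosing between them. None of the names or controls is prescriptive – the point is that the degraded states are agreed and written down before the incident, not improvised during it.

```python
# A minimal sketch of pre-agreed security postures for managed degradation.
# Posture names, controls and operations are all hypothetical and illustrative:
# the point is that degraded states are planned in advance, not improvised.
from dataclasses import dataclass


@dataclass(frozen=True)
class Posture:
    name: str
    controls_enabled: tuple[str, ...]    # security controls still in force
    operations_allowed: tuple[str, ...]  # business operations we keep running
    accepted_risk: str                   # what we knowingly give up, in plain words


POSTURES = {
    "normal": Posture(
        "normal",
        controls_enabled=("tls", "fraud_checks", "full_audit_logging"),
        operations_allowed=("create_orders", "collect_payments", "refunds"),
        accepted_risk="baseline only",
    ),
    "degraded": Posture(
        "degraded",
        controls_enabled=("tls", "full_audit_logging"),
        operations_allowed=("create_orders",),  # keep taking orders, defer payment collection
        accepted_risk="fraud checks offline; payments deferred and reconciled later",
    ),
    "minimal": Posture(
        "minimal",
        controls_enabled=("tls",),
        operations_allowed=(),
        accepted_risk="read-only status page only; all transactions halted",
    ),
}


def select_posture(payment_gateway_up: bool, fraud_service_up: bool) -> Posture:
    """Choose a pre-agreed posture from the state of (hypothetical) dependencies."""
    if payment_gateway_up and fraud_service_up:
        return POSTURES["normal"]
    if payment_gateway_up or fraud_service_up:
        return POSTURES["degraded"]
    return POSTURES["minimal"]


print(select_posture(payment_gateway_up=True, fraud_service_up=False).name)  # "degraded"
```

The exact shape of this is unimportant; what matters is that the business has seen and agreed these states before they’re needed.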
We need to find modes of operation which don’t expose the crown jewels******** of the business, but do allow key business operations to take place. And those key business operations may not be the ones we expect – maybe it’s more important to be able to create new orders than to collect payments for them, for instance, at least in the short term. So we need to discuss the options with the business, and respond to their needs. This planning is not just security resiliency planning: it’s business resiliency planning. We won’t be able to consider all the possible failures – though the techniques I outlined above will help us to identify many of them – but the more we plan for, the better we will be at reacting to the surprises. And, possibly best of all, we’ll be talking to the business, informing them, learning from them, and even, maybe just a bit, helping them understand that the job we do does have some value after all.
*I’m assuming that we’re the Good Guys/Gals**.
**Maybe less story than MBA*** case study.
***There’s no shame in it.
****Well done, by the way.
*****The mythical security community again – see past posts.
*******Preferably at the bottom of a well, encased in concrete, with all storage already removed and destroyed.
********Probably not the actual Crown Jewels, unless you work at the Tower of London.
… a basic grounding in cryptography is vital …
I am, by many measures, almost uniquely badly qualified* to talk about IT security, given that my degree is in English Literature and Theology (I did two years of each, finishing with the latter), and the only other formal university qualification I have is an MBA. Neither of these seems a great starting point for a career in IT security. Along the way, admittedly, I did pick up a CISSP qualification and took an excellent SANS course on Linux and UNIX security, but that’s pretty much it. I should also point out in my defence that I was always pretty much a geek at school***, learning Pascal and Assembly to optimise my Mandelbrot set generator**** and spending countless hours trying to create simple stickman animations.
The rest of it was learnt on the job, at seminars, meetings, from colleagues or from books. What prompted me to write this particular post was a post over at IT Security Guru, “9 out of 10 IT Security Pros Surveyed Favour Experience over Qualifications – FireMon”, a brief analysis of a survey disclosed on FireMon’s site.
This cheered me, I have to say, given my background, but it also occurred to me that I sometimes get asked what advice I have for people who are interested in getting involved in IT Security. I’m wary of providing a one-size-fits-all answer, but there’s one action, and three books, that I tend to suggest, so I thought I’d share them here, in case they’re useful to anyone.
- get involved in an Open Source project, preferably related to security. Honestly, this is partly because I’m passionate about Open Source, but also because it’s something that I know I and others look for on a CV*****. You don’t even need to be writing code, necessarily: there’s a huge need for documentation, testing, UI design, evangelism****** and the rest, but it’s great exposure, and can give you a great taster of what’s going on. You can even choose a non-security project, but consider getting involved in security-related work for that project.
Three books******* to give you a taste of the field, and a broad grounding:
- Security Engineering: A Guide to Building Dependable Distributed Systems, by Ross Anderson. I learned more about security systems from this book than any other. I think it gives a very good overview of the field from a point of view that makes sense to me. There’s deep technical detail in here, but you don’t need to understand all of it on first reading in order to get a lot of benefit.
- Practical Cryptography, by Niels Ferguson and Bruce Schneier. Schneier has been in the field of security for a long time (many of his books are worth reading, as is his monthly email, CRYPTO-GRAM), and this book is a follow-up to his classic “Applied Cryptography”. In Practical Cryptography, the authors acknowledged that security is about more than just mathematics, and that the human element is also important. This book goes into quite a lot of technical depth, but again, you don’t have to follow all of it to benefit.
- Cryptonomicon, by Neal Stephenson. This is a (very long!) work of fiction, but it has a lot of security background and history in it, and also gives a good view into the mindset of how many security people think – or used to think! I love it, and re-read it every few years.
I’m aware that the second and third are unashamedly crypto-related (though there’s a lot more general security in Cryptonomicon than the title suggests), and I make no apology for that. I think that a basic grounding in cryptography is vital for anyone wishing to make a serious career in IT Security. You don’t need to understand the mathematics, but you do need to understand, if not how to use crypto correctly, then at least the impact of using it incorrectly********.
So, that’s my lot. If anyone has other suggestions, feel free to post them in comments. I have some thoughts on some more advanced books around architecture which I may share at some point, but I wanted to keep it pretty simple for now.
*we could almost stop the sentence here**, to be honest.
**or maybe the entire article.
***by which I mean “before university”. When Americans ask Brits “are you at school?”, we get upset if we’ve already started university (do we really look that young?).
****the Pascal didn’t help, because BBC BASIC was so fast already, and floating point was so difficult in Assembly that I frankly gave up.
*****”Curriculum Vitae”. If you’re from North America, think “Resumé”, but it’s Latin, not French.
******I know quite a lot about evangelism, given my degree in Theology, but that’s a story for another time.
*******All of these should be available from a decent library. If your university/college/town/city library doesn’t have these, I’d lobby for them. You should also be able to find them online. Please consume them legally: authors deserve to be paid for their work.
********Spoiler: it’s bad. Very bad.
There is a view that because Open Source Software is subject to review by many eyes, all the bugs will be ironed out of it. This is a myth.
Writing code is hard. Writing secure code is harder: much harder. And before you get there, you need to think about design and architecture. When you’re writing code to implement security functionality, it’s often based on architectures and designs which have been pored over and examined in detail. They may even reflect standards which have gone through worldwide review processes and are generally considered perfect and unbreakable*.
However good those designs and architectures are, though, there’s something about putting things into actual software that’s, well, special. With the exception of software proven to be mathematically correct**, being able to write software which accurately implements the functionality you’re trying to realise is somewhere between a science and an art. This is no surprise to anyone who’s actually written any software, tried to debug software or divine software’s correctness by stepping through it. It’s not the key point of this post either, however.
Nobody*** actually believes that the software that comes out of this process is going to be perfect, but everybody agrees that software should be made as close to perfect and bug-free as possible. It is for this reason that code review is a core principle of software development. And luckily – in my view, at least – much of the code that we use these days in our day-to-day lives is Open Source, which means that anybody can look at it, and it’s available for tens or hundreds of thousands of eyes to review.
And herein lies the problem. There is a view that because Open Source Software is subject to review by many eyes, all the bugs will be ironed out of it. This is a myth. A dangerous myth. The problems with this view are at least twofold. The first is the “if you build it, they will come” fallacy. I remember when there was a list of all the websites in the world, and if you added your website to that list, people would visit it****. In the same way, the number of Open Source projects was (maybe) once so small that there was a good chance that people might look at and review your code. Those days are past – long past. Second, for many areas of security functionality – crypto primitives implementation is a good example – the number of suitably qualified eyes is low.
Don’t think that I am in any way suggesting that the problem is any less in proprietary code: quite the opposite. Not only are the designs and architectures in proprietary software often hidden from review, but you have fewer eyes available to look at the code, and the dangers of hierarchical pressure and groupthink are dramatically increased. “Proprietary code is more secure” is less myth, more fake news. I completely understand why companies like to keep their security software secret – and I’m afraid that the “it’s to protect our intellectual property” line is too often a platitude they tell themselves, when really, it’s just unsafe to release it. So for me, it’s Open Source all the way when we’re looking at security software.
So, what can we do? Well, companies and other organisations that care about security functionality can – and have, I believe, a responsibility to – expend resources on checking and reviewing the code that implements that functionality. That is part of what Red Hat, the organisation for whom I work, is committed to doing. Alongside that, we, the Open Source community, can – and do – find ways to support critical projects and improve the amount of review that goes into that code*****. And we should encourage academic organisations to train students in the black art of security software writing and review, not to mention highlighting the importance of Open Source Software.
We can do better – and we are doing better. Because what we need to realise is that the reason the “many eyes hypothesis” is a myth is not that many eyes won’t improve code – they will – but that we don’t have enough expert eyes looking. Yet.
* Yeah, really: “perfect and unbreakable”. Let’s just pretend that’s true for the purposes of this discussion.
** …and which still relies on the design and architecture actually to do what you want – or think you want – of course, so good luck.
*** nobody who’s actually written more than about 5 lines of code (or more than 6 characters of Perl)
**** I added one. They came. It was like some sort of magic.