Turtles – and chains of trust

There’s a story about turtles that I want to tell.

One of the things that confuses Brits is that many Americans[1] don’t know the difference between tortoises and turtles[2], whereas we (who have no species of either type which are native to our shores) seem to have no problem differentiating them[3].  This is the week when Americans[1] like to bash us Brits over the little revolution they had a couple of centuries ago, so I don’t feel too bad about giving them a little hassle about this.

As it happens, there’s a story about turtles that I want to tell. It’s important to security folks, to the extent that you may hear a security person just say “turtles” to a colleague in criticism of a particular scheme, which will just elicit a nod of agreement: they don’t like it. There are multiple versions of this story[4]: here’s the one I tell:

A learned gentleman[5] is giving a public lecture.  He has talked about the main tenets of modern science, such as the atomic model, evolution and cosmology.  At the end of the lecture, an elderly lady comes up to him.

“Young man,” she says.

“Yes,” says he.

“That was a very interesting lecture,” she continues.

“I’m glad you enjoyed it,” he replies.

“You are, however, completely wrong.”

“Really?” he says, somewhat taken aback.

“Yes.  All that rubbish about the Earth hovering in space, circling the sun.  Everybody knows that the Earth sits on the back of a turtle.”

The lecturer, spotting a hole in her logic, replies, “But madam, what does the turtle sit on?”

The elderly lady looks at him with a look of disdain.  “What a ridiculous question!  It’s turtles all the way down, of course!”

The problem with the elderly lady’s assertion, of course, is one of infinite regression: there has to be something at the bottom. The reason that this is interesting to security folks is that they know that systems need to have a “bottom turtle” at some point.  If you are to trust a system, it needs to sit on something: this is typically called the “TCB”, or Trusted Computing Base, and, in most cases, needs to be rooted in hardware.  Even saying “rooted in hardware” is not enough: exactly what hardware you trust, and where, depends on a number of factors, including what you know about your hardware supply chain; what you feel about motherboards; what your security posture is; how realistic it is that State Actors might try to attack you; how deeply you want to delve into the hardware stack; and, ultimately, just how paranoid you are.

Principles for chains of trust

When you are building a system which you need to have some trust in, you will typically talk about the chain of trust, from the bottom up.  This idea of a chain of trust is very important, and very pervasive, within security.  It allows for some important principles:

  • there has to be a root of trust somewhere (the “bottom turtle”);
  • the chain is only as strong as its weakest link (and attackers will find it);
  • be explicit about each of the links in the chain;
  • realise that some of the links in the chain may change (e.g. if software is updated);
  • be aware that once you have lost trust in a chain, you need to rebuild it from at least the layer below the one in which you have lost trust;
  • simple chains (with no “joins” with other chains of trust) are much, much simpler to validate and monitor than more complex ones.

Software/hardware systems are not the only place in which you will encounter chains of trust: in fact, you come across them every time you make a TLS[6] connection to a web site (you know: that green padlock icon in the address bar).  In this case, there’s a chain (sometimes short, sometimes long) of certificates from a “root CA” (a trusted party that your browser knows about) down to the organisation (or department or sub-organisation) running the web site to which you’re connecting.  Assuming that each link in the chain trusts the next link to be who they say they are, the chain of signatures (turned into a certificate) can be checked, to give an indication, at least, that the site you’re visiting isn’t a spoof site set up by somebody pretending to be, for example, your bank.  In this case, the bottom turtle is the root CA[7], and its manifestation in the chain of trust is its root certificate.
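To make the chain concrete, here is a minimal sketch in Python (the host name is just a placeholder): the TLS handshake below only succeeds if the server’s certificate chains, signature by signature, up to a root CA certificate already present in the local trust store – the “bottom turtle” for this connection.

```python
# Minimal sketch: rely on the local root CA store ("bottom turtle") to
# validate a web site's certificate chain during the TLS handshake.
import socket
import ssl

HOSTNAME = "example.com"  # placeholder – substitute the site you care about

context = ssl.create_default_context()  # loads the system's trusted root CAs

with socket.create_connection((HOSTNAME, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
        # Reaching this point means every link was checked:
        # leaf certificate -> intermediate CA(s) -> trusted root CA.
        print("Negotiated", tls.version())
        print("Leaf certificate subject:", tls.getpeercert()["subject"])
```

If any link fails to verify – an expired intermediate, an unknown root – the handshake raises an error instead, which is the “weakest link” principle in action.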

And chains of trust aren’t restricted to the world of IT, either: supply chains care a lot about chains of trust.  Can you be sure that the diamond in the ring you bought from your local jewellery store, who got it from an artisan goldsmith, who got it from a national diamond chain, did not originally come from a “blood diamond” nation?  Can you be sure that the replacement part for your car, which you got from your local independent dealership, is an original part, and can the manufacturer be sure of the quality of the materials they used?  Blockchains are offering some interesting ways to help track these sorts of supply chains, and can even be applied to supply chains in software.

Chains of trust are everywhere we look.  Some are short, and some are long.  In most cases, there will be a need to employ transitive trust – I need to believe that whoever created my browser checked the root CA, just as you need to believe that your local dealership verified that the replacement part came from the right place – because the number of links that we can verify ourselves is typically low.  This may be due to a variety of factors, including time, expertise and visibility.  But the more we are aware of the fact that there is a chain of trust in any particular situation, the more we can make conscious decisions about the amount of trust we should put in it, rather than making assumptions about the safety, security or validation of something we are buying or using.


1 – citizens of the US of A.

2 – have a look on a stock photography site like Pixabay if you don’t believe me.

3 – tortoises are land-based, turtles are aquatic, I believe.

4 – Wikipedia has a good article explaining both the concept and the story’s etymology.

5 – the genders of the protagonists are typically as I tell, which tells you a lot about the historical context, I’m afraid.

6 – this used to be “SSL”, but if you’re still using SSL, you’re in trouble: it’s got lots of holes in it!

7 – or is it?  You could argue that the HSM that (hopefully) houses the root CA, or the processes that protect it, could be considered the bottom turtle.  For the purposes of this discussion, however, the extent of the “system” is the certificate chain and its signers.

Building Evolutionary Architectures – for security and for open source

Consider the fitness functions, state them upfront, have regular review.

Ford, N., Parsons, R. & Kua, P. (2017) Building Evolutionary Architectures: Support Constant Change. Sebastopol, CA: O’Reilly Media.

https://www.oreilly.com/library/view/building-evolutionary-architectures/9781491986356/

This is my first book review on this blog, I think, and although I don’t plan to make a habit of it, I really like this book, and the approach it describes, so I wanted to write about it.  Initially, this article was simply a review of the book, but as I got into it, I realised that I wanted to talk about how the approach it describes is applicable to a couple of different groups (security folks and open source projects), and so I’ve gone with it.

How, then, did I come across the book?  I was attending a conference a few months ago (DeveloperWeek San Diego), and decided to go to one of the sessions because it looked interesting.  The speaker was Dr Rebecca Parsons, and I liked what she was talking about so much that I ordered this book, whose subject was the topic of her talk, to arrive at home by the time I would return a couple of days later.

Building Evolutionary Architectures is not a book about security, but it deals with security as one application of its approach, and very convincingly.  The central issue that the authors – all employees of Thoughtworks – identify is, simplified, that although we’re good at creating features for applications, we’re less good at creating, and then maintaining, broader properties of systems. This problem is compounded, they suggest, by the fast and ever-changing nature of modern development practices, where “enterprise architects can no longer rely on static planning”.

The alternative that they propose is to consider “fitness functions”, “objectives you want your architecture to exhibit or move towards”.  Crucially, these are properties of the architecture – or system – rather than features or specific functionality.  Tests should be created to monitor the specific functions, but they won’t be your standard unit tests, nor will they necessarily be “point in time” tests.  Instead, they will measure a variety of issues, possibly over a period of time, to let you know whether your system is meeting the particular fitness functions you are measuring.  There’s a lot of discussion of how to measure these fitness functions, but I would have liked even more: from my point of view, it was one of the most valuable topics covered.
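As an illustration of what such a test might look like – this is my sketch, not an example from the book, and the package and module names are invented – here is a simple architectural fitness function which checks, on every CI run, that no module in an API layer imports the persistence layer directly:

```python
# Sketch of an architectural fitness function: fail the build if any module
# under the (hypothetical) API package imports the persistence layer directly.
import ast
import pathlib

FORBIDDEN = "myapp.persistence"              # hypothetical internal module
SOURCE_ROOT = pathlib.Path("src/myapp/api")  # hypothetical package to check

def offending_imports(root: pathlib.Path, forbidden: str):
    """Yield (file, module) pairs for imports that break the coupling rule."""
    for path in root.rglob("*.py"):
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.ImportFrom) and node.module:
                if node.module.startswith(forbidden):
                    yield path, node.module
            elif isinstance(node, ast.Import):
                for alias in node.names:
                    if alias.name.startswith(forbidden):
                        yield path, alias.name

if __name__ == "__main__":
    violations = list(offending_imports(SOURCE_ROOT, FORBIDDEN))
    assert not violations, f"Coupling fitness function failed: {violations}"
```

Run regularly, a check like this monitors a property of the architecture (coupling) rather than a feature, which is exactly the distinction the book is drawing.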

Frankly, the above might be enough to recommend the book, but there’s more.  They advocate strongly for creating incremental change to meet your requirements (gradual, rather than major changes) and “evolvable architectures”, encouraging you to realise that:

  1. you may not meet all your fitness functions at the beginning;
  2. applications which may have met the fitness functions at one point may cease to meet them later on, for various reasons;
  3. your architecture is likely to change over time;
  4. your requirements, and therefore the priority that you give to each fitness function, will change over time;
  5. even if your fitness functions remain the same, the ways in which you need to monitor them may change.

All of these are, in my view, extremely useful insights for anybody designing and building a system: combining them with architectural thinking is even more valuable.

As is standard for modern O’Reilly books, there are examples throughout, including a worked fake consultancy journey of a particular company with specific needs, leading you through some of the practices in the book.  At times, this felt a little contrived, but the mechanism is generally helpful.  There were times when the book seemed to stray from its core approach – which is architectural, as per the title – into explanations through pseudo code, but these support one of the useful aspects of the book, which is giving examples of what architectures are more or less suited to the principles expounded in the more theoretical parts.  Some readers may feel more at home with the theoretical, others with the more example-based approach (I lean towards the former), but all in all, it seems like an appropriate balance.  Relating these to the impact of “architectural coupling” was particularly helpful, in my view.

There is a useful grounding in some of the advice in Conway’s Law (“Organizations [sic] which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”) which led me to wonder how we could model open source projects – and their architectures – based on this perspective.  There are also (as is also standard these days) patterns and anti-patterns: I would generally consider these a useful part of any book on design and architecture.

Why is this a book for security folks?

The most important thing about this book, from my point of view as a security systems architect, is that it isn’t about security.  Security is mentioned, but is not considered core enough to the book to merit a mention in the appendix.  The point, though, is that the security of a system – an embodiment of an architecture – is a perfect example of a fitness function.  Taking this as a starting point for a project will help you do two things:

  • avoid focussing on features and functionality, and look at the bigger picture;
  • consider what you really need from security in the system, and how that translates into issues such as the security posture to be adopted, and the measurements you will take to validate it through the lifecycle.

Possibly even more important than those two points is that it will force you to consider the priority of security in relation to other fitness functions (resilience, maybe, or ease of use?) and how the relative priorities will – and should – change over time.  A realisation that we don’t live in a bubble, and that our priorities are not always the same as those of other stakeholders in a project, is always useful.

Why is this a book for open source folks?

Very often – and for quite understandable and forgiveable reasons – the architectures of open source projects grow organically at first, needing major overhauls and refactoring at various stages of their lifecycles.  This is not to say that this doesn’t happen in proprietary software projects as well, of course, but the sometimes frequent changes in open source projects’ emphasis and requirements, the ebb and flow of contributors and contributions and the sometimes, um, reduced levels of documentation aimed at end users can mean that features are significantly prioritised over what we could think of as the core vision of the project.  One way to remedy this would be to consider the appropriate fitness functions of the project, to state them upfront, and to have a regular cadence of review by the community, to ensure that they are:

  • still relevant;
  • correctly prioritised at this stage in the project;
  • actually being met.

If any of the above come into question, it’s a good time to consider a wider review by the community, and maybe a refactoring or partial redesign of the project.

Open source projects have – quite rightly – various different models of use and intended users.  One of the happenstances that can negatively affect a project is when it is identified as a possible fit for a use case for which it was not originally intended.  Academic software which is designed for accuracy over performance might not be a good fit for corporate research, for instance, in the same way that a project aimed at home users which prioritises minimal computing resources might not be appropriate for a high-availability enterprise roll-out.  One of the ways of making this clear is by being very clear up-front about the fitness functions that you expect your project to meet – and, vice versa, about the fitness functions you are looking to fulfil when you are looking to select a project.  It is easy to focus on features and functionality, and to overlook the more non-functional aspects of a system, and fitness functions allow us to make some informed choices about how to balance these decisions.

Trust & choosing open source

Your impact on open source can be equal to that of others.

A long time ago, in a standards body far, far away, I was involved in drafting a document about trust and security. That document rejoices in the name ETSI GS NFV-SEC 003: Network Functions Virtualisation (NFV); NFV Security; Security and Trust Guidance[1], and section 5.1.6.3[2] talks about “Transitive trust”.  Convoluted and lengthy as the document is, I’m very proud of it[3], and I think it tackles a number of very important issues, including (unsurprisingly, given the title) a number of issues around trust.  It defines transitive trust thus:

“Transitive trust is the decision by an entity A to trust entity B because entity C trusts it.”

It goes on to disambiguate transitive trust from delegated trust, where C knows about the trust relationship.
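A toy sketch – mine, not the standard’s, with entirely made-up entities – may help to show the difference between the two:

```python
# Toy model of transitive vs delegated trust, following the definitions above.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    trusts: set = field(default_factory=set)          # names this entity trusts directly
    known_reliers: set = field(default_factory=set)   # names it knows rely on its judgement

def transitive_trust(a: Entity, b: Entity, c: Entity) -> bool:
    """A decides to trust B because C (whom A trusts) trusts B."""
    return c.name in a.trusts and b.name in c.trusts

def delegated_trust(a: Entity, b: Entity, c: Entity) -> bool:
    """As above, but C also knows that A is relying on its trust in B."""
    return transitive_trust(a, b, c) and a.name in c.known_reliers

a, b, c = Entity("A"), Entity("B"), Entity("C")
a.trusts.add("C")
c.trusts.add("B")
print(transitive_trust(a, b, c))   # True: A trusts B via C
print(delegated_trust(a, b, c))    # False until C records A as a relier
```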

At no point in the document does it mention open source software.  To be fair, we were trying to be even-handed and show no favour towards any type of software or vendors – many of the companies represented on the standards body were focused on proprietary software – and I wasn’t even working for Red Hat at the time.

My move to Red Hat, and, as it happens, generally away from the world of standards, has led me to think more about open source.  It’s also led me to think more about trust, and how people decide whether or not to use open source software in their businesses, organisations and enterprises.  I’ve written, in particular, about how, although open source software is not ipso facto more secure than proprietary software, the chances of it being more secure, or made more secure, are higher (in Disbelieving the many eyes hypothesis).

What has this to do with trust, specifically transitive trust?  Well, I’ve been doing more thinking about how open source and trust are linked together, and distributed trust is a big part of it.  Distributed trust and blockchain are often talked about in the same breath, and I’m glad, because I think that all too often we fall into the trap of ignoring the fact that there definitely are trust relationships associated with blockchain – they are just often implicit, rather than well-defined.

What I’m interested in here, though, is the distributed, transitive trust as a way of choosing whether or not to use open source software.  This is, I think, true not only when talking about non-functional properties such as the security of open source but also when talking about the software itself.  What are we doing when we say “I trust open source software”?  We are making a determination that enough of the people who have written and tested it have similar requirements to mine, and that their expertise, combined, is such that the risk of my using the software is acceptable.

There’s actually a lot going on here, some of which is very interesting:

  • we are trusting architects and designers to design software to meet our use cases and requirements;
  • we are trusting developers to implement code well, to those designs;
  • we are trusting developers to review each other’s code;
  • we are trusting documentation folks to document the software correctly;
  • we are trusting testers to write, run and check tests which are appropriate to my use cases;
  • we are trusting those who deploy the code to run it in ways which are similar to my use cases;
  • we are trusting those who deploy the code to report bugs;
  • we are trusting those who receive bug reports to fix them as expected.

There’s more, of course, but that’s definitely enough to get us going.  Of course, when we choose to use proprietary software, we’re trusting people to do all of those things, but in that case, the trust relationship is much clearer, and much tighter: if I don’t get what I expect, I can choose another vendor, or work with the original vendor to get what I want.

In the case of open source software, it’s all more nebulous: I may be able to identify at least some of the entities involved (designers, software engineers and testers, for example), but the amount of power that I as a consumer of the software have over their work is likely to be low.  There’s a weird almost-paradox here, though: you can argue that for proprietary software vendors, my power over the direction of the software is higher (I’m paying them or not paying them), but my direct visibility into what actually goes on, and my ability to ensure that I get what I want is reduced when compared to the open source case.

That’s because, for open source, I can be any of the entities outlined above.  I – or those in my organisation – can be architect, designer, document writer, tester, and certainly deployer and bug reporter.  When you realise that your impact on open source can be equal to that of others, the distributed trust becomes less transitive.  You understand that you have an equal say, alongside all the other entities, in the creation, maintenance, requirements and quality of the software which you are running, and then you become part of a network of trust relationships which are distributed, but at less of a remove than those you’ll experience when buying proprietary software.

Why, then, would anybody buy or license open source software from a vendor?  Because that way, you can address other risks – around support, patching, training, etc. – whilst still enjoying the benefits of the distributed trust network that I’ve outlined above.  There’s a place for those who consume directly from the source, but that approach doesn’t suit the risk appetite of all software consumers – including those who are involved in the open source community themselves.

Trust is a complex issue, and the ways in which we trust other things and other people are complex, too (you’ll find a bit of an introduction in Of different types of trust), but I think it’s desperately important that we examine and try to understand the sorts of decisions we make, and why we make them, in order to allow us to make informed choices around risk.


1 – if you’ve not been involved in standards creation, this may fill you with horror, but if you have been so involved, this sort of title probably feels normal.  You may need help.

2 – see 1.

3 – I was one of two “rapporteurs”, or editors of the document, and wrote a significant part of it, particularly the sections around trust.

What’s an HSM?

HSMs are not right for every project, but form an important part of our armoury.

Another week, another TLA[1].  This time round, it’s Hardware Security Module: an HSM.  What, then, is an HSM, what is it used for, and why should I care?  Before we go there, let’s think a bit about keys: specifically, cryptographic keys.

The way that most cryptography works these days is that the algorithms to implement a particular primitive[3] are public, and it’s generally accepted that it doesn’t matter whether you know what the algorithm is, or how it works, as it’s the security of the keys that matters.  To give an example: I plan to encrypt a piece of data under the AES algorithm[4], which allows for a particular type of (symmetric) encryption.  There are two pieces of data which are fed into the algorithm:

  1. the data you want to encrypt (the cleartext);
  2. a key that you’ve chosen to encrypt it.

Out comes one piece of data:

  1. the encrypted text (the ciphertext).

In order to decrypt the ciphertext, you feed that and the key into the AES algorithm, and the original cleartext comes out.  Everything’s great – until somebody gets hold of the key.
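For the curious, here is roughly what that flow looks like in code – a sketch assuming the Python “cryptography” package and its AES-GCM mode, and glossing over the same nuances as footnote 4:

```python
# Sketch of the AES flow described above: cleartext plus key in, ciphertext
# out, and the same key recovers the cleartext again.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the secret that must be protected
nonce = os.urandom(12)                     # must be unique per encryption with this key
cleartext = b"attack at dawn"

aesgcm = AESGCM(key)
ciphertext = aesgcm.encrypt(nonce, cleartext, None)  # None = no associated data
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == cleartext  # anyone who holds the key can do exactly this
```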

This is where HSMs come in.  Keys are vital, and they are vulnerable:

  • at creation time – if I can trick you into creating a key some of whose bits I can guess, I increase my chances of being able to decrypt your ciphertext;
  • during use – while you’re doing the encryption or decryption of your data, your key will be in memory, which means that if I can snoop into that memory, I can get it (see also below for information on “side channel attacks”);
  • while stored – unless you protect your key while it’s “at rest”, and waiting to be used, I may have opportunities to get it;
  • while being transferred – if you store your keys somewhere different to the place in which you’re using them, I may have an opportunity to intercept them as they move to the place they will be used.

HSMs can help in one way or another with all of these pieces, but why do we need them?  The key reason is that there are times when you can’t be certain that the system(s) you are using for creating, using, storing and transferring keys are as secure as you’d like.  If the keys we’re talking about are for encrypting a few emails between you and your spouse, well, you might find it embarrassing if they were compromised, but if these keys are ones from which, say, you derive all of the credit card chip keys for an entire bank, then you have a rather larger problem.  When it comes down to it, somebody with sufficient privilege on a standard computing system can look at any part of memory – unless there’s a TEE[5] (Oh, how I love my TEE (or do I?)) – and if they can look at the memory, they can see the key.

Worse than this, there are occasions when even if you can’t see into memory, you might be able to derive enough information about a key – or the ciphertext or cleartext – to be able to mount an attack on it.  Attacks of this type are generally called “side channel attacks”, and you can think of them as a little akin to being able to work out the number of cylinders and valves a car[6] engine has by listening to it through the bonnet[7].  The engine leaks information about itself, even though it’s not designed with that in mind.  HSMs are (generally) good at preventing both types of attacks: it’s what they’re designed to do.

Here, then, is a definition:

An HSM is a piece of hardware with protected storage which can perform cryptographic operations.  It is attached to a system – via a network connection or another connection such as PCI – and has physical protection against various attacks, from side channel attacks to somebody physically levering open the case and attaching wires to important components so that they can read the electrical signals.

Many HSMs undergo testing to get certification against certain standards such as “FIPS 140” to show their ability to withstand various types of attack.

Here are the main uses for HSMs.

Key creation

Creation of keys is, as alluded to above, a very important operation, and one where side channel attacks have proved very effective in the past.  HSMs can provide safe(r) key generation, and ensure appropriate levels of randomness (entropy) for the required strength of key.

Key storage

HSMs are typically designed so that if somebody tries to break into them, they will delete any keys which are stored within them, so they’re a good place to store your keys.

Cryptographic operations

Rather than putting your keys at risk by transferring them to another system, and away from the safety of the HSM, why not move the cleartext to the HSM (encrypted under a transport key, preferably), get the HSM to do the encryption with the keys that it already holds, and then send the ciphertext back (encrypted under a transport key[8])?  This reduces opportunities for attacks during transport and during use, and is a key use for HSMs.
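As a rough illustration of that pattern, here is a sketch using the python-pkcs11 package; the module path, token label and PIN are all placeholders for whatever your HSM vendor provides:

```python
# Sketch: generate and use an AES key inside a PKCS#11 token (e.g. an HSM),
# so that only cleartext and ciphertext ever cross the boundary.
import os
import pkcs11

lib = pkcs11.lib(os.environ["PKCS11_MODULE"])  # path to the vendor's PKCS#11 library
token = lib.get_token(token_label="DEMO")      # placeholder token label

data = b"sensitive payload"

with token.open(user_pin="1234") as session:   # placeholder PIN
    key = session.generate_key(pkcs11.KeyType.AES, 256)  # key never leaves the token
    iv = session.generate_random(128)                    # 128-bit IV from the token's RNG
    ciphertext = key.encrypt(data, mechanism_param=iv)
    assert key.decrypt(ciphertext, mechanism_param=iv) == data
```

In a real deployment you would, as noted above, also protect the data in transit to and from the HSM.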

General computing operations

Not all HSMs support this use (almost all will support the others), but if you have sensitive operations with lots of keys and algorithms – which, in the case of AI/ML, for instance, may themselves be sensitive (unlike the cryptographic primitives we were talking about before) – then it is possible to write applications specifically to run on an HSM.  This is not a simple undertaking, however, as the execution environment provided is likely to be constrained.  It is difficult to do “right”, and easy to make mistakes which may leave you with a significantly less secure environment than you had thought.

Conclusion – should I use HSMs?

HSMs are excellent as roots of trust for PKI[9] projects and similar.  Using them can be difficult, but most of them these days should provide a PKCS#11 interface which simplifies the most common operations.  If you have sensitive key or cryptographic requirements, designing HSM use into your system can be a sensible step, but knowing how best to use them must be part of the architecture and design stages, well before implementation.  You should also take into account that operation of HSMs must be managed very carefully, from provisioning through everyday use to de-provisioning.  Using HSMs in the cloud may make sense, but they are expensive and do not scale particularly well.

HSMs, then, are suited to very particular use cases of highly sensitive data and operations – it is no surprise that their deployment is most common within military, government and financial settings. HSMs are not right for every project, by any means, but form an important part of our armoury for the design and operation of sensitive systems.


1 – Three Letter Acronym[2]

2 – keep up, or we’ll be here for some time.

3 – cryptographic building block.

4 – let’s pretend there’s only one type of AES for the purposes of this example.  In fact, there are a number of nuances around this example which I’m going to gloss over, but which shouldn’t be important for the point I’m making.

5 – Trusted Execution Environment.

6 – automobile, for our North American friends.

7 – hood.  Really, do we have to do this every time?

8 – why do you need to encrypt something that’s already encrypted?  Because you shouldn’t use the same key for two different operations.

9 – Public Key Infrastructure.

Of different types of trust

What is doing the trusting, and what does the word actually even mean?

As you may have noticed if you regularly read this blog, it’s not uncommon for me to talk about trust.  In fact, one of the earliest articles that I posted – over two years ago, now – was entitled “What is trust?”.  I started thinking about this topic seriously nearly twenty years ago, specifically when thinking about peer to peer systems, and how they might establish trust relationships, and my interest has continued since, with a particular fillip during my time on the Security Working Group for ETSI NFV[1], where we had some very particular issues that we wanted to explore and I had the opportunity to follow some very interesting lines of thought.  More recently, I introduced Enarx, whose main starting point is that we want to reduce the number of trust relationships that you need to manage when you deploy software.

It was around the time of the announcement that I realised quite how much of my working life I’ve spent thinking and talking about trust, and also:

  • how rarely most other people seem to have done the same;
  • how little literature there is on the subject; and
  • how interested people often are to talk about it when it comes up in a professional setting.

I’m going to clarify the middle bullet point in a minute, but let me get to my point first, which is this: I want to do a lot more talking about trust via this blog, with the possible intention of writing a book[2] on the subject.

Here’s the problem, though.  When you use the word trust, people think that they know what you mean.  It turns out that they almost never do.  Let’s try to tease out some of the reasons for that by starting with four fairly innocuous, simple-looking statements:

  1. I trust my brother and my sister.
  2. I trust my bank.
  3. My bank trusts its IT systems.
  4. My bank’s IT systems trust each other.

When you make four statements like this, it quickly becomes clear that something different is going on in each case.  I stand by my definition of trust and the three corollaries, as expressed in “What is trust?”.  I’ll restate them here in case you can’t be bothered to follow the link:

  • “Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation.”
  • My first corollary: “Trust is always contextual.”
  • My second corollary: “One of the contexts for trust is always time.”
  • My third corollary: “Trust relationships are not symmetrical.”

These all hold true for each of the statements above – although they may not be self-evident in the rather bald way that I’ve put them.  What’s more germane to the point I want to make today, however, and hopefully obvious to you, dear reader[4], is that the word “trust” signifies something very different in each of the four statements.

  • Case 1 – my trusting my brother and sister.  This is about trust between individual humans – specifically my trust relationship to my brother, and my trust relationship to my sister.
  • Case 2 – my trusting my bank.  This is about trust between an individual and an organisation: specifically a legal entity with particular services and structure.
  • Case 3 – the bank trusting its IT systems.  This is about an organisation trusting IT systems, and it suddenly feels like we’ve moved into a very different place from the initial two cases.  I would argue that there’s a huge difference between the first and second case as well, actually, but we are often lulled into a false sense of equivalence because when we interact with a bank, it’s staffed by people, and also has many of the legal protections afforded to an individual[5]. There are still humans in this case, though, in that one assumes that it is the intention of certain humans who represent the bank to have a trust relationship with certain IT systems.
  • Case 4 – the IT systems trusting each other.  We’re really not in Kansas anymore with this statement[6].  There are no humans involved in this set of trust relationships, unless you’re attributing agency to specific systems, and if so, which? What, then, is doing the trusting, and what does the word actually even mean?

It’s clear, then, that we can’t just apply the same word, “trust” to all of these different contexts and assume that it means the same thing in each case.  We need to differentiate between them.

I stated, above, that I intended to clarify my statement about the lack of literature around trust.  Actually, there’s lots and lots of literature around trust, but it deals almost exclusively with cases 1 and 2 above.  This is all well and good, but we spend so much time talking about trust with regards to systems (IT or computer systems) that we deserve, as a community, some clarity about what we mean, what assumptions we’re making, and what the ramifications of those assumptions are.

That, then, is my mission.  It’s certainly not going to be the only thing that I write about on this blog, but when I do write about trust, I’m going to try to set out my stall and add some better definition and clarification to what I – and we – are talking about.


0 – apropos of nothing in particular, I often use pixabay for my images.  This is one of the suggestions if you search on “trust”, but what exactly is going on here?  The child is trusting the squirrel thing to do what?  Not eat its nose?  Not stick its claws up its left nostril?  I mean, really?

1 – ETSI is a telco standards body, NFV is “Network Function Virtualisation”.

2 – which probably won’t just consist of a whole bunch of these articles in a random order, with the footnotes taken out[3].

3 – because, if nothing else, you know that I’m bound to keep the footnotes in.

4 – I always hope that there’s actually more than one of you, but maybe it’s just me, the solipsist, writing for a world conjured by my own brain.

5 – or it may do, depending on your jurisdiction.

6 – I think I’ve only been to Kansas once, actually.

16 ways in which security folks are(n’t) like puppies

Following the phenomenal[1] success of my ground-breaking[2] article 16 ways in which users are(n’t) like kittens, I’ve decided to follow up with a yet more inciteful[3] article on security folks. I’m using the word “folks” to encompass all of the different types of security people whom “normal”[4] people think of as “those annoying people who are always going on about that security stuff, and always say ‘no’ when we want to do anything interesting, important, urgent or business-critical”. I think you’ll agree that “folks” is a more accessible shorthand term, and now that I’ve made it clear who we’re talking about, we can move away from that awkwardness to the important[5] issue at hand.

As with my previous article on cats, I’d like my readers to pretend that this is a carefully researched article, and not one that I hastily threw together at short notice because I got up a bit late today.

Note 1: in an attempt to make security folks seem a little bit useful and positive, I’ve sorted the answers so that the ones where security folks turn out actually to share some properties with puppies appear at the end. But I know that I’m not really fooling anyone.

Note 2: the picture (credit: Miriam Bursell) at the top of this article is of my lovely basset hound Sherlock, who’s well past being a puppy. But any excuse to post a picture of him is fair game in my book. Or on my blog.

Research findings

Hastily compiled table

Property (security folks / puppies):

  • Completely understand and share your priorities (No / No)
  • Everybody likes them (No / Yes)
  • Generally fun to be around (No / Yes)
  • Generally lovable (No / Yes)
  • Feel just like a member of the family (No / Yes)
  • Always seem very happy to see you (No / Yes)
  • Are exactly who you want to see at the end of a long day (No / Yes)
  • Get in the way a lot when you’re in a hurry (Yes / Yes)
  • Make a lot of noise about things you don’t care about (Yes / Yes)
  • Don’t seem to do much most of the time (Yes / Yes)
  • Constantly need cleaning up after (Yes / Yes)
  • Forget what you told them just 10 minutes ago (Yes / Yes)
  • Seem to spend much of their waking hours eating or drinking (Yes / Yes)
  • Wake you up at night to deal with imaginary attackers (Yes / Yes)
  • Can turn bitey and aggressive for no obvious reason (Yes / Yes)
  • Have tickly tummies (Yes[6] / Yes)

1 – relatively.

2 – no, you’re right: this is just hype.

3 – this is almost impossible to prove, given quite how uninciteful the previous one was.

4 – i.e., non-security.

5 – well, let’s pretend.

6 – well, I know I do.

Why Chatham House Rulez for security

Security sometimes requires sharing – but not attribution

In June 1927, someone had a brilliant idea.  Or, at least, that’s when the idea was first codified, at a meeting of the Royal Institute of International Affairs at Chatham House in London.  The idea was this: all attendees of the meeting could quote comments made at the meeting, but they weren’t allowed to say who had made the comment.

This became known as the Chatham House Rule, and the most recent incarnation is defined thus:

When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.

This is brilliantly clever.  It allows at least two things:

  1. for the sharing of information which might be sensitive to a particular entity when associated with that entity, but which is still useful when applied without that attribution;
  2. for the sharing of views or opinions which, when associated with a particular person or organisation, might cause wider issues or problems.

The upshot of this is that if somebody (say, Person A) values the expertise, opinion and experience of another person (say, Person B), then they can share that other person’s views with people who may not know Person B, or whose views on Person B may be biased by their background or associations.  This is a form of transitive trust, and situations where transitive trust is made explicit are, in my opinion, to be lauded (such trust relationships are too often implicit, rather than explicit).

The Chatham House Rule and open source

What has this got to do with open source, though?  My answer is: a lot.

Security is one of those areas which can have an interesting relationship with open source.  I’m passionately devoted to the principle that open-ness is vital to security, but there are times when this is difficult.  The first is to do with data, and the second is to do with perceived expertise.

Why data is difficult

While we all (hopefully) want to ensure that all our security-related code is open source, the same cannot be said for data.  There is absolutely a place for open data – citizen-related data is the most obvious, e.g. bus timetables, town planning information – and there’s data that we’d like to be more open, but not if it can be traced to particular entities – aggregated health information is great, but people aren’t happy about their personal health records being exposed.  The same goes for financial data – aggregated information about people’s spending and saving habits is extremely useful, but I, for one, don’t want my bank records revealed to all and sundry.

Moving specifically to security, what about data such as the number of cyber-attacks – successful and unsuccessful – against companies?  The types that were most successful?  The techniques that were used to mitigate them?  All of these are vastly useful to the wider community, and there’s a need to share them more widely.  We’re seeing some initiatives to allow this already, and aggregation of this data is really important.

There comes a time, however, when particular examples are needed.  And as soon as you have somebody stand up and say “This is what happened to us”, then they’re likely to be in trouble from a number of directions, which may include: their own organisation, their lawyers, their board, their customers and future attackers, who can use that information to their advantage.  This is where the Chatham House Rule can help: it allows experts to give their view and be listened to without so much danger from the parties listed above.

It also allows for other people to say “we hadn’t thought of that”, or “we’re not ready for that” or similar without putting their organisations – or their reputations – on the line.  Open source needs this, and there are times when those involved in open source security, in particular, need to be able to share the information they know in a way which doesn’t put their organisations in danger.

Why expertise is difficult

Another area of difficulty is expertise, or more specifically, trust in expertise.  Most organisations aim for a meritocratic approach – or say they do – at least within that organisation.  But the world is full of bias, particularly between organisations.  I may be biased against views held or expressed by a particular organisation, just because of their past history and my interactions with that company, but it is quite possible that there are views held and expressed by individuals from that company which, if separated from their attribution, I might take seriously.  I may be biased against a particular person, based on my previous interactions with him/her, or just on my underlying prejudices.  It only needs one person who does not hold my biases to represent those views, as long as they personally trust the organisation, or even just the person, expressing them, to allow me to process and value those views myself, gaining valuable insight from them.  The Chatham House Rule can allow that to happen.

In fact, the same goes for intra-organisation biases: maybe product management isn’t interested in the views of marketing, but what if there are important things to learn from within that department, that product management can’t hear because of that bias?  The Chatham House Rule allows an opportunity to get past that.

To return to open source, many contributors are employed by a particular organisation, and it can be very difficult for them to express opinions around open source when that organisation may not hold the same views, however carefully they try to separate themselves from the official line.  Even more important, in terms of security, it may very well be that they can bring insights which are relevant to a particular security issue which their company is not happy about being publicly known, but which could benefit one or more open source projects.  To be clear: I’m not talking, here, about exposing information which is specifically confidential, but about sharing information, with the permission of the organisation, within specific constraints.

More on open source

There are all sorts of biases within society, and open source is, alas, not without its own.  When a group of people gets to know each other well, however, it is often the case that members of that group can forge a respect for each other which goes beyond gender, age, academic expertise, sexuality, race or the like.  This is a perfect opportunity for meetings under the Chatham House Rule: it gives this group the chance to discuss and form opinions which can be represented to their peers – or the rest of the world – without having to worry so much about any prejudices or biases that might be aimed at particular members.

Finally – a note of caution

The Chatham House Rule provides a great opportunity to share expertise and knowledge, but there is also a danger that it can allow undue weight to be given to anecdotes.  Stories are a great way of imparting information, but without data to back them up, they are not as trustworthy as they might be.  The fact that the Chatham House Rule inhibits external attribution does not mean that due diligence should not be applied within such a meeting to ensure that information is backed up by data.