Security is (only) subjective

What aspects of security does it provide?

This article covers ground that is dealt with in more detail within (but is not quite an excerpt from) my forthcoming book on Trust in Computing and the Cloud for Wiley.

In 1985, the US Department of Defense [sic] published the “Orange Book”[1], officially named Trusted Computer System Evaluation Criteria. It was a guide to how to create a “trusted system”, and was hugely influential within the IT and security industry as a whole. Eight years later, in 1993, Dorothy Denning published a devastating critique of the Orange Book called A New Paradigm for Trusted Systems[2]. It is a brilliant step-by-step analysis of why the approach taken by the DoD was fundamentally flawed. Denning starts:

“The current paradigm for trusted computer systems holds that trust is a property of a system. It is a property that can be formally modeled, specified, and verified. It can be designed into a system using a rigorous design methodology.”

Later, she explains why this just doesn’t work in the real world:

“The current paradigm of treating trust as a property is inconsistent with the way trust is actually established in the world. It is not a property, but rather an assessment that is based on experience and shared through networks of people in the world-wide market. It is a declaration made by an observer, rather than a property of the observed.”

Demolishing the idea that trust is an inherent property of a system, and making it relational instead, changed the way that systems designed for security would be considered (and ushered in a new approach by the US Government and associated organisations, known as Common Criteria). Denning was writing about trust, but very similar issues exist around the concept of “security”. Too often, security is considered an inherent or intrinsic property of a system: “it’s secure”, someone will say, or “this fix will secure your computer”. It isn’t, and it won’t.

The first problem with such statements is that it’s not clear what “secure” means. There are a number of properties associated with systems that are relevant to security: three of the ones most often quoted are confidentiality, integrity and availability (which I discuss in more detail in the post The Other CIA: Confidentiality, Integrity and Availability). Specifying which of these you’re interested in removes the temptation just to say that something is “secure”, and if someone says, “it provides security”, we’re now in a position to start asking what that assertion actually means. Which aspects of security does it provide?

I also don’t think it makes sense to say that a system is “confidential” or “available” (there’s no obvious equivalent adjective for integrity – “integral” means something rather different): what we may be able to say is that it exhibits properties associated with confidentiality, integrity and availability, or better, that it has measures associated with it which are designed and intended to provide confidentiality, integrity and availability. These measures can be listed, examined and evaluated, hopefully against well-defined criteria.

This seems like a much better approach: not only have we addressed the suggestion that there is such a thing as “security” that we can apply to a system, but following Denning, we have also challenged the suggestion that it is inherent to – or in – a system. Instead, we have introduced the alternative approach of describing security-related properties which can be subjected to scrutiny by the users of the system. This allows the type of relational understanding of security that Denning was proposing, but it also raises the possibility of differing parties having different views of the security (or not) of a system, depending on who they are, and how it is going to be used.

It turns out, when you think about it, that this makes a lot of sense. A laptop which provides sufficient confidentiality, integrity and availability protection for the computing needs of my retired uncle may not provide sufficient protections for the uses to which an operative of a government security service might put it[3]. Equally, a system which a telecommunications company runs in a physically protected data centre may well be considered to have appropriate security protections, whereas the same system, attached to a pole somewhere on a residential street, might not. The measures applied to provide the protections associated with the properties (e.g. 128-bit AES encryption for confidentiality) may be objectively specifiable, but the extent to which they provide “security” is not, because they are relative to specific requirements.
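To make this concrete, here is a minimal sketch (in Python, with entirely hypothetical names such as Measure, Requirements and meets()) of the point above: the measures themselves can be listed and rated objectively, but whether they add up to “security” only has an answer relative to a particular party’s requirements.

```python
from dataclasses import dataclass


@dataclass
class Measure:
    """A concrete control, tied to the property it is intended to support."""
    name: str      # e.g. "AES-128 full-disk encryption"
    supports: str  # "confidentiality", "integrity" or "availability"
    strength: int  # coarse 1-5 rating assigned by an evaluator


@dataclass
class Requirements:
    """One party's minimum acceptable strength for each property."""
    confidentiality: int
    integrity: int
    availability: int


def meets(measures: list[Measure], reqs: Requirements) -> dict[str, bool]:
    """Judge each property against one party's requirements.
    Note that there is no single is_secure(): only 'secure enough for whom?'."""
    best = {"confidentiality": 0, "integrity": 0, "availability": 0}
    for m in measures:
        best[m.supports] = max(best[m.supports], m.strength)
    return {
        "confidentiality": best["confidentiality"] >= reqs.confidentiality,
        "integrity": best["integrity"] >= reqs.integrity,
        "availability": best["availability"] >= reqs.availability,
    }


# The same laptop, evaluated against two different sets of requirements.
laptop = [
    Measure("AES-128 full-disk encryption", "confidentiality", 3),
    Measure("signed OS updates", "integrity", 3),
    Measure("nightly backups", "availability", 2),
]

retired_uncle = Requirements(confidentiality=2, integrity=2, availability=1)
field_operative = Requirements(confidentiality=5, integrity=4, availability=3)

print(meets(laptop, retired_uncle))    # all True: "secure enough" for him
print(meets(laptop, field_operative))  # all False: same laptop, not "secure"
```

The list of measures and their ratings do not change between the two calls; only the requirements do. The verdict is a statement about the relationship between the system and a particular user, not a property of the laptop alone.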

One last point, and it’s one which regular readers of my blog will be unsurprised to see: how can you assess the applicability of a system’s security properties to your requirements if it is not open? Open source helps significantly with security. Yes, there are assessment regimes to say that systems meet certain criteria – and sometimes these can be very helpful – but they are generally broad criteria, and difficult to apply to your specific use cases. Equally, most are just a starting point, and many such certified systems will require “exceptions” to be met in order to function in the real world, exceptions which require significant expertise to understand, judge and apply safely (that is, with appropriate levels of risk). If the system you want to use is open, then you, a party who you trust, or the wider community can evaluate the appropriateness of controls and measures, and make an informed decision about whether a system’s security properties are what you need. Without open source, this is impossible.


1 – it had an orange cover.

2 – Denning, Dorothy E. (1993) A New Paradigm for Trusted Systems [online]. Available at: https://www.researchgate.net/publication/234793347_A_New_Paradigm_for_Trusted_Systems [Accessed 3 Apr. 2020]

3 – I’m assuming that my uncle isn’t an operative of a government security service[4].

4 – or at least that his security needs are reduced in retirement[5].

5 – that is, if he has really retired…

Author: Mike Bursell

Long-time Open Source and Linux bod, distributed systems security, etc. CEO of Profian.
