Win a copy of my book!

What’s better than excerpts? That’s right: the entire book.

As regular readers of this blog will know, I’ve got a book coming out with Wiley soon. It’s called “Trust in Computer Systems and the Cloud”, and the publisher’s blurb is available here. We’ve now got to the stage where we’ve completed not only the proof-reading for the main text, but also the front matter (acknowledgements, dedication, stuff like that), cover and “praise page”. I’d not heard the term before, but it’s where endorsements of the book go, and I’m very, very excited by the extremely kind comments from a variety of industry leaders which you’ll find quoted there and, in some cases, on the cover. You can find a copy of the cover (without endorsement) below.

Trust book front cover (without endorsement)

I’ve spent a lot of time on this book, and I’ve written a few articles about it, including providing a chapter index and summary to let you get a good idea of what it’s about. More than that, some of the articles here actually contain edited excerpts from the book.

What’s better than excerpts, though? That’s right: the entire book. Instead of an article today, however, I’m offering the opportunity to win a copy of the book. All you need to do is follow this blog (with email updates, as otherwise I can’t contact you), and when it’s published (soon, we hope – the March date should be beaten), I’ll choose one lucky follower to receive a copy.

No Wiley employees, please, but other than that, go for it, and I’ll endeavour to get you a copy as soon as I have any available. I’ll try to get it to you pretty much anywhere in the world, as well. So far, it’s only available in English, so apologies if you were hoping for an immediate copy in another language (hint: let me know, and I’ll lobby my publisher for a translation!).

Organisational suppleness

Growing the ability to react to the unexpected is a valuable skill.

“In preparing for battle I have always found that plans are useless but planning is indispensable.”

Dwight D. Eisenhower

Much of this blog is about security – cybersecurity – in one way or another, but on occasion I do try to take a broader view. Cybersecurity is often modelled or described in military terms, talking about “fighting battles”, “wars of attrition” and “arms races” with “attackers”. These can be useful metaphors (and it’s why I started this article with a quote from a general), but there is a broader set of responsibilities that many of us in the sector need to consider, which is the continued (and hopefully healthy) functioning of our businesses and organisations. In particular, I like to talk about risk and how it relates not just to security, but to how businesses work and plan. One theme that I’ve visited before is that known or planned degradation of a service is often significantly better than failure, or even planned closure (see Service degradation: actually a good thing). My argument is that there are many occasions where keeping a service or business function running, albeit at reduced capacity, or with reductions in known capabilities, allows for better continuity than just stopping it.

Keeping a service running requires work. You can’t just hope that everything is installed and will run as you expect: what happens when your administrator is ill, your fibre-optic cable gets severed by a backhoe, or a DDoS attack is directed at you? You need to plan and practise what to do in these situations. What I’d like to explore in this article goes somewhat beyond that planning, in three directions. Let’s call them scenario coverage, muscle memory and organisational suppleness.

Scenario coverage

The first, and most obvious, of the three directions is about understanding eventualities. The more scenarios that we model and practise, the more we reduce our risk, simply because we have reduced the number of unknown eventualities in the probability space. There is actually a side benefit to modelling lots of scenarios, which is that the more situations you consider, the more will come to mind. Every situation involves sets of choices or probabilities – “after the door closes, will it lock or not?” or “if the coolant fails, will the system turn off or burst into flames?” – and the more scenarios you consider, the more questions will arise. This can be daunting – and it’s almost impossible to consider every eventuality – but the more options are covered, the better your opportunities to mitigate the various risks they present.
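As a very rough illustration of how quickly the scenario space grows – and why working through it systematically pays off – here’s a minimal sketch in Python. The choice points and their outcomes are invented for the example; the point is simply that each new question multiplies the number of scenarios to consider.

```python
from itertools import product

# Hypothetical choice points: each binary question roughly doubles the scenario space.
choice_points = {
    "door closes": ["locks", "stays unlocked"],
    "coolant fails": ["system shuts down", "system overheats"],
    "administrator": ["available", "on leave"],
    "backup generator": ["starts", "fails to start"],
}

scenarios = list(product(*choice_points.values()))
print(f"{len(choice_points)} choice points -> {len(scenarios)} scenarios")

# Walking through each combination is what surfaces the follow-on questions.
for outcome in scenarios:
    print(dict(zip(choice_points.keys(), outcome)))
```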

Muscle memory

Muscle memory is what comes with training and practice. Assuming that you are including your teams in the scenario planning, they will start to build up a familiarity with the situations you have modelled, and responses which come more automatically when something similar actually happens.

And I’m assuming here that the planning isn’t solely a paper exercise. Theoretical planning, while useful, only goes so far, for a couple of important reasons:

  • systems will always fail in unexpected ways;
  • people will do unexpected things.

What the first of these means is that however much you assume that your back-up generator will kick in if there’s a power outage, until you test it, you can’t be sure that it will. The second of these relates to the fact that however much you tell people what to do, when it actually comes to the doing of it, they’re unlikely to act as you expect. This is likely to be even worse if there’s been no training, and you’re just assuming that person X will know how to operate a fire extinguisher, or that team Y will, of course, exit the building in an orderly manner via exit Z (rather than find fourteen different exits, or not even leave the building at all).

For both of these reasons, getting people together to work through possible scenarios, and then, where possible, actually practising what to do, means that you have a higher assurance that when one of the situations you’ve considered does arise, they will know what to do, and act as you expect.

Organisational suppleness

While you cannot, as we’ve noted, plan for every eventuality or know exactly how an employee or team will react when things go wrong, there is another benefit to involving a broad group of people in your scenario planning and training. This is that their very involvement gives them practice in dealing with uncertainty, in working out how they will react, and experience of how those around them will act. While I may not know exactly what to do if the payroll system goes down an hour before it is due to run, if I have worked with colleagues on scenarios where the sales processing system fails, I’ve got a better chance of making some sensible choices about who to contact, initial steps to take and information to collect than if this is the first time I’ve ever seen anything like it. Likewise, we may not have modelled our response to a physical failure of one of our network links, but our shared experience of practising our response to a DDoS attack means that we have an idea of what to do.

And it is not just having an idea of what to do that is important: it is having gathered and practised the cognitive skills associated with investigating failures, collating data, sharing information and working with others to ameliorate the situation that allows a team or an organisation to respond better to new, maybe unexpected, situations. We can think of this as suppleness, as it means that rather than just failing, or cracking, an organisation can react as a tree does to strong winds, or a gymnast does to a new exercise. Growing the ability to react to the unexpected is a valuable skill for an organisation, and knowing that it is supple allows its leaders to plan with more certainty and mitigate more risk.

Trust book – chapter index and summary

I thought it might be interesting to provide the chapter index and a brief summary of what each chapter addresses.

In a previous article, I presented the publisher’s blurb for my upcoming book with Wiley, Trust in Computer Systems and the Cloud. I thought it might be interesting, this time around, to provide the chapter index of the book and to give a brief summary of what each chapter addresses.

While it’s possible to read many of the chapters on their own, I have tried to maintain a logical progression of thought through the book, building on earlier concepts to provide a framework that can be used in the real world. It’s worth noting that the book is not about how humans trust – or don’t trust – computers (there’s a wealth of literature around this topic), but about how to consider the issue of trust between computing systems, or what we can say about assurances that computing systems can make, or can be made about them. This may sound complex, and it is – which is pretty much why I decided to write the book in the first place!

  • Introduction
    • Why I think this is important, and how I came to the subject.
  • Chapter 1 – Why Trust?
    • Trust as a concept, and why it’s important to security, organisations and risk management.
  • Chapter 2 – Humans and Trust
    • Though the book is really about computing and trust, and not humans and trust, we need a grounding in how trust is considered, defined and talked about within the human realm if we are to look at it in our context.
  • Chapter 3 – Trust Operations and Alternatives
    • What are the main things you might want to do around trust, how can we think about them, and what tools/operations are available to us?
  • Chapter 4 – Defining Trust in Computing
    • In this chapter, we delve into the factors which are specific to trust in computing, comparing and contrasting them with the concepts in chapter 2 and looking at what we can and can’t take from the human world of trust.
  • Chapter 5 – The Importance of Systems
    • Regular readers of this blog will be unsurprised that I’m interested in systems. This chapter examines why systems are important in computing and why we need to understand them before we can talk in detail about trust.
  • Chapter 6 – Blockchain and Trust
    • This was initially not a separate chapter, but is an important – and often misunderstood or misrepresented – topic. Blockchains don’t exist or operate in a logical or computational vacuum, and this chapter looks at how trust is important to understanding how blockchains work (or don’t) in the real world.
  • Chapter 7 – The Importance of Time
    • One of the important concepts introduced earlier in the book is the consideration of different contexts for trust, and none is more important to understand than time.
  • Chapter 8 – Systems and Trust
    • Having introduced the importance of systems in chapter 5, we move to considering what it means to establish a trust relationship from or to a system, and how the extent of what is considered part of the system is vital.
  • Chapter 9 – Open Source and Trust
    • Another topic whose inclusion is unlikely to surprise regular readers of this blog, this chapter looks at various aspects of open source and how it relates to trust.
  • Chapter 10 – Trust, the Cloud, and the Edge
    • Definitely a core chapter in the book, this addresses the complexities of trust in the modern computing environments of the public (and private) cloud and Edge networks.
  • Chapter 11 – Hardware, Trust, and Confidential Computing
    • Confidential Computing is a growing and important area within computing, but to understand its strengths and weaknesses, there needs to be a solid theoretical underpinning of how to talk about trust. This chapter also covers areas such as TPMs and HSMs.
  • Chapter 12 – Trust Domains
    • Trust domains are a concept that allow us to apply the lessons and frameworks we have discussed through the book to real-world situations at large scale. They also allow for modelling at the business level and for issues like risk management – introduced at the beginning of the book – to be considered more explicitly.
  • Chapter 13 – A World of Explicit Trust
    • Final musings on what a trust-centric (or at least trust-inclusive) view of the world enables and hopes for future work in the field.
  • References
    • List of works cited within the book.

Trust book preview

What it means to trust in the context of computer and network security

Just over two years ago, I agreed a contract with Wiley to write a book about trust in computing. It was a long road to get there, starting over twenty years ago, but what pushed me to commit to writing something was a conference I’d been to earlier in 2019 where there was quite a lot of discussion around “trust”, but no obvious underlying agreement about what was actually meant by the term. “Zero trust”, “trusted systems”, “trusted boot”, “trusted compute base” – all terms referencing trust, but with varying levels of definition, and differing understanding of what was being expected, by what components, and to what end.

I’ve spent a lot of time thinking about trust over my career and also have a major professional interest in security and cloud computing, specifically around Confidential Computing (see Confidential computing – the new HTTPS? and Enarx for everyone (a quest) for some starting points), and although the idea of a book wasn’t a simple one, I decided to go for it. This week, we should have the copy-editing stage complete (technical editing already done), with the final stage being proof-reading. This means that the book is close to done. I can’t share a definitive publication date yet, but things are getting there, and I’ve just discovered that the publisher’s blurb has made it onto Amazon. Here, then, is what you can expect.


Learn to analyze and measure risk by exploring the nature of trust and its application to cybersecurity 

Trust in Computer Systems and the Cloud delivers an insightful and practical new take on what it means to trust in the context of computer and network security and the impact on the emerging field of Confidential Computing. Author Mike Bursell’s experience, ranging from Chief Security Architect at Red Hat to CEO at a Confidential Computing start-up, grounds the reader in fundamental concepts of trust and related ideas before discussing the more sophisticated applications of these concepts to various areas in computing.

The book demonstrates the importance of understanding and quantifying risk and draws on the social and computer sciences to explain hardware and software security, complex systems, and open source communities. It takes a detailed look at the impact of Confidential Computing on security, trust and risk and also describes the emerging concept of trust domains, which provide an alternative to standard layered security.

  • Foundational definitions of trust from sociology and other social sciences, how they evolved, and what modern concepts of trust mean to computer professionals 
  • A comprehensive examination of the importance of systems, from open-source communities to HSMs, TPMs, and Confidential Computing with TEEs. 
  • A thorough exploration of trust domains, including explorations of communities of practice, the centralization of control and policies, and monitoring 

Perfect for security architects at the CISSP level or higher, Trust in Computer Systems and the Cloud is also an indispensable addition to the libraries of system architects, security system engineers, and master’s students in software architecture and security. 

Track and trace failure: a systems issue

The problem was not Excel, but that somebody used the wrong tools in a system.

Like many other IT professionals in the UK – and across the world, having spoken to some other colleagues in other countries – I was first surprised and then horrified as I found out more about the failures of the UK testing and track and trace systems. What was most shocking about the failure is not that it was caused by some alleged problem with Microsoft Excel, but that anyone thought this was a problem due to Excel. The problem was not Excel, but that somebody used the wrong tools in a system which was not designed properly, tested properly, or run properly. I have written many words about systems, and one article in particular seems relevant: If it isn’t tested, it doesn’t work. In it, I assert that a system cannot be said to work properly if it has not been tested, as a fully working system requires testing in order to be “working”.

In many software and hardware projects, in order to complete a piece of work, it has to meet one or more of a set of tests which allow it to be described as “done”. These tests may be actual software tests, or documentation, or just checks done by members of the team (other than the person who did the piece of work!), but the list needs to be considered and made part of the work definition. This “done” definition is as much part of the issue being addressed, functionality added or documentation being written as the actual work done itself.

I find it difficult to believe that there was any such definition for the track and trace system. If there was, then it was not, I’m afraid, defined by someone who is an expert in distributed or large-scale systems. This may not have been the fault of the person who chose Excel for the task of recording information, but it is the fault of the person who was in charge of the system, because Excel is not, and never was, a fit application for what it was being used for. It does not have the scalability characteristics, the integrity characteristics or the versioning characteristics required. This is not the fault of Microsoft, any more than it would be the fault of Porsche if a 911T broke down because its owner filled it with diesel fuel, rather than petrol[1]. Any competent systems architect or software engineer, qualified to be creating such a system, would have known this: the only thing that seems possible is that whoever put together the system was unqualified to do so.

There seem to be several options here:

  1. the person putting together the system did not know they were unqualified;
  2. the person putting together the system realised that they were unqualified, but did not feel able to tell anyone;
  3. the person putting together the system realised that they were unqualified, but lied.

In any of the above, where was the oversight? Where was the testing? Where were the requirements? This was a system intended to safeguard the health of millions – millions – of people.

Who can we blame for this? In the end, the government needs to take some large measure of responsibility: they commissioned the system, which means that they should have come up with realistic and appropriate requirements. Requirements of this type may change over the life-cycle of a project, and there are ways to manage this: I recommend a useful book in another article, Building Evolutionary Architectures – for security and for open source. These are not new problems, and they are not unsolved problems: we know how to do this as a profession, as a society.

And then again, should we blame someone? Normally, I’d consider this a question out of scope for this blog, but people may die because of this decision – the decision not to define, design, test and run a system which was fit for purpose. At the very least, there are people who are anxious and worried about whether they have Covid-19, whether they need to self-isolate, whether they may have infected vulnerable friends or family. Blame is a nasty thing, but if it’s about holding people to account, then that’s what should happen.

IT systems are important. Particularly when they involve people’s health, but in many other areas, too: banking, critical infrastructure, defence, energy, even retail and entertainment, where people’s jobs will be at stake. It is appropriate for those of us with a voice to speak out, to remind the IT community that we have a responsibility to society, and to hold those who commission IT systems to account.


1 – or “gasoline” in some geographies.

What’s a Trusted Compute Base?

Tamper-evidence, auditability and measurability are three important properties.

A few months ago, in an article called “Turtles – and chains of trust“, I briefly mentioned Trusted Compute Bases, or TCBs, but then didn’t go any deeper.  I had a bit of a search across the articles on this blog, and realised that I’ve never gone into this topic in much detail, which feels like a mistake, so I’m going to do it now.

First of all, let’s think about computer systems.  When I talk about systems, I’m being both quite specific (see Systems security – why it matters) and quite broad (I don’t just mean the computer that sits on your desk or in a data centre, but also phones, routers, aircraft navigation devices – pretty much anything that has a set of chips inside it).  There are surely some systems that you don’t rely on too much to do important things, but in most cases, you’re going to care if they go wrong, or, more relevant to this discussion, if they get compromised.  Even the most benign of systems – a smart light-bulb, for instance – can become a nightmare if compromised.  Even if you don’t particularly care whether you can continue to use it in the way it was intended, there are still worries about its misuse in the case of compromise:

  1. it may become a “jumping off point” for malicious attacks into your network or other systems;
  2. it may be used as part of a botnet, piggybacking on your network to attack other systems (leading to sanctions against your legitimate systems from outside);
  3. it may be used as part of a botnet, using up resources such as network bandwidth, storage or electricity (leading to resource constraints or increased charges).

For any systems dealing with sensitive data – anything from your messages to loved ones on your phone through intellectual property secrets for a manufacturing organisation through to National Security data for a government department – these issues are compounded.  In order to protect your system, you can’t just say “this system is secure” (lovely as that would be).  What can you do to start making statements about the general security of a system?

The stack

Systems consist of multiple components, and modern computing systems are typically composed from multiple layers (one of my favourite xkcd comics, Stack, shows some of them).  What’s relevant from the point of view of this article is that, on the whole, the different layers of the stack start up – boot up – from the bottom upwards.  This means, following the “bottom turtle” rule (see the Turtles article referenced above), that we need to ensure that the bottom layer is as secure as possible.  In fact, in order to build a system in which we can have assurance that it will behave as expected and designed (in other words, a system in which we can have a trust relationship), we need to build a Trusted Compute Base.  This should have at least the following set of properties: tamper-evidence, auditability and measurability, all of which are related to each other.
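To make the “bottom-up” idea a little more concrete, here’s a simplified sketch in Python – deliberately not tied to any particular TPM or firmware implementation, and with the component names invented – of how each layer can be measured before control is handed to it, building a chain of measurements rooted in the lowest layer:

```python
import hashlib

def measure(component: bytes) -> bytes:
    """Measurement of a single component: here, just its SHA-256 digest."""
    return hashlib.sha256(component).digest()

def extend(chain_value: bytes, measurement: bytes) -> bytes:
    """Fold a new measurement into the running chain value
    (conceptually similar to a TPM PCR 'extend' operation)."""
    return hashlib.sha256(chain_value + measurement).digest()

# Hypothetical boot components, from the bottom of the stack upwards.
boot_components = [b"firmware image", b"bootloader", b"kernel", b"initrd"]

chain = b"\x00" * 32  # well-known starting value
for component in boot_components:
    chain = extend(chain, measure(component))

print("final chain value:", chain.hex())
# Change any component, or the order of measurement, and the final value changes:
# comparing it against an expected value is what makes tampering evident.
```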

Tamper-evidence

We want to know if the TCB – on which we are building everything else – has a problem.  Specifically, we need a set of layers or components that we are pretty sure have not been compromised, or which, if compromised, will be tamper-evident:

  • fail in expected ways,
  • refuse to start, or
  • flag that they have been compromised.

It turns out that this is not easy, and typically becomes more difficult as you move up the stack – partly because you’re adding more layers, and partly because those layers tend to get more complex.

Our TCB should have the properties listed above (around failure, refusing to start or compromise-flagging), and be as small as possible.  This seems the wrong way around: surely you would want to ensure that as much of your system was trusted as possible?  In fact, what you want is a small, easily measurable and easily auditable TCB on which you can build the rest of your system – from which you can build a “chain of trust” to the other parts of your system about which you care.  Auditability and measurability are the other two properties that you want in a TCB, and these two properties are why open source is a very useful tool in your toolkit when building a TCB.

Auditability (and open source)

Auditability means that you – or someone else who you trust to do the job – can look into the various components of the TCB and assure yourself that they have been written, compiled and  are executing properly.  As I explained in Of projects, products and (security) community, the person may not always be you, or even someone in your organisation, but if you’re using widely deployed open source software, the rest of the community can be doing that auditing for you, which is a win for you and – if you contribute your knowledge back into the community – for everybody else as well.

Auditability typically gets harder the further you go down the stack – partly because you’re getting closer and closer to bits – ones and zeros – and to actual electrons, and partly because there is very little truly open source hardware out there at the moment.  However, the more that we can see and audit of the TCB, the more confidence we can have in it as a building block for the rest of our system.

Measurability (and open source)

The other thing you want to be able to do is ensure that your TCB really is your TCB.  Tamper-evidence is related to this, but that’s a run-time property only (for software components, at least).  Being able to measure when you provision your system and then to check that what you originally loaded is still what you think it should be when you boot it is a very important property of a TCB.  If what you’re running is open source, you can check it yourself, against your own measurements and those of the community, and if changes are made – by you or others – those changes can be checked (as part of auditing) and then propagated through measurement checking to the rest of the community.  Equally important – and much more difficult – is run-time measurability.  This turns out to be very difficult to do, although there are some techniques emerging which are beginning to get traction – for now, we tend to rely on tamper-evidence, which is easier in hardware than software.
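As a minimal sketch of the provision-time/boot-time distinction – the file paths and manifest format here are invented for the example, and real systems use rather more robust mechanisms – you might record digests of the components you care about when you provision the system, and check them again before you rely on it:

```python
import hashlib
import json
from pathlib import Path

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_manifest(paths, manifest: Path) -> None:
    """At provisioning time: record a digest for each component we care about."""
    manifest.write_text(json.dumps({str(p): digest(p) for p in paths}, indent=2))

def verify_manifest(manifest: Path):
    """At boot (or audit) time: return any components that no longer match."""
    expected = json.loads(manifest.read_text())
    return [p for p, d in expected.items() if digest(Path(p)) != d]

# Hypothetical usage:
# record_manifest([Path("/boot/vmlinuz"), Path("/usr/bin/login")], Path("manifest.json"))
# mismatches = verify_manifest(Path("manifest.json"))
# if mismatches:
#     raise SystemExit(f"components changed since provisioning: {mismatches}")
```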

Summary

Trusted Compute Bases (TCBs) are a key concept in building systems that we hope will behave in ways we expect – or allow us to find out when they are not.  Tamper-evidence, auditability and measurability are three important properties that they should display, and it turns out that open source is an important factor in helping us ensure two of those.


How to be a no-shame generalist

There is no shame in being a generalist, and knowing when you need to consult a specialist.

There comes a time in any person’s life[1] when they realise that they’re not going to be able to do all the things they might like to do to a high level of expertise.  I used to kid myself that I could do anything if I tried hard enough and practised enough, but then I tried juggling.  It turns out that I’m never going to be able to juggle.  Not just juggle expertly.  I mean juggle at all.  My trying to juggle – with only one ball, let alone more than one – is so amusing that my family realised years ago that it was a great party trick.  “Daddy,” they’ll say, “show everyone your juggling.  It’s really funny.”  “But I can’t juggle,” I retort.  “Yes,” they respond, “that’s what’s funny[2].”

I’m also never going to be able to draw or do any art with any competence.

Or play any racquet sport with any level of skill.

Or do any gardening, painting or DIY-based household jobs with any degree of expertise[3].

Some people will retort that any old fool can be taught to do x activity (usually, it’s juggling, actually), but not only do I not believe this, but also, to be honest, there just isn’t enough time in the day to learn all the things I’d kind of like to try.

What has all this to do with security?

Specialism and education

Well, I’ve posted before that I’m a systems person, and the core of thinking about systems is that you need to look at the big picture.  In order to do that, you need to be a generalist.  There’s a phrase[5] in English: “Jack of all trades, master of none”, which is often used to condemn those who know a little about many things and are seen to dabble in them without a full understanding of any of them.  Interestingly, this version may be an abbreviation of the original, more positive:

Jack of all trades, master of none,
though oftentimes better than master of one.

The core implication, though, is that generalists aren’t as useful as specialists.  I don’t believe this.

In many educational systems, there’s a tendency to push students towards narrower and narrower fields of study.  For some, this is just what is needed, but for others – “systems people”, “synthesists” and “generalists” – this isn’t the best way to harness their talents, at least in the long term.  We need people who can see the big picture, who can take a wider view, and look beyond a single blocking issue to realise that the answer to a problem may not be a better implementation of an authentication library, but a change in the authorisation mechanism being used at the component level, for instance.

There are dangers to following this approach too far, however:

  1. it can lead to disparagement of specialists and their skills, even to a distrust of experts;
  2. it can lead to arrogance on the part of generalists.

We see the first in desperately concerning trends such as politicians thinking they know more than economists or climate scientists, anti-vaxxers ignoring the benefits of vaccination, and idiocy around chem-trails, flat-earth beliefs and moon landing conspiracies.  It happens in the world of work, as well, I’m sad to say.  There is a particular type of MBA recipient, for instance, who believes that the completion of the course and award of the degree confers on them some sort of superhuman ability to know what is best for all organisations in all circumstances[6].

Specialise first

To come back to the world of security, my recommendation is that even if you know that your skills and interests are leading you to a career as a generalist, then you need to become a specialist first, in at least one area.  You may not become an expert in that field, but you need to know it well.  Better still, strive for at least a level of competence in several fields – an ability to converse knowledgeably with true experts and to understand at least why they are making the choices and recommendations that they are.

And that leads us to the key point here: if you become a generalist, you need to acknowledge lack of expertise: it must become your modus operandi, your métier, your way of working.  You need to recognise that your strength is not in your knowing many things, but in knowing what you don’t know, and when it is time to call in the specialists.

I’m not a cryptographer, but I know enough about cryptography to realise when it’s time to call in an expert.  I’m not an expert on legal issues around cryptography, either, but know when to call on a lawyer.  Nor am I an expert on block storage, blockchain consensus, quantum key exchange protocols, CPU scheduling or compression algorithms.  The same will go for many areas which I may be called on to touch as part of my job.  I hope to have enough training and expertise within related fields – or the ability to gain it – to be able to ask sensible questions, but sometimes even that won’t be true, and the best (and most productive) interaction will be to say “I don’t know about this: please explain it to me, or at least tell me what the options are.”  This seems to me to be particularly important for security folks: there are so many overlapping disciplines, and getting one piece wrong means that your defence in depth strategy just got a whole lot shallower.

Being too lazy to look things up, too arrogant to listen to others or too short-sighted to realise that there are areas in which we are not expert are things of which we should be ashamed.

But there is no shame in being a generalist, and knowing when you need to consult a specialist.


1 – I’m extrapolating horribly here, but it’s true for me so I’m assuming it’s a universal truth.

2 – apparently the look on my face, and the things I do with my tongue, are a sight to behold.

3 – I’m constantly trying to convince my wife of these, and although she’s sceptical about some, we’re now agreed that I shouldn’t be allowed access to any power tools again if we want to avoid further trips to the Accident and Emergency department at the hospital[4].

4 – it’s not only power tools.  I once nearly removed my foot with a wallpaper stripper.  I still have the scar nearly 25 years later.

5 – somewhat gendered, for which I apologise.

6 – disclaimer – I have an MBA, and met many talented and humble people on my course (and have met many since) who don’t suffer from this predicament.

Immutability: my favourite superpower

As a security guy, I approve of defence in depth.

I’m a recent but dedicated convert to Silverblue, which I run on my main home laptop and which I’ll be putting onto my work laptop when I’m due a hardware upgrade in a few months’ time.  I wrote an article about Silverblue over at Enable Sysadmin, and over the weekend, I moved the laptop that one of my kids has over to it as well.  You can learn more about Silverblue over at the main Silverblue site, but in terms of usability, look and feel, it’s basically a version of Fedora.  There’s one key difference, however, which is that the operating system is mounted read-only, meaning that it’s immutable.

What does “immutable” mean?  It means that it can’t be changed.  To be more accurate, in a software context, it generally means that something can’t be changed during run-time.

Important digression – constant immutability

I realised as I wrote that final sentence that it might be a little misleading.  Many  programming languages have the concept of “constants”.  A constant is a variable (or set, or data structure) which is constant – that is, not variable.  You can assign a value to a constant, and generally expect it not to change.  But – and this depends on the language you are using – it may be that the constant is not immutable.  This seems to go against common sense[1], but that’s just the way that some languages are designed.  The bottom line is this: if you have a variable that you intend to be immutable, check the syntax of the programming language you’re using and take any specific steps needed to maintain that immutability if required.
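Python makes a handy illustration of the gap between “constant by convention” and “actually immutable” – this is my own example, rather than a statement about whichever language you happen to be using.  A name in capitals signals a constant, but nothing stops the underlying object being changed if it happens to be a mutable type:

```python
# "Constant" by convention only: the name suggests it shouldn't change,
# but the list it refers to is a mutable object.
ALLOWED_PORTS = [22, 443]
ALLOWED_PORTS.append(8080)        # no error: the "constant" has just changed

# An immutable type gives you run-time immutability of the value itself.
ALLOWED_PORTS_IMMUTABLE = (22, 443)
try:
    ALLOWED_PORTS_IMMUTABLE[0] = 2222   # tuples refuse in-place modification
except TypeError as error:
    print("immutable value refused the change:", error)
```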

Operating System immutability

In Silverblue’s case, it’s the operating system that’s immutable.  You install applications in containers (of which more later), using Flatpak, rather than onto the root filesystem.  This means not only that the installation of applications is isolated from the core filesystem, but also that the ability for malicious applications to compromise your system is significantly reduced.  It’s not impossible[2], but the risk is significantly lower.

How do you update your system, then?  Well, what you do is create a new boot image which includes any updated packages that are needed, and when you’re ready, you boot into that.  Silverblue provides simple tools to do this: it’s arguably less hassle than the standard way of upgrading your system.  This approach also makes it very easy to maintain different versions of an operating system, or installations with different sets of packages.  If you need to test an application in a particular environment, you boot into the image that reflects that environment, and do the testing.  Another environment?  Another image.

We’re more interested in the security properties that this offers us, however.  Not only is it very difficult to compromise the core operating system as a standard user[3], but you are always operating in a known environment, and knowability is very much a desirable property for security, as you can test, monitor and perform forensic analysis from a known configuration.  From a security point of view (let alone what other benefits it delivers), immutability is definitely an asset in an operating system.

Container immutability

This isn’t the place to describe containers (also known as “Linux containers” or, less frequently or accurately these days, “Docker containers”) in detail, but they are basically collections of software that you create as images and then run as workloads on a host server (a deployed group of one or more containers is sometimes known as a “pod”).  One of the great things about containers is that they’re generally very fast to spin up (provision and execute) from an image, and another is that the format of that image – the packaging format – is well-defined, so it’s easy to create the images themselves.

From our point of view, however, what’s great about containers is that you can choose to use them immutably.  In fact, that’s the way they’re generally used: using mutable containers is generally considered an anti-pattern.  The standard (and “correct”) way to use containers is to bundle each application component and required dependencies into a well-defined (and hopefully small) container, and deploy that as required.  The way that containers are designed doesn’t mean that you can’t change any of the software within the running container, but the way that they run discourages you from doing that, which is good, as you definitely shouldn’t.  Remember: immutable software gives better knowability, and improves your resistance to run-time compromise.  Instead, given how lightweight containers are, you should design your application in such a way that if you need to, you can just kill the container instance and replace it with an instance from an updated image.

This brings us to two of the reasons that you should never run containers with root privilege:

  • there’s a temptation for legitimate users to use that privilege to update software in a running container, reducing knowability, and possibly introducing unexpected behaviour;
  • there are many more opportunities for compromise if a malicious actor – human or automated – can change the underlying software in the container.

Double immutability with Silverblue

I mentioned above that Silverblue runs applications in containers.  This means that you have two levels of security provided as default when you run applications on a Silverblue system:

  1. the operating system immutability;
  2. the container immutability.

As a security guy, I approve of defence in depth, and this is a classic example of that property.  I also like the fact that I can control what I’m running – and what versions – with a great deal more ease than if I were on a standard operating system.


1 – though, to be fair, the phrases “programming language” and “common sense” are rarely used positively in the same sentence in my experience.

2 – we generally try to avoid the word “impossible” when describing attacks or vulnerabilities in security.

3 – as with many security issues, once you have sudo or root access, the situation is significantly degraded.

Building Evolutionary Architectures – for security and for open source

Consider the fitness functions, state them upfront, have regular review.

Ford, N., Parsons, R. & Kua, P. (2017) Building Evolutionary Architectures: Support Constant Change. Sebastopol, CA: O’Reilly Media.

https://www.oreilly.com/library/view/building-evolutionary-architectures/9781491986356/

This is my first book review on this blog, I think, and although I don’t plan to make a habit of it, I really like this book, and the approach it describes, so I wanted to write about it.  Initially, this article was simply a review of the book, but as I got into it, I realised that I wanted to talk about how the approach it describes is applicable to a couple of different groups (security folks and open source projects), and so I’ve gone with it.

How, then, did I come across the book?  I was attending a conference a few months ago (DeveloperWeek San Diego), and decided to go to one of the sessions because it looked interesting.  The speaker was Dr Rebecca Parsons, and I liked what she was talking about so much that I ordered this book, whose subject was the topic of her talk, to arrive at home by the time I would return a couple of days later.

Building Evolutionary Architectures is not a book about security, but it deals with security as one application of its approach, and very convincingly.  The central issue that the authors – all employees of Thoughtworks – identify is, simplified, that although we’re good at creating features for applications, we’re less good at creating, and then maintaining, broader properties of systems. This problem is compounded, they suggest, by the fast and ever-changing nature of modern development practices, where “enterprise architects can no longer rely on static planning”.

The alternative that they propose is to consider “fitness functions”, “objectives you want your architecture to exhibit or move towards”.  Crucially, these are properties of the architecture – or system – rather than features or specific functionality.  Tests should be created to monitor the specific functions, but they won’t be your standard unit tests, nor will they necessarily be “point in time” tests.  Instead, they will measure a variety of issues, possibly over a period of time, to let you know whether your system is meeting the particular fitness functions you are measuring.  There’s a lot of discussion of how to measure these fitness functions, but I would have liked even more: from my point of view, it was one of the most valuable topics covered.
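To make “fitness function” slightly more concrete, here’s the sort of automated architectural check I have in mind – my own sketch rather than an example from the book, with hypothetical package names.  It expresses a coupling property of the architecture (“UI code must not reach into the database layer directly”) rather than a feature, and a check like this can sit in CI and be tracked over time:

```python
import ast
from pathlib import Path

FORBIDDEN_PREFIX = "myapp.database"   # hypothetical package names
UI_PACKAGE = Path("myapp/ui")

def imported_modules(source: str):
    """Collect every module name imported by a piece of Python source."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module)
    return names

def test_ui_does_not_import_database_directly():
    """Fitness function: UI modules must go through the service layer."""
    offenders = [
        str(path)
        for path in UI_PACKAGE.rglob("*.py")
        if any(name.startswith(FORBIDDEN_PREFIX)
               for name in imported_modules(path.read_text()))
    ]
    assert not offenders, f"UI modules importing the database layer: {offenders}"
```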

Frankly, the above might be enough to recommend the book, but there’s more.  They advocate strongly for creating incremental change to meet your requirements (gradual, rather than major changes) and “evolvable architectures”, encouraging you to realise that:

  1. you may not meet all your fitness functions at the beginning;
  2. applications which may have met the fitness functions at one point may cease to meet them later on, for various reasons;
  3. your architecture is likely to change over time;
  4. your requirements, and therefore the priority that you give to each fitness function, will change over time;
  5. even if your fitness functions remain the same, the ways in which you need to monitor them may change.

All of these are, in my view, extremely useful insights for anybody designing and building a system: combining them with architectural thinking is even more valuable.

As is standard for modern O’Reilly books, there are examples throughout, including a worked fake consultancy journey of a particular company with specific needs, leading you through some of the practices in the book.  At times, this felt a little contrived, but the mechanism is generally helpful.  There were times when the book seemed to stray from its core approach – which is architectural, as per the title – into explanations through pseudo code, but these support one of the useful aspects of the book, which is giving examples of what architectures are more or less suited to the principles expounded in the more theoretical parts.  Some readers may feel more at home with the theoretical, others with the more example-based approach (I lean towards the former), but all in all, it seems like an appropriate balance.  Relating these to the impact of “architectural coupling” was particularly helpful, in my view.

There is a useful grounding in some of the advice in Conway’s Law (“Organizations [sic] which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”) which led me to wonder how we could model open source projects – and their architectures – based on this perspective.  There are also (as is also standard these days) patterns and anti-patterns: I would generally consider these a useful part of any book on design and architecture.

Why is this a book for security folks?

The most important thing about this book, from my point of view as a security systems architect, is that it isn’t about security.  Security is mentioned, but is not considered core enough to the book to merit a mention in the appendix.  The point, though, is that the security of a system – an embodiment of an architecture – is a perfect example of a fitness function.  Taking this as a starting point for a project will help you do two things:

  • avoid focussing on features and functionality, and look at the bigger picture;
  • consider what you really need from security in the system, and how that translates into issues such as the security posture to be adopted, and the measurements you will take to validate it through the lifecycle.

Possibly even more important than those two points is that it will force you to consider the priority of security in relation to other fitness functions (resilience, maybe, or ease of use?) and how the relative priorities will – and should – change over time.  A realisation that we don’t live in a bubble, and that our priorities are not always the same as those of other stakeholders in a project, is always useful.
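As an illustration of security-as-a-fitness-function – again my own sketch, with an invented deployment manifest rather than anything from the book – a check like the following turns a posture decision (“externally reachable endpoints require TLS 1.2 or better”) into something you can measure throughout the lifecycle, and re-prioritise alongside your other fitness functions:

```python
import json
from pathlib import Path

MINIMUM_TLS = (1, 2)

def tls_version(endpoint: dict):
    """Parse a 'major.minor' TLS version string from a hypothetical manifest entry."""
    major, minor = endpoint.get("tls_version", "0.0").split(".")
    return int(major), int(minor)

def test_external_endpoints_require_modern_tls():
    """Security fitness function over a hypothetical deployment manifest."""
    endpoints = json.loads(Path("deployment.json").read_text())["endpoints"]
    weak = [e["name"] for e in endpoints
            if e.get("external") and tls_version(e) < MINIMUM_TLS]
    assert not weak, f"externally reachable endpoints with weak TLS: {weak}"
```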

Why is this a book for open source folks?

Very often – and for quite understandable and forgiveable reasons – the architectures of open source projects grow organically at first, needing major overhauls and refactoring at various stages of their lifecycles.  This is not to say that this doesn’t happen in proprietary software projects as well, of course, but the sometimes frequent changes in open source projects’ emphasis and requirements, the ebb and flow of contributors and contributions and the sometimes, um, reduced levels of documentation aimed at end users can mean that features are significantly prioritised over what we could think of as the core vision of the project.  One way to remedy this would be to consider the appropriate fitness functions of the project, to state them upfront, and to have a regular cadence of review by the community, to ensure that they are:

  • still relevant;
  • correctly prioritised at this stage in the project;
  • actually being met.

If any of the above come into question, it’s a good time to consider a wider review by the community, and maybe a refactoring or partial redesign of the project.

Open source projects have – quite rightly – various different models of use and intended users.  One of the happenstances that can negatively affect a project is when it is identified as a possible fit for a use case for which it was not originally intended.  Academic software which is designed for accuracy over performance might not be a good fit for corporate research, for instance, in the same way that a project aimed at home users which prioritises minimal computing resources might not be appropriate for a high-availability enterprise roll-out.  One of the ways of making this clear is by being very clear up-front about the fitness functions that you expect your project to meet – and, vice versa, about the fitness functions you are looking to fulfil when you are looking to select a project.  It is easy to focus on features and functionality, and to overlook the more non-functional aspects of a system, and fitness functions allow us to make some informed choices about how to balance these decisions.

5 (Professional) development tips for security folks

… write a review of “Sneakers” or “Hackers”…

To my wife’s surprise[1], I’m a manager these days.  I only have one report, true, but he hasn’t quit[2], so I assume that I’ve not messed this management thing up completely[2].  One of the “joys” of management is that you get to perform performance and development (“P&D”) reviews, and it’s that time of year at the wonderful Red Hat (my employer).  In my department, we’re being encouraged (Red Hat generally isn’t in favour of actually forcing people to do things) to move to “OKRs”, which are “Objectives and Key Results”.  Like any management tool, they’re imperfect, but they’re better than some.  You’re supposed to choose a small number of objectives (“learn a (specific) new language”), and then have some key results for each objective that can be measured somehow (“be able to check into a hotel”, “be able to order a round of drinks”) after a period of time (“by the end of the quarter”).  I’m simplifying slightly, but that’s the general idea.

Anyway, I sometimes get asked by people looking to move into security for pointers to how to get into the field.  My background and route to where I am is fairly atypical, so I’m very sensitive to the fact that some people won’t have taken Computer Science at university or college, and may be pursuing alternative tracks into the profession[3].  As a service to those, here are a few suggestions as to what they can do which take a more “OKR” approach than I provided in my previous article Getting started in IT security – an in/outsider’s view.

1. Learn a new language

And do it with security in mind.  I’m not going to be horribly prescriptive about this: although there’s a lot to be said for languages which are aimed at security use cases (Rust is an obvious example), learning any new programming language, and thinking about how it handles (or fails to handle) security, is going to benefit you.  You’re going to want to choose key results that:

  • show that you understand what’s going on with key language constructs to do with security;
  • show that you understand some of the advantages and disadvantages of the language;
  • (advanced) show how to misuse the language (so that you can spot similar mistakes in future) – see the sketch after this list.
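As an example of the kind of misuse worth being able to spot – Python here, but every language has its own equivalents – comparing secrets with == invites a timing side-channel, whereas the standard library’s hmac.compare_digest compares in constant time:

```python
import hmac

def check_token_naive(supplied: str, expected: str) -> bool:
    # Misuse: == can return as soon as it finds a differing character, so
    # response timing may leak how much of the secret an attacker has guessed.
    return supplied == expected

def check_token_better(supplied: str, expected: str) -> bool:
    # compare_digest takes time independent of where (or whether) the values differ.
    return hmac.compare_digest(supplied.encode(), expected.encode())
```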

2. Learn a new language (2)

This isn’t a typo.  This time, I mean learn about how other functions within your organisation talk.  All of these are useful:

  • risk and compliance
  • legal (contracts)
  • legal (Intellectual Property Rights)
  • marketing
  • strategy
  • human resources
  • sales
  • development
  • testing
  • UX (User Experience)
  • IT
  • workplace services

Who am I kidding?  They’re all useful.  You’re learning somebody else’s mode of thinking, what matters to them, and what makes them tick.  Next time you design something, make a decision which touches on their world, or consider installing a new app, you’ll have another point of view to consider, and that’s got to be good.  Key results might include:

  • giving a 15 minute presentation to the group about your work;
  • arranging a 15 minute presentation to your group about the other group’s work;
  • (advanced) giving a 15 minute presentation yourself to your group about the other group’s work.

3. Learning more about cryptography

So much of what we do as security people comes down to or includes some cryptography.  Understanding how it should be used is important, but equally, we should all be able to recognise how it shouldn’t be used.  Most important, from my point of view, however, is to know the limits of your knowledge, and to be wise enough to call in a real cryptographic expert when you’re approaching those limits.  Different people’s interests and abilities (in mathematics, apart from anything else) vary widely, so here is a broad list of different possible key results to consider:

  • learn when to use asymmetric cryptography, and when to use symmetric cryptography;
  • understand the basics of public key infrastructure (PKI);
  • understand what one-way functions are, and why they’re important (see the sketch after this list);
  • understand the mathematics behind public key cryptography;
  • understand the various expiry and revocation options for certificates, their advantages and disadvantages;
  • (advanced) design a protocol using cryptographic primitives AND GET IT TORN APART BY AN EXPERT[4].
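As a small example of the one-way functions point – standard library only, and no substitute for a proper cryptography course – a cryptographic hash is easy to compute in one direction and computationally infeasible to reverse, which is why it turns up in everything from integrity checking to password storage:

```python
import hashlib

message = b"attack at dawn"
print(hashlib.sha256(message).hexdigest())   # easy: message -> digest

# Going the other way - recovering the message (or any message with the same
# digest) from the digest alone - is computationally infeasible: the "one-way"
# property. A tiny change also produces an unrelated-looking digest, which is
# part of what makes hashes useful for integrity and tamper-evidence:
print(hashlib.sha256(b"attack at dusk").hexdigest())
```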

4. Learn to think about systems

Nothing that we manage, write, design or test exists on its own: it’s all part of a larger system.  That system involves nasty awkwardnesses like managers, users, attackers, backhoes and tornadoes.  Think about the larger context of what you’re doing, and you’ll be a better security person for it.  Here are some suggestions for key results:

  • read a book about systems, e.g.:
    • Security Engineering: A Guide to Building Dependable Distributed Systems, by Ross Anderson;
    • Beautiful Architecture: Leading Thinkers Reveal the Hidden Beauty in Software Design, ed. Diomidis Spinellis and Georgios Gousios;
    • Building Evolutionary Architectures: Support Constant Change by Neal Ford, Rebecca Parsons & Patrick Kua[5].
  • arrange for the operations folks in your organisation to give a 15 minute presentation to your group (I can pretty much guarantee that they think about security differently to you – unless you’re in the operations group already, of course);
  • map out a system you think you know well, and then consider all the different “external” factors that could negatively impact its security;
  • write a review of “Sneakers” or “Hackers”, highlighting how unrealistic the film[6] is and, equally, how right on the money it is.

5. Read a blog regularly

THIS blog, of course, would be my preference (I try to post every Tuesday), but getting into the habit of reading something security-related[7] on a regular basis means that you’re going to keep thinking about security from a point of view other than your own (which is a bit of a theme for this article).  Alternatively, you can listen to a podcast, but as I don’t have a podcast myself, I clearly can’t endorse that[8].  Key results might include:

  • read a security blog once a week;
  • listen to a security podcast once a month;
  • write an article for a site such as (the brilliant) OpenSource.com[9].

Conclusion

I’m aware that I’ve abused the OKR approach somewhat by making a number of the key results non-measurable: sorry.  Exactly what you choose will depend on you, your situation, how long the objectives last for, and a multitude of other factors, so adjust for your situation.  Remember – you’re trying to develop yourself and your knowledge.


1 – and mine.

2 – yet.

3 – yes, I called it a profession.  Feel free to chortle.

4 – the bit in CAPS is vitally, vitally important.  If you ignore that, you’re missing the point.

5 – I’m currently reading this after hearing Dr Parsons speak at a conference.  It’s good.

6 – movie.

7 – this blog is supposed to meet that criterion, and quite often does…

8 – smiley face.  Ish.

9 – if you’re interested, please contact me – I’m a community moderator there.