Author: Mike Bursell
Open source and – well, bad people
For most people writing open source, it – open source software – seems like an unalloyed good. You write code, and other nice people, like you, get to add to it, test it, document it and use it. Look what good it can do to the world! Even the Organisation-Formerly-Known-As-The-Evil-Empire has embraced open source software, and is becoming a happy and loving place, supporting the community and both espousing and proselytising the Good Thing[tm] that is open source. Many open source licences are written in such a way that it’s well-nigh impossible for an organisation to make changes to open source and profit from it without releasing the code they’ve changed. The very soul of open source – the licence – is doing our work for us: improving the world.
And on the whole, I’d agree. But when I see uncritical assumptions being peddled – about anything, frankly – I start to wonder. Because I know, and you know, when you think about it, that not everybody who uses open source is a good person. Crackers (that’s “bad hackers”) use open source. Drug dealers use open source. People traffickers use open source. Terrorists use open source. Maybe some of them contribute patches and testing and documentation – I suppose it’s even quite likely that a few actually do – but they are, by pretty much anyone’s yardstick, not good people. These are the sorts of people you probably shrug your shoulders about and say, “well, there’s only a few of them compared to all the others, and I can live with that”. You’re happy to continue contributing to open source because many more people are helped by it than harmed by it. The alternative – not contributing to open source – would fail to help as many people, and so the first option is the lesser of two evils and should be embraced. This is, basically, a utilitarian argument – the one popularised by John Stuart Mill: “classical utilitarianism”[1]. This is sometimes described as:
“Actions are right in proportion as they tend to promote overall human happiness.”
I certainly hope that open source does tend to promote overall human happiness. The problem is that criminals are not the only people who will be using open source – your open source – code. There will be businesses whose practices are shady, governments that oppress their detractors, police forces that spy on the citizens they watch. This is your code, being used to do bad things.
But what even are bad things? This is one of the standard complaints about utilitarian philosophies – it’s difficult to define objectively what is good, and, by extension, what is bad. We (by which I mean law-abiding citizens in most countries) may be able to agree that people trafficking is bad, but there are many areas that we could call grey[2]:
- tobacco manufacturers;
- petrochemical and fracking companies;
- plastics manufacturers;
- organisations who don’t support LGBTQ+ people;
- gun manufacturers.
There’s quite a range here, and that’s intentional. Also the last example is carefully chosen. One of the early movers in what would become the open source movement is Eric Raymond (known to one and all by his initials “ESR”), who is a long-standing supporter of gun rights[3]. He has, as he has put it, “taken some public flak in the hacker community for vocally supporting firearms rights”. For ESR, “it’s all about freedom”. I disagree, although I don’t feel the need to attack him for it. But it’s clear that his view about what constitutes good is different to mine. I take a very liberal view of LGBTQ+ rights, but I know people in the open source community who wouldn’t take the same view. Although we tend to characterise the open source community as liberal, this has never been a good generalisation. According to the Jargon File (later published as “The Hacker’s Dictionary”, the politics of the average hacker are:
Vaguely liberal-moderate, except for the strong libertarian contingent which rejects conventional left-right politics entirely. The only safe generalization is that hackers tend to be rather anti-authoritarian; thus, both conventional conservatism and ‘hard’ leftism are rare. Hackers are far more likely than most non-hackers to either (a) be aggressively apolitical or (b) entertain peculiar or idiosyncratic political ideas and actually try to live by them day-to-day.
This may be somewhat out of date, but it still feels as though this description would resonate with many of those who self-consciously consider themselves part of the open source community. Still, it’s clear that we, as a community, are never going to be able to agree on what counts as a “good use” of open source code by a “good” organisation. Even if we could, the chances of anybody being able to create a set of licences that would stop the people who might be considered bad are fairly slim.
I’m still not too worried, though. I think that we can extend the utilitarian argument to say that the majority of uses of open source software would be considered good by most open source contributors, or at least that the balance of “good” over “bad” would generally be considered to lean towards the good side. So – please keep contributing: we’re doing good things (whatever they might be).
1 – I am really not an ethicist or a philosopher, so apologies if I’m being a little rough round the edges here.
2 – you should be used to this by now: UK spelling throughout.
3 – “Yes, I cheerfully refer to myself as a gun nut.” – Eric’s Gun Nut Page
First aid – are you ready?
Your using the defibrillator is the best chance that the patient has of surviving.
Disclaimer: I am not a doctor, nor a medical professional. I will attempt not to give specific medical or legal advice in this article: please check your local medical and legal professionals before embarking on any course of action about which you are unsure.
This is, generally, a blog about security – that is, information security or cybersecurity – but I sometimes blog about other things. This is one of those articles. It’s still about security, if you will – the security and safety of those around you. Here’s how it came about: I recently saw a video on LinkedIn about a restaurant manager performing Abdominal Thrusts (it’s not called the Heimlich Manoeuvre any more due to trademarking) on a choking customer, quite possibly saving his life.
And I thought: I’ve done that.
And then I thought: I’ve performed CPR, and used a defibrillator, and looked after people who were drunk or concussed, and helped people having a diabetic episode, and encouraged a father to use an EpiPen[1] on a confused child suffering from anaphylactic shock, and comforted a schoolchild who had just had an epileptic fit, and attended people in more than one car crash (typically referred to as an “RTC”, or “Road Traffic Collision” in the UK these days[2]).
And then I thought: I should tell people about these stories. Not to boast[3], but because if you travel a lot, or you commute to work, or you have a family, or you work in an office, or you ever go out to a party, or you play sports, or engage in hobby activities, or get on a plane or train or boat or drive anywhere, then there’s a decent chance that you may come across someone who needs your help, and it’s good – very good – if you can offer them some aid. It’s called “First Aid” for a reason: you’re not expected to know everything, or fix everything, but you’re the first person there who can provide aid, and that’s the best the patient can expect until professionals arrive.
Types of training
There are a variety of levels of first aid training that might be appropriate for you. These include:
- family and children focussed;
- workplace first aid;
- hobby, sports and event first aid;
- ambulance and local health service support and volunteering.
There’s an overlap between all of these, of course, and what you’re interested in, and what’s available to you, will vary based on your circumstances and location. There may be other constraints such as age and physical ability or criminal background checks: these will definitely be dependent on your location and individual context.
I’m what’s called, in the UK, a Community First Responder (CFR). We’re given some specific training to help provide emergency first aid in our communities. What exactly you do depends on your local ambulance trust – I’m with the East of England Ambulance Service Trust, and I have a kit with items to allow basic diagnosis and treatment which includes:
- a defibrillator (AED) and associated pads, razors[4], shears[5], etc.
- a tank of oxygen and various masks
- some airway management equipment whose name I can never remember
- glucogel for diabetic treatment
- a pulse oximeter, for heart rate and blood oxygen saturation measurement
- gloves
- bandages, plasters[6]
- lots of forms to fill in
- some other bits and pieces.
I also have a phone and a radio (not all CFRs get a radio, but our area is rural and has particularly bad mobile phone reception).
I’m on duty as I type this – I work from home, and my employer (the lovely Red Hat) is cool with my attending emergency calls in certain circumstances – and could be called out at any moment to an emergency within about a 10 mile/15km radius. Among the call-outs I’ve attended are cardiac arrests (when the heart stops beating), fits, anaphylaxis (extreme allergic reactions), strokes, falls, diabetics with problems, drunks with problems, major bleeding, patients with difficulty breathing or chest pains, sepsis, and lots of less serious cases (some of which have probably been misreported). The plan is that I get dispatched if the condition is considered serious and it looks like I can get there before an ambulance, or if the crew is likely to need more hands to help (when treating a full cardiac arrest, a good number of people can really help). I drive my own car; I’m not allowed sirens or lights; I’m not allowed to break the speed limit or go through red lights; and I don’t attend road traffic collisions. I volunteer whatever hours fit around my job and broader life, I don’t get paid, and I provide my own fuel and vehicle insurance. I get anywhere from zero to four calls a day (but most often zero or one).
There are volunteers in other fields who attend events or provide sports or hobby first aid (I did some scuba diving training a while ago), and there are all sorts of training courses for workplace first aid. Most workplaces will have designated first aiders who can be called on if there’s a problem.
The minimum to know
The people I’ve just noted above – the trained ones – won’t always be available. Sometimes, you – with no training – will be the first on scene. In most jurisdictions, if you attempt first aid, the law will look kindly on you, even if you don’t get it all perfect[7]. In some jurisdictions, there’s actually an expectation that you’ll step in. What should you know? What should you do?
Here’s my view. It’s not the view of a professional, and it doesn’t take into account everybody’s circumstances. But it is my view, and it’s this: you should consider getting enough training to be able to cope with two of the most common – and most serious – medical emergencies.
- Everybody should know how to deal with a choking patient.
- Everybody should know how to do CPR (Cardiopulmonary resuscitation) – chest compressions, at minimum, but with artificial respiration if you feel confident.
In the first of these cases, if someone is choking, and they continue to fail to breathe, they will die.
In the second of these cases, if someone’s heart has stopped beating, they are dead. Doing nothing means that they stay that way. Doing something gives them a chance.
There are videos and training available on the Internet, or provided by many organisations.
The minimum to try
If you come across somebody who is in cardiac arrest, call the emergency services. Dispatch someone (if you’re not alone) to try to find a defibrillator (AED) – the emergency services call centre will often help with this, or there’s an app called “GoodSAM” which will locate one for you.
Use the defibrillator.
Defibrillators are designed for untrained people. You open one up, and it will talk to you. Do what it says.
Even if you don’t feel confident giving CPR, use a defibrillator.
I have used a defibrillator. They are easy to use.
Use that defibrillator.
The defibrillator is not the best chance that the patient has of surviving: your using the defibrillator is the best chance that the patient has of surviving.
Conclusion
Providing first aid for someone in a serious situation doesn’t always work. Sometimes people die. In fact, in the case of a cardiac arrest, the percentage of cases in which CPR is successful is low – even in a hospital setting, with professionals on hand. If you have tried, you’ve given them a chance. It is not your fault if the outcome isn’t perfect. But if you hadn’t tried, there was no chance.
Please respect and support professionals, as well. They are often busy and concerned, and may not have the time to thank you, but your help is appreciated. We are lucky, in our area, that the huge majority of EEAST ambulance personnel are very supportive of CFRs and others who help out in an emergency.
If this article has been interesting to you, and you are considering taking some training, then get to the end of the post, share it via social media(!), and then search online for something appropriate to you. There are many organisations who will provide training – some for free – and many opportunities for volunteering. You know that if a member of your family needed help, you would hope that somebody was capable and willing to provide it.
Final note – if you have been affected by anything in this article, please find some help, whether professional or just with friends. Many of the medical issues I’ve discussed are distressing, and self care is important (it’s one of the things that EEAST takes seriously for all its members, including its CFRs).
1 – a special adrenaline-administering device (don’t use somebody else’s – they’re calibrated pretty carefully to an individual).
2 – calling it an “accident” suggests it was no-one’s fault, when often, it really was.
3 – well, maybe a little bit.
4 – to shave hairy chests – no, really.
5 – to cut through clothing. And nipple chains, if required. Again, no, really.
6 – “Bandaids” for our US cousins.
7 – please check your local jurisdiction’s rules on this.
Immutability: my favourite superpower
As a security guy, I approve of defence in depth.
I’m a recent but dedicated convert to Silverblue, which I run on my main home laptop and which I’ll be putting onto my work laptop when I’m due a hardware upgrade in a few months’ time. I wrote an article about Silverblue over at Enable Sysadmin, and over the weekend, I moved the laptop that one of my kids has over to it as well. You can learn more about Silverblue over at the main Silverblue site, but in terms of usability, look and feel, it’s basically a version of Fedora. There’s one key difference, however, which is that the operating system is mounted read-only, meaning that it’s immutable.
What does “immutable” mean? It means that it can’t be changed. To be more accurate, in a software context, it generally means that something can’t be changed during run-time.
Important digression – constant immutability
I realised as I wrote that final sentence that it might be a little misleading. Many programming languages have the concept of “constants”. A constant is a variable (or set, or data structure) which is constant – that is, not variable. You can assign a value to a constant, and generally expect it not to change. But – and this depends on the language you are using – it may be that the constant is not immutable: often the “constant” is just a fixed binding to an object whose contents can still be changed. This seems to go against common sense[1], but that’s just the way that some languages are designed. The bottom line is this: if you have a variable that you intend to be immutable, check the syntax of the programming language you’re using, and take any specific steps needed to maintain that immutability.
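Python provides a neat illustration of this. It has no enforced constants at all – only a naming convention, plus a type hint that the interpreter itself ignores – so a “constant” bound to a mutable structure can happily be modified at run-time. A minimal sketch (the names are invented for illustration):

```python
from typing import Final

# By convention, an upper-case name (and the Final hint) says
# "constant". Static checkers will complain if the name is rebound,
# but the interpreter ignores all of this at run-time.
ALLOWED_PORTS: Final = [22, 443]

# The binding may be "constant", but the list it refers to is a
# mutable object, so nothing stops run-time modification:
ALLOWED_PORTS.append(8080)
print(ALLOWED_PORTS)  # [22, 443, 8080] -- so much for immutability

# If you want the value to be immutable too, use an immutable type:
SAFE_PORTS: Final = (22, 443)  # a tuple cannot be modified in place
```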
Operating System immutability
In Silverblue’s case, it’s the operating system that’s immutable. You install applications in containers (of which more later), using Flatpak, rather than onto the root filesystem. This means not only that the installation of applications is isolated from the core filesystem, but also that the ability for malicious applications to compromise your system is significantly reduced. It’s not impossible[2], but the risk is significantly lower.
How do you update your system, then? Well, what you do is create a new boot image which includes any updated packages that are needed, and when you’re ready, you boot into that. Silverblue provides simple tools to do this: it’s arguably less hassle than the standard way of upgrading your system. This approach also makes it very easy to maintain different versions of an operating system, or installations with different sets of packages. If you need to test an application in a particular environment, you boot into the image that reflects that environment, and do the testing. Another environment? Another image.
We’re more interested in the security properties that this offers us, however. Not only is it very difficult to compromise the core operating system as a standard user[3], but you are always operating in a known environment, and knowability is very much a desirable property for security, as you can test, monitor and perform forensic analysis from a known configuration. From a security point of view (let alone what other benefits it delivers), immutability is definitely an asset in an operating system.
Container immutability
This isn’t the place to describe containers (also known as “Linux containers” or, less frequently or accurately these days, “Docker containers”) in detail, but they are basically packaged collections of software, created as images, which you then run as workloads on a host server (in some orchestration systems, groups of containers scheduled together are known as “pods”). One of the great things about containers is that they’re generally very fast to spin up (provision and execute) from an image, and another is that the format of that image – the packaging format – is well-defined, so it’s easy to create the images themselves.
From our point of view, however, what’s great about containers is that you can choose to use them immutably. In fact, that’s the way they’re generally used: using mutable containers is generally considered an anti-pattern. The standard (and “correct”) way to use containers is to bundle each application component and required dependencies into a well-defined (and hopefully small) container, and deploy that as required. The way that containers are designed doesn’t mean that you can’t change any of the software within the running container, but the way that they run discourages you from doing that, which is good, as you definitely shouldn’t. Remember: immutable software gives better knowability, and improves your resistance to run-time compromise. Instead, given how lightweight containers are, you should design your application in such a way that if you need to, you can just kill the container instance and replace it with an instance from an updated image.
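As a sketch of that kill-and-replace pattern, here’s roughly what it might look like using the Docker SDK for Python – the image tags myapp:v1 and myapp:v2 are hypothetical stand-ins for your own images:

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Run the workload from a known, immutable image.
container = client.containers.run("myapp:v1", detach=True)

# Time to update? Don't patch the running container: kill it and
# start a fresh instance from the updated image instead.
container.stop()
container.remove()
container = client.containers.run("myapp:v2", detach=True)
```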
This brings us to two of the reasons that you should never run containers with root privilege:
- there’s a temptation for legitimate users to use that privilege to update software in a running container, reducing knowability, and possibly introducing unexpected behaviour;
- there are many more opportunities for compromise if a malicious actor – human or automated – can change the underlying software in the container.
Double immutability with Silverblue
I mentioned above that Silverblue runs applications in containers. This means that you have two levels of security provided by default when you run applications on a Silverblue system:
- the operating system immutability;
- the container immutability.
As a security guy, I approve of defence in depth, and this is a classic example of that property. I also like the fact that I can control what I’m running – and what versions – with a great deal more ease than if I were on a standard operating system.
1 – though, to be fair, the phrases “programming language” and “common sense” are rarely used positively in the same sentence in my experience.
2 – we generally try to avoid the word “impossible” when describing attacks or vulnerabilities in security.
3 – as with many security issues, once you have sudo or root access, the situation is significantly degraded.
Learn to hack online – h4x0rz and pros
Removing these videos hinders defenders much more significantly than it impairs the attackers.
Over the past week, there has been a minor furore[1] over YouTube’s decision to block certain “hacking” videos. According to The Register, the policy first appeared on the 5th April 2019:
“Instructional hacking and phishing: Showing users how to bypass secure computer systems or steal user credentials and personal data.”
Now, I can see why they’ve done this: it’s basic backside-covering. YouTube – like many of the other social media outlets – comes under lots of pressure from governments and other groups for failing to regulate certain content. The sorts of content to which such groups most typically object are fake news, certain pornography and child abuse material: and quite rightly. I sympathise, sometimes, with the social media giants as they try to regulate a tidal wave of this sort of material, and I have great respect for those employees who have to view some of it. Having written policies to ban this sort of thing may not deter many people from posting it, but it does mean that the social media companies have a cast-iron excuse for excising it when they come across it.
Having a similar policy to ban these types of video feels, at first blush, like the same sort of thing: you can point to your policy when groups complain that you’re hosting material that they don’t like – “those dangerous hacking videos”.
Hacking[3] videos are different, though. The most important point is that they have a legitimate didactic function: in other words, they’re useful for teaching. I also don’t think that there’s a public outcry from groups wanting them banned. In fact, they’re vital for teaching and learning about IT security, and most IT security professionals and organisations get that. Many cybersecurity techniques are difficult to understand properly when presented as theoretical attacks and, more importantly, they are difficult to defend against without detailed explanation and knowledge. This is the point: these instructional videos are indispensable tools, allowing people not just to understand, but to invent, apply and maintain defences and mitigations against known attacks. IT security is hard, and we need access to knowledge to help us defeat the Bad Folks[tm] who we know are out there.
“But these same Bad Folks[tm] will see these videos online and use them against us!” certain people will protest. Well, yes, some will. But if we ban and wipe them from widely available social media platforms, where they are available for legitimate users to study, they will be pushed underground, and although fewer people may find them, the nature of our digital infrastructure means that the reach of those few people is still enormous.
And there is an imbalance between attackers and defenders: this move exacerbates it. Most defenders look after small numbers of systems, but most serious attackers have the ability to go after many, many systems. By pushing these videos away from the places where many defenders can learn from them, we remove the opportunity from those who most need access to this information, whilst, at most, slightly raising the bar for those against whom we are trying to protect ourselves.
I’m sure there are a number of “script-kiddy” type attackers who may be deterred, or have their access to these videos denied, but they are significantly less of a worry than motivated, resourced attackers: the ones that haunt many IT security folks’ worst dreams. We shouldn’t let a mitigation against (relatively) low-risk attackers remove our ability to defend against higher-risk ones.
We know that sharing of information can be risky, but this is one of those cases in which the risks can be understood and measured against others, and it seems like a pretty simple calculation this time round. To be clear: the (good) many need access to these videos to protect against the (malicious) few. Removing these videos hinders the good much more significantly than it impairs the malicious, and we, as a community, should push back against this trend.
1 – it’s pronounced “few-ROAR-ray”. And “NEEsh”. And “CLEEK”[2].
2 – yes, I should probably calm down.
3 – I’d much rather refer to these as “cracking” videos, but I feel that we probably lost that particular battle about 20 years ago now.
Turtles – and chains of trust
There’s a story about turtles that I want to tell.
One of the things that confuses Brits is that many Americans[1] don’t know the difference between tortoises and turtles[2], whereas we (who have no species of either type native to our shores) seem to have no problem differentiating them[3]. This is the week when Americans[1] like to bash us Brits over the little revolution they had a couple of centuries ago, so I don’t feel too bad about giving them a little hassle about this.
As it happens, there’s a story about turtles that I want to tell. It’s important to security folks, to the extent that you may hear a security person just say “turtles” to a colleague in criticism of a particular scheme, which will just elicit a nod of agreement: they don’t like it. There are multiple versions of this story[4]: here’s the one I tell:
A learned gentleman[5] is giving a public lecture. He has talked about the main tenets of modern science, such as the atomic model, evolution and cosmology. At the end of the lecture, an elderly lady comes up to him.
“Young man,” she says.
“Yes,” says he.
“That was a very interesting lecture,” she continues.
“I’m glad you enjoyed it,” he replies.
“You are, however, completely wrong.”
“Really?” he says, somewhat taken aback.
“Yes. All that rubbish about the Earth hovering in space, circling the sun. Everybody knows that the Earth sits on the back of a turtle.”
The lecturer, spotting a hole in her logic, replies, “But madam, what does the turtle sit on?”
The elderly lady looks at him with a look of disdain. “What a ridiculous question! It’s turtles all the way down, of course!”
The problem with the elderly lady’s assertion, of course, is one of infinite regression: there has to be something at the bottom. The reason that this is interesting to security folks is that they know that systems need to have a “bottom turtle” at some point. If you are to trust a system, it needs to sit on something: this is typically called the TCB, or Trusted Computing Base, and, in most cases, needs to be rooted in hardware. Even saying “rooted in hardware” is not enough: exactly what hardware you trust, and where, depends on a number of factors, including what you know about your hardware supply chain; what you feel about motherboards; what your security posture is; how realistic it is that state actors might try to attack you; how deeply you want to delve into the hardware stack; and, ultimately, just how paranoid you are.
Principles for chains of trust
When you are building a system which you need to have some trust in, you will typically talk about the chain of trust, from the bottom up. This idea of a chain of trust is very important, and very pervasive, within security. It allows for some important principles:
- there has to be a root of trust somewhere (the “bottom turtle”);
- the chain is only as strong as its weakest link (and attackers will find it);
- be explicit about each of the links in the chain;
- realise that some of the links in the chain may change (e.g. if software is updated);
- be aware that once you have lost trust in a chain, you need to rebuild it from at least the layer below the one in which you have lost trust;
- simple chains (with no “joins” with other chains of trust) are much, much simpler to validate and monitor than more complex ones.
Software/hardware systems are not the only place in which you will encounter chains of trust: in fact, you come across them every time you make a TLS[6] connection to a web site (you know: that green padlock icon in the address bar). In this case, there’s a chain (sometimes short, sometimes long) of certificates from a “root CA” (a trusted party that your browser knows about) down to the organisation (or department, or sub-organisation) running the web site to which you’re connecting. Assuming that each link in the chain has verified that the next is who it says it is, the chain of signatures (each embodied in a certificate) can be checked, to give an indication, at least, that the site you’re visiting isn’t a spoof site run by somebody pretending to be, for example, your bank. In this case, the bottom turtle is the root CA[7], and its manifestation in the chain of trust is its root certificate.
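You can watch this chain checking happen from Python’s standard library: the default context is loaded with the root certificates your system trusts – its bottom turtles – and the handshake fails if the chain presented by the server doesn’t lead back to one of them (example.com here is just a stand-in host):

```python
import socket
import ssl

# The default context is loaded with the root CAs your system
# trusts -- the "bottom turtles" of the web's chains of trust.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    # The handshake walks the chain of signatures presented by the
    # server back to a trusted root (and checks the hostname);
    # if any link fails, an SSLError is raised.
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.getpeercert()["subject"])
```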
And chains of trust aren’t restricted to the world of IT, either: supply chains care a lot about chains of trust. Can you be sure that the diamond in the ring you bought from your local jewellery store, who got it from an artisan goldsmith, who got it from a national diamond chain, did not originally come from a “blood diamond” nation? Can you be sure that the replacement part for your car, which you got from your local independent dealership, is an original part, and can the manufacturer be sure of the quality of the materials they used? Blockchains are offering some interesting ways to help track these sorts of supply chains, and can even be applied to supply chains in software.
Chains of trust are everywhere we look. Some are short, and some are long. In most cases, there will be a need to employ transitive trust – I need to believe that whoever created my browser checked the root CA, just as you need to believe that your local dealership verified that the replacement part came from the right place – because the number of links that we can verify ourselves is typically low. This may be due to a variety of factors, including time, expertise and visibility. But the more we are aware of the fact that there is a chain of trust in any particular situation, the more we can make conscious decisions about the amount of trust we should put in it, rather than making assumptions about the safety, security or validation of something we are buying or using.
1 – citizens of the US of A.
2 – have a look on a stock photography site like Pixabay if you don’t believe me.
3 – tortoises are land-based, turtles are aquatic, I believe.
4 – Wikipedia has a good article explaining both the concept and the story’s etymology.
5 – the genders of the protagonists are typically as I tell, which tells you a lot about the historical context, I’m afraid.
6 – this used to be “SSL”, but if you’re still using SSL, you’re in trouble: it’s got lots of holes in it!
7 – or is it? You could argue that the HSM that (hopefully) houses the root CA, or the processes that protect it, could be considered the bottom turtle. For the purposes of this discussion, however, the extent of the “system” is the certificate chain and its signers.
Building Evolutionary Architectures – for security and for open source
Consider the fitness functions, state them upfront, have regular review.
Ford, N., Parsons, R. & Kua, P. (2017) Building Evolutionary Architectures: Support Constant Change. Sebastopol, CA: O’Reilly Media.
https://www.oreilly.com/library/view/building-evolutionary-architectures/9781491986356/
This is my first book review on this blog, I think, and although I don’t plan to make a habit of it, I really like this book, and the approach it describes, so I wanted to write about it. Initially, this article was simply a review of the book, but as I got into it, I realised that I wanted to talk about how the approach it describes is applicable to a couple of different groups (security folks and open source projects), and so I’ve gone with it.
How, then, did I come across the book? I was attending a conference a few months ago (DeveloperWeek San Diego), and decided to go to one of the sessions because it looked interesting. The speaker was Dr Rebecca Parsons, and I liked what she was talking about so much that I ordered this book, whose subject was the topic of her talk, to arrive at home by the time I would return a couple of days later.
Building Evolutionary Architectures is not a book about security, but it deals with security as one application of its approach, and very convincingly. The central issue that the authors – all employees of Thoughtworks – identify is, simplified, that although we’re good at creating features for applications, we’re less good at creating, and then maintaining, broader properties of systems. This problem is compounded, they suggest, by the fast and ever-changing nature of modern development practices, where “enterprise architects can no longer rely on static planning”.
The alternative that they propose is to consider “fitness functions”, “objectives you want your architecture to exhibit or move towards”. Crucially, these are properties of the architecture – or system – rather than features or specific functionality. Tests should be created to monitor the specific functions, but they won’t be your standard unit tests, nor will they necessarily be “point in time” tests. Instead, they will measure a variety of issues, possibly over a period of time, to let you know whether your system is meeting the particular fitness functions you are measuring. There’s a lot of discussion of how to measure these fitness functions, but I would have liked even more: from my point of view, it was one of the most valuable topics covered.
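To make the idea concrete, here’s a minimal sketch of what an automated fitness function might look like – this isn’t from the book, and the endpoint, sample size and 200ms threshold are all invented for illustration:

```python
import statistics
import time

import requests  # third-party HTTP client: pip install requests


def test_median_latency_fitness_function():
    """Fitness function: median response time stays under 200ms.

    Note that this checks a *property* of the system, measured over
    repeated observations, rather than a feature at a point in time.
    """
    samples = []
    for _ in range(20):
        start = time.monotonic()
        requests.get("https://example.com/health", timeout=5)
        samples.append(time.monotonic() - start)
    assert statistics.median(samples) < 0.200
```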
Frankly, the above might be enough to recommend the book, but there’s more. They advocate strongly for creating incremental change to meet your requirements (gradual, rather than major changes) and “evolvable architectures”, encouraging you to realise that:
- you may not meet all your fitness functions at the beginning;
- applications which may have met the fitness functions at one point may cease to meet them later on, for various reasons;
- your architecture is likely to change over time;
- your requirements, and therefore the priority that you give to each fitness function, will change over time;
- even if your fitness functions remain the same, the ways in which you need to monitor them may change.
All of these are, in my view, extremely useful insights for anybody designing and building a system: combining them with architectural thinking is even more valuable.
As is standard for modern O’Reilly books, there are examples throughout, including a worked fake consultancy journey of a particular company with specific needs, leading you through some of the practices in the book. At times, this felt a little contrived, but the mechanism is generally helpful. There were times when the book seemed to stray from its core approach – which is architectural, as per the title – into explanations through pseudo code, but these support one of the useful aspects of the book, which is giving examples of what architectures are more or less suited to the principles expounded in the more theoretical parts. Some readers may feel more at home with the theoretical, others with the more example-based approach (I lean towards the former), but all in all, it seems like an appropriate balance. Relating these to the impact of “architectural coupling” was particularly helpful, in my view.
Some of the advice is usefully grounded in Conway’s Law (“Organizations [sic] which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”), which led me to wonder how we could model open source projects – and their architectures – from this perspective. There are also (as is also standard these days) patterns and anti-patterns: I would generally consider these a useful part of any book on design and architecture.
Why is this a book for security folks?
The most important thing about this book, from my point of view as a security systems architect, is that it isn’t about security. Security is mentioned, but isn’t considered core enough to the book to merit an entry in the index. The point, though, is that the security of a system – an embodiment of an architecture – is a perfect example of a fitness function. Taking this as a starting point for a project will help you do two things:
- avoid focussing on features and functionality, and look at the bigger picture;
- consider what you really need from security in the system, and how that translates into issues such as the security posture to be adopted, and the measurements you will take to validate it through the lifecycle.
Possibly even more important than those two points is that it will force you to consider the priority of security in relation to other fitness functions (resilience, maybe, or ease of use?) and how the relative priorities will – and should – change over time. A realisation that we don’t live in a bubble, and that our priorities are not always the same as those of other stakeholders in a project, is always useful.
Why is this a book for open source folks?
Very often – and for quite understandable and forgiveable reasons – the architectures of open source projects grow organically at first, needing major overhauls and refactoring at various stages of their lifecycles. This is not to say that this doesn’t happen in proprietary software projects as well, of course, but the sometimes frequent changes in open source projects’ emphasis and requirements, the ebb and flow of contributors and contributions and the sometimes, um, reduced levels of documentation aimed at end users can mean that features are significantly prioritised over what we could think of as the core vision of the project. One way to remedy this would be to consider the appropriate fitness functions of the project, to state them upfront, and to have a regular cadence of review by the community, to ensure that they are:
- still relevant;
- correctly prioritised at this stage in the project;
- actually being met.
If any of the above come into question, it’s a good time to consider a wider review by the community, and maybe a refactoring or partial redesign of the project.
Open source projects have – quite rightly – various different models of use and intended users. One of the happenstances that can negatively affect a project is when it is identified as a possible fit for a use case for which it was not originally intended. Academic software which is designed for accuracy over performance might not be a good fit for corporate research, for instance, in the same way that a project aimed at home users which prioritises minimal computing resources might not be appropriate for a high-availability enterprise roll-out. One of the ways of making this clear is by being very clear up-front about the fitness functions that you expect your project to meet – and, vice versa, about the fitness functions you are looking to fulfil when you are looking to select a project. It is easy to focus on features and functionality, and to overlook the more non-functional aspects of a system, and fitness functions allow us to make some informed choices about how to balance these decisions.
Trust & choosing open source
Your impact on open source can be equal to that of others.
A long time ago, in a standards body far, far away, I was involved in drafting a document about trust and security. That document rejoices in the name ETSI GS NFV-SEC 003: Network Functions Virtualisation (NFV); NFV Security; Security and Trust Guidance[1], and section 5.1.6.3[2] talks about “Transitive trust”. Convoluted and lengthy as the document is, I’m very proud of it[3], and I think it tackles a number of very important issues including (unsurprisingly, given the title), a number of issues around trust. It defines transitive trust thus:
“Transitive trust is the decision by an entity A to trust entity B because entity C trusts it.”
It goes on to disambiguate transitive trust from delegated trust, where C knows about the trust relationship.
At no point in the document does it mention open source software. To be fair, we were trying to be even-handed and show no favour towards any type of software or vendors – many of the companies represented on the standards body were focused on proprietary software – and I wasn’t even working for Red Hat at the time.
My move to Red Hat, and, as it happens, generally away from the world of standards, has led me to think more about open source. It’s also led me to think more about trust, and how people decide whether or not to use open source software in their businesses, organisations and enterprises. I’ve written, in particular, about how, although open source software is not ipso facto more secure than proprietary software, the chances of it being more secure, or made more secure, are higher (in Disbelieving the many eyes hypothesis).
What has this to do with trust, specifically transitive trust? Well, I’ve been doing more thinking about how open source and trust are linked together, and distributed trust is a big part of it. Distributed trust and blockchain are often talked about in the same breath, and I’m glad, because I think that all too often we fall into the trap of ignoring the fact that there are definitely trust relationships associated with blockchain – they are just often implicit, rather than well-defined.
What I’m interested in here, though, is the distributed, transitive trust as a way of choosing whether or not to use open source software. This is, I think, true not only when talking about non-functional properties such as the security of open source but also when talking about the software itself. What are we doing when we say “I trust open source software”? We are making a determination that enough of the people who have written and tested it have similar requirements to mine, and that their expertise, combined, is such that the risk to my using the software is acceptable.
There’s actually a lot going on here, some of which is very interesting:
- we are trusting architects and designers to design software to meet our use cases and requirements;
- we are trusting developers to implement code well, to those designs;
- we are trusting developers to review each others’ code;
- we are trusting documentation folks to document the software correctly;
- we are trusting testers to write, run and check tests which are appropriate to our use cases;
- we are trusting those who deploy the code to run it in ways which are similar to our use cases;
- we are trusting those who deploy the code to report bugs;
- we are trusting those who receive bug reports to fix them as expected.
There’s more, of course, but that’s definitely enough to get us going. Of course, when we choose to use proprietary software, we’re trusting people to do that, but in this case, the trust relationship is much clearer, and much tighter: if I don’t get what I expect, I can choose another vendor, or work with the original vendor to get what I want.
In the case of open source software, it’s all more nebulous: I may be able to identify at least some of the entities involved (designers, software engineers and testers, for example), but the amount of power that I as a consumer of the software have over their work is likely to be low. There’s a weird almost-paradox here, though: you can argue that for proprietary software vendors, my power over the direction of the software is higher (I’m paying them or not paying them), but my direct visibility into what actually goes on, and my ability to ensure that I get what I want is reduced when compared to the open source case.
That’s because, for open source, I can be any of the entities outlined above. I – or those in my organisation – can be architect, designer, document writer, tester, and certainly deployer and bug reporter. When you realise that your impact on open source can be equal to that of others, the distributed trust becomes less transitive. You understand that you have equal say in the creation, maintenance, requirements and quality of the software which you are running to all the other entities, and then you become part of a network of trust relationships which are distributed, but at less of a remove to that which you’ll experience when buying proprietary software.
Why, then, would anybody buy or license open source software from a vendor? Because that way, you can address other risks – around support, patching, training, etc. – whilst still enjoying the benefits of the distributed trust network that I’ve outlined above. There’s a place for consuming directly from the source, but that approach doesn’t suit the risk appetite of all software consumers – including some of those who are involved in the open source community themselves.
Trust is a complex issue, and the ways in which we trust other things and other people is complex, too (you’ll find a bit of an introduction in Of different types of trust), but I think it’s desperately important that we examine and try to understand the sorts of decisions we make, and why we make them, in order to allow us to make informed choices around risk.
1 – if you’ve not been involved in standards creation, this may fill you with horror, but if you have been so involved, this sort of title probably feels normal. You may need help.
2 – see 1.
3 – I was one of two “rapporteurs”, or editors of the document, and wrote a significant part of it, particularly the sections around trust.
What’s an HSM?
HSMs are not right for every project, but form an important part of our armoury.
Another week, another TLA[1]. This time round, it’s Hardware Security Module: an HSM. What, then, is an HSM, what is it used for, and why should I care? Before we go there, let’s think a bit about keys: specifically, cryptographic keys.
The way that most cryptography works these days is that the algorithms to implement a particular primitive[3] are public, and it’s generally accepted that it doesn’t matter whether you know what the algorithm is, or how it works, as it’s the security of the keys that matters. To give an example: I plan to encrypt a piece of data under the AES algorithm[4], which allows for a particular type of (symmetric) encryption. There are two pieces of data which are fed into the algorithm:
- the data you want to encrypt (the cleartext);
- a key that you’ve chosen to encrypt it.
Out comes one piece of data:
- the encrypted text (the ciphertext).
In order to decrypt the ciphertext, you feed that and the key into the AES algorithm, and the original cleartext comes out. Everything’s great – until somebody gets hold of the key.
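Here’s that flow in miniature, sketched with the Python cryptography library’s AES-GCM implementation (a nonce is also involved – one of those nuances footnote 4 glosses over):

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the secret that matters
nonce = os.urandom(12)  # must never be reused with the same key

# Cleartext plus key (and nonce) in; ciphertext out.
aesgcm = AESGCM(key)
ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)

# Ciphertext plus the same key (and nonce) in; cleartext out.
assert aesgcm.decrypt(nonce, ciphertext, None) == b"attack at dawn"
```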
This is where HSMs come in. Keys are vital, and they are vulnerable:
- at creation time – if I can trick you into creating a key some of whose bits I can guess, I increase my chances of being able to decrypt your ciphertext;
- during use – while you’re doing the encryption or decryption of your data, your key will be in memory, which means that if I can snoop into that memory, I can get it (see also below for information on “side channel attacks”);
- while stored – unless you protect your key while it’s “at rest”, and waiting to be used, I may have opportunities to get it;
- while being transferred – if you store your keys somewhere different to the place in which you’re using them, I may have an opportunity to intercept them as they move to where they will be used.
HSMs can help in one way or another with all of these pieces, but why do we need them? The key reason is that there are times when you can’t be certain that the system(s) you are using for creating, using, storing and transferring keys are as secure as you’d like. If the keys we’re talking about are for encrypting a few emails between you and your spouse, well, you might find it embarrassing if they were compromised, but if these keys are ones from which, say, you derive all of the credit card chip keys for an entire bank, then you have a rather larger problem. When it comes down to it, somebody with sufficient privilege on a standard computing system can look at any part of memory – unless there’s a TEE[5] (Oh, how I love my TEE (or do I?)) – and if they can look at the memory, they can see the key.
Worse than this, there are occasions when even if you can’t see into memory, you might be able to derive enough information about a key – or the ciphertext or cleartext – to be able to mount an attack on it. Attacks of this type are generally called “side channel attacks”, and you can think of them as a little akin to being able to work out the number of cylinders and valves a car[6] engine has by listening to it through the bonnet[7]. The engine leaks information about itself, even though it’s not designed with that in mind. HSMs are (generally) good at preventing both types of attacks: it’s what they’re designed to do.
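Side channels exist in software, too. This toy example isn’t the sort of hardware attack HSMs are built to resist, but it shows the same principle of leaking information through behaviour: a naive comparison of a secret returns as soon as a byte differs, so its running time tells an attacker how many leading bytes of a guess were correct:

```python
import hmac


def naive_equal(a: bytes, b: bytes) -> bool:
    """Compare two byte strings, returning as soon as they differ.

    The early return is the leak: the longer the comparison takes,
    the more leading bytes of the secret have been guessed correctly.
    """
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True


# The standard library's comparison takes constant time, giving the
# same answer without leaking how close the guess was:
secret = b"correct horse battery staple"
print(naive_equal(secret, b"correct guess gone wrong...."))      # False
print(hmac.compare_digest(secret, b"correct guess gone wrong...."))  # False
```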
Here, then, is a definition:
An HSM is a piece of hardware with protected storage, attached to a system via a network connection or some other connection such as PCI, which can perform cryptographic operations, and which has physical protection from various attacks, from side channel attacks to somebody physically levering open the case and attaching wires to important components so that they can read the electrical signals.
Many HSMs undergo testing to get certification against certain standards such as “FIPS 140” to show their ability to withstand various types of attack.
Here are the main uses for HSMs.
Key creation
Creation of keys is, as alluded to above, a very important operation, and one where side channel attacks have proved very effective in the past. HSMs can provide safe(r) key generation, and ensure appropriate levels of randomness (entropy) for the required strength of key.
Key storage
HSMs are typically designed so that if somebody tries to break into them, they will delete any keys which are stored within them, so they’re a good place to store your keys.
Cryptographic operations
Rather than putting your keys at risk by transferring them to another system, and away from the safety of the HSM, why not move the cleartext to the HSM (encrypted under a transport key, preferably), get the HSM to do the encryption with the keys that it already holds, and then send the ciphertext back (encrypted under a transport key[8])? This reduces opportunities for attacks during transport and during use, and is a key use for HSMs.
General computing operations
Not all HSMs support this use (almost all will support the others), but if you have sensitive operations which involve lots of keys, or algorithms which are themselves sensitive – as may be the case in AI/ML, for instance, unlike the public cryptographic primitives we were talking about before – then it is possible to write applications specifically to run on an HSM. This is not a simple undertaking, however, as the execution environment provided is likely to be constrained. It is difficult to do “right”, and easy to make mistakes which may leave you with a significantly less secure environment than you had thought.
Conclusion – should I use HSMs?
HSMs are excellent as roots of trust for PKI[9] projects and similar. Using them can be difficult, but most HSMs these days provide a PKCS#11 interface, which simplifies the most common operations. If you have sensitive key or cryptographic requirements, designing HSM use into your system can be a sensible step, but knowing how best to use them must be part of the architecture and design stages, well before implementation. You should also take into account that the operation of HSMs must be managed very carefully, from provisioning through everyday use to de-provisioning. Use of an HSM in the cloud may make sense, but HSMs are expensive and do not scale particularly well.
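To give a flavour of that interface, here’s a minimal sketch using the python-pkcs11 library – the module path, token label and PIN are hypothetical, and in practice would come from your HSM vendor:

```python
import pkcs11  # pip install python-pkcs11

# Load the vendor-supplied PKCS#11 module (the path here is
# hypothetical; SoftHSM is handy for experimenting without hardware).
lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")
token = lib.get_token(token_label="DEMO-TOKEN")

# All operations happen within a session on the token.
with token.open(user_pin="1234") as session:
    # The AES key is generated inside the HSM -- and stays there.
    key = session.generate_key(pkcs11.KeyType.AES, 256)

    # Cleartext goes in; only ciphertext comes back out.
    iv = session.generate_random(128)  # bits of randomness for the IV
    ciphertext = key.encrypt(b"sensitive data", mechanism_param=iv)
```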
HSMs, then, are suited to very particular use cases of highly sensitive data and operations – it is no surprise that their deployment is most common within military, government and financial settings. HSMs are not right for every project, by any means, but form an important part of our armoury for the design and operation of sensitive systems.
1 – Three Letter Acronym[2]
2 – keep up, or we’ll be here for some time.
3 – cryptographic building block.
4 – let’s pretend there’s only one type of AES for the purposes of this example. In fact, there are a number of nuances around this example which I’m going to gloss over, but which shouldn’t be important for the point I’m making.
5 – Trusted Execution Environment.
6 – automobile, for our North American friends.
7 – hood. Really, do we have to do this every time?
8 – why do you need to encrypt something that’s already encrypted? Because you shouldn’t use the same key for two different operations.
9 – Public Key Infrastructure.
Announcing Enarx
If all goes to plan, this article will be published to coincide with a demo at Red Hat Summit, Boston, in 2019.
The demo, given by my colleague Nathaniel McCallum, is an early incarnation of Enarx, a project that a few of us at Red Hat have been working on for the past few months, and which we’re now ready to show to the world. We have code, we have a demo, we have a GitHub repository and we have a logo: what more could a project need?
Well, people.
What’s the problem?
When you run software (a workload) on a system (a host), whether in the cloud or on premises, there are many layers. You rarely think about them, but they’re there all the same. Here’s an example of the layers in a typical cloud virtualisation architecture:
The different colours represent the different entities that “own” particular layers or sets of layers.

Below is a similar diagram, this time of a typical cloud container architecture:
As before, the different colours represent the different entities that “own” particular layers or sets of layers.

These owners are of very different types: hardware vendors, CSPs (Cloud Service Providers), OEMs, OS vendors, middleware vendors, application vendors – and then there’s the owner of the workload itself. For each workload that runs on a host, the layers are likely to be different, and even where they’re the same, the versions of the layer instances may differ: a different BIOS version, a different bootloader, a different kernel version.
Now, in many ways you may not care about any of this: perhaps your CSP abstracts away all of these differences in layers and versions so that you never notice them. But this is a blog about security, for people who care about security, which means that you, the reader, do care.
The reason we have to care is not just the differing layers and versions, but all of the different entities involved: we have to be able to trust the whole of this stack if we’re to run sensitive workloads on it with any confidence.
We have to trust each layer, and the owner of each layer, not only to do what they say they will do, but also not to be compromised. That’s a very big ask when it comes to running sensitive workloads.
What is Enarx?
Enarx is a project which aims to address this problem of having to trust all of those layers.
We want to allow those who run workloads to reduce the number of layers – and owners – that they have to trust to an absolute minimum.
Using TEEs (Trusted Execution Environments – see https://aliceevebob.com/2019/02/26/oh-how-i-love-my-tee-or-do-i/), we plan to provide an architecture that looks more like this:

In this world, you have to trust the CPU and its firmware, and you have to trust the middleware.
That middleware is the Enarx part.
But you don’t need to trust all of the other layers, because we will use the capabilities of the TEE to ensure the integrity and confidentiality of your application. The Enarx project will provide attestation of the TEE, so that you know you’re running on a true and trusted TEE, and it will be open source and auditable, so that the layers sitting directly below your application can be trusted.
The first code is out there – at this point it runs on AMD’s SEV TEE – and it works well enough for us to be able to tell you about it.
Remember, though: it’s still your responsibility to ensure that your application meets your security requirements 🙂
How do I find out more?
The easiest way is to check out the Enarx GitHub at https://github.com/enarx.
For now that’s mostly code, but we’ll be adding more information as we go.
Please bear with us, though: there are only a few of us on the project at the moment. A blog was on our to-do list, and this is where I’ve started.
We’d love the community to get involved in the project. We’re currently working at a very low level, building up our expertise as we go, and particular hardware is required – so if you’re an early-boot or low-level KVM hacker, we’d especially love to hear from you.
If you comment on this article, we’ll certainly respond.
Original article:
https://aliceevebob.com/2019/05/07/announcing-enarx/
7 May 2019, Mike Bursell
Tags: security, Enarx, open source, cloud