The importance of hardware End of Life

Security considerations are important when considering End of Life.

Linus Torvalds’ announcement this week that Itanium support is “orphaned” in the Linux kernel means that we shouldn’t expect further support for it, and that support may well be dropped entirely in the future. In 2019, floppy disk support was dropped from the Linux kernel. In this article, I want to make the case that security considerations are important when considering End of Life for hardware platforms and components.

Dropping support for hardware which customers aren’t using is understandable if you’re a proprietary company and can decide what platforms and components to concentrate on, but why do so in open source software? Open source enthusiasts are likely to be running old hardware for years – sometimes decades – after anybody is still producing it. There’s a vibrant community, in fact, of enthusiasts who enjoy resurrecting old hardware and getting it running (and I mean really old: EDSAC (1947) old), some of whom enjoy getting Linux running on it, and some of whom enjoy running it on Linux – by which I mean emulating the old hardware by running it on Linux hardware. It’s a fascinating set of communities, and if it’s your sort of thing, I encourage you to have a look.

But what about dropping open source software support (which tends to centre around Linux kernel support) for hardware which isn’t ancient, but is no longer manufactured and/or has a small or dwindling user base? One reason you might give would be that the size of the kernel for “normal” users (users of more recent hardware) is impacted by support for old hardware. This would be true if you had to compile the kernel with all options in it, but Linux distributions like Fedora, Ubuntu, Debian and RHEL already pare down the number of supported systems to something which they deem sensible, and it’s not that difficult to compile a kernel which cuts that down even further – my main home system is an AMD box (with AMD graphics card) running a kernel which I’ve compiled without most Intel-specific drivers, for instance.

There are other reasons, though, for dropping support for old hardware, and considering that it has met its End of Life. Here are three of the most important.

Resources

My first point isn’t specifically security related, but is an important consideration: while there are many volunteers (and paid folks!) working on the Linux kernel, we (the community) don’t have an unlimited number of skilled engineers. Many older hardware components and architectures are maintained by teams of dedicated people, and the option exists for communities who rely on older hardware to fund resources to ensure that they keep running, are patched against security holes, etc.. Once there ceases to be sufficient funding to keep these types of resources available, however, hardware is likely to become “orphaned”, as in the case of Itanium.

There is also a secondary impact, in that however modularised the kernel is, there is likely to be some requirement for resources and time to coordinate testing, patching, documentation and other tasks associated with kernel modules, which needs to be performed by people who aren’t associated with that particular hardware. The community is generally very generous with its time and understanding around such issues, but once the resources and time required to keep such components “current” reach a certain level in relation to the amount of use being made of the hardware, it may not make sense to continue.

Security risk to named hardware

People expect the software they run to maintain certain levels of security, and the Linux kernel is no exception. Over the past 5-10 years or so, there’s been a surge in work to improve security for all hardware and platforms which Linux supports. A good example of a feature which is applicable across multiple platforms is Address Space Layout Randomisation (ASLR). The problem here is not only that there may be some such changes which are not applicable to older hardware platforms – meaning that Linux is less secure when running on older hardware – but also that, even when it is possible, the resources required to port the changes, or just to test that they work, may be unavailable. This relates to the point about resources above: even when there’s a core team dedicated to the hardware, they may not include security experts able to port and verify security features.
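To make ASLR slightly more concrete, here is a minimal sketch of my own (assuming a Linux system with /proc – it is not something from the kernel itself) which shows the effect of the feature: with ASLR enabled, the process’s stack is mapped at a different base address in every new process.

```python
# Minimal illustration of ASLR on Linux (assumes a system with /proc).
# Each child process reports the base address of its stack mapping;
# with ASLR enabled, the address should differ on every launch.
import subprocess
import sys

CHILD = r"""
with open("/proc/self/maps") as maps:
    for line in maps:
        if "[stack]" in line:
            print(line.split("-")[0])  # start address of the stack mapping
            break
"""

if __name__ == "__main__":
    for _ in range(3):
        subprocess.run([sys.executable, "-c", CHILD], check=True)
```

Run it and the addresses should change for each child process; writing 0 to /proc/sys/kernel/randomize_va_space (as root) disables the randomisation, which is exactly the protection that older platforms may miss out on if nobody is available to port and test it.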

The problem goes beyond this, however, in that it is not just new security features which are an issue. Over the past week, issues were discovered in the popular sudo tool which ships with most Linux systems, and in libgcrypt, a cryptographic library used by some Linux components. The sudo problem was years old, and the libgcrypt one so new that few distributions had taken the updated version. Neither is directly related to the Linux kernel, but we know that bugs – security bugs – can exist in the Linux kernel for many years before being discovered and patched. The ability to create and test these patches across the range of supported hardware depends, yet again, not just on availability of the hardware to test it on, or enthusiastic volunteers with general expertise in the platform, but on security experts willing, able and with the time to do the work.

Security risks to other hardware – and beyond

There is a final – and possibly surprising – point, which is that there may sometimes be occasions when continuing support for old hardware has a negative impact on security for other hardware, even if resources are available to test and implement changes. In order to make improvements to certain features and functionality of the kernel, significant architectural changes are sometimes needed. The best-known example (though not necessarily directly security-related) is the Big Kernel Lock, or BKL, an architectural feature of the Linux kernel until 2.6.39 in 2011, which had been introduced to aid concurrency management, but ended up having significant negative impacts on performance.

In some cases, older hardware may be unable to accept such changes, or, even worse, maintaining support for older hardware may impose such constraints on architectural changes – or require such baroque and complex work-arounds – that it is in the best interests of the broader security of the kernel to drop support. Luckily, the Linux kernel’s modular design means that such cases should be few and far between, but they do need to be taken into consideration.

Conclusion

Some of the arguments I’ve made above apply not only to hardware, but to software as well: people often keep wanting to run software well past its expected support life. The difference with software is that it is often possible to emulate the hardware or software environment on which it is expected to run, typically via virtual machines (VMs). Maintaining these environments is a challenge in itself, but may actually offer a viable alternative to trying to keep old hardware running.

End of Life is an important consideration for hardware and software, and, much as we may enjoy nursing old hardware along, it doesn’t make sense to delay the inevitable – End of Life – beyond a certain point. Exactly when that point arrives will depend on many things, but security considerations should be among them.

Trust in Computing and the Cloud

I wrote a book.

I usually write and post articles first thing in the morning, before starting work, but today is different. For a start, I’m officially on holiday (so definitely not planning to write any code for Enarx, oh no), and second, I decided that today would be the day that I should finish my book, if I could.

Towards the end of 2019, I signed a contract with Wiley to write a book (which, to be honest, I’d already started) on trust. There’s lots of literature out there on human trust, organisational trust and how humans trust each other, but despite a growing interest in concepts such as zero trust, precious little on how computer systems establish and manage trust relationships with each other. I decided it was time to write a book on this, and also on how trust works (or maybe doesn’t) in the Cloud. I gave myself a target of 125,000 words, simply by looking at a couple of books at the same sort of level and then doing some simple arithmetic around words per page and number of pages. I found out later that I’d got it a bit wrong, and this will be quite a long book – but it turns out that the book that needed writing (or that I needed to write, which isn’t quite the same thing) was almost exactly this long, as when I finished around 1430 GMT today, I found that I was at 124,939 words. I have been tracking it, but not writing to the target, given that my editor told me that I had some latitude, but I’m quite amused by how close it was.

Anyway, I emailed my editor on completion, who replied (despite being on holiday), and given that I’m 5 months or so ahead of schedule, he seems happy (I think they prefer early to late).

I don’t have many more words in me today, so I’m going to wrap up here, but I do encourage you to read articles on this blog labelled with “trust”, several of which are edited excerpts from the book. I promise I’ll keep you informed as I get information about publication dates, etc.

Keep safe and have a Merry Christmas and Happy New Year, or whatever you celebrate.

Why physical compromise is game over

Systems have physical embodiments and are actually things we can touch.

This is an edited excerpt from my forthcoming book on Trust in Computing and the Cloud for Wiley.

We spend a lot of our time – certainly I do – worrying about how malicious attackers might break into systems that I’m running, architecting, designing, implementing or testing. And most of that time is spent thinking about logical – that is software-based – attacks. But these systems don’t exist solely in the logical realm: they have physical embodiments, and are actually things we can touch. And we need to worry about that more than we typically do – or, more accurately, more of us need to worry about that.

If a system is compromised in software, it’s possible that you may be able to get it back to a clean state, but there is a general principle in IT security that if an attacker has physical access to a system, then that system should be considered compromised. In trust terms (something I care about a lot, given my professional interests), “if an attacker has physical access to a system, then any assurances around expected actions should be considered reduced or void, thereby requiring a re-evaluation of any related trust relationships”. Tampering (see my previous article on this, Not quantum-safe, not tamper-proof, not secure) is the typical concern when considering physical access, but exactly what an attacker will be able to achieve given physical access will depend on a number of factors, not least the skill of the attacker, the resources available to them and the amount of time they have physical access. Scenarios range from an unskilled person attaching a USB drive to a system in a short-duration Evil Maid[1] attack to long-term access by national intelligence services. But it’s not just running (or provisioned, but not currently running) systems that we need to be worried about: we should extend our scope to those which have yet to be provisioned, or even assembled, and to those which have been decommissioned.

Many discussions in the IT security realm around supply chain concentrate mainly on software, though there are some very high profile concerns that some governments (and organisations with nationally sensitive functions) have around hardware sourced from countries with whom they do not share entirely friendly diplomatic, military or commercial relations. Even this scope is too narrow: there are many opportunities for other types of attackers to attack systems at various points in their life-cycles. Dumpster diving, where attackers look for old computers and hardware which has been thrown out by organisations but not sufficiently cleansed of data, is an old and well-established technique. At the other end of the scale, an attacker who was able to get a job at a debit or credit card personalisation company and was then able to gain information about the cryptographic keys inserted in the bank card magnetic strips or, better yet, chips, might be able to commit fraud which was both extensive and very difficult to track down. None of these attacks require damage to systems, but they do require physical access to systems or the manufacturing systems and processes which are part of the systems’ supply chain.

An exhaustive list and description of physical attacks on systems is beyond the scope of this article (readers are recommended to refer to Ross Anderson’s excellent Security Engineering: A Guide to Building Dependable Distributed Systems for more information on this and many other topics relevant to this blog), but some examples across the range of threats may serve to give an idea of the sorts of issues that may be of concern.

Attack                                       | Level of sophistication | Time required | Defences
USB drive to retrieve data                   | Low                     | Seconds       | Disable USB ports/use software controls
USB drive to add malware to operating system | Low                     | Seconds       | Disable USB ports/use software controls
USB drive to change boot loader              | Medium                  | Minutes       | Change BIOS settings
Attacks on Thunderbolt ports[2]              | Medium                  | Minutes       | Firmware updates; turn off machine when unattended
Probes on buses and RAM                      | High                    | Hours         | Physical protection of machine
Cold boot attack[3]                          | High                    | Minutes       | Physical protection of machine/TPM integration
Chip scraping attacks                        | High                    | Days          | Physical protection of machine
Electron microscope probes                   | High                    | Days          | Physical protection of machine

The extent to which systems are vulnerable to these attacks varies enormously, and it is notable that systems which are deployed at the Edge are particularly vulnerable to some of them, compared to systems in an on-premises data centre or run by a Cloud Service Provider in one of theirs. This is typically either because it is difficult to apply sufficient physical protections to such systems, or because attackers may be able to achieve long-term physical access with little likelihood that their attacks will be discovered, or, if they are, with little danger of attribution to the attackers.

Another interesting point about the majority of the attacks noted above is that they do not involve physical damage to the system, and are therefore unlikely to show tampering unless specific measures are in place to betray them. Providing as much physical protection as possible against some of the more sophisticated and long-term attacks, alongside visual checks for tampering, is the best defence for techniques which can lead to major, low-level compromise of the Trusted Computing Base.


1 – An Evil Maid attack assumes that an attacker has fairly brief access to a hotel room where computer equipment such as a laptop is stored: whilst there, they have unfettered access, but are expected to leave the system looking and behaving in the same way it was before they arrived. This places some bounds on the sorts of attacks available to them, but such attacks are notoriously difficult to defend against.

2 – I wrote an article on one of these: Thunderspy – should I care?

3 – A cold boot attack allows an attacker with access to RAM to access data recently held in memory.

Uninterrupted power for the (CI)A

I’d really prefer not to have to restart every time we have a minor power cut.

Regular readers of my blog will know that I love the CIA triad – confidentiality, integrity and availability – as applied to security. Much of the time, I tend to talk about the first two, and spend little time on the third, but today I thought I’d redress the balance a little by talking about UPSes – Uninterruptible Power Supplies. For anyone who was hoping for a trolling piece on giving extra powers to US Government agencies, it’s time to move along to another blog. And anyone who thinks I’d stoop to a deliberately misleading article title just to draw people into reading the blog … well, you’re here now, so you might as well read on (and you’re welcome to visit other articles of mine such as Do I trust this package?, A cybersecurity tip from Hazzard County, and, of course, Defending our homes).

Years ago, when I was young and had more time on my hands, I ran an email server for my own interest (and accounts). This was fairly soon after we moved to ADSL, so I had an “always-on” connection to the Internet for the first time. I kept it on a pretty basic box behind a pretty basic firewall, and it served email pretty well. Except for when it went *thud*. And the reason it went *thud* was usually because of power fluctuations. We live in a village in the East Anglian (UK) countryside where the electricity supply, though usually OK, does go through periods where it just stops from time to time. Usually for under a minute, admittedly, but from the point of view of most computer systems, even a second of interruption is enough to turn them off. Usually, I could reboot the machine and, after thinking to itself for a while, it would come back up – but sometimes it wouldn’t. It was around this time, if I remember correctly, that I started getting into journalling file systems to reduce the chance of unrecoverable file system errors. Today, such file systems are commonplace: in those days, they weren’t.

Even when the box did come back up, if I was out of the house, or in the office for the day, on holiday, or travelling for business, I had a problem, which was that the machine was now down, and somebody needed to go and physically turn it on if I wanted to be able to access my email. What I needed was a way to provide uninterruptible power to the system if the electricity went off, and so I bought a UPS: an Uninterruptible Power Supply. A UPS is basically a box that sits between your power socket and your computer, and has a big battery in it which will keep your system going for a while in the event of a (short) power failure, plus the appropriate electronics to provide AC power out from the battery. Most will also have some sort of way to communicate with your system such as a USB port, and software which you can install to talk to it so that your system can decide whether or not to shut itself down – when, for instance, the power has been off for long enough that the battery is about to give out. If you’re running a data centre, you’ll typically have lots of UPS boxes keeping your most important servers up while you wait for your back-up generators to kick in, but for my purposes, knowing that my email server would stay up for long enough that it would ride out short power drops, and be able to shut down gracefully if the power was out for longer, was enough: I have no interest in running my own generator.

That old UPS died probably 15 years ago, and I didn’t replace it, as I’d come to my senses and transferred my email accounts to a commercial provider, but over the weekend I bought a new one. I’m running more systems now, some of them are fairly expensive and really don’t like power fluctuations, and there are some which I’d really prefer not to have to restart every time we have a minor power cut. Here’s what I decided I wanted:

  • a product with good open source software support;
  • something which came in under £150;
  • something with enough “juice” (battery power) to tide 2-3 systems over a sub-minute power cut;
  • something with enough juice to keep one low-powered box running for a little longer than that, to allow it to coordinate shutting down the other boxes first, and then take itself down if required;
  • something with enough ports to keep a couple of network switches up while the previous point happened (I thought ahead!);
  • Lithium Ion rather than Lead battery if possible.

I ended up buying an APC BX950UI, which meets all of my requirements apart from the last one: it turns out that only high-end UPS systems currently seem to have moved to Lithium Ion battery technology. There are two apparently well-maintained open source software suites that support APC UPS systems: apcupsd and nut, both of which are available for my Linux distribution of choice (Fedora). As it happens, they both also have support for Windows and Mac, so you can mix and match if needs be. I chose nut, which doesn’t explicitly list my model of UPS, but which supports most of the lower-priced product line with its usbhid-ups driver – I knew that I could move to apcupsd if this didn’t work out, but nut worked fine.
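For the curious, here’s a rough sketch in Python of the kind of logic involved: poll the UPS and shut down cleanly once it is on battery and the battery is low. The UPS name (“myups”), the polling interval and the shutdown command are all assumptions for illustration; in practice nut’s own upsmon daemon does this job properly (including coordinating multiple machines), and that’s what I intend to use.

```python
# Rough sketch only: poll a NUT-managed UPS via the upsc client and power off
# when we are on battery ("OB") and the battery is low ("LB"). In a real
# deployment, NUT's upsmon daemon handles this, including coordinated shutdown.
import subprocess
import time

def ups_status(ups="myups@localhost"):          # UPS name/host is a placeholder
    result = subprocess.run(["upsc", ups, "ups.status"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()                # e.g. "OL", "OB", "OB LB"

while True:
    status = ups_status()
    if "OB" in status and "LB" in status:
        print("UPS battery low - shutting down")
        subprocess.run(["systemctl", "poweroff"])
        break
    time.sleep(30)                              # arbitrary polling interval
```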

Set up wasn’t difficult, but required a little research (and borrowing a particular cable/lead from a kind techie friend…), and I plan to provide details of the steps I took in a separate article or articles to make things easier for people wishing to replicate something close to my set-up. However, I’m currently only using it on one system, so haven’t set up the coordinated shutdown configuration. I’ll wait until I’ve got more of it set up before trying that.

What has this got to do with security? Well, I’m planning to allow VPN access to at least one of the boxes, and I don’t want it suddenly to disappear, leaving a “hole in the network”. I may well move to a central authentication mechanism for this and other home systems (if you’re interested, check out projects such as yubico-pam), and I want the box that provides that service to stay up. I’m also planning some home automation projects where access to systems from outside the network (to view cameras, for instance) will be a pain if things just go down: IoT devices may well just come back up in the event of a power failure, but the machines which coordinate them are less likely to do so.

Security, cost and usability (pick 2)

If we cannot be explicit that there is a trade-off, it’s always security that loses.

Everybody wants security: why wouldn’t you? Let’s role-play: you’re a software engineer on a project to create a security product. There comes a time in the product life-cycle when it’s nearly due, and, as usual, time is tight. So you’re in the regular project meeting and the product manager’s there, so you ask them what they want you to do: should you prioritise security? The product manager is very clear[1]: they will tell you that they want the product as secure as possible – and they’re right, because that’s what customers want. I’ve never spoken to a customer (and I’ve spoken to lots of customers over the years) who said that they’d prefer a product which wasn’t as secure as possible. But there’s a problem, which is that all customers also want their products tomorrow – in fact, most customers want their products today, if not yesterday.

Luckily, products can generally be produced more quickly if more resources are applied (though Frederick Brooks’ The Mythical Man Month tells us that simple application of more engineers is actually likely to have a negative impact), so the requirement for speed of delivery can be translated to cost. There’s another thing that customers want, however, and that is for products to be easy to use: who wants to get a new product and then, when it arrives, for it to take months to integrate or for it to be almost impossible for their employees to run it as they expect?

So, to clarify, customers want a security product to be the following:

  1. secure – security is a strong requirement for many enterprises and organisations[3], and although we shouldn’t ever use the word secure on its own, that’s still what customers want;
  2. cheap – nobody wants to pay more than the minimum they can;
  3. usable – everybody likes simple-to-use, easy-to-integrate applications.

There’s a problem, however, which is that out of the three properties above, you can only choose two for any application or project. You say this to your product manager (who’s always right, remember[1]), and they’ll say: “don’t be ridiculous! I want all three”.

But it just doesn’t work like that: why? Here’s my take on the reasons. Security, simply stated, is designed to stop people doing things. Put from the point of view of a user, security’s job is to reduce usability. “Doing security” is generally about applying controls to actions in a system – whether by users or non-human entities – and the simplest way to apply it is “blanket security”: defaulting to blocking or denying actions. This is sometimes known as failing safe or failing closed.

Let’s take an example: you have a simple internal network in your office and you wish to implement a firewall between your network and the Internet, to stop malicious actors from probing your internal machines and to stop compromised systems on the internal network from communicating out to the Internet. “Easy,” you think, and set up a DENY ALL rule for connections originating outside the firewall, and a DENY ALL rule for connections originating inside the firewall, with the addition of an ALLOW rule for all outgoing port 443 connections to ensure that people can use web browsers to make HTTPS connections. You set up the firewall, and get ready to head home, knowing that your work is done. But then the problems arise:

  • it turns out that some users would like to be able to send email, which requires a different outgoing port number;
  • sending email often goes hand in hand with receiving email, so you need to allow incoming connections to your mail server;
  • one of your printers has been compromised, and is making connections over port 443 to an external botnet;
  • in order to administer the payroll system, your accountant – who is not a full-time employee, and works from home – needs to access your network via a VPN, which requires the ability to accept an incoming connection.

Your “easy” just became more difficult – and it’s going to get more difficult still as more users start encountering what they will see as your attempts to make their day-to-day revenue-generating lives more difficult.
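To make the rule creep concrete, here’s a toy model in Python – purely illustrative, not a real firewall (that would be something like nftables) – of the default-deny policy above, with the exceptions that have already crept in. The port numbers for email and the VPN are just examples.

```python
# Toy model of a default-deny ("fail closed") firewall policy and the exceptions
# that accumulate as legitimate needs appear. Illustrative only - not a firewall.
RULES = [
    # (direction, protocol, port, action)
    ("out", "tcp", 443,  "ALLOW"),  # HTTPS browsing - the original exception
    ("out", "tcp", 587,  "ALLOW"),  # users want to send email (example port)
    ("in",  "tcp", 25,   "ALLOW"),  # mail server needs to receive email
    ("in",  "udp", 1194, "ALLOW"),  # accountant's VPN (example port)
]

def decide(direction, protocol, port):
    """First matching rule wins; anything unmatched is denied by default."""
    for rule in RULES:
        if rule[:3] == (direction, protocol, port):
            return rule[3]
    return "DENY"  # the DENY ALL rules, in both directions

print(decide("in",  "tcp", 22))   # DENY - unsolicited connections stay blocked
print(decide("out", "tcp", 443))  # ALLOW - but so is the compromised printer's
                                  # botnet traffic, which also uses port 443
```

Each new business requirement adds a rule, and some of the rules (such as the blanket port 443 allowance) turn out to permit traffic you never intended: this is exactly the usability/security tension at work.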

This is a very simple scenario, but it’s clear that in order to allow people actually to use a system, you need to spend a lot more time understanding how security will interact with it, and how people’s experience of the measures you put in place will be impacted. Usability and user experience (“UX”) is a complex field on its own, but when you combine it with the extra requirements around security, things become even more tricky.

You need both to manage the requirements of users to whom the security measures should be transparent (“TLS encryption should be on by default”) and those who may need much more control (“developers need to be able to select the TLS cipher suite options when connecting to a vendor’s database”), so you need to understand the different personae[4] you are targeting for your application. You also need to understand the different failure modes, and what the correct behaviour should be: if authentication fails three times in a row, should the medical professional who is trying to get a rush blood test result be locked out of the system, or should the result be provided, and a message sent to an administrator, for example? There will be more decisions to make, based on what your application does, the security policies of your customers, their risk profiles, and more. All of these investigations and decisions take time, and time equates to money. What is more, they also require expertise – in terms both of security and of usability – and that is in itself expensive.
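As a small illustration of that spectrum, here’s a sketch using Python’s ssl module: sensible, transparent defaults for the first persona, explicit control for the second. The cipher string is just an example of the kind of knob an expert might turn, not a recommendation.

```python
# Two personae, one API: secure-by-default for most users, explicit control for
# those who need it. Sketch only - the cipher string is an example, not advice.
import ssl

# Persona 1: "TLS should just be on, with sensible defaults."
default_ctx = ssl.create_default_context()    # certificate checks, modern protocol versions

# Persona 2: "I need to control exactly how we connect to the vendor's database."
expert_ctx = ssl.create_default_context()
expert_ctx.minimum_version = ssl.TLSVersion.TLSv1_2
expert_ctx.set_ciphers("ECDHE+AESGCM")        # explicit cipher selection
```

Designing an interface that keeps the first persona safe without frustrating the second is precisely where the time, money and expertise go.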

So, you have three options:

  1. choose usability and cost – you can prioritise usability and low cost, but you won’t be able to apply security as you might like;
  2. choose security and cost – in this case, you can apply more security to the system, but you need to be aware that usability – and therefore your customer’s acceptance of the system – will suffer;
  3. choose usability and security – I wish this was the one that we chose every time: you decide that you’re willing to wait longer or pay more for a more secure product, which people can use.

I’m not going to pretend that these are easy decisions, nor that they are always clear cut. And a product manager’s job is sometimes to make difficult choices – hopefully ones which can be re-balanced in a later release, but difficult choices nevertheless. It’s really important, however, that anyone involved in security – as an engineer, as a UX expert, as a product manager, as a customer – understands the trade-off here. If we cannot be explicit that there is a trade-off, then the trade-off will be made silently, and in my experience, it’s always security that loses.


1 – and right: product managers are always right[2].

2 – I know: I used to be a product manager.

3 – and the main subject of this blog, so it shouldn’t be a surprise that I’m writing about it.

4 – or personas if you really, really must. I got an “A” in Latin O level, and I’m not letting this one go.

Do I trust this package?

The area of software supply chain management is growing in importance.

This isn’t one of those police dramas where a suspect parcel arrives at the precinct and someone realises just in time that it may be a bomb – what we’re talking about here is open source software packages (though the impact on your application may be similar if you’re not sufficiently suspicious). Open source software is everywhere these days – which is great – but how can you be sure that you should trust the software you’ve downloaded to do what you want? The area of software supply chain management – of which this discussion forms a part – is fairly newly visible in the industry, but is growing in importance. We’re going to consider a particular example.

There’s a huge conversation to be had here about what trust means (see my article “What is trust?” as a starting point, and I have a forthcoming book on Trust in Computing and the Cloud for Wiley), but let’s assume that you have a need for a library which provides some cryptographic protocol implementation. What do you need to know, and what are your choices? We’ll assume, for now, that you’ve already made what is almost certainly the right choice, and gone with an open source implementation (see many of my articles on this blog for why open source is just best for security), and that you don’t want to be building everything from source all the time: you need something stable and maintained. What should be your source of a new package?

Option 1 – use a vendor

There are many vendors out there now who provide open source software through a variety of mechanisms – typically subscription. Red Hat, my employer (see standard disclosure), is one of them. In this case, the vendor will typically stand behind the fitness for use of a particular package, provide patches, etc. This is your easiest and best choice in many cases. There may be times, however, when you want to use a package which is not provided by a vendor, or not packaged by your vendor of choice: what do you do then? Equally, what decisions do vendors need to make about how to trust a package?

Option 2 – delve deeper

This is where things get complex. So complex, in fact, that I’m going to be examining them at some length in my book. For the purposes of this article, though, I’ll try to be brief. We’ll start with the assumption that there is a single maintainer of the package, and multiple contributors. The contributors provide code (and tests and documentation, etc.) to the project, and the maintainer provides builds – binaries/libraries – for you to consume, rather than your taking the source code and compiling it yourself (which is actually what a vendor is likely to do, though they still need to consider most of the points below). This is a library to provide cryptographic capabilities, so it’s fairly safe to assume that we care about its security. There are at least five specific areas which we need to consider in detail, all of them relying on the maintainer to a large degree (I’ve used the example of security here, though very similar considerations exist for almost any package): let’s look at the issues.

  1. build – how is the package that you are consuming created? Is the build process performed on a “clean” (that is, non-compromised) machine, with the appropriate compilers and libraries (there’s a turtles problem here!)? If the binary is created with untrusted tools, then how can we trust it at all? What measures does the maintainer take to ensure the “cleanness” of the build environment? It would be great if the build process were documented as a repeatable build, so that those who want to check it can do so.
  2. integrity – this is related to build, in that we want to be sure that the source code inputs to the build process – the code coming, for instance, from a git repository – are what we expect. If, somehow, compromised code is injected into the build process, then we are in a very bad position. We want to know exactly which version of the source code is being used as the basis for the package we are consuming so that we can track features – and bugs. As above, having a repeatable build is a great bonus here. (A minimal digest-checking sketch appears after this list.)
  3. responsiveness – this is a measure of how responsive – or not – the maintainer is to changes. Generally, we want stable features, tied to known versions, but a quick response to bug and (in particular) security patches. If the maintainer doesn’t accept patches in a timely manner, then we need to worry about the security of our package. We should also be asking questions like, “is there a well-defined security disclosure or vulnerability management process?” (See my article Security disclosure or vulnerability management?), and, if so, “is it followed”?
  4. provenance – all code is not created equal, and one of the things of which a maintainer should be keeping track is the provenance of contributors. If a large amount of code in a part of the package which provides particularly sensitive features is suddenly submitted by an unknown contributor with a pseudonymous email address and no history of contributions of security functionality, this should raise alarm bells. On the other hand, if there is a group of contributors employed by a company with a history of open source contributions and well-reviewed code who submit a large patch, this is probably less troublesome. This is a difficult issue to manage, and there are typically no definite “OK” or “no-go” signs, but the maintainer’s awareness and management of contributors and their contributions is an important point to consider.
  5. expertise – this is the most tricky. You may have a maintainer who is excellent at managing all of the points above, but is just not an expert in certain aspects of the functionality of the contributed code. As a consumer of the package, however, I need to be sure that it is fit for purpose, and that may include, in the case of the security-related package we’re considering, being assured that the correct cryptographic primitives are used, that bounds-checking is enforced on byte streams, that proper key lengths are used or that constant time implementations are provided for particular primitives. This is very, very hard, and the job of maintainer can easily become a full-time one if they are acting as the expert for a large and/or complex project. Indeed, best practice in such cases is to have a team of trusted, experienced experts who work either as co-maintainers or as a senior advisory group for the project. Alternatively, having external people or organisations (such as industry bodies) perform audits of the project at critical junctures – when a major release is due, or when an important vulnerability is patched, for instance – allows the maintainer to share this responsibility. It’s important to note that the project does not become magically “secure” just because it’s open source (see Disbelieving the many eyes hypothesis), but that the community, when it comes together, can significantly improve the assurance that consumers of a project can have in the packages which it produces.
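As promised in the integrity point above, here’s a minimal sketch of the kind of check a consumer (or a vendor) can perform: comparing the digest of a downloaded artefact against the digest published by the maintainer. The filename and the expected digest below are placeholders.

```python
# Minimal integrity check: does the artefact we downloaded match the digest the
# maintainer published? The filename and expected digest are placeholders.
import hashlib

EXPECTED_SHA256 = "<digest published by the maintainer>"

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of("libexample-1.2.3.tar.gz")   # placeholder artefact name
if actual != EXPECTED_SHA256:
    raise SystemExit(f"Digest mismatch ({actual}): do not use this package!")
print("Digest matches the published value.")
```

Of course, this only moves the trust question along: you now need to trust the channel over which the digest (or, better, a signature) was published – which is exactly why the maintainer’s processes matter so much.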

Once we consider these areas, we then need to work out how we measure and track each of them. Who is in a position to judge the extent to which any particular maintainer is fulfilling each of the areas? How much can we trust them? These are complex issues, about which much more needs to be written, but I am passionate about exposing the importance of explicit trust in computing, particularly in open source. There is work going on around open source supply chain management – for instance the new (at time of writing) Project Rekor – but there is lots of work still to be done.

Remember, though: when you take a package – whether library or executable – please consider what you’re consuming, what about it you can trust, and on what assurances that trust is founded.

Security is (only) subjective

What aspects of security does it provide?

This article covers ground covered in more detail within (but is not quite an excerpt from) my forthcoming book on Trust in Computing and the Cloud for Wiley.

In 1985, the US Department of Defense [sic] published the “Orange Book”[1], officially named Trusted Computer System Evaluation Criteria. It was a guide to how to create a “trusted system”, and was hugely influential within the IT and security industry as a whole. Eight years later, in 1993, Dorothy Denning published a devastating critique of the Orange Book called A New Paradigm for Trusted Systems[2]. It is a brilliant step-by-step analysis of why the approach taken by the DoD was fundamentally flawed. Denning starts:

“The current paradigm for trusted computer systems holds that trust is a property of a system. It is a property that can be formally modeled, specified, and verified. It can be designed into a system using a rigorous design methodology.”

Later, she explains why this just doesn’t work in the real world:

“The current paradigm of treating trust as a property is inconsistent with the way trust is actually established in the world. It is not a property, but rather an assessment that is based on experience and shared through networks of people in the world-wide market. It is a declaration made by an observer, rather than a property of the observed.”

Demolishing the idea that trust is an inherent property of a system, and making it relational instead, changed the way that systems designed for security would be considered (and ushered in a new approach by the US Government and associated organisations, known as Common Criteria). Denning was writing about trust, but very similar issues exist around the concept of “security”. Too often, security is considered an inherent or intrinsic property of a system: “it’s secure”, someone will say, or “this fix will secure your computer”. It isn’t, and it won’t.

The first problem with such statements is that it’s not clear what “secure” means. There are a number of properties associated with systems that are relevant to security: three of the ones most often quoted are confidentiality, integrity and availability (which I discuss in more detail in the post The Other CIA: Confidentiality, Integrity and Availability). Specifying which of these you’re interested in removes the temptation just to say that something is “secure”, and if someone says, “it provides security”, we’re now in a position to start asking what that assertion actually means. Which aspects of security does it provide?

I also don’t think it makes sense to say that a system is “confidential” or “available” (there’s no obvious equivalent adjective for integrity – “integral” means something rather different): what we may be able to say is that it exhibits properties associated with confidentiality, integrity and availability, or better, that it has measures associated with it which are designed and intended to provide confidentiality, integrity and availability. These measures can be listed, examined and evaluated, hopefully against well-defined criteria.

This seems like a much better approach: not only have we addressed the suggestion that there is such a thing as “security” that we can apply to a system, but, following Denning, we have also challenged the suggestion that it is inherent to – or in – a system. Instead, we have introduced the alternative approach of describing security-related properties which can be subjected to scrutiny by the users of the system. This allows the type of relational understanding of security that Denning was proposing, but it also raises the possibility of differing parties having different views of the security (or not) of a system, depending on who they are, and how it is going to be used.

It turns out, when you think about it, that this makes a lot of sense. A laptop which provides sufficient confidentiality, integrity and availability protection for the computing needs of my retired uncle may not provide sufficient protections for the uses to which an operative of a government security service might put it[3]. Equally, a system which a telecommunications company runs in a physically protected data centre may well be considered to have appropriate security protections, whereas the same system, attached to a pole somewhere on a residential street, might not. The measures applied to provide the protections associated with the properties (e.g. 128-bit AES encryption for confidentiality) may be objectively specifiable, but the extent to which they provide “security” is not, because they are relative to specific requirements.

One last point, and it’s one which regular readers of my blog will be unsurprised to see: how can you assess the applicability of a system’s security properties to your requirements if it is not open? Open source helps significantly with security. Yes, there are assessment regimes to say that systems meet certain criteria – and sometimes these can be very helpful – but they are generally broad criteria, and difficult to apply to your specific use cases. Equally, most are just a starting point, and many such certified systems will require “exceptions” to be met in order to function in the real world, exceptions which require significant expertise to understand, judge and apply safely (that is, with appropriate levels of risk). If the system you want to use is open, then you, a party who you trust, or the wider community can evaluate the appropriateness of controls and measures, and make an informed decision about whether a system’s security properties are what you need. Without open source, this is impossible.


1 – it had an orange cover.

2 – Denning, Dorothy E. (1993) A New Paradigm for Trusted Systems [online]. Available at: https://www.researchgate.net/publication/234793347_A_New_Paradigm_for_Trusted_Systems [Accessed 3 Apr. 2020]

3 – I’m assuming that my uncle isn’t an operative of a government security service[4].

4 – or at least that his security needs are reduced in retirement[5].

5 – that is, if he has really retired…

Taking some time

I’m going to practice what I preach, and not write.

I’m going to practice what I preach, this week, and not write a full article. I’ve had a stressful and busy few weeks, including needing to spend some extra time with the family (nothing scary or earth-shattering – we just needed some family time), and I think the best thing for me to do today is not spend time writing an article. Let me point you instead at some I’ve written in the past.

On self-care:

On security:

On trust:

Keep safe, and look after yourself, dear reader!

Formal verification … or Ken Thompson?

“You can’t trust code that you did not totally create yourself” – Ken Thompson.

This article is an edited excerpt from my forthcoming book on Trust in Computing and the Cloud for Wiley.

How can we be sure that the code we’re running does what we think it does? One of the answers – or partial answers – to that question is “formal verification.” Formal verification is an important field of study, applying mathematics to computing, and it aims to start with proofs – at best, with an equivalent level of assurance to that of formal mathematical proofs – of the correctness of algorithms to be implemented in code to ensure that they perform the operations expected and set forth in a set of requirements. Though implementation of code can often fall down in the actual instructions created by a developer or set of developers – the programming – mistakes are equally possible at the level of the design of the code to be implemented in the first place, and so verifying the design must be a minimum step before looking at any actual implementations. What is more, these types of mistakes can be all the harder to spot, as even if the developer has introduced no bugs in the work they have done, the implementation will be flawed by virtue of its being incorrectly defined in the first place. It is with an acknowledgement of this type of error, and an intention of reducing or eliminating it, that formal verification starts, but some areas go much further, with methods to examine concrete implementations and make statements about their correctness with regard to the algorithms which they are implementing.
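To give a flavour of what machine-checked verification looks like at its very simplest, here is a toy example in Lean (my own illustration, a long way removed from verifying a real system): we implement a tiny function and prove that it meets its specification for every possible input, not just the inputs a test suite happens to exercise.

```lean
-- Toy flavour of formal verification in Lean 4: define a function, then prove a
-- property of it which the proof checker verifies for *all* natural numbers n.
def double (n : Nat) : Nat := n + n

theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega   -- linear-arithmetic decision procedure closes the goal n + n = 2 * n
```

Scaling this idea up from a one-line function to a cryptographic library or a flight-control system is, of course, where the real difficulty lies.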

Where we can make these work, they are extremely valuable, and the sorts of places where they are applied are exactly those we would expect: systems where security is paramount, and proving the correctness of cryptographic designs and implementations. Another major focus of formal verification is software for safety systems, where the “correct” operation of the system – by which we mean “as designed and expected” – is vital. Examples might include oil refineries, fire suppression systems, nuclear power station management, aircraft flight systems and electrical grid management – unsurprisingly, given the composition of such systems, formal verification of hardware is also an important field of study. The practical application of formal verification methods to software is, however, more limited than we might like. As Alessandro Abate notes in a paper on formal verification of software:

“Two known shortcomings of standard techniques in formal verification are the limited capability to provide system-level assertions, and the scalability to large, complex models.”

To these shortcomings we can add another, extremely significant one: how sure can you be that what you are running is what you think you are running? Surely knowing what you are running is exactly why we write software, look at the source, and then compile it under our control? That, certainly, is the basic starting point for software that we care about.

The problem is arguably one of layers and dependencies, and was outlined by Ken Thompson, one of the founders of modern computing, in the lecture he gave at his acceptance of the Turing Award in 1983. It is short, stands as one of the establishing artefacts of computing security, and has weathered the tests of time: I have no hesitation in recommending that all readers of this blog read it: Reflections on Trusting Trust. In it, he describes how careful placing of malicious code in the standard C compiler could lead to vulnerabilities (his specific example is in account login code) which are not only undetectable by those without access to the source code, but also not removable. The final section of the paper is entitled “Moral”, and Thompson starts with these words:

“The moral is obvious. You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code.”

However, as he goes on to point out, there is nothing special about the compiler:

“I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect.”

It is for the reasons noted by Thompson that open source software – and hardware – is so vital to the field of computer security, and to our task of defining and understanding what “trust” means in the context of computing. Just relying on the “open source-ness” of your code is not enough: there is more work to be done in understanding your stack, the community and your requirements, but without the ability to look at the source code of all the layers of software and hardware on which you are running code, you can have only reduced trust that what you are running is what you think you should be running, whether you have performed formal verification on it or not.


Why Enarx is open

It’s not just our coding that we do in the open.

When Nathaniel McCallum and I embarked on the project which is now called Enarx, we made one decision right at the beginning: the code for Enarx would be open source, a stance fully supported by our employer Red Hat (see standard disclaimer). All of it, and for ever. That’s a decision that we’ve not regretted at any point, and it’s something we stand behind. As soon as we had enough code for a demo, and were ready to show it, we created a repository on GitHub and made it public. There’s a very small exception: some details of upcoming chip features are shared with us under NDA[1], and if we write code for them, publishing that code would be a breach of the NDA. But where this applies (which is rarely), we are absolutely clear with the various vendors that we intend to make the code open as soon as possible, and lobby them to release details as early as they can (which may be earlier than they might prefer), so that more experts can look over both their designs and our code.

Auditability and trust

This brings us to possibly the most important reasons for making Enarx open source: auditability and trust. Enarx is a security-related project, and I believe passionately not only that security should be done in the open, but that if anybody is actually going to trust their sensitive data, algorithms and workloads to a piece of software, then they want to be in a position where as many experts as possible have looked at it, scrutinised it, criticised it and improved it: whether that is the people running the software, their employees, contractors or (even better) the wider security community. The more people who check the code, the happier you should be to trust it. This is important for any piece of security software, but vital for software such as Enarx which is designed to protect your most sensitive workloads.

Bug-catching

There are bugs in Enarx. I know: I’m writing some of the code[2] and I found one yesterday (which I’d put in), just as I was about to give a demo[3]. It is very, very difficult to write perfect code, and we know that if we make our source open, then more people can help us fix issues.

Commonwealth

For Nathaniel and me, open source is an ethical issue, and we make no apologies for that. I think it’s the same for most, if not all, of the team working on Enarx. This includes a number of Red Hat employees (see standard disclaimer), so it shouldn’t come as a surprise, but we have non-Red Hat contributors from a number of backgrounds, and we feel that Enarx should be a Common Good, and contribute to the commonwealth of intellectual property out there.

More brain power

Making something open source doesn’t just make it easier to fix bugs: it can improve the quality of what you produce in general. The more brain power you have to apply to the problem, the better your chances of making something great – assuming that the brain power is applied efficiently (not always an easy task!). We had a design meeting yesterday where one of the participants said towards the end, “I’m sure I could implement some of this, but don’t know a huge amount about this topic, and I’m worried that I’m not contributing to this discussion.” In fact, they had, by asking questions and clarifying some points, and we assured them that we wanted to include experienced, senior developers for their expertise and knowledge, and to pull out assumptions and to validate the design, and not because we expected everybody to be experts in all parts of the project. Having bright people around, involved in design and coding, spreads expertise and knowledge, and helps keep the work from becoming an insulated, isolated “ivory tower” construction, understood by few, and almost impossible to validate.

Not just code

It’s not just our coding that we do in the open. We manage our architecture in the open, our design meetings, our protocol design, our design methodology[4], our documentation, our bug-tracking, our chat, our CI/CD processes: all of it is open. The one exception is our vulnerability management process, which needs to have the opportunity for confidential exposure for a limited time.

We also take diversity seriously, and the project contributors are subject to the Contributor Covenant Code of Conduct.

In short, Enarx is an open project. I’m sure we could do better, and we’ll strive for that, but our underlying principles are that open is good in general, and vital for security. If you agree, please come and visit!


1 – Non-Disclosure Agreement.

2 – to the surprise of many of the team, including myself. At least it’s not in Perl.

3 – I fixed it. Admittedly after the demo.

4 – we’ve just moved to a Sprint pattern – the details of which we designed and agreed in the open.