Sometimes – and I hope this doesn’t come as too much of a surprise to my readers – sometimes, there are bad people, and they do bad things with computers. These bad things are often about stopping the good things that computers are supposed to be doing* from happening properly. This is generally considered not to be what you want to happen**.
For this reason, when we architect and design systems, we often try to enforce isolation between components. I’ve had a couple of very interesting discussions over the past week about how to isolate various processes from each other, using different types of isolation, so I thought it might be interesting to go through some of the different types of isolation that we see out there. For the record, I’m not an expert on all the different types of system, so I’m going to talk some history****, and then I’m going to concentrate on Linux*****, because that’s what I know best.
In the beginning
In the beginning, computers didn’t talk to one another. It was relatively difficult, therefore, for the bad people to do their bad things unless they physically had access to the computers themselves, and even if they did the bad things, the repercussions weren’t very widespread because there was no easy way for them to spread to other computers. This was good.
Much of the conversation below will focus on how individual computers act as hosts for a variety of different processes, so I’m going to refer to individual computers as “hosts” for the purposes of this post. Isolation at this level – host isolation – is still arguably the strongest type available to us. We typically talk about “air-gapping”, where there is literally an air gap – no physical network connection, and no wireless connection either – between one host and another. You might think that this is irrelevant in the modern networking world, but there are classes of usage where it is still very useful, the most obvious being for Certificate Authorities, where the root certificate is so rarely accessed – and so sensitive – that there is good reason not to connect the host on which it is stored to any other computer, and to use other means, such as smart-cards, a printer, or good old pen and paper, to transfer information from it.
And then…
And then came networks. These allow hosts to talk to each other. In fact, by dint of the Internet, pretty much any host can talk to any other host, given a gateway or two. So along came network isolation to try to stop that. Network isolation is basically trying to re-apply host isolation, after people messed it up by allowing hosts to talk to each other******.
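To make that re-applied isolation concrete on a modern Linux host – and I’m getting ahead of my own potted history here, because this borrows the namespaces we’ll meet below – here’s a minimal sketch. It’s illustrative, not production code, and it needs root (or CAP_SYS_ADMIN) to run. Once unshare(2) hands the process a brand-new, empty network stack, it can see only a loopback interface, and even that is down, so it has no way to talk to any other host:

```c
/*
 * Illustrative sketch: network isolation via a fresh network
 * namespace. Assumes modern Linux and glibc; run as root.
 */
#define _GNU_SOURCE
#include <sched.h>   /* unshare(), CLONE_NEWNET */
#include <stdio.h>
#include <net/if.h>  /* if_nameindex() */

static void list_interfaces(const char *label)
{
    struct if_nameindex *ifs = if_nameindex();
    if (ifs == NULL) {
        perror("if_nameindex");
        return;
    }
    printf("%s:\n", label);
    for (struct if_nameindex *i = ifs; i->if_index != 0; i++)
        printf("  %u: %s\n", i->if_index, i->if_name);
    if_freenameindex(ifs);
}

int main(void)
{
    list_interfaces("before unshare");  /* eth0, wlan0, lo, ... */

    /* Give this process its own, empty network stack. */
    if (unshare(CLONE_NEWNET) == -1) {
        perror("unshare(CLONE_NEWNET)");
        return 1;
    }

    list_interfaces("after unshare");   /* just 'lo', and it's down */
    return 0;
}
```

Run it as root and the second listing shrinks to a single, lonely loopback device.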
Later, some smart alec came up with the idea of allowing multiple processes to be on the same host at the same time. The OS and kernel were trusted to keep these separate, but sometimes that wasn’t enough, so then virtualisation came along, to try to convince these different processes that they weren’t actually executing alongside anything else, but had their own environment to do their own thing. Sadly, the bad processes realised this wasn’t always true and found ways to get around this, so hardware virtualisation came along, where the actual chips running the hosts were recruited to try to convince the bad processes that they were all alone in the world. This should work, only a) people don’t always program the chips – or the software running on them – properly, and b) people decided that despite wanting to let these processes run as if they were on separate hosts, they also wanted them to be able to talk to processes which really were on other hosts. This meant that networking isolation needed to be applied not just at the host level, but at the virtual host level, as well*******.
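Just to make the hardware part slightly less abstract: on Linux, those recruited chips are exposed through KVM, and the first thing a hypervisor does is open /dev/kvm and check the API version before building VMs on top of it. Here’s a minimal sketch – it assumes a kernel with KVM enabled, and it only checks that the facility is there; it doesn’t go on to create a VM:

```c
/*
 * Illustrative sketch: checking for hardware virtualisation support
 * via Linux's KVM interface. Assumes a kernel built with KVM.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm == -1) {
        /* No KVM: module not loaded, or no hardware support. */
        perror("open /dev/kvm");
        return 1;
    }

    /* The KVM API version has been stable at 12 since Linux 2.6.22;
     * hypervisors check it before calling KVM_CREATE_VM and friends. */
    printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

    close(kvm);
    return 0;
}
```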
A step backwards?
Now, in a move which may seem retrograde, it occurred to some people that although hardware virtualisation seemed like a great plan, it was also somewhat of a pain to administer, and introduced inefficiencies that they didn’t like: e.g. using up lots of RAM and lots of compute cycles. These were often the same people who were of the opinion that processes ought to be able to talk to each other – what’s the fun in having an Internet if you can’t, well, “net” on it? Now we, as security folks, realise how foolish this sounds – allowing processes to talk to each other just encourages the bad people, right? – but they won the day, and containers came along. Containers allow lots of processes to be run on a host in a lightweight way, and rely on kernel controls – mainly namespaces – to ensure isolation********. In fact, there’s more you can do: you can use techniques like system call trapping to intercept the things that processes are attempting and stop them if they look like the sort of things they shouldn’t be attempting*********.
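Since we’re concentrating on Linux, both of those kernel controls are easy to poke at directly. The sketch below is illustrative only – it assumes modern Linux and glibc, and needs root (or a user namespace) for the unshare(2) call. It puts the process into its own UTS namespace, so it can change its hostname without affecting anybody else, then flips on seccomp’s crude “strict” mode, after which the kernel will kill the process for attempting any system call other than read(2), write(2), _exit(2) or sigreturn(2). Real container runtimes use the far more flexible BPF-based SECCOMP_MODE_FILTER, but the principle is the same:

```c
/*
 * Illustrative sketch of the two kernel controls mentioned above:
 * namespace isolation via unshare(2), and system call trapping via
 * seccomp's "strict" mode. Assumes modern Linux and glibc; the
 * unshare() call needs CAP_SYS_ADMIN (e.g. run as root).
 */
#define _GNU_SOURCE
#include <sched.h>          /* unshare(), CLONE_NEWUTS */
#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>      /* prctl(), PR_SET_SECCOMP */
#include <sys/syscall.h>    /* SYS_exit */
#include <linux/seccomp.h>  /* SECCOMP_MODE_STRICT */

int main(void)
{
    /* Namespaces: give this process a private UTS namespace, so the
     * hostname change below is invisible to the rest of the host. */
    if (unshare(CLONE_NEWUTS) == -1) {
        perror("unshare(CLONE_NEWUTS)");
        return 1;
    }
    sethostname("isolated", 8);

    char name[64];
    gethostname(name, sizeof(name));
    printf("hostname inside the new namespace: %s\n", name);
    fflush(stdout);  /* stdio won't get another chance to flush below */

    /* System call trapping: from here on, the kernel kills the process
     * for any system call except read(2), write(2), _exit(2) and
     * sigreturn(2). */
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) == -1) {
        perror("prctl(PR_SET_SECCOMP)");
        return 1;
    }

    write(STDOUT_FILENO, "write(2) still works\n", 21);
    /* openat(2), and even glibc's _exit() (which calls exit_group(2)),
     * would now be fatal -- so leave via the raw exit(2) system call. */
    syscall(SYS_exit, 0);
}
```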
And, of course, you can write frameworks at the application layer to try to control what the different components of an application system can do – that’s basically the highest layer, and you’re just layering applications on applications at this point.
Systems thinking
So here’s where I get the chance to mention one of my favourite topics: systems. As I’ve said before, by “system” here I don’t mean an individual computer (hence my definition of host, above), but a set of components that work together. The thing about isolation is that it works best when applied to a system.
Let me explain. A system, at least as I’d define it for the purposes of this post, is a set of components that work together but don’t have knowledge of external pieces. Most importantly, they don’t have knowledge of the layers below them. Systems may impose isolation on applications at higher layers, because they provide abstractions which allow higher systems to ignore them, but by virtue of that, systems aren’t – or shouldn’t be – aware of the layers below them.
A simple description of the layers – and it doesn’t always hold, partly because networks are tricky things, and partly because there are various ways to assemble the stack – may look like this:

- Application (top layer)
- Container
- System call trapping
- Kernel
- Hardware virtualisation
- Networking
- Host (bottom layer)
As I intimated above, this is a (gross) simplification, but the point holds that the basic rule is that you can enforce isolation upwards in the layers of the stack, but you can’t enforce it downwards. Lower layer isolation is therefore generally stronger than higher layer isolation. This shouldn’t come as a huge surprise to anyone who’s used to considering network stacks – the principle is the same – but it’s helpful to lay out and explain the principles from time to time, and the implications for when you’re designing and architecting.
Because if you are considering trust models and are defining trust domains, you need to be very, very careful about defining whether – and how – these domains spread across the layer boundaries. If you miss a boundary out when considering trust domains, you’ve almost certainly messed up, and need to start again. Trust domains are important in this sort of conversation because the boundaries between trust domains are typically where you want to be able to enforce and police isolation.
The conversations I’ve had recently basically ran into problems because what people really wanted to do was apply lower layer isolation from layers above which had no knowledge of the bottom layers, and no way to reach into the control plane for those layers. We had to remodel, and I think that we came up with some sensible approaches. It was as I was discussing these approaches that it occurred to me that it would have been a whole lot easier to discuss them if we’d started out with a discussion of layers: hence this blog post. I hope it’s useful.
*although they may well not be, because, as I’m pretty sure I’ve mentioned before on this blog, the people trying to make the computers do the good things quite often get it wrong.
**unless you’re one of the bad people. But I’m pretty sure they don’t read this blog, so we’re OK***.
***if you are a bad person, and you read this blog, would you please mind pretending, just for now, that you’re a good person? Thank you. It’ll help us all sleep much better in our beds.
****which I’m absolutely going to present in an order that suits me, and generally neglect to check properly. Tough.
*****s/Linux/GNU\/Linux/g; Natch.
******for some reason, this seemed like a good idea at the time.
*******for those of you who are paying attention, we’ve got to techniques like VXLAN and SR-IOV.
********kernel purists will try to convince you that there’s no mention of containers in the Linux kernel, and that they “don’t really exist” as a concept. Try downloading the kernel source and doing a search for “container” if you want some ammunition to counter such arguments.
*********this is how SELinux works, for instance.