Learning something new: DNS vulnerability

I had previously been (woefully) unaware of the opportunities for abusing various naming systems

(NOTE: this post deals with a particular company, and though they didn’t sponsor it, I was the grateful recipient of some excellent swag from them at an industry conference, and promised to write an article as thanks!)

A year ago, I visited RSA Conference North America in San Francisco. This was far from my first trip to RSA, which is one of the great (and probably the biggest) global security conferences. There’s a huge exhibitor hall – in fact, several – and many people attend just this, rather than the full conference. I always make a point of having a look at all of the different booths to see if there are any new companies or organisations in the areas that interest me, or to find out about things I was previously unaware of. There are people using all kinds of incentives to try to get you to pay attention to them, from food to magicians, from give-away swag to prizes. I’d been doing a lot of walking around and was tired, and happened upon quite a large booth which had some little seats to sit on. The deal, of course, was that if you sat down, you had to listen to the company pitch – and at the end, they’d do a prize draw and you might win something fun.

As it happened, I’d not heard of the company before and didn’t really have much interest in what they seemed to be talking about – DNS security, it looked like – but I really needed to rest my feet, so I sat down and reminded myself that I had a chance of winning something, even if the subject was as boring as many of the pitches I’ve heard over the years.

It turned out not to be. The company was Infoblox and, to my surprise, I went back several times to find out more about what they do and the research they publish – even after I’d managed to secure one of their prizes. What they do is specialise in an area about which I had previously known almost nothing. On leaving the conference, I promised to write a blog post about what they do, as a gesture of thanks. And I realised as I was preparing to travel to RSA this year (it starts next week, at time of writing) that I’d never fulfilled my promise, and was feeling guilty about it, so this is the post, to assuage my guilt and maybe to prompt you, my dear reader, into finding out more about network security solutions, or what they call DDI (DNS, DHCP and IPAM) management.

Most companies at exhibitions and conferences spend most of their time telling you about their products, but Infoblox took a different approach – which I heartily recommend to anyone in a similar situation. Rather than just pitching their products and services, they presented the research that they do into the various vulnerabilities, bad actors, criminal traffic distribution systems (TDS) and the rest. They had the researchers talking about the work, and made them available after the brief pitch for further questions. Did they mention their products and services? Well, yes, but that wasn’t the main thrust of the presentations. And the presentations were fascinating.

I had previously been (woefully) unaware of the opportunities for abusing the configuration and control of the various naming systems around which our digital lives revolve. I suppose that if I’d thought about it, I might have realised that there would be bad actors messing with these, but the extent to which criminal – and state-sponsored – actors are using these systems shocked me, if only because it’s an area of security that I’d hardly thought about in the 30 or so years that I’ve been in the field. Criminal gangs hijack domains, trick users, redirect traffic and sometimes camp out for years in quiet areas of the Internet, ready to deploy exploits when the rewards seem worthwhile enough. I’ve written over the years about attackers “playing the long game” and biding their time before employing particular techniques or exploiting specific vulnerabilities, but the sheer scale of these networks honestly astounded me. I can’t do justice to this topic, and the very best I can offer is to suggest that you have a look at some of the research that Infoblox provides. They do, of course, also provide services to help you protect your organisation from these threats and to mitigate the risks that you are exposed to, but as I’m not an expert in this particular area, I don’t feel qualified to comment on them: I recommend that you investigate them yourself. All I can say is that if Infoblox do as thorough and expert a job around the services they provide as they do in their research activities, then they’re definitely worth taking seriously.

Photo by Alina Grubnyak on Unsplash.

Web3 plus Confidential Computing

Technologies, when combined, sometimes yield fascinating – and commercially exciting – results.

Mike Bursell

Sponsored by Super Protocol

Introduction

One of the things that I enjoy the most is taking two different technologies, accelerating them towards each other at speed and seeing what comes out when they hit, rather in the style of a particle physicist with the Large Hadron Collider.  Technologies which may not seem to be obvious fits for each other, when combined, sometimes yield fascinating – and commercially exciting – results, and the idea of putting Web3 and Confidential Computing together is certainly one of those occasions.  Like most great ideas, once someone explained it to me, it was an “oh, well, of course that’s going to make sense!” moment, and I’m hoping that this article, which attempts to explain the combination of the two technologies, will give you the same reaction.  I’ll start with an introduction to the two technologies separately, explain why they are interesting from a business context, and then look at what happens when you put them together.  We’ll finish with more of a description of a particular implementation: that of Super Protocol, using the Polygon blockchain.

Business context

Introduction to the technologies 

In this section, we look at blockchain in general and Web3 in particular, followed by a description of the key aspects of Confidential Computing.  If you’re already an expert in either of these technologies, feel free to skip these, of course.

Blockchain

Blockchains offer a way for groups of people to agree about the truth of key aspects of the world.  They let people say: “the information that is part of that blockchain is locked in, and we – the other people who use it and I – believe that it is correct and represents a true version of certain facts.”  This is a powerful capability, but how does it arise?  The key point about a blockchain is that it is immutable.  More specifically, anything that is placed on the blockchain can’t be changed without such a change being obvious to anybody with access to it.  And another key point about many blockchains is that they are public – that is, anybody with access to the Internet and the relevant software is able to access them.  Such blockchains are sometimes called “permissionless”, in juxtaposition to blockchains to which only authorised entities have access, which are known as “permissioned”.  In both cases, the act of putting something on a blockchain is very important: if we want to view blockchains as providing a source of truth about the world, then the ability to put something onto the blockchain is a power that comes with great responsibility.  The various consensus mechanisms employed vary between implementations, but all of them aim for consensus among the parties that are placing their trust in the blockchain: a consensus that what is being represented is correct and valid.  Once such a consensus has been reached, a cryptographic hash is used to seal the latest information and anchor it to previous parts of the blockchain, adding a new block to it.
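The hash-anchoring just described can be sketched in a few lines of code. This is an illustrative toy, not a real blockchain implementation (there is no consensus mechanism here, and the function names are mine): each block records a hash of its predecessor, so tampering with any earlier block is immediately visible.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical JSON encoding of the block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    # Each new block is anchored to the previous one by its hash.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def chain_is_valid(chain: list) -> bool:
    # Every block must record the hash of the block before it.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
assert chain_is_valid(chain)

# Tampering with an earlier block breaks the anchoring and is detectable.
chain[0]["data"] = "Alice pays Bob 500"
assert not chain_is_valid(chain)
```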

While this provides enough for some use cases, the addition of smart contracts provides a new dimension of capabilities.  I’ve noted before that smart contracts aren’t very well named (they’re arguably neither smart nor contracts!), but what they basically allow is for programs and their results to be put on a blockchain.  If I create a smart contract and there’s consensus that it produces deterministic results from known inputs, and it’s put onto the blockchain, then that means that when it’s run, if people can see the inputs – and be assured that the contract was run correctly, a point to which we’ll be returning later in this article – then they will be happy to put the results of that smart contract on the blockchain.  What we’ve just built is a way to create data that is known to be correct and valid, and which we can be happy to put directly on the blockchain without further checking: the blockchain can basically add results to itself!
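The “deterministic results from known inputs” property can be shown with a minimal sketch. The contract and its names here are hypothetical, and real smart contracts run on a blockchain virtual machine rather than as a local function; the point is only that a pure, deterministic program lets every validating party reach the same result independently.

```python
def escrow_release(balance: int, amount: int) -> int:
    # A toy "smart contract": a pure, deterministic function of its inputs.
    if amount < 0 or amount > balance:
        raise ValueError("invalid transfer")
    return balance - amount

inputs = (100, 30)
# Each validating party re-executes the contract on the agreed inputs...
results = {escrow_release(*inputs) for _ in range(3)}
# ...and the result only goes on-chain if every execution agrees.
assert results == {70}
```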

Web3

    Blockchains and smart contracts, on their own, are little more than a diverting combination of cryptography and computing: it’s the use cases that make things interesting.  The first use case that everyone thinks of is crypto-currency, the use of blockchains to create wholly electronic currencies that can be (but don’t have to be) divorced from centralised, government-backed banking systems.  (Parenthetically, the fact that the field of use and study of these crypto-currencies has become known to its enthusiasts as “crypto” drives most experts in the much older and more academic field of cryptology wild.)

    There are other uses of blockchains and smart contracts, however, and the one which occupies our attention here is Web3.  I’m old (I’m not going to give a precise age, but let’s say early-to-mid Gen X, shall we?), so I cut my professional teeth on the technologies that make up what is now known as Web1. Web1 was the world of people running their own websites with fairly simple static pages and CGI interactions with online databases.  Web2 came next and revolves around centralised platforms – often cloud-based – and user-generated data, typically processed and manipulated by large organisations.  While data and information may be generated by users, it’s typically sucked into the platforms owned by these large organisations (banks, social media companies, governments, etc.), and passes almost entirely out of user control.  Web3 is the next iteration, and the big change is that it’s a move to decentralised services, transparency and user control of data.  Web3 is about open protocols – data and information isn’t owned by those processing it: Web3 provides a language of communication and says “let’s start here”.  And Web3 would be impossible without the use of blockchains and smart contracts.

    Confidential Computing

    Confidential Computing is a set of technologies that arose in the mid 2010s, originally to address a number of the problems that people started to realise were associated with cloud computing and Web2.  As organisations moved their applications to the cloud, it followed that the data they were processing also moved there, and this caused issues.  It’s probably safe to say that the first concerns that surfaced were around the organisations’ own data.  Keeping financial data, intellectual property, cryptographic keys and the like safe from prying eyes on servers operated in clouds owned and managed by completely different companies, sometimes in completely different jurisdictions, started to become a worry.  But that worry was compounded by the rising tide of regulation being enacted to protect the data not of the organisations, but of the customers who they (supposedly) served.  This, and the growing reputational damage associated with the loss of private data, required technologies that would allow the safeguarding of sensitive data and applications from the cloud service providers and, in some cases, from the organisations who “owned” – or at least processed – that data themselves.

    Confidential Computing requires two main elements.  The first is a hardware-based Trusted Execution Environment (TEE): a set of capabilities on a chip (typically a CPU or GPU at this point) that can isolate applications and their data from the rest of the system running them, including administrators, the operating system and even the lowest levels of the computer, the kernel itself.  Even someone with physical access to the machine cannot overcome the protection that a TEE provides, except in truly exceptional circumstances.  The second element is remote attestation.  It’s all very well setting up a TEE on a system in, say, a public cloud, but how can you know that it’s actually in place, or even that the application you wanted to load into it is the one that’s actually running?  Remote attestation addresses this problem in a multi-step process.  There are a number of ways to manage this, but the basic idea is that the application in the TEE asks the CPU (which understands how this works) to create a measurement of some or all of the memory in the TEE.  The CPU does this, and signs it with a cryptographic key, creating an attestation measurement.  This measurement is then passed to a different system (hence “remote”), which checks it to see if it conforms to the expectations of the party (or parties) running the application and, if it does, provides a verification confirming that all is well.  This basically allows a certificate to be created that attests to the correctness of the CPU, the validity of the TEE’s configuration and the state of any applications or data within the TEE.
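The attestation flow just described can be sketched as follows. This is a deliberately simplified model: real TEEs sign measurements with an asymmetric key fused into the hardware, and verification relies on the manufacturer’s public key and certificate chain, whereas here a shared HMAC key stands in for both, just to keep the sketch short. All function names are mine.

```python
import hashlib
import hmac

# In real hardware the signing key never leaves the CPU and attestations are
# signed asymmetrically; a shared HMAC key is a simplification for this sketch.
CPU_KEY = b"key-fused-into-the-cpu"

def measure(tee_memory: bytes) -> bytes:
    # The CPU creates a measurement (hash) of the TEE's memory contents.
    return hashlib.sha256(tee_memory).digest()

def cpu_attest(tee_memory: bytes) -> tuple[bytes, bytes]:
    # The CPU signs the measurement, producing an attestation.
    m = measure(tee_memory)
    return m, hmac.new(CPU_KEY, m, hashlib.sha256).digest()

def remote_verify(measurement: bytes, signature: bytes,
                  expected_app: bytes) -> bool:
    # The (remote) verifier checks that the signature is genuine and that the
    # measurement matches what the relying party expected to be loaded.
    genuine = hmac.compare_digest(
        signature, hmac.new(CPU_KEY, measurement, hashlib.sha256).digest())
    return genuine and measurement == measure(expected_app)

app = b"my confidential application image"
m, sig = cpu_attest(app)
assert remote_verify(m, sig, app)                    # all is well
assert not remote_verify(m, sig, b"tampered image")  # wrong application
```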

    With these elements – TEEs and remote attestation – in place, organisations can use Confidential Computing to prove to themselves, their regulators and their customers that no unauthorised peeking or tampering is possible with those sensitive applications and data that need to be protected.

    Combining blockchain & CC

    One thing – possibly the key thing – about Web3 is that it’s decentralised.  That means that anyone can offer to provide services and, most importantly, computing services, to anybody else.  This means that you don’t need to go to one of the big (and expensive) cloud service providers to run your application – you can run a DApp (Decentralised Application) – or a standard application such as a simple container image – on the hardware of anyone willing to host it.  The question, of course, is whether you can trust them with your application and your data; and the answer, of course, in many, if not most, use cases, is “no”.  Cloud service providers may not be entirely worthy of organisations’ trust – hence the need for Confidential Computing – but at least they are publicly identifiable, have reputations and are both shameable and suable.  It’s very difficult to say the same, in a Web3 world, about a provider of computing resources who may be anonymous or pseudonymous and with whom you have never had any interactions before – nor are likely to have any in the future.  And while there is sometimes scepticism about whether independent actors can create complex computational infrastructure, we only need look at the example of Bitcoin and other cryptocurrency miners, who have built computational resources which rival those of even the largest cloud providers.

    Luckily for Web3, it turns out that Confidential Computing, while designed primarily for Web2, has just the properties needed to allow us to build systems that do allow us to do Web3 computing with confidence (I’ll walk through some of the key elements of one such implementation – by Super Protocol – below).  TEEs allow DApps to be isolated from the underlying hardware and system software and remote attestation can provide assurances to clients that everything has been set up correctly (and a number of other properties besides).  

    Open source

    There is one important characteristic that Web3 and Confidential Computing share that is required to ensure the security and transparency that are key to a system that combines them: open source software.  Where software is proprietary and closed from scrutiny (this is the closed from which open source is differentiated), the development of trust in the various components and how they interact is impossible.  Where proprietary software might allow trust in a closed system of actors and clients who already have trust with each other – or external mechanisms to establish it – the same is not true in a system such as Web3, whose very decentralised nature doesn’t allow for such centralised authorities.

    Open source software is not automatically or by its very nature more secure than proprietary software – it is written by humans, after all (for now!) – but its openness and availability to scrutiny means that experts can examine it, check it and, where necessary, fix it.  This allows the open source community and those that interact with it to establish that it is worthy of trust in particular contexts and use cases (see Chapter 9: Open Source and Trust in my book for more details of how this can work).  Confidential Computing – using TEEs and remote attestation – can provide cryptographic assurances not only that the elements of a Web3 system are valid and have appropriate security properties, but also that the components of the TEE itself do as well.

    Some readers may have noted the apparent circularity in this set-up – there are actually two trust relationships that are required for Confidential Computing to work: in the chip manufacturer and in the attestation verification service.  The first of these is unavoidable with current systems, while the other can be managed in part by performing the attestation oneself. It turns out that allowing the creation of trust relationships between mutually un-trusting parties is extremely complex, but one way that this can be done is what we will now address.

    Super Protocol’s approach

    Super Protocol have created a system which uses Confidential Computing to allow the execution of complex applications within a smart contract on the blockchain, and for all the parties in the transaction to have appropriate trust in the performance and result of that execution without having to know or trust each other.  The key layers are:

    • Client Infrastructure, allowing a client to interact with the blockchain, initiate an instance and interact with it
    • Blockchain, including smart contracts 
    • Various providers (TEE, Data, Solution, Storage).

    Central to Super Protocol’s approach are two aspects of the system: that it is open source, and that remote attestation is required to allow the client to have sufficient assurance of the system’s security.  Smart contracts – themselves open source – allow the resources made available by the various actors to be combined into an offer that is placed on the blockchain and is available for anyone with access to the blockchain to execute, given sufficient resources from all involved.  What makes this approach a Web3 approach, and differentiates it from a more Web2 system, is that none of these actors needs to be connected contractually.

    Benefits of This Approach

    How does this approach help?  Well, you don’t need to store or process data (which may be sensitive or just very large) locally: TEEs can handle it, providing confidentiality and integrity assurances that would otherwise be impossible.  And communications between the various applications are also encrypted transparently, reducing or removing the risks of data leakage and exposure without requiring complex key management by users, while keeping the flexibility and openness offered by decentralisation and Confidential Computing.

    But the step change that this opens up is the network effect enabled by the possibility of building huge numbers of interconnected Web3 agents and applications, operating with the benefits of integrity and confidentiality offered by Confidential Computing, and backed up by remote attestation.  One of the recurring criticisms of Web2 ecosystems is their fragility and lack of flexibility (not to mention the problems of securing them in the first place): here we have an opportunity to create complex, flexible and robust ecosystems where decentralised agents and applications can collaborate, with privacy controls designed in and clearly defined security assurances and policies.

    Technical details

    In this section, I dig a little further into some of the technical details of Super Protocol’s system. It is, of course, not the only approach to combining Confidential Computing and Web3, but it is available right now, seems carefully architected and designed with security foremost in mind and provides a good example of the technologies and the complexities involved.  

    You can think of Super Protocol’s service as being in two main parts: on-chain and off-chain. The marketplace, with smart contract offers, sits on an Ethereum blockchain, and the client interacts with that, never needing to know the details of how and where their application instance is running. The actual running applications are off-chain, supported by other infrastructure to allow initial configuration and then communication services between clients and running applications.  The “bridge” between the two parts, which moves from an offer to an actual running instance of the application, is a component called a Trusted Loader, which sets up the various parts of the application and sets it running.  The data it manages contains sensitive information, such as cryptographic keys, which needs to be protected, as it provides security for all the other parts of the system.  The Trusted Loader also manages the important actions of hash verification (ensuring that what is being loaded is what was originally offered) and order integrity (ensuring that no changes can be made while loading is taking place and execution is starting).
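The hash-verification step just mentioned can be sketched very simply (the function names here are mine, not Super Protocol’s, and a real deployment covers rather more than a single byte string): the digest of the application as originally offered is compared against the digest of what is about to be loaded.

```python
import hashlib

def offer_digest(image: bytes) -> str:
    # The digest recorded in the on-chain offer when the application is listed.
    return hashlib.sha256(image).hexdigest()

def loader_hash_check(image: bytes, offered: str) -> bool:
    # Refuse to load anything whose digest doesn't match the original offer.
    return hashlib.sha256(image).hexdigest() == offered

offered = offer_digest(b"container image bytes")
assert loader_hash_check(b"container image bytes", offered)
assert not loader_hash_check(b"modified image bytes", offered)
```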

    Trusted Loader – configuration and deployment with Data and Application information into TEE instance

    But what is actually running?  The answer is that the unit of execution for an application in this service is a Kubernetes Pod, so each application is basically a container image which is run within a Pod, which itself executes within a TEE, isolating it from any unauthorised access. This Pod itself is – of course! – measured, creating an attestation measurement that can now be verified by clients of the application. We should also remember that the application itself – the container image – needs protection as well. This is part of the job of the Trusted Loader, as the container image is stored encrypted, and the Trusted Loader has appropriate keys to decrypt this and other resources required to allow execution.  This is not the only thing that the Trusted Loader does: it also gathers and sets up resources from the smart contract for networking and storage, putting everything together, setting it running and connecting the client to the running instance.

    There isn’t space in this article to go into deeper detail of how the system works, but by combining the capabilities offered by Confidential Computing and a system of cryptographic keys and certificates, the overall system enforces a variety of properties that are vital for sensitive, distributed and decentralised Web3 applications.

    • Decentralised storage: secrets are kept in multiple places instead of one, making them harder to access, steal or leak.
    • Developer independence: creators of applications can’t access these secrets, preserving the lack of any need for trust relationships between the various actors.  In other words, each instance of an application is isolated from its creator, maintaining data confidentiality.
    • Unique secrets: Each application gets its own unique secrets that nobody else can use or see and which are not shared between instances.

    Thanks

    Thanks to Super Protocol for sponsoring this article.  Although they made suggestions and provided assistance around the technical details, this article represents my views, the text is mine and final editorial control (and with it the blame for any mistakes!) rests with the author.

    Photo by Rukma Pratista on Unsplash

    100th video up

    Just six months ago, I started a YouTube channel, What is cybersecurity?, to provide short videos (most are under 4 minutes and all are currently well under 10 minutes) discussing topics and issues in cybersecurity. I’ve spent 25+ years in the field (well before anyone called it “cybersecurity”) and had been wondering how people get into it these days. In particular, I’m aware that not everyone processes information in the same way, and that for many people, short video content is their preferred way of gaining new knowledge. So I decided that this was what I’d do: create short videos, publish frequently and see how it went.

    Today, the 100th video was published: What is data privacy?

    To celebrate this, here’s a post describing various aspects of the process.

    Methodology

    I thought it might be interesting to people to understand how I’ve gone about choosing the topics for videos. When I decided to do this, I created a long list of topics (the initial list was over 150) and realised very early on that I was going to have to start with simple issues and build up to more complicated ones if I wanted to be able to address sophisticated concepts. This meant that I’ve started off with some of the basic building blocks in computing which aren’t specifically security-related, just because I wanted to be able to provide basic starting points for people coming to the field.

    I was slightly concerned when I started that I’d run out of ideas for topics: this hasn’t been a problem, and I don’t expect it to be any time in the future. Currently, with 100 videos published, I have over 250 topics that I want to cover (which I haven’t recorded yet). Whenever I come across a topic or concept, I add it to the list. There are a few books that I mine for ideas, of which the most notable are:

    • Trust in Computer Systems and the Cloud – Mike Bursell (my book!)
    • Security Engineering (3rd edition) – Ross Anderson
    • CISSP Exam Guide (9th edition) – Fernando Maymi, Shon Harris

    As mentioned above, the videos are all short, and, so far, they’re all single-takes, in that each is a single recording, without editing pieces together. That doesn’t mean that I don’t have to re-record quite frequently – I’d say, on average, that 50% of videos require two or more takes to get right.

    Audience

    Who do I expect to be my audience? These are the personae that I’ve targeted to start with:

    • undergraduates reading Computer Science or similar, with an interest in cybersecurity
    • masters students looking to move into cybersecurity
    • computing professionals wanting more information on specific cybersecurity topics
    • managers or professionals in non-computing roles looking for a definition or explanation of a particular term
    • (after looking at UK students) A level students in Computer Science

    Playlists

    YouTube encourages you to create playlists to help people find related topics on your channel. These are the playlists that I currently have (I expect to create more as I get into more complex topics):

    Cybersecurity concepts compared takes two or more topics and draws out the differences (and similarities). There are so many complex topics in cybersecurity which are really close to each other and it’s not always easy to differentiate them.

    Equipment and software

    Here’s the equipment and software I’m using.

    Equipment

    System: AMD Ryzen 9 3900X 12-Core Processor, 32GB RAM

    Camera: Razer Kiyo Pro (though currently I’m trying out a Sony ZV-E10, which provides lovely video, but requires a 175ms audio delay due to USB streaming processing performance)

    Microphone: audio-technica AT2035

    Pre-amp: Art Tube MP-Studio V3

    Software

    Operating system: Fedora 39 Workstation

    Studio: OBS Studio

    Transcription: Buzz

    Audio stripping: ffmpeg and some very light bash scripting

    Thumbnails: Canva
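The ffmpeg “audio stripping” step above amounts to looping over recordings and dropping the video stream before transcription. A rough Python equivalent of that light scripting (the filenames here are hypothetical, and the commands are printed rather than executed):

```python
from pathlib import Path

def strip_audio_cmd(video: Path) -> list[str]:
    # Build an ffmpeg invocation: -vn drops the video stream, leaving audio
    # only, which can then be handed to the transcription tool.
    wav = video.with_suffix(".wav")
    return ["ffmpeg", "-i", str(video), "-vn", str(wav)]

for name in ["recording1.mkv", "recording2.mkv"]:  # hypothetical files
    print(" ".join(strip_audio_cmd(Path(name))))
```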

    Most watched? “Encapsulation”

    “Thank you, I have a test tomorrow and you helped clear things up!”

    As I mentioned in my last article on this blog, I’ve started a YouTube channel called “What is cybersecurity?” aimed at people wanting to get into cybersecurity or looking to understand particular topics for professional reasons (or personal interest). So far, the most popular video is “What is encapsulation?”. I was delighted to get a comment on it from a new subscriber saying “Thank you, I have a test tomorrow and you helped clear things up!”. This is exactly the sort of use to which I’ve been hoping people will put my channel videos.

    Since I launched the channel, I’ve been busy recording lots of content, applying some branding (including thumbnails, which make a huge difference to how professional the content looks), scheduling videos and trying to get my head around the analytics available.

    I have to say that I’m really enjoying it, and I’m going to try to keep around a month’s content ready to go in case I’m unable to record things for a while. In order to get a decent amount of content up and provide an underlying set of information, I’m aiming for around 3 videos a week for now, though that will probably reduce over time.

    For now, I’m concentrating on basic topics around cybersecurity, partly because every time I’m tempted to record something more complex, I realise how many more basic concepts it’s going to rely on. For example, if I want to record something on the CIA triad, then being able to refer to existing content on confidentiality, integrity and availability makes a lot of sense, given that they’re building blocks which it’s helpful to understand before getting your head around what the triad really represents and describes.

    As well as single topic videos, I’m creating “What’s the difference…?” videos comparing two or three similar or related topics. There are so many topics that I remember being confused about, or still am, and have to look up to remind myself. I try to define the topics in separate videos first and then use the “What’s the difference…” video as a comparison – then people can refer to the stand-alone topic videos to get the specifics if they need them.

    So, it’s early days, but I’m enjoying it. If you are interested in this topic or if you know people who might be, please do share the channel with them: it’s https://youtube.com/@trustauthor. Oh, and subscribe! I also want suggestions for topics: please let me know what questions or issues you think I should be covering.

    My YouTube channel: “What is cybersecurity?”

    TL;DR: subscribe to my channel What is cybersecurity?

    I’ve been a little quiet here recently, and that’s a result of a number of events coinciding, including a fair amount of travel (hello Bilbao, hello Shanghai!), but also a decision I made recently to create a YouTube channel. “Are there not enough YouTube channels already?” you might reasonably ask. Well yes, there are lots of them, but I’ve become increasingly aware that there don’t seem to be any which provide short, easy-to-understand videos covering the basics of cybersecurity. I’m a big proponent of encouraging more people into cybersecurity, and that means that there need to be easily-found materials that beginners and those interested in the field can consume, and where they can ask for more information about topics that they don’t yet understand. And that’s what seems to be missing.

    There are so many different concepts to get your head around in cybersecurity, and although I’ve been running this blog for quite a while, many of the articles I write are aimed more at existing practitioners in the field. More important than that, I’m aware that there’s a huge potential audience out there of people who prefer to consume content in video format. And, as any of you who have actually met me in real life, or seen me speak at conferences, will know, I enjoy talking (!) and explaining things to people.

    So my hopes are three-fold:

    1. that even if the channel’s current content is a little basic for you now, as I add more videos, you’ll find material that’s useful and interesting to you;
    2. that you’ll ask questions for me to answer – even if I don’t post a response immediately, I’ll try to get to your topic when it’s appropriate;
    3. that you’ll share the channel widely with those you work with: we need to encourage more people to get involved in cybersecurity.

    So, please subscribe, watch and share: What is cybersecurity? And I’ll try to keep interesting and useful content coming.

    “E2E Encryption and governments” aka “Data loss for beginners”

    This is not just an issue for the UK: if our government gets away with it, so will others.

    I recently wrote an article (E2E encryption in danger (again) – sign the petition) about the UK government’s ridiculous plans to impose backdoors in messaging services, breaking end-to-end encryption. In fact, I seem to have to keep writing articles about how stupid this is:

    You shouldn’t just take my word about how bad an idea this is: pretty much everyone with a clue has something to say about it (and not in a good way), including the EFF.

    One of the arguments that I’ve used before is that data leaks happen. If you create backdoors, you can expect that the capabilities to access those backdoors and the data that you’ve extracted using those backdoors will get out.

    How do we know that this is the case? Because government agencies – including (particularly…?) Law Enforcement Agencies – are always losing sensitive data. And by losing, I don’t just mean having people crack their systems and leak the data, but also simply publishing it by accident.

    “Surely not!” you’re (possibly) saying. “Of all the people we should be trusting to keep sensitive data safe, the police and other LEAs must be the best/safest/most trustworthy?”

    No.

    I’d just like to add a little evidence here. The canonical example is a leak exposed in 2016, in which data about 30,000 DHS and FBI employees was released.

    But that was the US, and nothing like that would happen in the UK, right? I offer you four (or five, depending on how you count) counter-examples, all from the past few months.

    I’m not saying that our police forces are incompetent or corrupt here. But as everyone in the IT security (“cybersecurity”) business knows, attacks and data loss are not a matter of “if”, they are a matter of “when”. And once it’s out, data stays out.

    We must not allow these changes to be pushed through by governments. This is not just an issue for the UK: if our government gets away with it, so will others. Act now.

    Zero trust and Confidential Computing

    Confidential Computing can provide two properties which are excellent starting points for zero/explicit trust.

    I’ve been fairly scathing about “zero trust” before – see, for instance, my articles Thinking beyond “zero-trust” and “Zero-trust”: my love/hate relationship – and my view of how the industry talks about it hasn’t changed much. I still believe, in particular, that:

    1. the original idea, as conceived, has a great deal of merit;
    2. few people really understand what it means;
    3. it’s become an industry bandwagon that is sorely abused by some security companies;
    4. it would be better called “explicit trust”.

    The reason for this last is that it’s impossible to have zero trust: any entity or component has to have some level of trust in the other entities/components with which it interacts. More specifically, it has to maintain trust relationships – and what they look like, how they’re established, evaluated, maintained and destroyed is the core point of discussion of my book Trust in Computer Systems and the Cloud. If you’re interested in a more complete and reasoned criticism of zero trust, you’ll find that in Chapter 5: The Importance of Systems.

    But, as noted above, I’m actually in favour of the original idea of zero trust, and that’s why I wanted to write this article about how zero trust and Confidential Computing, when combined, can actually provide some real value and improvements over standard distributed architectures (particularly in the Cloud).

    An important starting point, however, is to note that I’ll be using this definition of Confidential Computing:

    Confidential Computing is the protection of data in use by performing computation in a hardware-based, attested Trusted Execution Environment.

    Confidential Computing Consortium, https://confidentialcomputing.io/about/

    Confidential Computing, as thus described, can provide two properties which are excellent starting points for components wishing to exercise zero/explicit trust, which we’ll examine individually:

    1. isolation from the host machine/system, particularly in terms of confidentiality of data;
    2. cryptographically verifiable identity.

    Isolation

    One of the main trust relationships that any executing component must establish and maintain is with the system that is providing the execution capabilities – the machine on which it is running (or virtual machine, which presents similar issues). When you say that your component has “zero trust” but it has to trust the host machine on which it runs to maintain the confidentiality of its code and/or data, then you have to accept that you do, in fact, have an enormous trust relationship: with the machine and whoever administers/controls it (and that includes anyone who may have compromised it). This can hardly form the basis for a “zero trust” architecture – but what can be done about it?

    Where Confidential Computing helps is by allowing isolation from the machine which is doing the execution. The component still needs to trust the CPU/firmware that’s providing the execution context – something needs to run the code, after all! – but we can shrink that number of trust relationships required significantly, and provide cryptographic assurances to base this relationship on (see Attestation, below).

    Knowing that a component is isolated from its host gives that component assurances about how it will operate, and allows other components to build trust relationships with it in the knowledge that it is acting with its own agency, rather than under that of a malicious actor.

    Attestation

    Attestation is the mechanism by which an entity can receive assurances that a Confidential Computing component has been correctly set up and can provide the expected properties of data confidentiality and integrity and code integrity (and, in some cases, code confidentiality). These assurances are bound cryptographically to a particular Confidential Computing component (and the Trusted Execution Environment in which it executes), which allows another property to be provided as well: a unique identity. If the attestation service binds this identity cryptographically to the Confidential Computing component by means of, for instance, a standard X.509 certificate, then this can provide one of the bases for trust relationships both to and from the component.
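    The flow described here – an attestation service vouching for a measurement of the loaded code, and a relying party checking that evidence before trusting the component – can be sketched in miniature. This is a toy illustration only: all names are hypothetical, and HMAC (from the Python standard library) stands in for the asymmetric signatures and X.509 certificate chains that real attestation schemes use.

    ```python
    import hashlib
    import hmac

    # Hypothetical shared key: a stand-in for the attestation service's
    # signing key (real schemes use asymmetric keys rooted in CPU hardware).
    ATTESTATION_SERVICE_KEY = b"demo-attestation-key"

    def make_evidence(code_measurement: bytes) -> dict:
        """The attestation service signs a measurement (hash) of the loaded code."""
        sig = hmac.new(ATTESTATION_SERVICE_KEY, code_measurement,
                       hashlib.sha256).hexdigest()
        return {"measurement": code_measurement.hex(), "signature": sig}

    def verify_evidence(evidence: dict, expected_code: bytes) -> bool:
        """A relying party checks the signature AND that the measurement
        matches the code it expects to be talking to."""
        measurement = bytes.fromhex(evidence["measurement"])
        expected_sig = hmac.new(ATTESTATION_SERVICE_KEY, measurement,
                                hashlib.sha256).hexdigest()
        return (hmac.compare_digest(evidence["signature"], expected_sig)
                and measurement == hashlib.sha256(expected_code).digest())

    code = b"my confidential workload binary"
    evidence = make_evidence(hashlib.sha256(code).digest())
    assert verify_evidence(evidence, code)
    assert not verify_evidence(evidence, b"tampered workload")
    ```

    The key point the sketch shows is that trust is anchored in two checks – who signed the evidence, and what was measured – rather than in the host machine on which the component happens to be running.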

    Establishing a “zero trust” relationship

    These properties allow zero (or “explicit”) trust relationships to be established with components that are operating within a Confidential Computing environment, and to do so in ways which have previously been impossible. Using classical computing approaches, any component is at the mercy of the environment within which it is executing, meaning that any trust relationship that is established to it is equally with the environment – that is, the system that is providing its execution environment. This is far from a zero trust relationship, and is also very unlikely to be explicit!

    In a Confidential Computing environment, components can have a small number of trust relationships which are explicitly noted (typically these include the attestation service, the CPU/firmware provider and the provider of the executing code), allowing for a much better-defined trust architecture. It may not be exactly “zero trust”, but it is, at least, heading towards “minimal trust”.
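    One way to read “explicitly noted” trust relationships is that they become enumerable data rather than implicit properties of the deployment. The following hypothetical sketch (none of these names correspond to a real API) records the small set of parties mentioned above and makes the deliberate absence of the host explicit:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class TrustAnchor:
        name: str
        role: str  # e.g. "attestation-service", "cpu-vendor", "code-provider"

    @dataclass
    class ExplicitTrustPolicy:
        """A component's trust relationships, written down as data."""
        anchors: list = field(default_factory=list)

        def trusts(self, name: str) -> bool:
            return any(a.name == name for a in self.anchors)

    # The explicit, minimal set from the text: attestation service,
    # CPU/firmware provider, and the provider of the executing code.
    policy = ExplicitTrustPolicy(anchors=[
        TrustAnchor("example-attestation-svc", "attestation-service"),
        TrustAnchor("cpu-vendor-x", "cpu-vendor"),
        TrustAnchor("acme-builds", "code-provider"),
    ])

    assert policy.trusts("cpu-vendor-x")
    assert not policy.trusts("host-hypervisor")  # the host is deliberately absent
    ```

    Writing the policy down like this is what makes it auditable: anyone can see exactly which relationships exist, and that the host machine is not among them.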

    SF in June: Confidential Computing Summit

    A good selection of business-led and technical sessions

    It should be around 70F/21C in San Francisco on the 29th of June, which is a pretty good reason to pop over to attend the Confidential Computing Summit, which is happening on that day. One of the signs that a technology is getting some real attention in the industry is when conferences start popping up, and Confidential Computing is now at the stage where it has two: OC3 (mainly virtual, Europe-based) and CCS.

    I have to admit to having skin in this game – as Executive Director of the Confidential Computing Consortium, I’ll be presenting a brief keynote – but given the number of excellent speakers who’ll be there, it’s very much worth considering if you have an interest in Confidential Computing (and you should). I’d planned to paste the agenda into this article, but it’s just too large. Here is a list of just some of the sessions and panels, instead.

    • State of the Confidential Computing Market – Raluca Ada Popa, Assoc. Prof. CS, UC Berkeley and co-founder, Opaque Systems
    • Confidential Computing and Zero Trust – Vikas Bhatia, Head of Product, Microsoft Azure Confidential Computing
    • Overcoming Barriers to Confidential Computing as a Universal Platform – John Manferdelli, Office of the CTO, VMware
    • Confidential Computing as a Cornerstone for Cybersecurity Strategies and Compliance – Xochitl Monteon, Chief Privacy Officer and VP Cybersecurity Risk & Governance, Intel
    • Citadel: Side-Channel-Resistant Enclaves on an Open-Source, Speculative, Out-of-Order Processor – Srini Devadas, Webster Professor of EECS, MIT
    • Collaborative Confidential Computing: FHE vs sMPC vs Confidential Computing. Security Models and Real World Use Cases – Bruno Grieder, CTO & Co-Founder, Cosmian
    • Application of Confidential Computing to Anti Money Laundering in Canada – Vishal Gossain, Practice Leader, Risk Analytics and Strategy, Ernst and Young

    As you can tell, there’s a great selection of business-led and technical sessions, so whether you want to delve into the technology or understand the impact of Confidential Computing on business, please come along: I look forward to seeing you there.

    Functional vs non-functional requirements: a dangerous dichotomy?

    Non-functional requirements are at least as important as functional requirements.

    Imagine you’re thinking about an application or a system: how would you describe it? How would you explain what you want it to do? Most of us, I think, would start with statements like:

    • it should read JPEGs and output SVG images;
    • it should buy the stocks I tell it to when they reach a particular price;
    • it should take a customer’s credit history and decide whether to approve a loan application;
    • it should ensure that the car maintains a specific speed unless the driver presses the brakes or disengages the system;
    • it should level me up when I hit 10,000 gold bars mined;
    • it should take a prompt and output several hundred words about a security topic that sound as if I wrote them;
    • it should strike out any text which would give away its non-human status.

    These are all requirements on the system. Specifically, they are functional requirements: they are things that an application or a system should do based on the state of inputs and outputs to which it is exposed.

    Now let’s look at another set of requirements: requirements which are important to the correct operation of the system, but which aren’t core to what it does. These are non-functional requirements, in that they don’t describe the functions it performs, but its broader operation. Here are some examples:

    • it should not leak cryptographic keys if someone performs a side-channel attack on it;
    • it should be able to be deployed on premises or in the Cloud;
    • it should be able to manage 30,000 transactions a second;
    • it should not stop a user’s phone from receiving a phone call while it is running;
    • it should not fail catastrophically, but degrade its performance gracefully under high load;
    • it should be allowed to empty the bank accounts of its human masters;
    • it should recover from unexpected failures, such as its operator switching off the power in a panic on seeing unexpected financial transactions.

    You may notice that some of the non-functional requirements are expressed as negatives – “it should not” – which is fairly common; functional requirements are sometimes expressed in the negative too, but that is rarer.
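    The difference between the two kinds of requirement shows up clearly in how you would test them. A functional requirement reduces to an input/output check; a non-functional one (throughput, say) needs a different kind of measurement altogether. Here is a toy sketch – the loan-approval function is entirely hypothetical, and the threshold is invented for illustration:

    ```python
    import time

    def approve_loan(credit_score: int) -> bool:
        """Toy stand-in for the loan-approval example above."""
        return credit_score >= 650

    # Functional requirement: the right decision for given inputs.
    assert approve_loan(700) is True
    assert approve_loan(500) is False

    # Non-functional requirement: sustain a target rate (a scaled-down nod
    # to the "30,000 transactions a second" example in the list above).
    n = 30_000
    start = time.perf_counter()
    for score in range(n):
        approve_loan(score % 850)
    elapsed = time.perf_counter() - start
    throughput = n / elapsed  # decisions per second
    ```

    The functional checks are cheap to write early and to automate; the throughput check needs realistic load, representative hardware and a stated target – which is part of why non-functional requirements so easily slip down the priority list.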

    So now we come to the important question, and the core of this article: which of the above lists is more important? Is it the list with the functional requirements or the non-functional requirements? I think that there’s a fair case to be made for the latter: the non-functional requirements. Even if that’s not always the case, my (far too) many years of requirements gathering (and requirements meeting) lead me to note that while there may be a core set of functional requirements that typically are very important, it’s very easy for a design, architecture or specification to collect more and more functional requirements which pale into insignificance against some of the non-functional requirements that accrue.

    But the problem is that non-functional requirements are almost always second-class citizens when compared to functional requirements on an application or system. They are often collected after the functional requirements – if at all – and are often the first to be discarded when things get complicated. They also typically require input from people with skill sets outside the context of the application or system: for instance, it may not be obvious to the designer of a back-end banking application that they need to consider data-in-use protection (such as Confidential Computing) when they are collecting requirements for an application which will initially run in an internal data centre.

    Agile and DevOps methodologies are relevant in these contexts, as well. On the one hand, involving the people who will be operating an application or system is likely to focus minds on some of the non-functional requirements which might impact them if they are not considered early enough. On the other hand, however, a model of development where the key performance indicator is having something that runs means that functional requirements are fore-grounded (“yes, you can log in – though we’re not actually checking passwords yet…”).

    What’s the take-away from this article? It’s to consider non-functional requirements as at least as important as functional requirements. Alongside that, it’s vital to be aware that the people in charge of designing, architecting and specifying an application or system may not be best placed to collect all of the broader requirements that are, in fact, core to its safe and continuing (business critical) operation.

    E2E encryption in danger (again) – sign the petition

    You should sign the petition on the official UK government site.

    The UK Government is at it again: trying to require technical changes to products and protocols which will severely impact (read “destroy”) the security of users. This time, it’s the Online Safety Bill and, like pretty much all similar attempts, it requires the implementation of backdoors to things like messaging services. Lots of people have stood up and made the point that this is counter-productive and dangerous – here are a couple of links:

    This isn’t the first time I’ve written about this (The Backdoor Fallacy: explaining it slowly for governments and Helping our governments – differently, for a start), and I fear that it won’t be the last. The problem is that none of these technical approaches work: none of them can work. Privacy and backdoors (and this is a backdoor, make no mistake about it) are fundamentally incompatible. And everyone with an ounce (or gram, I suppose) of technical expertise agrees: we know (and we can prove) that what’s being suggested won’t and can’t work.

    We gain enormous benefits from technology, and with those benefits come risks and downsides which malicious actors exploit. The problem is that you can’t have one without the other. If you try to fix (and this approach won’t fix – it might reduce, but not fix) the problem that malicious actors and criminals use online messaging services, you open up a huge number of opportunities for other actors, including malicious governments (now or in the future), to do very, very bad things, whilst significantly reducing the benefits to private individuals, businesses, human rights organisations, charities and the rest. This is not a zero-sum game.

    What can you do? You can read up about the problem, you can add your voice to the technical discussions and/or if you’re a British citizen or resident, you should sign the petition on the official UK government site. This needs 10,000 signatures, so please get signing!