100th video up

Just six months ago, I started a YouTube channel, What is cybersecurity?, to provide short videos (most are under 4 minutes and all are currently well under 10 minutes) discussing topics and issues in cybersecurity. I’ve spent 25+ years in the field (well before anyone called it “cybersecurity”) and had been wondering how people get into it these days. In particular, I’m aware that not everyone processes information in the same way, and that for many people, short video content is their preferred way of gaining new knowledge. So I decided that this was what I’d do: create short videos, publish frequently and see how it went.

Today, the 100th video was published: What is data privacy?

To celebrate this, here’s a post describing various aspects of the process.

Methodology

I thought it might be interesting to people to understand how I’ve gone about choosing the topics for videos. When I decided to do this, I created a long list of topics (the initial list was over 150) and realised very early on that I was going to have to start with simple issues and build up to more complicated ones if I wanted to be able to address sophisticated concepts. This meant that I’ve started off with some of the basic building blocks in computing which aren’t specifically security-related, just because I wanted to be able to provide basic starting points for people coming to the field.

I was slightly concerned when I started that I’d run out of ideas for topics: this hasn’t been a problem, and I don’t expect it to be any time in the future. Currently, with 100 videos published, I have over 250 topics that I want to cover (which I haven’t recorded yet). Whenever I come across a topic or concept, I add it to the list. There are a few books that I mine for ideas, of which the most notable are:

  • Trust in Computer Systems and the Cloud – Mike Bursell (my book!)
  • Security Engineering (3rd edition) – Ross Anderson
  • CISSP Exam Guide (9th edition) – Fernando Maymi, Shon Harris

As mentioned above, the videos are all short, and, so far, they’re all single-takes, in that each is a single recording, without editing pieces together. That doesn’t mean that I don’t have to re-record quite frequently – I’d say, on average, that 50% of videos require two or more takes to get right.

Audience

Who do I expect to be my audience? These are the personae that I’ve targeted to start with:

  • undergraduates reading Computer Science or similar, with an interest in cybersecurity
  • masters students looking to move into cybersecurity
  • computing professionals wanting more information on specific cybersecurity topics
  • managers or professionals in non-computing roles looking for a definition or explanation of a particular term
  • A level students in Computer Science (a persona I added after considering UK students)

Playlists

YouTube encourages you to create playlists to help people find related topics on your channel. These are the playlists that I currently have (I expect to create more as I get into more complex topics):

Cybersecurity concepts compared takes two or more topics and draws out the differences (and similarities). There are so many complex topics in cybersecurity which are really close to each other and it’s not always easy to differentiate them.

Equipment and software

Here’s the equipment and software I’m using.

Equipment

System: AMD Ryzen 9 3900X 12-Core Processor, 32GB RAM

Camera: Razer Kiyo Pro (though currently I’m trying out a Sony ZV-E10, which provides lovely video, but requires a 175ms audio delay due to USB streaming processing performance)

Microphone: audio-technica AT2035

Pre-amp: Art Tube MP-Studio V3

Software

Operating system: Fedora 39 Workstation

Studio: OBS Studio

Transcription: Buzz

Audio stripping: ffmpeg and some very light bash scripting

Thumbnails: Canva
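The audio-stripping step is simple enough to sketch. Here’s a minimal example of the kind of “very light scripting” involved – the filenames, directory layout and sample rate are illustrative assumptions, not my actual script – which calls ffmpeg to pull a mono WAV out of each recording so it can be fed to a transcription tool such as Buzz:

```python
import pathlib
import subprocess

def audio_target(video: pathlib.Path, out_dir: pathlib.Path) -> pathlib.Path:
    """Derive the output WAV path for a given video file."""
    return out_dir / (video.stem + ".wav")

def extract_audio(video: pathlib.Path, out_dir: pathlib.Path) -> pathlib.Path:
    """Strip the audio track from a recording for transcription."""
    out = audio_target(video, out_dir)
    # -vn drops the video stream; 16 kHz mono is plenty for speech-to-text
    subprocess.run(
        ["ffmpeg", "-loglevel", "error", "-y", "-i", str(video),
         "-vn", "-ar", "16000", "-ac", "1", str(out)],
        check=True,
    )
    return out
```

Looping this over a directory of recordings is a one-liner, which is about as heavy as the scripting needs to get.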

Most watched? “Encapsulation”

“Thank you, I have a test tomorrow and you helped clear things up!”

As I mentioned in my last article on this blog, I’ve started a YouTube channel called “What is cybersecurity?” aimed at people wanting to get into cybersecurity or looking to understand particular topics for professional reasons (or personal interest). So far, the most popular video is “What is encapsulation?”. I was delighted to get a comment on it from a new subscriber saying “Thank you, I have a test tomorrow and you helped clear things up!”. This is exactly the sort of use to which I’ve been hoping people will put my channel videos.

Since I launched the channel, I’ve been busy recording lots of content, applying some branding (including thumbnails, which make a huge difference to how professional the content looks), scheduling videos and trying to get my head around the analytics available.

I have to say that I’m really enjoying it, and I’m going to try to keep around a month’s content ready to go in case I’m unable to record things for a while. In order to get a decent amount of content up and provide an underlying set of information, I’m aiming for around 3 videos a week for now, though that will probably reduce over time.

For now, I’m concentrating on basic topics around cybersecurity, partly because every time I’m tempted to record something more complex, I realise how many more basic concepts it’s going to rely on. For example, if I want to record something on the CIA triad, then being able to refer to existing content on confidentiality, integrity and availability makes a lot of sense, given that they’re building blocks which it’s helpful to understand before getting your head around what the triad really represents and describes.

As well as single topic videos, I’m creating “What’s the difference…?” videos comparing two or three similar or related topics. There are so many topics that I remember being confused about, or still am, and have to look up to remind myself. I try to define the topics in separate videos first and then use the “What’s the difference…” video as a comparison – then people can refer to the stand-alone topic videos to get the specifics if they need them.

So, it’s early days, but I’m enjoying it. If you are interested in this topic or if you know people who might be, please do share the channel with them: it’s https://youtube.com/@trustauthor. Oh, and subscribe! I also want suggestions for topics: please let me know what questions or issues you think I should be covering.

My YouTube channel: “What is cybersecurity?”

TL;DR: subscribe to my channel What is cybersecurity?

I’ve been a little quiet here recently, and that’s a result of a number of events coinciding, including a fair amount of travel (hello Bilbao, hello Shanghai!), but also a decision I made recently to create a YouTube channel. “Are there not enough YouTube channels already?” you might reasonably ask. Well yes, there are lots of them, but I’ve become increasingly aware that there don’t seem to be any which provide short, easy-to-understand videos covering the basics of cybersecurity. I’m a big proponent of encouraging more people into cybersecurity, and that means that there need to be easily-found materials that beginners and those interested in the field can consume, and where they can ask for more information about topics that they don’t yet understand. And that’s what seems to be missing.

There are so many different concepts to get your head around in cybersecurity, and although I’ve been running this blog for quite a while, many of the articles I write are aimed more at existing practitioners in the field. More importantly, I’m aware that there’s a huge potential audience out there of people who prefer to consume content in video format. And, as any of you who have actually met me in real life or seen me speak at conferences will know, I enjoy talking (!) and explaining things to people.

So my hopes are three-fold:

  1. that even if the channel’s current content is a little basic for you now, as I add more videos, you’ll find material that’s useful and interesting to you;
  2. that you’ll ask questions for me to answer – even if I don’t post a response immediately, I’ll try to get to your topic when it’s appropriate;
  3. that you’ll share the channel widely with those you work with: we need to encourage more people to get involved in cybersecurity.

So, please subscribe, watch and share: What is cybersecurity? And I’ll try to keep interesting and useful content coming.

“E2E Encryption and governments” aka “Data loss for beginners”

This is not just an issue for the UK: if our government gets away with it, so will others.

I recently wrote an article (E2E encryption in danger (again) – sign the petition) about the ridiculous plans that the UK government has around wanting to impose backdoors in messaging services, breaking end-to-end encryption. In fact, I seem to have to keep writing articles about how stupid this is:

You shouldn’t just take my word about how bad an idea this is: pretty much everyone with a clue has something to say about it (and not in a good way), including the EFF.

One of the arguments that I’ve used before is that data leaks happen. If you create backdoors, you can expect that the capabilities to access those backdoors and the data that you’ve extracted using those backdoors will get out.

How do we know that this is the case? Because government agencies – including (particularly…?) Law Enforcement Agencies – are always losing sensitive data. And by losing, I don’t just mean having people crack their systems and leak the data, but also just publishing it by accident.

“Surely not!” you’re (possibly) saying. “Of all the people we should be trusting to keep sensitive data safe, the police and other LEAs must be the best/safest/most trustworthy?”

No.

I’d just like to add a little evidence here. The canonical example is a leak exposed in 2016 where data was leaked about 30,000 DHS and FBI employees.

But that was the US, and nothing like that would happen in the UK, right? I offer you four (or five, depending on how you count) counter-examples, all from the past few months.

I’m not saying that our police forces are incompetent or corrupt here. But as everyone in the IT security (“cybersecurity”) business knows, attacks and data loss are not a matter of “if”, they are a matter of “when”. And once it’s out, data stays out.

We must not allow these changes to be pushed through by governments. This is not just an issue for the UK: if our government gets away with it, so will others. Act now.

Zero trust and Confidential Computing

Confidential Computing can provide two properties which are excellent starting points for zero/explicit trust.

I’ve been fairly scathing about “zero trust” before – see, for instance, my articles Thinking beyond “zero-trust” and “Zero-trust”: my love/hate relationship – and my view of how the industry talks about it hasn’t changed much. I still believe, in particular, that:

  1. the original idea, as conceived, has a great deal of merit;
  2. few people really understand what it means;
  3. it’s become an industry bandwagon that is sorely abused by some security companies;
  4. it would be better called “explicit trust”.

The reason for this last point is that it’s impossible to have zero trust: any entity or component has to have some level of trust in the other entities/components with which it interacts. More specifically, it has to maintain trust relationships – and what they look like, how they’re established, evaluated, maintained and destroyed is the core point of discussion of my book Trust in Computer Systems and the Cloud. If you’re interested in a more complete and reasoned criticism of zero trust, you’ll find that in Chapter 5: The Importance of Systems.

But, as noted above, I’m actually in favour of the original idea of zero trust, and that’s why I wanted to write this article about how zero trust and Confidential Computing, when combined, can actually provide some real value and improvements over standard distributed architectures (particularly in the Cloud).

An important starting point, however, is to note that I’ll be using this definition of Confidential Computing:

Confidential Computing is the protection of data in use by performing computation in a hardware-based, attested Trusted Execution Environment.

Confidential Computing Consortium, https://confidentialcomputing.io/about/

Confidential Computing, as thus described, can provide two properties which are excellent starting points for components wishing to exercise zero/explicit trust, which we’ll examine individually:

  1. isolation from the host machine/system, particularly in terms of confidentiality of data;
  2. cryptographically verifiable identity.

Isolation

One of the main trust relationships that any executing component must establish and maintain is with the system that is providing the execution capabilities – the machine on which it is running (or virtual machine – but that presents similar issues). When you say that your component has “zero trust”, but has to trust the host machine on which it is running to maintain the confidentiality of the code and/or data associated with the component, then you have to accept the fact that you do actually have an enormous trust relationship: with the machine and whomever administers/controls it (and that includes anyone who may have compromised it). This can hardly form the basis for a “zero trust” architecture – but what can be done about it?

Where Confidential Computing helps is by allowing isolation from the machine which is doing the execution. The component still needs to trust the CPU/firmware that’s providing the execution context – something needs to run the code, after all! – but we can shrink that number of trust relationships required significantly, and provide cryptographic assurances to base this relationship on (see Attestation, below).

Knowing that a component is isolated from another component allows that component to have assurances about how it will operate and also allows other components to build a trust relationship with that component with the assurance that it is acting with its own agency, rather than under that of a malicious actor.

Attestation

Attestation is the mechanism by which an entity can receive assurances that a Confidential Computing component has been correctly set up and can provide the expected properties of data confidentiality and integrity and code integrity (and, in some cases, code confidentiality). These assurances are bound cryptographically to a particular Confidential Computing component (and the Trusted Execution Environment in which it executes), which allows another property to be provided as well: a unique identity. If the attesting service binds this identity cryptographically to the Confidential Computing component by means of, for instance, a standard X.509 certificate, then this can provide one of the bases for trust relationships both to and from the component.

Establishing a “zero trust” relationship

These properties allow zero (or “explicit”) trust relationships to be established with components that are operating within a Confidential Computing environment, and to do so in ways which have previously been impossible. Using classical computing approaches, any component is at the mercy of the environment within which it is executing, meaning that any trust relationship that is established to it is equally with the environment – that is, the system that is providing its execution environment. This is far from a zero trust relationship, and is also very unlikely to be explicit!

In a Confidential Computing environment, components can have a small number of trust relationships which are explicitly noted (typically these include the attestation service, the CPU/firmware provider and the provider of the executing code), allowing for a much better-defined trust architecture. It may not be exactly “zero trust”, but it is, at least, heading towards “minimal trust”.

SF in June: Confidential Computing Summit

A good selection of business-led and technical sessions

It should be around 70F/21C in San Francisco on the 29th June, which is a pretty good reason to pop over to attend the Confidential Computing Summit, which is happening on that day. One of the signs that a technology is getting some real attention in the industry is when conferences start popping up, and Confidential Computing is now at the stage where it has two: OC3 (mainly virtual, Europe-based) and CCS.

I have to admit to having skin in this game – as Executive Director of the Confidential Computing Consortium, I’ll be presenting a brief keynote – but given the number of excellent speakers who’ll be there, it’s very much worth considering if you have an interest in Confidential Computing (and you should). I’d planned to paste the agenda into this article, but it’s just too large. Here is a list of just some of the sessions and panels, instead.

  • State of the Confidential Computing Market – Raluca Ada Popa, Assoc. Prof CS, UC Berkeley and co-founder Opaque Systems
  • Confidential Computing and Zero Trust – Vikas Bhatia, Head of Product, Microsoft Azure Confidential Computing
  • Overcoming Barriers to Confidential Computing as a Universal Platform – John Manferdelli, Office of the CTO, VMware
  • Confidential Computing as a Cornerstone for Cybersecurity Strategies and Compliance – Xochitl Monteon, Chief Privacy Officer and VP Cybersecurity Risk & Governance, Intel
  • Citadel: Side-Channel-Resistant Enclaves on an Open-Source, Speculative, Out-of-Order Processor – Srini Devadas, Webster Professor of EECS, MIT
  • Collaborative Confidential Computing: FHE vs sMPC vs Confidential Computing. Security Models and Real World Use Cases – Bruno Grieder, CTO & Co-Founder, Cosmian
  • Application of confidential computing to Anti Money Laundering in Canada – Vishal Gossain, Practice Leader, Risk Analytics and Strategy, Ernst and Young

As you can tell, there’s a great selection of business-led and technical sessions, so whether you want to delve into the technology or understand the impact of Confidential Computing on business, please come along: I look forward to seeing you there.

Functional vs non-functional requirements: a dangerous dichotomy?

Non-functional requirements are at least as important as functional requirements.

Imagine you’re thinking about an application or a system: how would you describe it? How would you explain what you want it to do? Most of us, I think, would start with statements like:

  • it should read JPEGs and output SVG images;
  • it should buy the stocks I tell it to when they reach a particular price;
  • it should take a customer’s credit history and decide whether to approve a loan application;
  • it should ensure that the car maintains a specific speed unless the driver presses the brakes or disengages the system;
  • it should level me up when I hit 10,000 gold bars mined;
  • it should take a prompt and output several hundred words about a security topic that sound as if I wrote them;
  • it should strike out any text which would give away its non-human status.

These are all requirements on the system. Specifically, they are functional requirements: they are things that an application or a system should do based on the state of inputs and outputs to which it is exposed.

Now let’s look at another set of requirements: requirements which are important to the correct operation of the system, but which aren’t core to what it does. These are non-functional requirements, in that they don’t describe the functions it performs, but its broader operation. Here are some examples:

  • it should not leak cryptographic keys if someone performs a side-channel attack on it;
  • it should be able to be deployed on premises or in the Cloud;
  • it should be able to manage 30,000 transactions a second;
  • it should not stop a user’s phone from receiving a phone call while it is running;
  • it should not fail catastrophically, but degrade its performance gracefully under high load;
  • it should be allowed to empty the bank accounts of its human masters;
  • it should recover from unexpected failures, such as its operator switching off the power in a panic on seeing unexpected financial transactions.

You may notice that some of the non-functional requirements are expressed as negatives – “it should not”. This is fairly common for non-functional requirements; functional requirements are sometimes expressed in the negative too, but it’s rarer.

So now we come to the important question, and the core of this article: which of the above lists is more important, the functional requirements or the non-functional ones? I think that there’s a fair case to be made for the latter. Even if that’s not always true, my (far too) many years of gathering (and meeting) requirements have taught me that while there is usually a core set of genuinely important functional requirements, it’s very easy for a design, architecture or specification to accumulate more and more functional requirements which pale into insignificance next to some of the non-functional requirements that accrue alongside them.

But the problem is that non-functional requirements are almost always second-class citizens compared to functional requirements on an application or system. They are often collected after the functional requirements – if at all – and are often the first to be discarded when things get complicated. They also typically require input from people with skill sets outside the context of the application or system: for instance, it may not be obvious to the designer of a back-end banking application that they need to consider data-in-use protection (such as Confidential Computing) when they are collecting requirements for an application which will initially run in an internal data centre.

Agile and DevOps methodologies are relevant in these contexts as well. On the one hand, involving the people who will be operating an application or system is likely to focus minds on some of the non-functional requirements which would impact them if not considered early enough. On the other hand, however, a model of development where the key performance indicator is having something that runs means that the functional requirements are fore-grounded (“yes, you can log in – though we’re not actually checking passwords yet…”).

What’s the take-away from this article? It’s to consider non-functional requirements as at least as important as functional requirements. Alongside that, it’s vital to be aware that the people in charge of designing, architecting and specifying an application or system may not be best placed to collect all of the broader requirements that are, in fact, core to its safe and continuing (business critical) operation.

E2E encryption in danger (again) – sign the petition

You should sign the petition on the official UK government site.

The UK Government is at it again: trying to require technical changes to products and protocols which will severely impact (read “destroy”) the security of users. This time, it’s the Online Safety Bill and, like pretty much all similar attempts, it requires the implementation of backdoors to things like messaging services. Lots of people have stood up and made the point that this is counter-productive and dangerous – here are a couple of links:

This isn’t the first time I’ve written about this (The Backdoor Fallacy: explaining it slowly for governments and Helping our governments – differently, for a start), and I fear that it won’t be the last. The problem is that none of these technical approaches work: none of them can work. Privacy and backdoors (and this is a backdoor, make no mistake about it) are fundamentally incompatible. And everyone with an ounce (or gram, I suppose) of technical expertise agrees: we know (and we can prove) that what’s being suggested won’t and can’t work.

We gain enormous benefits from technology, and with those benefits come risks, and some downsides which malicious actors exploit. The problem is that you can’t have one without the other. If you try to fix (and this approach won’t fix – it might reduce, but not fix) the problem that malicious actors and criminals use online messaging services, you open up a huge number of opportunities for other actors, including malicious governments (now or in the future), to do very, very bad things, whilst significantly reducing the benefits to private individuals, businesses, human rights organisations, charities and the rest. This is not a zero-sum game.

What can you do? You can read up about the problem, you can add your voice to the technical discussions and/or if you’re a British citizen or resident, you should sign the petition on the official UK government site. This needs 10,000 signatures, so please get signing!

Executive Director, Confidential Computing Consortium

I look forward to furthering the aims of the CCC

I’m very pleased to announce that I’ve just started a new role as part-time Executive Director for the Confidential Computing Consortium, which is a project of the Linux Foundation. I have been involved from the very earliest days of the consortium, which was founded in 2019, and I’m delighted to be joining as an officer of the project as we move into the next phase of our growth. I look forward to working with existing and future members and helping to expand industry adoption of Confidential Computing.

For those of you who’ve been following what I’ve been up to over the years, this may not be a huge surprise, at least in terms of my involvement, which started right at the beginning of the CCC. In fact, Enarx, the open source project of which I was co-founder, was the very first project to be accepted into the CCC, and Red Hat, where I was Chief Security Architect (in the Office of the CTO) at the time, was one of the founding members. Since then, I’ve served on the Governing Board (twice: once as Red Hat’s representative as a Premier member, and once as an elected representative of the General members), acted as Treasurer, been Co-chair of the Attestation SIG and been extremely active in the Technical Advisory Council. I was instrumental in initiating the creation of the first analyst report into Confidential Computing and helped in the creation of the two technical and one general white papers published by the CCC. I’ve enjoyed working with the brilliant industry leaders who more than ably lead the CCC, many of whom I now count not only as valued colleagues but also as friends.

The position – Executive Director – however, is news. For a while, the CCC has been looking to extend its activities beyond what the current officers of the consortium can manage, given that they have full-time jobs outside the CCC. The consortium has grown to over 40 members now – 8 Premier, 35 General and 8 Associate – and with that comes both the opportunity to engage in a whole new set of activities and the responsibility to listen to the various voices of the membership and to ensure that the consortium’s activities are aligned with the expectations and ambitions of the members. Beyond that, as Confidential Computing becomes more pervasive, it’s time to ensure that (as far as possible) there’s a consistent, crisp and compelling set of messages going out to potential adopters of the technology, as well as academics and regulators.

I plan to be working on the issues above. I’ve only just started and there’s a lot to do – and the role is only part-time! – but I look forward to furthering the aims of the CCC:

The Confidential Computing Consortium is a community focused on projects securing data in use and accelerating the adoption of confidential computing through open collaboration.

The core mission of the CCC

Wish me luck, or, even better, get in touch and get involved yourself.

SVB & finance stress: remember the (other) type of security

Now is the time for more vigilance, not less.

This is the week that the start-up world has been reeling after the collapse of Silicon Valley Bank. There have been lots of articles about it and about how the larger ecosystem (lawyers, VCs, other banks and beyond) has rallied to support those affected, written (on the whole, at least!) by people much better qualified than me to do so. But there’s another point that could get lost in the noise, and that’s the opportunity presented to bad actors by all of this.

When humans are tired, stressed, confused or have too many inputs, they (we – I’ve not succumbed to the lure of ChatGPT yet…) are prone to make poor decisions, or to take less time over decisions – even important decisions – than they ought to. Sadly, bad people know this, and that means that they will be going out of their way to exploit us (I’m very aware that I’m as vulnerable to this type of exploitation as anybody else). The problem is that when banks start looking dodgy, or when money is at stake, people need to do risky things. And these are often risky things which involve an awful lot of money, things like:

  • withdrawing large amounts of money
  • moving large amounts of money between accounts
  • opening new accounts
  • changing administrative access permissions and privileges on accounts
  • adding new people as administrators on accounts.

All of the above are actions (or involve actions) which we would normally be very careful about, and take very seriously (though that doesn’t stop us making the occasional mistake). The problem (and the opportunity for bad actors) is that when we’re stressed or in a hurry (as we’re likely to be in the current situation), we may pay less attention to important steps than we might otherwise. We might not enable multi-factor authentication, we might not check website certificates, we might click-through on seemingly helpful offers in emails to help us out, or we might not check the email addresses to which we’re sending invitations. All of these could lead bad folks to get at our money. They know this, and they’ll be going out of their way to find ways to encourage us to make mistakes, be less careful or hurry our way through vital processes.

My plea, then, is simple: don’t drop your guard because of the stress of the current situation. Now is the time for more vigilance, not less.