Of different types of trust

What is doing the trusting, and what does the word actually even mean?

As you may have noticed if you regularly read this blog, it’s not uncommon for me to talk about trust.  In fact, one of the earliest articles that I posted – over two years ago, now – was entitled “What is trust?”.  I started thinking about this topic seriously nearly twenty years ago, specifically when thinking about peer-to-peer systems, and how they might establish trust relationships, and my interest has continued since, with a particular fillip during my time on the Security Working Group for ETSI NFV[1], where we had some very specific issues that we wanted to explore and I had the opportunity to follow some very interesting lines of thought.  More recently, I introduced Enarx, whose main starting point is that we want to reduce the number of trust relationships that you need to manage when you deploy software.

It was around the time of the Enarx announcement that I realised quite how much of my working life I’ve spent thinking and talking about trust;

  • how rarely most other people seem to have done the same;
  • how little literature there is on the subject; and
  • how interested people often are to talk about it when it comes up in a professional setting.

I’m going to clarify the middle bullet point in a minute, but let me get to my point first, which is this: I want to do a lot more talking about trust via this blog, with the possible intention of writing a book[2] on the subject.

Here’s the problem, though.  When you use the word trust, people think that they know what you mean.  It turns out that they almost never do.  Let’s try to tease out some of the reasons for that by starting with four fairly innocuous, simple-looking statements:

  1. I trust my brother and my sister.
  2. I trust my bank.
  3. My bank trusts its IT systems.
  4. My bank’s IT systems trust each other.

When you make four statements like this, it quickly becomes clear that something different is going on in each case.  I stand by my definition of trust and the three corollaries, as expressed in “What is trust?”.  I’ll restate them here in case you can’t be bothered to follow the link:

  • “Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation.”
  • My first corollary: “Trust is always contextual.”
  • My second corollary: “One of the contexts for trust is always time.”
  • My third corollary: “Trust relationships are not symmetrical.”

These all hold true for each of the statements above – although they may not be self-evident in the rather bald way that I’ve put them.  What’s more germane to the point I want to make today, however, and hopefully obvious to you, dear reader[4], is that the word “trust” signifies something very different in each of the four statements.

  • Case 1 – my trusting my brother and sister.  This is about trust between individual humans – specifically my trust relationship to my brother, and my trust relationship to my sister.
  • Case 2 – my trusting my bank.  This is about trust between an individual and an organisation: specifically a legal entity with particular services and structure.
  • Case 3 – the bank trusting its IT systems.  This is about an organisation trusting IT systems, and it suddenly feels like we’ve moved into a very different place from the initial two cases.  I would argue that there’s a huge difference between the first and second case as well, actually, but we are often lulled into a false sense of equivalence because when we interact with a bank, it’s staffed by people, and also has many of the legal protections afforded to an individual[5]. There are still humans in this case, though, in that one assumes that it is the intention of certain humans who represent the bank to have a trust relationship with certain IT systems.
  • Case 4 – the IT systems trusting each other.  We’re really not in Kansas anymore with this statement[6].  There are no humans involved in this set of trust relationships, unless you’re attributing agency to specific systems, and if so, which? What, then, is doing the trusting, and what does the word actually even mean?

It’s clear, then, that we can’t just apply the same word, “trust” to all of these different contexts and assume that it means the same thing in each case.  We need to differentiate between them.
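
If it helps to make that differentiation concrete, here is a minimal sketch – in Python, with field names of my own invention rather than any formal model – of what a trust relationship looks like if you take the definition and corollaries above literally: the trustor and trustee are named separately (because the relationship is not symmetrical), and the relationship carries an explicit context and time bound.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass(frozen=True)
    class TrustRelationship:
        trustor: str           # the entity holding the assurance (human, organisation, system...)
        trustee: str           # the entity expected to perform the actions
        context: str           # first corollary: trust is always contextual
        expectation: str       # the particular actions expected
        valid_until: datetime  # second corollary: one of the contexts is always time

    # Case 2 and its reverse direction – third corollary: these are two different relationships.
    i_trust_bank = TrustRelationship(
        trustor="me", trustee="my bank",
        context="retail banking", expectation="holds my money and pays it out on request",
        valid_until=datetime(2025, 1, 1),
    )
    bank_trusts_me = TrustRelationship(
        trustor="my bank", trustee="me",
        context="retail banking", expectation="repays borrowed money on schedule",
        valid_until=datetime(2025, 1, 1),
    )
    assert i_trust_bank != bank_trusts_me

Each of the four statements above would populate those fields very differently – which is rather the point.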

I stated, above, that I intended to clarify my statement about the lack of literature around trust.  Actually, there’s lots and lots of literature around trust, but it deals almost exclusively with cases 1 and 2 above.  This is all well and good, but we spend so much time talking about trust with regards to systems (IT or computer systems) that we deserve, as a community, some clarity about what we mean, what assumptions we’re making, and what the ramifications of those assumptions are.

That, then, is my mission.  It’s certainly not going to be the only thing that I write about on this blog, but when I do write about trust, I’m going to try to set out my stall and add some better definition and clarification to what I – and we – are talking about.


0 – apropos of nothing in particular, I often use pixabay for my images.  This is one of the suggestions if you search on “trust”, but what exactly is going on here?  The child is trusting the squirrel thing to do what?  Not eat its nose?  Not stick its claws up its left nostril?  I mean, really?

1 – ETSI is a telco standards body, NFV is “Network Function Virtualisation”.

2 – which probably won’t just consist of a whole bunch of these articles in a random order, with the footnotes taken out[3].

3 – because, if nothing else, you know that I’m bound to keep the footnotes in.

4 – I always hope that there’s actually more than one of you, but maybe it’s just me, the solipsist, writing for a world conjured by my own brain.

5 – or it may do, depending on your jurisdiction.

6 – I think I’ve only been to Kansas once, actually.

Why Chatham House Rulez for security

Security sometimes requires sharing – but not attribution

In June 1927, someone had a brilliant idea.  Or, at least, that’s when the idea was first codified, at a meeting of the Royal Institute of International Affairs at Chatham House in London.  The idea was this: all attendees of the meeting could quote comments made at the meeting, but they weren’t allowed to say who had made the comment.

This became known as the Chatham House Rule, and the most recent incarnation is defined thus:

When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.

This is brilliantly clever.  It allows at least two things:

  1. for the sharing of information which might be sensitive to a particular entity when associated with that entity, but which is still useful when applied without that attribution;
  2. for the sharing of views or opinions which, when associated with a particular person or organisation, might cause wider issues or problems.

The upshot of this is that if somebody (say, Person A) values the expertise, opinion and experience of another person (say, Person B), then they can share that other person’s views with people who may not know Person B, or whose views on Person B may be biased by their background or associations.  This is a form of transitive trust, and situations where transitive trust is made explicit are, in my opinion, to be lauded (such trust relationships are too often implicit, rather than explicit).

The Chatham House Rule and open source

What has this got to do with open source, though?  My answer is: a lot.

Security is one of those areas which can have an interesting relationship with open source.  I’m passionately devoted to the principle that openness is vital to security, but there are times when this is difficult.  The first difficulty is to do with data, and the second is to do with perceived expertise.

Why data is difficult

While we all (hopefully) want to ensure that all our security-related code is open source, the same cannot be said for data.  There is absolutely a place for open data – citizen-related data is the most obvious, e.g. bus timetables, town planning information – and there’s data that we’d like to be more open, but not if it can be traced to particular entities – aggregated health information is great, but people aren’t happy about their personal health records being exposed.  The same goes for financial data – aggregated information about people’s spending and saving habits is extremely useful, but I, for one, don’t want my bank records revealed to all and sundry.

Moving specifically to security, what about data such as the number of cyber-attacks – successful and unsuccessful – against companies?  The types of attack that were most successful?  The techniques that were used to mitigate them?  All of these are vastly useful to the wider community, and there’s a need to share them more widely.  We’re seeing some initiatives to allow this already, and aggregation of this data is really important.

There comes a time, however, when particular examples are needed.  And as soon as you have somebody stand up and say “This is what happened to us”, then they’re likely to be in trouble from a number of directions, which may include: their own organisation, their lawyers, their board, their customers and future attackers, who can use that information to their advantage.  This is where the Chatham House Rule can help: it allows experts to give their view and be listened to without so much danger from the parties listed above.

It also allows for other people to say “we hadn’t thought of that”, or “we’re not ready for that” or similar without putting their organisations – or their reputations – on the line.  Open source needs this, and there are times when those involved in open source security, in particular, need to be able to share the information they know in a way which doesn’t put their organisations in danger.

Why expertise is difficult

Another area of difficulty is expertise, or more specifically, trust in expertise.  Most organisations aim for a meritocratic approach – or say they do – at least within that organisation.  But the world is full of bias, particularly between organisations.  I may be biased against views held or expressed by a particular organisation, just because of their past history and my interactions with that company, but it is quite possible that there are views held and expressed by individuals from that company which, if separated from their attribution, I might take seriously.  I may be biased against a particular person, based on my previous interactions with him/her, or just on my underlying prejudices.  It only needs one person who does not hold my biases to represent those views, as long as they personally trust the organisation, or even just the person, expressing them, to allow me to process and value those views myself, gaining valuable insight from them.  The Chatham House Rule can allow that to happen.

In fact, the same goes for intra-organisation biases: maybe product management isn’t interested in the views of marketing, but what if there are important things to learn from within that department, that product management can’t hear because of that bias?  The Chatham House Rule allows an opportunity to get past that.

To return to open source, many contributors are employed by a particular organisation, and it can be very difficult for them to express opinions around open source when that organisation may not hold the same views, however carefully they try to separate themselves from the official line.  Even more important, in terms of security, it may very well be that they can bring insights which are relevant to a particular security issue which their company is not happy about being publicly known, but which could benefit one or more open source projects.  To be clear: I’m not talking, here, about exposing information which is specifically confidential, but about sharing information, with the permission of the organisation, within specific constraints.

More on open source

There are all sorts of biases within society, and open source is, alas, not without its own.  When a group of people gets to know each other well, however, it is often the case that members of that group can forge a respect for each other which goes beyond gender, age, academic expertise, sexuality, race or the like.  This is a perfect opportunity for meetings under the Chatham House Rule: it gives this group the chance to discuss and form opinions which can be represented to their peers – or the rest of the world – without having to worry so much about any prejudices or biases that might be aimed at particular members.

Finally – a note of caution

The Chatham House Rule provides a great opportunity to share expertise and knowledge, but there is also a danger that it can allow undue weight to be given to anecdotes.  Stories are a great way of imparting information, but without data to back them up, they are not as trustworthy as they might be.  The fact that the Chatham House Rule inhibits external attribution does not mean that due diligence should not be applied within such a meeting to ensure that information is backed up by data.

Announcing Enarx

If I’ve managed the process properly, this article should be posting at almost exactly the time that we show a demo at Red Hat Summit 2019 in Boston.  That demo, to be delivered by my colleague Nathaniel McCallum, will be of an early incarnation of Enarx, a project that a few of us at Red Hat have been working on for a few months now, and which we’re ready to start announcing to the world.  We have code, we have a demo, we have a github repository, we have a logo: what more could a project want?  Well, people – but we’ll get to that.

What’s the problem?

When you run software (a “workload”) on a system (a “host”) on the cloud or on your own premises, there are lots and lots of layers.  You often don’t see those layers, but they’re there.  Here’s an example of the layers that you might see in a standard cloud virtualisation architecture.  The different colours represent different entities that “own”  different layers or sets of layers.

[Diagram: classic cloud virtualisation architecture]

Here’s a similar diagram depicting a standard cloud container architecture.  As before, each different colour represents a different “owner” of a layer or set of layers.

[Diagram: standard cloud container architecture]

These owners may be of very different types, from hardware vendors to OEMs to Cloud Service Providers (CSPs) to middleware vendors to Operating System vendors to application vendors to you, the workload owner.  And for each workload that you run, on each host, the exact list of layers is likely to be different.  And even when they’re the same, the versions of the layer instances may be different, whether it’s a different BIOS version, a different bootloader, a different kernel version or whatever else.

Now, in many contexts, you might not worry about this, as your Cloud Service Provider goes out of its way to abstract these layers and their version details away from you.  But this is a security blog, for security people, and that means that anybody who’s reading this probably does care.

The reason we care is not just the different versions and the different layers, but the number of different things – and different entities – that we need to trust if we’re going to be happy running any sort of sensitive workload on these types of stacks.  I need to trust every single layer, and the owner of every single layer, not only to do what they say they will do, but also not to be compromised.  This is a big stretch when it comes to running my sensitive workloads.
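
To put a rough shape on that, here is a small sketch of what “trust every single layer, and the owner of every single layer” amounts to for one workload.  The layer and owner names are purely illustrative examples of mine, not a definitive list, but the arithmetic is the point.

    # Hypothetical layers beneath a single workload in a virtualisation stack,
    # mapped to the entity that owns each layer.  Names are illustrative only.
    virt_stack = {
        "CPU and firmware":   "silicon vendor",
        "BIOS / bootloader":  "OEM",
        "hypervisor":         "cloud service provider",
        "host kernel":        "cloud service provider",
        "guest kernel":       "OS vendor",
        "middleware/runtime": "middleware vendor",
        "application":        "workload owner (you)",
    }

    trusted_parties = set(virt_stack.values())
    print(f"{len(virt_stack)} layers, {len(trusted_parties)} distinct parties to trust:")
    for party in sorted(trusted_parties):
        print(" -", party)

Swap in a container stack, or a different host, and both the list of layers and the set of owners change – and every one of them has to be trusted not only to behave, but also not to be compromised.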

What’s Enarx?

Enarx is a project which is trying to address this problem of having to trust all of those layers.  We made the decision that we wanted to allow people running workloads to be able to reduce the number of layers – and owners – that they need to trust to the absolute minimum.  We plan to use Trusted Execution Environments (“TEEs” – see Oh, how I love my TEE (or do I?)), and to provide an architecture that looks a little more like this:

[Diagram: reduced architecture]

In a world like this, you have to trust the CPU and firmware, and you need to trust some middleware – of which Enarx is part – but you don’t need to trust all of the other layers, because we will leverage the capabilities of the TEE to ensure the integrity and confidentiality of your application.  The Enarx project will provide attestation of the TEE, so that you know you’re running on a true and trusted TEE, and will provide open source, auditable code to help you trust the layer directly beneath your application.
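
To give a feel for what that attestation step involves, here is a heavily simplified sketch.  The function and field names are hypothetical – this is not the Enarx API – but the shape of the exchange is what matters: check that you are talking to a genuine, hardware-backed TEE running exactly the runtime you expect, before you provision anything sensitive into it.

    import hashlib
    from dataclasses import dataclass

    @dataclass
    class AttestationReport:
        measurement: str    # hash of the code loaded into the TEE
        platform_cert: str  # chain of trust back to the CPU vendor (simplified to a label here)

    def verify_report(report: AttestationReport, expected_measurement: str,
                      trusted_vendor_certs: set) -> bool:
        # 1. Is the report rooted in hardware we recognise? (simplified to set membership)
        if report.platform_cert not in trusted_vendor_certs:
            return False
        # 2. Is the TEE running exactly the runtime we expect?
        return report.measurement == expected_measurement

    # The workload owner's side: only deploy if attestation succeeds.
    expected = hashlib.sha256(b"runtime-build-1234").hexdigest()  # hypothetical known-good value
    report = AttestationReport(measurement=expected, platform_cert="vendor-root-A")

    if verify_report(report, expected, trusted_vendor_certs={"vendor-root-A"}):
        print("TEE attested: provision keys and workload")
    else:
        print("Attestation failed: do not send the workload")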

The initial code is out there – working on AMD’s SEV TEE at the moment – and enough of it works now that we’re ready to tell you about it.

Making sure that your application meets your own security requirements is down to you.  🙂

How do I find out more?

The easiest way is to visit the Enarx GitHub repository: https://github.com/enarx.

We’ll be adding more information there – it’s currently just code – but bear with us: there are only a few of us on the project at the moment. A blog is on the list of things we’d like to have, but I thought I’d start here for now.

We’d love to have people in the community getting involved in the project.  It’s currently quite low-level, and requires quite a lot of knowledge to get running, but we’ll work on that.  You will need some specific hardware to make it work, of course.  Oh, and if you’re an early-boot or low-level KVM hacker, we’re particularly interested in hearing from you.

I will, of course, respond to comments on this article.

 

Security at Red Hat Summit

And a little teaser on my session…

I don’t often talk about my job specifically, but I’m very proud to be employed by Red Hat, working as Chief Security Architect, a role based in the Office of the CTO[1], and sometimes it’s the right time to talk about job-related stuff.  Next week is our annual Summit, and this year it’s in Boston[2], starting on Tuesday, 2019-05-07.  If you’re coming – great!  If you’re thinking about coming – please do!  And if you’re not able to come, then rest assured that many of the sessions will be recorded so that you can watch them in the future[3].

There is going to be a lot going on at Summit this year: including, I suspect, some big announcements[4].  There will also be lots of hands-on sessions, which are always extremely popular, and a number of excellent sessions and other activities around Diversity and Inclusion, a topic about which I’m extremely passionate.  As always, though, security is a big topic at Summit, and there are 50 security topic sessions listed in the agenda[5] (here’s the session catalog[ue]):

  • 26 breakout sessions
  • 11 instructor-led labs
  • 7 mini-sessions
  • 4 birds-of-a-feather sessions (“BOFs”)
  • 2 theatre sessions

These include sessions by partners and customers, as well as by Red Hatters themselves.

Many of my colleagues in OCTO will be presenting sessions in the “Emerging Technology” track, as will I.  My session is entitled “Security: Emerging technologies and open source”, and on Tuesday, at 1545 (3.45pm) I’ll be co-presenting it with my (non-OCTO) colleague Nathaniel McCallum.  The abstract is this:

What are some of the key emerging security technologies, and what impact will they have on the open source world? And what impact could open source have on them?

In this session, we’ll look at a handful of up-and-coming hardware and software technologies—from trusted execution environments to multi-party computation—and discuss the strategic impact we can expect them to have on our world. While individual technologies will be discussed (and you can expect a sneak peek demo of one of them), the focus of this session is not a deep-dive on any of them, but rather an architectural, strategic, and business view.

I’m trying to ensure that when I talk about all of these cool technologies, I talk about why open source is important to them, and/or why they are important to open source.

Here’s the particularly exciting bit, though: what’s not clear from the abstract – as it’s a late addition – is that Nathaniel plans to present a demo.  I can’t go into details at the moment, partly because we’re keeping it as a surprise, and partly because exactly what is demoed will depend on what Nathaniel’s frantic coding manages to achieve before Tuesday afternoon.  It’s one of the early results from a project we’re running, and I can tell you: a) that it involves TEEs (trusted execution environments); and b) that it’s really exciting.  I’m hoping that we can soon make more of a noise about it, and our Summit session is the start of that.

I’m hoping that the description above will be enough to convince you to attend Summit, but in case it isn’t, bear in mind the following:

  1. there will be keynotes from Jim Whitehurst (Red Hat CEO), Satya Nadella (Microsoft CEO) and Ginni Rometty (IBM CEO)
  2. the Summit party will feature Neon Trees[6].

There are lots of other great reasons to come as well, and if you do, please track me down and say hello: it’s always great to meet readers of this blog.  See you in Boston next week!


1 – “OCTO” – which, I guess, makes me one of the Octonauts.

2 – the picture at the top of this article is of Fenway Park, a place in Boston where they play baseball, which is like cricket, only quicker.  And you’re allowed to chuck the ball.

3 – in case, for any crazy reason, you’d like to see me speaking at last year’s Summit, here’s a link to the session: Getting strategic about security

4 – this should not be interpreted as a “forward-looking statement”, as I’m not privy to any particular definite decisions as to any such announcements.  Sorry – legal stuff…

5 – I’m indebted to my colleague Lucy Kerner, who’s organised and documented much of the security pieces, and from whom I have – let’s say “gratefully reused” rather than “stolen” or “copied” – much of the information in this article.

6 – I’ve only just clocked this, and my elder daughter is going to be very, very jealous when she gets back from school to discover this information.

Oh, how I love my TEE (or do I?)

Trusted Execution Environments use chip-level instructions to allow you to create enclaves of higher security

I realised just recently that I’ve not written yet about Trusted Execution Environments (TEEs) on this blog.  This is a surprise, honestly, because TEEs are fascinating, and I spend quite a lot of my professional time thinking – and sometimes worrying – about them.  So what, you may ask, is a TEE?

Let’s look at one of the key use cases first, and then get to what a Trusted Execution Environment is.  A good place to start is the “Cloud”, which, as we all know, is just somebody else’s computer.  What this means is that if you’re running an application (let’s call it a “workload”) in the Cloud – AWS, Azure, whatever – then what you’re doing is trusting somebody else to take the constituent parts of that workload – its code and its data – and run them on their computer.  “Yay”, you may be thinking, “that means that I don’t have to run it in my computer: it’s all good.”  I’m going to take issue with the “all good” bit of that statement.  The problem is that the company – or people within that company – who run your workload on their computer (let’s call it a “host”) can, if they so wish, look inside it, change it, and stop it running.  In other words, they can break all three classic “CIA” properties of security: confidentiality (by looking inside it); integrity (by changing it); and availability (by stopping it running).  This is because the ways that workloads run on hosts – whether in hardware-mediated virtual machines, within containers or on bare-metal – all allow somebody with sufficient privilege on that machine to do all of the bad things I’ve just mentioned.

And these are bad things.  We don’t tend to care about them too much as individuals – because the amount of value a cloud provider would get from bothering to look at our information is low – but as businesses, we really should be worried.

I’m afraid that the problem doesn’t go away if you run your systems internally.  Remember that anybody with sufficient access to hosts can look inside and tamper with your workloads?  Well, are you happy that your sysadmins should all have access to your financial results?  Merger and acquisition details?  Payroll?  Because if you have this kind of data running on your machines on your own premises, then they do have access to all of those.

Now, there are a number of controls that you can put in place to help with this – not least background checks and Acceptable Use Policies – but TEEs aim to solve this problem with technology.  Actually, they only really aim to solve the confidentiality and integrity pieces, so we’ll just have to assume for now that you’re going to be in a position to notice if your sales order process fails to run due to malicious activity (for instance).  Trusted Execution Environments use chip-level instructions to allow you to create enclaves of higher security where processes can execute (and data can be processed) in ways that mean that even privileged users of the host cannot attack their confidentiality or integrity.  To get a little bit technical, these enclaves are memory pages with particular controls on them such that they are always encrypted except when they are actually being processed by the chip.
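
If it helps to picture that, here is a toy model of the idea – purely conceptual, since real TEEs do this with chip-level memory encryption rather than anything you could write in a few lines of Python, and all the names are mine: the host only ever sees encrypted pages, and the plaintext exists only inside the enclave boundary.

    import os

    class ToyEnclave:
        """A conceptual stand-in for an enclave: data at rest is always encrypted."""
        def __init__(self, secret: bytes):
            self._key = os.urandom(len(secret))  # held by the "chip", never by the host
            # What the host (and its privileged users) can see: ciphertext only.
            self.page = bytes(a ^ b for a, b in zip(secret, self._key))

        def run(self, fn):
            # Decryption happens only inside the enclave boundary.
            plaintext = bytes(a ^ b for a, b in zip(self.page, self._key))
            return fn(plaintext)

    enclave = ToyEnclave(b"payroll: top secret")
    print(enclave.page)                      # the host admin's view: ciphertext
    print(enclave.run(lambda p: p.upper()))  # the workload's view, inside the enclave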

The two best-known TEE implementations so far are Intel’s SGX and AMD’s SEV (though other silicon vendors are beginning to talk about their alternatives).  Both Intel and AMD are aiming to put these into server hardware and create an ecosystem around their version to make it easy for people to run workloads (or components of workloads) within them.  And the security community is doing what it normally does (and, to be clear, absolutely should be doing), and looking for vulnerabilities in the implementation.  So far, most of the vulnerabilities that have been identified are within Intel’s SGX – though I’m not in a position to say whether that’s because the design and implementation is weaker, or just because the researchers have concentrated on the market leader in terms of server hardware.  It looks like we need to go through a cycle or two of the technologies before the industry is convinced that we have a working design and implementation that provides the levels of security that are worth deploying.  There’s also work to be done to provide sufficiently high quality open source software and drivers to support TEEs for wide deployment.

Despite the hopes of the silicon vendors, it may be some time before TEEs are in common usage, but people are beginning to sit up and take notice, partly because there’s so much interest in moving workloads to the Cloud, yet there are still serious concerns about the security of your sensitive processes and data when they’re there.  This has got to be a good thing, and I think it’s really worth considering how you might start designing and deploying workloads in new ways once TEEs actually do become commonly available.

Equality in volunteering and open source

Volunteering favours the socially privileged

Volunteering is “in”. Lots of companies – particularly tech companies, it seems – provide incentives to employees to volunteer for charities, NGOs and other “not-for-profits”. These incentives range from donation matching to paid volunteer days to matching hours worked for a charity with a cash donation.

Then there are other types of voluntary work: helping out at a local sports club, mowing a neighbour’s lawn or fetching their groceries, and, of course, open source, which we’ll be looking at in some detail. There are almost countless thousands of projects which could benefit from your time.

Let’s step back first and look at the benefits of volunteering. The most obvious, of course, is the direct benefit to the organisation, group or individual of your time and/or expertise. Then there are the benefits to the wider community. Having people volunteering their time to help out with various groups – particularly those with whom they would have little contact in other circumstances – helps social cohesion and encourages better understanding of differing points of view as you meet people, and not just opinions.

Then there’s the benefit to you. Helping others feels great, looks good on your CV[1], can give you more skills, and can make you friends – quite apart from the benefit I mentioned above about helping you to understand differing points of view. On the issue of open source, it’s something that lots of companies – certainly the sorts of companies with which I’m generally involved – are interested in, or even expect to see on your CV. Your contributions to open source projects are visible – unlike whatever you’ve been doing in most other jobs – and they can be looked over; they show commitment, and are also a way of gauging your enthusiasm, expertise and knowledge in particular areas. All this seems to make lots of sense, and until fairly recently, I was concerned when I was confronted with a CV which didn’t have any open source contributions that I could check.

The inequality of volunteering

And then I did some reading by a feminist open sourcer (I’m afraid that I can’t remember who it was[3]), and did a little more digging, and realised that it’s far from that simple. Volunteering is an activity which favours the socially privileged – whether that’s in terms of income, gender, language or any number of other indicators. That’s particularly true for software and open source volunteering.

Let me explain. We’ll start with the gender issue. On average, you’re much less likely to have spare time to be involved in an open source project if you’re a woman, because women, on average, have more responsibilities in the home, and less free time. They are also globally less likely to have access to computing resources with which to contribute, due to wage discrepancies. Even beyond that, they are less likely to be welcomed into communities and have their contributions valued, whilst being more likely to attract abuse.

If you are in a low income bracket, you are less likely to have time to volunteer, and again, to have access to the resources needed to contribute.

If your first language is not English, you are less likely to be able to find an accepting project, and more likely to receive abuse for not explaining what you are doing.

If your name reflects a particular ethnicity, you may not be made to feel welcome in some contexts online.

If you are not neurotypical (e.g. you have Asperger’s, are on the autism spectrum, or are dyslexic), you may face problems in engaging in the social activities – online and offline – which are important to full participation in many projects.

The list goes on. There are, of course, many welcoming projects and communities that attempt to address all of these issues, and we must encourage that. Some people who are disadvantaged in terms of some of the privilege-types that I’ve noted above may actually find that open source suits them very well, as their lack of privilege can be hidden online in ways in which it could not be in other settings, and that some communities make a special effort to be welcoming and accepting.

However, if we just assume – that’s unconscious bias, folks – that volunteering, and specifically open source volunteering, is a sine qua non for “serious” candidates for roles, or a required foundational expertise for someone we are looking to employ, then we set a dangerous precedent, and run a very real danger of reinforcing privilege, rather than reducing it.

What can we do?

First, we can make our open source projects more welcoming, and be aware of the problems that those from less privileged groups may face. Second, we must be aware, and make our colleagues aware, that a lack of evidence of volunteering is not evidence that the person is not talented, enthusiastic or skilled. Third, and always, we should look for more ways to help those who are less privileged than us to overcome the barriers to accessing not only jobs but also volunteering opportunities which will benefit not only them, but our communities as a whole.


1 – Curriculum vitae[2].

2 – Oh, you wanted the Americanism? It’s “resume” or something similar, but with more accents on it.

3 – a friend reminded me that it might have been this: https://www.ashedryden.com/blog/the-ethics-of-unpaid-labor-and-the-oss-community

Cryptographers arise!

Cryptography is a strange field, in that it’s both concerned with keeping secrets, but also has a long history of being kept secret, as well.  There are famous names from the early days, from Caesar (Julius, that is) to Vigenère, to more recent names like Diffie, Hellman[1], Rivest, Shamir and Adleman.  The trend even more recently has been away from naming cryptographic protocols after their creators, and more to snappy names like Blowfish or less snappy descriptions such as “ECC”.  Although I’m not generally a fan of glorifying individual talent over collective work, this feels like a bit of a pity in some ways.

In fact, over the past 80 years or so, more effort has probably been put into keeping the work of teams in cryptanalysis – the study of breaking cryptography – secret, though there are some famous names from the past like Al-Kindi, Phelippes (or “Phillips”), Rejewski, Turing, Tiltman, Knox and Briggs[2].

Cryptography is difficult.  Actually, let me rephrase that: cryptography is easy to do badly, and difficult to do well.  “Anybody can design a cipher that they can’t break”, goes an old dictum, with the second half of the sentence, “and somebody else can easily break”, being generally left unsaid.  Creation of cryptographic primitives requires significant knowledge of mathematics – some branches of which are well within the grasp of an average high-school student, and some of which are considerably more arcane.  Putting those primitives together in ways that allow you to create interesting protocols for use in the real world doesn’t necessarily require that you understand the full depth of the mathematics of the primitives that you’re using[3], but does require a good grounding in how they should be used, and how they should not be used.  Even then, a wise protocol designer, like a wise cryptographer[4], always gets colleagues and others to review his or her work.  This is one of the reasons that it’s so important that cryptography should be in the public domain, and preferably fully open source.
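
To illustrate the dictum, here is about the simplest possible example – the cipher, plaintext and “crib” are all mine, chosen purely for demonstration: a home-made Caesar shift that may look opaque to its author, broken by brute force in a handful of lines.

    def caesar_encrypt(plaintext: str, key: int) -> str:
        # Shift each letter by `key` places; leave anything else (spaces etc.) alone.
        return "".join(
            chr((ord(c) - ord("a") + key) % 26 + ord("a")) if c.isalpha() else c
            for c in plaintext.lower()
        )

    def brute_force(ciphertext: str, crib: str) -> list:
        # Try all 26 keys and keep any candidate containing a likely word (a "crib").
        candidates = []
        for key in range(26):
            guess = caesar_encrypt(ciphertext, -key)
            if crib in guess:
                candidates.append((key, guess))
        return candidates

    ct = caesar_encrypt("attack at dawn", 17)
    print(brute_force(ct, "attack"))  # recovers the key and the plaintext immediately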

Why am I writing about this?  Well, partly because I think that, on the whole, the work of cryptographers is undervalued.  The work they do is not only very tricky, but also vital.  We need cryptographers and cryptanalysts to be working in the public realm, designing new algorithms and breaking old (and, I suppose, new) ones.  We should be recognising and celebrating their work.  Mathematics is not standing still, and, as I wrote recently, quantum computing is threatening to chip away at our privacy and secrecy.  The other reason that I’m writing about this is that I think we should be proud of our history and heritage, inspired to work on important problems, and to inspire those around us to work on them, too.

Oh, and if you’re interested in the t-shirt, drop me a line or put something in the comments.


1 – I’m good at spelling, really I am, but I need to check the number of ells and ens in his name every single time.

2 – I know that this is heavily Bletchley-centric: it’s an area of history in which I’m particularly interested.  Bletchley was also an important training ground for some very important women in security – something of which we have maybe lost sight.

3 – good thing, too, as I’m not a mathematician, but I have designed the odd protocol here and there.

4 – that is, any cryptographer who recognises the truth of the dictum I quote above.