What’s a secure channel?

Always beware of products and services which call themselves “secure”. Because they’re not.

A friend asked me a couple of months ago what I considered a secure channel, and it made me think. Many of us have information that we wish to communicate and would rather other people couldn’t look at, for all sorts of reasons. These might range from present ideas for a spouse or partner, sent by a friend to my phone, to diplomatic communications about espionage targets, sent between embassies over the Internet, with lots in between: intellectual property discussions, bank transactions and much else. Sometimes, we also want to ensure that people can’t change what’s in the messages we send: it might be OK for other people to know that I pay £300 in rent, but not for them to be able to change the amount (or the bank account into which it goes). These two properties are referred to as confidentiality (keeping information secret) and integrity (keeping information unchangeable), and often you want to combine them – in the case of our espionage plans, I’d prefer that my enemies don’t know which targets are at risk, but also that they can’t change the targets I’ve selected to something less bothersome for them.
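
To make that combination concrete, here’s a minimal sketch of my own (not a recommendation of any particular product) using the AES-GCM authenticated-encryption cipher in Python, via the third-party cryptography package: the same operation that keeps the message secret also makes decryption fail loudly if anyone tampers with it.

```python
# Confidentiality + integrity in one go, using AES-GCM via the
# third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag
import os

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)  # must be unique for every message under a given key

ciphertext = AESGCM(key).encrypt(nonce, b"rent: GBP 300", None)

# Integrity: flip a single bit of the ciphertext and decryption fails.
tampered = bytearray(ciphertext)
tampered[0] ^= 0x01
try:
    AESGCM(key).decrypt(nonce, bytes(tampered), None)
except InvalidTag:
    print("tampering detected - message rejected")
```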

Modern encryption systems generally provide both confidentiality and integrity for messages and data, so I’m going to treat these as standard properties for an encrypted channel. Which means that if I use encryption on a channel, it’s secure, right?

Hmm. Let’s step back a bit, because, unfortunately, there’s rather a lot more to unpack than that. Three of the questions we need to tackle should give us pause. They are: “secure from whom?”, “secure for how long?” and “secure where?”. The answers we give to these questions will be important, and though they are all somewhat intertwined, I’m going to deal with them in order, and I’m going to use the examples of the espionage message and the present ideas to discuss them. I’m also going to talk more about confidentiality than integrity – though we’ll assume that both properties are important to what we mean by “secure”.

Secure from whom?

In our examples, we have very different sets of people wanting to read our messages – a nation state and my spouse. Unless my spouse has access to skills and facilities of which I’m unaware (and I wouldn’t put it past her), the resources that she has at her disposal to try to break the security of my communication are both fewer and less powerful than those of the nation state. A nation state may be able to apply cryptologic attacks to messages, attack the software (and even firmware or hardware) implementations of the encryption system, mess with the amount of entropy available for key generation at either or both ends of the channel, perform interception (e.g. Person-In-The-Middle) attacks, coerce the sender or recipient of the message and more. I’m hoping that most of the above are not options for my wife (though coercion might be, I suppose!). The choices of encryption system – including entropy sources, cipher suite(s) and the hardware and software implementations – are all vital in the diplomatic message case, as are vetting of staff and many other issues. In the case of gift ideas for my wife’s birthday, I’m assuming that a standard implementation of a commercial messaging system should be enough.

Secure for how long?

It’s only a few days till my wife’s birthday (yes, I have got her a present, though that does remind me: I need a card…), so I only have to keep the gift ideas secure for a little longer. It turns out that, in this case, the time sensitivity of the integrity of the message is different to that of the confidentiality: even if she managed to change what the gift idea in the message was, it wouldn’t make a difference to what I’ve got her at this point. However, I’d still prefer it if she didn’t know what the gift ideas were.

In the case of the diplomatic espionage message, we can assume that the confidentiality and the integrity are both important for a much longer time, but we’ll concentrate on the confidentiality. Obviously an attacking country would prefer it if the target were unaware of an attack before it happened, but if the enemy managed to prove an attack was performed by the message sender’s or recipient’s country, even a decade or more in the future, this could also lead to major (and negative) consequences. We want to ensure that whatever steps we take to protect the message are sufficient that access to a copy of the message taken when it was sent (via wire-tapping, for instance) or retrieved at a later date (via access to a message store in the future) is insufficient to allow it to be cracked. This is tricky, and the history of cryptologic attacks on encryption schemes, not to mention human failures (such as leaks) and advances in computation (such as quantum computing), should serve as a strong warning that we need to consider very carefully what mechanisms we should use to protect our messages.

Secure where?

Are the embassies secure? Are all the machines between the embassies secure? Is the message stored before delivery? If so, is it stored on a machine within the embassy or on a server elsewhere? Is it end-to-end encrypted, or is it decrypted before delivery and then re-encrypted (I really, really hope not)? While this is unlikely in the case of diplomatic messages, a good number of commercially sensitive messages (including much email) are not end-to-end encrypted, leading to vulnerabilities if someone trying to break the security can get access to the system where they are stored, or intercept them between decryption and re-encryption.

Typically, we have better control over different parts of the infrastructure which carry or host our communications than we do over others. For most of the article above, I’ve generally assumed that the nation state trying to read embassy messages is going to have more relevant resources to try to breach the security of the message than my wife does, but there’s a significant weakness in protecting my wife’s gift idea: she has easy access to my phone. I tend to keep it locked, and it has a PIN, but, if I’m honest, I don’t tend to go out of my way to keep her out: the PIN is to deter someone who might steal it. Equally, it’s entirely possible that I may be sharing some material (a video or news article) with her at exactly the time that the gift idea message arrives from our mutual friend, leading her to see the notification. In either case, there’s a good chance that the property of confidentiality is not that strong after all.

Conclusion

I’ve said it before, and I plan to say it again (and again, and again): there is no “secure”. When we talk about secure channels, we must be aware that what we mean should be “channels secured with appropriate measures to protect against the risks associated with the security being compromised”. This is a long way of saying “if I’m protecting diplomatic messages, I need to make greater efforts than if I’m trying to stop my wife finding out ahead of time what she’s getting for her birthday”, but it’s important to understand this. Part of the problem is that we’re bombarded with words like “secure”, which are unqualified, and may lead us to think that they’re absolute, when they’re absolutely not. Another part of the problem is the assumption that, once we’ve put one type of security in place – particularly when it’s sold or marketed as “best in breed” or “best practice” – it addresses all of the issues we might have. This is clearly not the case – using the strongest encryption possible for messages between my friend and me isn’t going to stop my wife from knowing what I’ve bought her if she knows the PIN for my phone. Please, please, consider what you need when you’re protecting your communications (and other data, of course), and always beware of products and services which call themselves “secure”. Because they’re not.

Vint Cerf’s “game changer”

I’m really proud to be involved with a movement which I believe can change the way we do computing.

Today’s article is a little self-indulgent, but please bear with me, as I’m a little excited. Vint Cerf is one of a small handful of people who have a claim to being called “greats”. He co-developed the TCP/IP protocols with Bob Kahn in 1974, and has been working on technology – much of it pretty cool technology – since then. I turned 50 recently, and if I’d achieved half of what he had by his 50th birthday, I’d be feeling more accomplished than I do right now! As well as his work in technology, he’s also an advocate for accessibility, something which is also dear to my heart.

What does this have to do with Alice, Eve and Bob – a security blog? Well, last week, Dark Reading[1], an influential technology security site, published a commentary piece by Cerf under its “Cloud” heading: Why Confidential Computing is a Game Changer. I could hardly have been more pleased: this is an area which I’m very excited about, and which the Enarx project, of which I’m co-founder, addresses. The Enarx project is part of the Confidential Computing Consortium (mentioned in Cerf’s article), a Linux Foundation project to increase use of confidential computing through open source projects.

So, what is confidential computing? Cerf describes it as “a breakthrough technology that encrypts data in use, while it is being processed”. He goes on to give a good description of the technology, noting that Google (his employer[2]) has recently released a product using confidential computing. Google is actually far from the first cloud service provider to do this, but it’s only fair that Cerf should mention his employer’s services from time to time: I’m going to forgive him, given how enthusiastic he is about the technology more generally. He describes it as a transformational technology which “will and should be a part of every enterprise cloud deployment”.

I agree, and it’s really exciting to see such a luminary embracing the possibilities that confidential computing presents. For those readers who aren’t aware of what it is, confidential computing allows you to keep data and processes secret in the cloud, on private servers, on the Edge, in IoT, etc. – even from administrators, hypervisors and the host kernel. It uses TEEs – Trusted Execution Environments – to protect the confidentiality and integrity of the workloads (applications, programs) that you want to run. If you’re not sure you trust your cloud provider, if your regulatory body won’t let you run your applications in certain places, or if you want to deploy to machines which are vulnerable to attack – physical or logical – then TEEs and confidential computing can help.

You can find more information in some of my articles:

You can always visit the Confidential Computing Consortium[3] or the Enarx project (links above): all of our code and documentation is open, and we’d love to see you. I’m really proud to be involved with – in fact, deeply embedded in – a movement which I believe can change the way we do computing. And really excited that someone like Vint Cerf agrees.


1 – I have no affiliation with Dark Reading, though I do recommend it to readers of this blog.

2 – neither do I have any affiliation with Google or Alphabet, its parent!

3 – I am, however, a member of both the Governing Board and the Technical Advisory Council of the Confidential Computing Consortium. I’m also the Treasurer.

What’s a hash function?

It should be computationally implausible to work backwards from the output hash to the input.

Note – many thanks to a couple of colleagues who provided excellent suggestions for improvements to this text, which has been updated to reflect them.

Sometimes I like to write articles about basics in security, and this is one of those times. I’m currently a little over a third of the way through writing a book on trust in computing and the cloud, and ended up creating a section about cryptographic hashes. I thought that it might be a useful (fairly non-technical) article for readers of this blog, so I’ve edited it a little bit and present it here for your delectation, dear readers.

There is a tool in the security practitioner’s repertoire that it is helpful for everyone to understand: cryptographic hash functions. A cryptographic hash function, such as SHA-256 or MD5 (now superseded for cryptographic uses as it’s considered “broken”), takes as input a set of binary data (typically as bytes) and gives as output a value which is hopefully unique for each set of possible inputs. The length of the output – “the hash” – for any particular hash function is typically the same for any pattern of inputs (for SHA-256, it is 32 bytes, or 256 bits – the clue’s in the name). It should be computationally implausible (cryptographers hate the word “impossible”) to work backwards from the output hash to the input: this is why they are sometimes referred to as “one-way hash functions”. The phrase “hopefully unique” when describing the output is extremely important: if two inputs are discovered that yield the same output, the hash is said to have “collisions”. The reason that MD5 has become deprecated is that it is now trivially possible to find collisions with commercially-available hardware and software systems. Another important property is that even a tiny change in the message (e.g. changing a single bit) should generate a large change to the output (the “avalanche effect”).
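
Here’s a quick illustration of those last two properties using Python’s standard hashlib module (my example, not from the book): a fixed-length output, and a single changed character producing a completely different hash.

```python
import hashlib

h1 = hashlib.sha256(b"The quick brown fox").hexdigest()
h2 = hashlib.sha256(b"The quick brown fux").hexdigest()  # one character changed

print(len(bytes.fromhex(h1)))  # always 32 bytes for SHA-256
print(h1)
print(h2)  # bears no resemblance to h1: the "avalanche effect"
```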

What are hash functions used for, and why is the property of being lacking in collisions so important? The simplest answer to the first question is that hash functions are typically used to ensure that when someone hands you a piece of binary data (and all data in the world of computing can be described in binary format, whether it is text, an executable, a video, an image or a complete database), it is what you expect. Comparing binary data directly is slow and arduous computationally, but hash functions are designed to be very quick. Given two files of several Megabytes or Gigabytes of data, you can produce hashes of them ahead of time, and defer the comparisons to when you need them[1].

Indeed, given the fact that it is easy to produce hashes of data, there is often no need to have both sets of data. Let us say that you want to run a file, but before you do, you want to check that it really is the file you think you have, and that no malicious actor has tampered with it. You can hash that file very quickly and easily, and as long as you have a copy of what the hash should look like, then you can be fairly certain that you have the file you wanted. This is where the “lack of collisions” (or at least “difficulty in computing collisions”) property of hash functions is important. If the malicious actor can craft a replacement file which shares the same hash as the real file, then the process is essentially useless.
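
As a sketch of that checking process in Python (the file name and expected value here are placeholders – and where the trusted copy of the hash comes from is exactly the problem discussed below):

```python
import hashlib

def sha256_of_file(path: str) -> str:
    # Hash in chunks so that multi-gigabyte files don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# 'expected' would come from a trusted source - see the provisos below.
expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
if sha256_of_file("download.bin") == expected:
    print("hash matches: probably the file you wanted")
else:
    print("hash mismatch: do not run this file")
```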

In fact, there are more technical names for the various properties, and what I’ve described above mashes three of the important ones together. More accurately, they are:

  1. pre-image resistance – this says that if you have a hash, it should be difficult to find the message from which it was created, even if you know the hash function used;
  2. second pre-image resistance – this says that if you have a message, it should be difficult to find another message which, when hashed, generates the same hash;
  3. collision resistance – this says that it should be difficult to find any two messages which generate the same hash.

Collision resistance and second pre-image resistance sound like the same property at first glance, but are subtly (and importantly) different. Second pre-image resistance says that if you already have a message, it should be difficult to find another message with a matching hash, whereas collision resistance should make it hard for you to find any two messages which will generate the same hash, and is a much harder property to fulfil in a hash function.
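
To get a feel for why collision resistance is the harder property, here’s a toy experiment of my own: truncate SHA-256 to two bytes (65,536 possible outputs) and, thanks to the birthday paradox, a collision between some pair of messages typically turns up after only a few hundred attempts – far sooner than a second pre-image for one specific message would.

```python
import hashlib
import itertools

def toy_hash(data: bytes) -> bytes:
    # Truncating SHA-256 to 2 bytes makes collisions easy to find,
    # which is exactly the point of this toy example.
    return hashlib.sha256(data).digest()[:2]

seen = {}
for i in itertools.count():
    msg = str(i).encode()
    digest = toy_hash(msg)
    if digest in seen:
        print(f"collision after {i + 1} tries: {seen[digest]!r} vs {msg!r}")
        break
    seen[digest] = msg
```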

Let us go back to our scenario of a malicious actor trying to exchange a file (with a hash which we can check) with another one. Now, to use cryptographic hashes “in the wild” – out there in the real world beyond the perfectly secure, bug-free implementations populated by unicorns and overflowing with fat-free doughnuts – there are some important and difficult provisos that need to be met. More paranoid readers may already have spotted some of them, in particular:

  1. you need to have assurances that the copy of the hash you have has also not been subject to tampering;
  2. you need to have assurances that the entity performing the hash performs and reports it correctly;
  3. you need to have assurances that the entity comparing the two hashes reports the result of that comparison correctly.

Ensuring that you can meet such assurances is not necessarily an easy task, and is one of the reasons that Trusted Platform Modules (TPMs) are part of many computing systems: they act as a hardware root of trust with capabilities to provide such assurances. TPMs are a useful and important tool for real-world systems, and I plan to write an article on them (similar to What’s an HSM?) in the future.


1 – It’s also generally easier to sign hashes of data, rather than large sets of data themselves – this happens to be important as one of the most common uses of hashes is for cryptographic (“digital”) signatures.

Should I back up to iCloud?

Don’t walk into this with your eyes closed.

This is a fairly easy one to answer. If the response to either of these questions is “yes”, then you probably shouldn’t.

1. Do you have any sensitive data that you would be embarrassed to be seen by any agent of the US Government?
2. Are you a non-US citizen?

This may seem somewhat inflammatory, so let’s look into what’s going on here.

It was widely reported last week that Apple has decided not to implement end-to-end encryption for back-ups from devices to Apple’s iCloud.  Apparently, the decision was made after Apple came under pressure from the FBI, who are concerned that their ability to access data from suspects will be reduced.  This article is not intended to make any judgments about either Apple or any law enforcement agencies, but I had a request from a friend (you know who you are!) for my thoughts on this.

The main problem is that Apple (I understand – I’m not an Apple device user) do a great job of integrating all the services that they offer, and making them easy to use across all of your Apple products.  iCloud is one of these services, and it’s very easy to use.  Apple users have got used to simplicity of use, and are likely to use this service by default.  I understand this, but there’s a classic three-way tug of war for pretty much all applications or services, and it goes like this: of the following three properties of a system, application or service, you get to choose two – but only two.

  1. security
  2. ease of use
  3. cost

Apple make things easy to use, and quite often pretty secure, and you pay for this; but the specific cost (in inconvenience, legal fees, political pressure, etc.) of making iCloud more secure seems to have outweighed the security benefits in this situation, and led them to decide not to enable end-to-end encryption.

So, it won’t be as secure as you might like.  Do you care?  Well, if you have anything you’d be embarrassed for US government agents to know about – and beyond embarrassed, if you have anything which isn’t necessarily entirely, shall we say, legal – then you should know that it’s not going to be difficult for US government agents such as the FBI to access it.  This is all very well, but there’s a catch for those who aren’t in such a position.

The catch is that the protections offered to protect the privacy of individuals, though fairly robust within the US, are aimed almost exclusively at US citizens.  I am in no sense a lawyer, but as a non-US citizen, I would have zero confidence that it would be particularly difficult for any US government agent to access any information that I had stored on any iCloud account that I held.  Think about that for a moment.  The US has different standards from some other countries on, for instance, drug use, alcohol use, sexual practices and a variety of other issues.  Even if those things are legal in your country, who’s to say that they might not be used, now or in the future, to decide whether you should be granted a visa to the US, or even allowed entry at all?  There’s already talk of immigration officials checking your social media for questionable material – extending this to unencrypted data held in the US is far from out of the question.

So this is another of those issues where you need to make a considered decision.  But you do need to make a decision: don’t walk into this with your eyes closed, because once Apple has the data, there’s realistically no taking it back from the US government.


Confidential computing – the new HTTPS?

Security by default hasn’t arrived yet.

Over the past few years, it’s become difficult to find a website which is just “http://…”.  This is because the industry has finally realised that security on the web is “a thing”, and also because it has become easy for both servers and clients to set up and use HTTPS connections.  A similar shift may be on its way in computing across cloud, edge, IoT, blockchain, AI/ML and beyond.  We’ve known for a long time that we should encrypt data at rest (in storage) and in transit (on the network), but encrypting it in use (while processing) has been difficult and expensive.  Confidential computing – providing this type of protection for data and algorithms in use, using hardware capabilities such as Trusted Execution Environments (TEEs) – protects data on hosted systems and in vulnerable environments.

I’ve written several times about TEEs and, of course, the Enarx project of which I’m a co-founder with Nathaniel McCallum (see Enarx for everyone (a quest) and Enarx goes multi-platform for examples).  Enarx uses TEEs, and provides a platform- and language-independent deployment platform to allow you safely to deploy sensitive applications or components (such as micro-services) onto hosts that you don’t trust.  Enarx is, of course, completely open source (we’re using the Apache 2.0 licence, for those with an interest).  Being able to run workloads on hosts that you don’t trust is the promise of confidential computing, which extends normal practice for sensitive data at rest and in transit to data in use:

  • storage: you encrypt your data at rest because you don’t fully trust the underlying storage infrastructure;
  • networking: you encrypt your data in transit because you don’t fully trust the underlying network infrastructure;
  • compute: you encrypt your data in use because you don’t fully trust the underlying compute infrastructure.

I’ve got a lot to say about trust, and the word “fully” in the statements above is important (I actually added it on re-reading what I’d written).  In each case, you have to trust the underlying infrastructure to some degree, whether it’s to deliver your packets or store your blocks, for instance.  In the case of the compute infrastructure, you’re going to have to trust the CPU and associated firmware, just because you can’t really do computing without trusting them (there are techniques such as homomorphic encryption which are beginning to offer some opportunities here, but they’re limited, and the technology is still immature).

Questions sometimes come up about whether you should fully trust CPUs, given some of the security problems that have been found with them and also whether they are fully secure against physical attacks on the host in which they reside.

The answer to both questions is “no”, but this is the best technology we currently have available at scale and at a price point to make it generally deployable.  To address the second question, nobody is pretending that this (or any other) technology is fully secure: what we need to do is consider our threat model and decide whether TEEs (in this case) provide sufficient security for our specific requirements.  In terms of the first question, the model that Enarx adopts is to allow decisions to be made at deployment time as to whether you trust a particular set of CPUs.  So, for example, if vendor Q’s generation R chips are found to contain a vulnerability, it will be easy to say “refuse to deploy my workloads to R-type CPUs from Q, but continue to deploy to S-type, T-type and U-type chips from Q and any CPUs from vendors P, M and N.”
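
As a purely conceptual sketch (the class and vendor names below are invented for illustration – this isn’t Enarx’s actual configuration format or API), such a deployment-time policy might look something like this in Python:

```python
# Purely illustrative - not Enarx's real API or configuration format.
# A deployment-time policy: refuse certain CPU generations, allow others.
from dataclasses import dataclass

@dataclass(frozen=True)
class CpuModel:
    vendor: str
    generation: str

DENYLIST = {CpuModel("Q", "R")}          # known-vulnerable parts
TRUSTED_VENDORS = {"P", "M", "N", "Q"}   # vendors we accept in principle

def may_deploy(host_cpu: CpuModel) -> bool:
    return host_cpu.vendor in TRUSTED_VENDORS and host_cpu not in DENYLIST

assert not may_deploy(CpuModel("Q", "R"))  # vulnerable generation: refused
assert may_deploy(CpuModel("Q", "S"))      # later generation: accepted
```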


5 security tips from Santa

Have you been naughty or nice this year?

If you’re reading this in 2019, it’s less than a month to Christmas (as celebrated according to the Western Christian calendar), or Christmas has just passed.  Let’s assume that it’s the former, and that, like all children and IT professionals, it’s time to write your letter to Santa/St Nick/Father Christmas.  Don’t forget, those who have been good get nice presents, and those who don’t get coal.  Coal is not a clean-burning fuel these days, and with climate change well and truly upon us[1], you don’t want to be going for the latter option.

Think back to all of the good security practices you’ve adopted over the past 11 or so months.  And then think back to all the bad security practices you’ve adopted when you should have been doing the right thing.  Oh, dear.  It’s not looking good for you, is it?

Here’s the good news, though: unless you’re reading this very, very close to Christmas itself[2], then there’s time to make amends.  Here’s a list of useful security tips and practices that Santa follows, and which are therefore bound to put you on his “good” side.

Use a password manager

Santa is very careful with his passwords.  Here’s a little secret: from time to time, rather than have his elves handcraft every little present, he sources his gifts from other parties.  I’m not suggesting that he pays market rates (he’s ordering in bulk, and he has a very, very good credit rating), but he uses lots of different suppliers, and he’s aware that not all of them take security as seriously as he does.  He doesn’t want all of his account logins to be leaked if one of his suppliers is hacked, so he uses separate passwords for each account.  Now, Santa, being Santa, could remember all of these details if he wanted to, and even generate passwords that meet all the relevant complexity requirements for each site, but he uses an open source password manager for safety, and for succession planning[3].

Manage personal information properly

You may work for a large company, organisation or government, and you may think that you have lots of customers and associated data, but consider Santa.  He manages, or has managed, names, dates of birth, addresses, hobbies, shoe sizes, colour preferences and other personal data for literally every person on Earth.  That’s an awful lot of sensitive data, and it needs to be protected.  When people grow too old for presents from Santa[4], he needs to delete their data securely.  Santa may well have been the archetypal GDPR Data Controller, and he needs to be very careful who and what can access the data that he holds.  Of course, he encrypts all the data, and is very careful about key management.  He’s also very aware of the dangers associated with Cold Boot Attacks (given the average temperature around his residence), so he ensures that data is properly wiped before shutdown.

Measure and mitigate risk

Santa knows all about risk.  He has complex systems for ordering, fulfilment, travel planning, logistics and delivery that are the envy of most of the world.  He understands what impact failure in any particular part of the supply chain can have on his customers: mainly children and IT professionals.  He quantifies risk, recalculating on a regular basis to ensure that he is up to date with possible vulnerabilities, and ready with mitigations.

Patch frequently, but carefully

Santa absolutely cannot afford for his systems to go down, particularly around his most busy period.  He has established processes to ensure that the concerns of security are balanced with the needs of the business[5].  He knows that sometimes, business continuity must take priority, and that on other occasions, the impact of a security breach would be so major that patches just have to be applied.  He tells people what he wants, and listens to their views, taking them into account where he can. In other words, he embraces open management, delegating decisions, where possible, to the sets of people who are best positioned to make the call, and only intervenes when asked for an executive decision, or when exceptions arise.  Santa is a very enlightened manager.

Embrace diversity

One of the useful benefits of running a global operation is that Santa values diversity.  Old or young (at heart), male, female or gender-neutral, neuro-typical or neuro-diverse, of whatever culture, sexuality, race, ability, creed or nose-colour, Santa takes into account his stakeholders and their views on what might go wrong.  What a fantastic set of viewpoints Santa has available to him.  And, for an Aging White Guy, he’s surprisingly hip to the opportunities for security practices that a wide and diverse set of opinions and experiences can bring[6].

Summary

Here’s my advice.  Be like Santa, and adopt at least some of his security practices yourself.  You’ll have a much better chance of getting onto his good side, and that’s going to go down well not just with Santa, but also with your employer, who is just certain to give you a nice bonus, right?  And if not, well, it’s not too late to write that letter directly to Santa himself.


1 – if you have a problem with this statement, then either you need to find another blog, or you’re reading this in the far future, where all our climate problems have been solved. I hope.

2 – or you dwell in one of those cultures where Santa visits quite early in December.

3 – a high-flying goose in the face can do terrible damage to a fast-moving reindeer, and if the sleigh were to crash, what then…?

4 – not me!

5 – Santa doesn’t refer to it as a “business”, but he’s happy for us to call it that so that we can model our own experience on his.  He’s nice like that.

6 – though Santa would never use the phrase “hip to the opportunities”.  He’s way too cool for that.

Enarx goes multi-platform

Now with added SGX!

Yesterday, Nathaniel McCallum and I presented a session “Confidential Computing and Enarx” at Open Source Summit Europe. As well as some new information on the architectural components for an Enarx deployment, we had a new demo. What’s exciting about this demo is that it shows off attestation and encryption on Intel’s SGX. Our initial work focussed on AMD’s SEV, so this is our first working multi-platform workflow. We’re very excited, particularly as this week a number of the team will be attending the first face-to-face meetings of the Confidential Computing Consortium, at which we’ll be submitting Enarx as a project for contribution to the Consortium.

The demo was the work of several people, but I’d like to call out Lily Sturmann in particular, who got things working late at night, her time, with little time to spare.

What’s particularly important about this news is that SGX has a very different approach to providing a TEE compared with the other technology on which Enarx was previously concentrating, SEV. Whereas SEV provides a VM-based model for a TEE, SGX works at the process level. Each approach has different advantages and offers different challenges, and the very different models that they espouse mean that developers wishing to target TEEs have some tricky decisions to make about which to choose: the run-time models are so different that developing for both isn’t really an option. Add to that the significant differences in attestation models, and there’s no easy way to address more than one silicon platform at a time.

Which is where Enarx comes in. Enarx will provide platform independence for both attestation and run-time, on process-based TEEs (like SGX) and VM-based TEEs (like SEV). Our work on SEV and SGX is far from done, and we plan to support more silicon platforms as they become available. On the attestation side (which we demoed yesterday), we’ll provide software to abstract away the different approaches. On the run-time side, we’ll provide a W3C-standardised WebAssembly environment to allow you to choose at deployment time what host you want to execute your application on, rather than having to choose at development time where you’ll be running your code.

This article has sounded a little like a marketing pitch, for which I apologise. As one of the founders of the project, alongside Nathaniel, I’m passionate about Enarx, and would love you, the reader, to become passionate about it, too. Please visit enarx.io for more information – we’d love to tell you more about our passion.

What is DoH, and why should I care?

Firefox is beginning to roll out DoH

DoH is DNS-over-HTTPS.  Let’s break that down.

DNS is the Domain Name System, and it’s what allows you to type in a server name (e.g. aliceevebob.com or http://www.redhat.com), which typically makes up the key part of a URL, and then get back the set of numbers which your computer needs actually to contact the machine you want it to talk to.  This is because computers don’t actually use the names, they use the numbers, and the mapping between the two can change, for all sorts of reasons (a server might move to another machine, it might be behind a firewall, it might be behind a load-balancer – those sorts of reasons).   These numbers are called “IP addresses”, and are typically[1] what are called “dotted quads”.  An example would be 127.0.0.1 – in fact, this is a special example, because it maps back to your own machine, so if you ask for “localhost”, then the answer that DNS gives you is “127.0.0.1”.  All IP[1] addresses must be of the form a.b.c.d, where a, b, c and d are numbers between 0 and 255 (there are some special rules beyond that, but we won’t go into them here).

Now, your computer doesn’t maintain a list of the millions upon millions of server names and their mappings to specific IP addresses – that would take too much memory, and ages to download.  Instead, if it needs to find a server (to get email, talk to Facebook, download a webpage, etc.), it will go to a “DNS server”.  Most Internet providers will provide their own DNS servers, and there are a number of special DNS servers to which all others connect from time to time to update their records.  It’s a well-established and generally well-run system across the entire Internet.  Your computer will keep a cache of some of the most recently used mappings, but it’s never going to know all of them across the Internet.
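
You can watch this happen from any scripting language.  In Python, for instance (a trivial sketch of my own):

```python
import socket

# Ask the system resolver (which consults your configured DNS server,
# or a local cache) for the IP address behind a name.
print(socket.gethostbyname("localhost"))        # 127.0.0.1
print(socket.gethostbyname("aliceevebob.com"))  # whatever DNS returns today
```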

What worries some people about the DNS look-up process, however, is that when you do this look-up, anyone who has access to your network traffic can see where you want to go.  “But isn’t secure browsing supposed to stop that?” you might think.  Well, yes and no.  What secure browsing (websites that start “https://”) means is that nobody with access to your network traffic can see what you download from and transmit to the website itself.  But the initial DNS look-up to find out what server your browser should contact is not encrypted.  This might generally be fine if you’re just checking the BBC news website from the UK, but there are certainly occasions when you don’t want this to be the case.  It turns out that although DoH doesn’t completely fix the problem of being able to see where you’re visiting, many organisations (think companies, ISPs, those under the control of countries…) try to block where you can even get to by messing with the responses you get to look-ups.  If your computer can’t even work out where the BBC news server is, then how can it visit it?

DoH – DNS-over-HTTPS – aims to fix this problem.  Rather than your browser asking your computer to do a DNS look-up and give it back the IP address, DoH has the browser itself do the look-up, and do it over a secure connection.  That’s what the HTTPS stands for – “HyperText Transfer Protocol Secure” – it’s what your browser does for all of that other secure traffic (look for the green padlock).  All someone monitoring your network traffic would see is a connection to a DNS server, but not what you’re asking the DNS server itself.  This is a nice fix, and the system (DoH) is already implemented by the well-known Tor browser.
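
To make this concrete, here’s a sketch of my own of a DoH look-up in Python, using the third-party requests package against Cloudflare’s public DoH resolver (one of several public DoH endpoints; the URL and JSON format are as publicly documented at the time of writing – check your chosen resolver’s documentation):

```python
import requests

# A DNS look-up performed over HTTPS instead of plain UDP/TCP port 53.
# Anyone watching the network sees only an HTTPS connection to the
# resolver, not which name is being asked about.
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "aliceevebob.com", "type": "A"},
    headers={"accept": "application/dns-json"},
)
for answer in resp.json().get("Answer", []):
    print(answer["data"])  # the IP address(es) for the name
```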

The reason that I’m writing about it now is that Firefox – a very popular open source browser, used by millions of people across the world – is beginning to roll out DoH by default in a trial of a small percentage of users.  If the trial goes well, it will be available to people worldwide.  This is likely to cause problems in some oppressive regimes, where using this functionality will probably be considered grounds for suspicion on its own, but I generally welcome any move which improves the security of everyday users, and this is definitely an example of one of those.


1 – for IPv4.  I’m not going to start on IPv6: maybe another time.

What is confidential computing?

Industry interest has been high, and overwhelmingly positive.

On Wednesday, 21st August, 2019 (just under a week ago, at time of writing), Jim Zemlin of the Linux Foundation announced the intent to form the Confidential Computing Consortium, with members including Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom and Tencent.  I’m particularly proud as Red Hat (my employer) is one of those[1], and I spent the preceding few weeks and days working very hard to ensure that we would be listed as one of the planned founding members.

“Confidential Computing” sounds like a lofty goal, and it is.  We’ve known for ages that we should encrypt sensitive data at rest (in storage) and in transit (on the network), but confidential computing, as defined by the consortium, is about doing the same for sensitive data – and algorithms – in use.  The consortium plans to encourage industry to use hardware technologies generally called Trusted Execution Environments to allow applications and processes to be encrypted as they are running.

This may sound somewhat familiar to those who follow my blog, and it should: Enarx, an open source project launched by Red Hat, was announced as one of the projects that should be part of the initial launch.  I’ve written about Enarx in several places:

Additionally, you’ll find lots of information on the introduction page of the Enarx wiki.

The press release from the Linux Foundation lists the following goals for the Confidential Computing Consortium (my emboldening):

The Confidential Computing Consortium will bring together hardware vendors, cloud providers, developers, open source experts and academics to accelerate the confidential computing market; influence technical and regulatory standards; and build open source tools that provide the right environment for TEE development. The organization will also anchor industry outreach and education initiatives.

Enarx, of course, fits perfectly into this description, as per the text in bold.  Beyond that, however, is the alignment that there is with the other aims of the Enarx project, and the opportunities with which a wider consortium presents us.  The addition of hardware vendors gives us – and the other participants – opportunities to discuss implementations (hardware and software) in an open environment, cloud providers and other users will give us great use cases, and academic involvement broadens the likelihood of quick access to new ideas and research.

We also expect industry and regulatory standards to be forthcoming, and a need for education as more sectors and industries engage with confidential computing: the consortium provides a framework to engage in related activities.

It’s early days for the Confidential Computing Consortium, but I’m really hopeful and optimistic.  Already, the openness displayed between the planned members on both technical and non-technical collaboration has gone far beyond what I would have expected.  The industry interest – as evidenced by press and community activities – has been high, and overwhelmingly positive. Fans of Enarx – and confidential computing generally – should be excited by the prospect of greater visibility and collaboration.  After all, isn’t that what open source is about in the first place?


1 – this seems like a good place to point out that the views in this article and blog are my own, and may not represent those of my employer, of the Confidential Computing Consortium, the Linux Foundation or any other body.

What’s an HSM?

HSMs are important for strengthening security, but they’re not the right fit for every project.

This week, another three-letter acronym.

It’s the HSM: the Hardware Security Module.

What’s an HSM? What’s it used for? And why should you consider one?

Before we get to any of that, let’s think about keys – and in particular, cryptographic keys.

In most modern cryptography, the algorithms in use are specific, well-known ones (block ciphers, for instance): they are published and generally accepted.

Knowing the algorithm, or how it works, isn’t the problem: what matters is the security of the key.

As an example, let’s say we want to encrypt some data with the AES algorithm, which gives us a particular type of (symmetric) encryption. (For this example, we’ll assume there’s just one type of AES; in reality there are a number of subtle variants, which I’ll skip here, but the point stands.)

You provide this algorithm with two pieces of data:

  1. the plaintext data that you want to encrypt;
  2. the key to encrypt it with.

And you get one piece of data back:

  1. the encrypted data.

To decrypt the encrypted data, you feed the key into the AES algorithm again, and the original plaintext data comes back out.

This arrangement works extremely well – as long as nobody steals the key.
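
As a minimal sketch of those inputs and outputs (my illustration, not part of the original article, using the AES-GCM variant via the third-party cryptography package):

```python
# The AES example above, sketched with the "cryptography" package
# (pip install cryptography), using the AES-GCM variant.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=256)   # input 2: the key
plaintext = b"some data to protect"         # input 1: the plaintext
nonce = os.urandom(12)                      # unique per message; never reuse

ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)  # the one output

# Feed the same key back in, and the original plaintext comes out.
assert AESGCM(key).decrypt(nonce, ciphertext, None) == plaintext
```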

This is where HSMs come in. Keys really matter, and they are vulnerable at a number of points:

Key generation: if an attacker can embed some predictability into the bits when an encryption key is generated, the resulting encrypted data is much more likely to be maliciously decrypted.

Key use: while data is being encrypted or decrypted, the key sits in memory, which means that anyone who can snoop on that memory can steal the data (see “side-channel attacks” below).

Key storage: unless a key is strongly protected while it is stored, it may be stolen.

Key transfer: if a key is stored somewhere different from where it is used, it can be stolen while it is being transferred.

An HSM can help in all of the cases above.

The reason they’re needed is that there are times when you can’t be sure how secure your systems are while keys are being generated, used, stored or transferred.

If a key is being used to encrypt email and it’s compromised, you could end up in a very embarrassing position; if it’s the key for the chips in all of the credit cards you hold, things could be very much worse.

And if someone has sufficient privileges on the compute system, they can look at its memory and get hold of the key – unless you’re running in a TEE (Trusted Execution Environment), that is.

Worse still, even without being able to look at the memory, an attacker may be able to extract information about the encryption key (or the encrypted or plaintext data) and use it to mount an attack. This type of attack is usually called a “side-channel attack”.

It’s rather like listening to a car engine through the bonnet: the cylinders and valves were never designed to tell you what’s going on inside, but you can still glean information about the engine from its components.

HSMs are built to defend against exactly this sort of attack.

So, let’s have a go at a definition of an HSM.

An HSM is a piece of hardware with protected storage which is attached to a system (over a network, or via something like PCI) and can perform cryptographic operations. It has physical protections against a range of attacks, such as side channels, physically prising it open, or attaching probes to its components to read the electrical signals.

Many HSMs are tested against certification standards such as FIPS 140 to prove that they can withstand various types of attack.

Here are the main ways in which HSMs are used:

Key generation

Key generation is, as noted above, vitally important, and it’s also where side-channel attacks can be particularly effective. An HSM can generate keys (relatively) safely, with the degree of randomness that a key requires.

Key storage

HSMs are well suited to key storage, because they are designed to destroy the keys they hold if anyone attempts to break into them.

Cryptographic operations

Rather than transferring a key out of the safety of the HSM to another system and putting it at risk, why not move the pre-encryption plaintext to the HSM instead (ideally protected by a transfer key on its journey), have the HSM encrypt it with a key it already holds, and send the encrypted data back (again, protected in transit by a transfer key)? This reduces the opportunities for attack during transfer and use, and is the typical way of using the keys held in an HSM.

General computing operations

Not all HSMs support this use (though most support the ones described above), but if you have lots of sensitive operations to perform with lots of keys and algorithms, you can write an application to run on the HSM itself.

Unlike the more classic uses described above, this is for cases which are extremely sensitive – AI or ML workloads, for example.

This is not something to take on lightly, though: the execution environments are often very restricted, it’s hard to do things “correctly”, and it’s easy to make mistakes – which can leave you with something much less secure than you thought you had.

Conclusion: should I use an HSM?

HSMs make very good roots of trust for projects such as PKI (Public Key Infrastructure).

They can be difficult to use, but they should provide a PKCS#11 (Public Key Cryptography Standard) interface, which simplifies the most common operations. If you have sensitive keys or cryptographic requirements, using an HSM in your system can be a sensible choice, but deciding how to integrate and use it belongs in the architecture and design phases, well before you start building.
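
As a flavour of what that looks like in practice, here’s a hedged sketch using the third-party python-pkcs11 package; the module path, token label and PIN below are placeholders for whatever your HSM vendor supplies.

```python
# A sketch of talking to an HSM via PKCS#11, using the third-party
# "python-pkcs11" package. Module path, token label and PIN are
# placeholders: your HSM vendor supplies the real values.
import pkcs11

lib = pkcs11.lib("/usr/lib/your-vendor/libpkcs11.so")
token = lib.get_token(token_label="MY-HSM-TOKEN")

with token.open(user_pin="1234") as session:
    # The key is generated inside the HSM and never leaves it.
    key = session.generate_key(pkcs11.KeyType.AES, 256)
    iv = session.generate_random(128)  # bits of randomness from the HSM
    ciphertext = key.encrypt(b"plaintext to protect", mechanism_param=iv)
```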

Be aware, too, that HSM operations – from day-to-day provisioning through to de-provisioning – need to be carried out with great care. HSMs can make a great deal of sense, but they are expensive, and they often don’t scale well.

HSMs are particularly well suited to use cases involving very sensitive data and operations, and are most often found in military, government and financial settings.

They won’t fit every project, but they are an important part of the armoury for those designing and operating sensitive systems.

Original article: https://aliceevebob.com/2019/06/11/whats-an-hsm/

11th June 2019 – Mike Bursell

Tags: security