AI, blockchain and security – huh? Ahh!

There’s a place for the security community to be more involved in AI and blockchain.

Next week, I’m flying to the US on Tuesday, heading back on Thursday evening, arriving back in Heathrow on Friday morning, and heading (hopefully after a shower) to Cyber Security & Cloud Expo Global to present a session and then, later in the afternoon, to participate in a panel.  One of the reasons that I’m bothering to post this here is that I have a feeling that I’m going to be quite tired by the end of Friday, and it’s probably a good idea to get at least a few thoughts in order for the panel session up front[1].  This is particularly the case as there’s some holiday happening between now and then, and if I’m lucky, I’ll be able to forget about everything work-related in the meantime.

The panel has an interesting title.  It’s “How artificial intelligence and blockchain are the battlegrounds for the next security wars”.  You could say that this is a buzzwordy attempt to shoehorn security into a session about two unrelated topics, but the more I think about it, the more I’m glad that the organisers have put this together.  It seems to me that although AI and blockchain are huge topics, have their own conference circuits[2] and garner huge amounts of interest from the press, there’s a danger that people whose main job and professional focus is security won’t be the most engaged in this debate.  To turn that around a bit, what I mean is that although there are people within the AI and blockchain communities considering security, I think that there’s a place for people within the security community to be more involved in considering AI and blockchain.

AI and security

“What?” you may say.  “Have you not noticed all of the AI-enabled products being sold by vendors in the security community?”

To which I reply, “pfft.”

It would be rude[3] to suggest that none of those security products have any “Artificial Intelligence” actually anywhere near them.  However, it seems to me (and I’m not alone) that most of those products are actually employing much more basic algorithms which are more accurately portrayed as “Machine Learning”.  And that’s if we’re being generous.

More importantly, however, I think that what really needs discussing here is security in a world where AI (or, OK, ML) is the key element of a product or service, or in a world where AI/ML is a defining feature of how we live at least part of our lives.  In other words, what’s the impact of security when we have self-driving cars, AI-led hiring practices and fewer medical professionals performing examinations and diagnoses?  What does “the next battleground” mean in this context?  Are we fighting to keep these systems from overstepping their mark, or fighting to stop malicious actors from compromising or suborning them to their ends?  I don’t know, and that’s part of why I’m looking forward to my involvement on the panel.

Blockchain and security

Blockchain is the other piece, of course, and there are lots of areas for us to consider here, too.  Three that spring to my mind are:

Whether you believe that blockchain(s) is(are) going to take over the world or not, it’s a vastly compelling topic, and I think it’s important for it to be discussed outside its “hype bubble” in the context of a security conference.

Conclusion

While I’m disappointed that I’m not going to be able to see some of the other interesting folks speaking at different times of the conference, I’m rather looking forward to the day that I will be there.  I’m also somewhat relieved to have been able to use this article to consider some of the points that I suspect – and hope! – will be brought up in the panel.  As always, I welcome comments and thoughts around areas that I might not have considered (or just missed out).  And last, I’d be very happy to meet you at the conference if you’re attending.


1 – I’ve already created the slides for the talk, I’m pleased to say.

2 – my, do they have their own conference circuits…

3 – not to mention possibly actionable.  And possibly even incorrect.

3 tips to avoid security distracti… Look: squirrel!

Executive fashions change – and not just whether shoulder-pads are “in”.

There are many security issues to worry about as an organisation or business[1].  Let’s list some of them:

  • insider threats
  • employee incompetence
  • hacktivists
  • unpatched systems
  • patched systems that you didn’t test properly
  • zero-day attacks
  • state actor attacks
  • code quality
  • test quality
  • operations quality
  • underlogging
  • overlogging
  • employee-owned devices
  • malware
  • advanced persistent threats
  • data leakage
  • official wifi points
  • unofficial wifi points
  • approved external access to internal systems via VPN
  • unapproved external access to internal systems via VPN
  • unapproved external access to internal systems via backdoors
  • junior employees not following IT-mandated rules
  • executives not following IT-mandated rules

I could go on: it’s very, very easy to find lots of things that should concern us.  And it’s particularly confusing if you just go around finding lots of unconnected things which are entirely unrelated to each other and aren’t even of the same type[2]. I mean: why list “code quality” in the same list as “executives not following IT-mandated rules”?  How are you supposed to address issues which are so diverse?

And here, of course, is the problem: this is what organisations and businesses do have to address.  All of these issues may present real risks to the functioning (or at least continued profitability) of the organisations[3].  What are you supposed to do?  How are you supposed to keep track of all these things?

The first answer that I want to give is “don’t get distracted”, but that’s actually the final piece of advice, because it doesn’t really work unless you’ve already done some work up front.  So what are my actual answers?

1 – Perform risk analysis

You’re never going to be able to give your entire attention to everything, all the time: that’s not how life works.  Nor are you likely to have sufficient resources to be happy that everything has been made as secure as you would like[4].  So where do you focus your attention and apply those precious, scarce resources?  The answer is that you need to consider what poses the most risk to your organisation.  The classic way to do this is to use the following formula:

Risk = Likelihood x Impact

This looks really simple, but sadly it’s not, and there are entire books and companies dedicated to the topic.  Impact may be to reputation, to physical infrastructure, to system up-time, to employee morale, or to one of hundreds of other items.  The difficulty of assessing the likelihood may range from simple (“the failure rate on this component is once every 20 years, and we have 500 of them”[5]) to extremely difficult (“what’s the likelihood of our CFO clicking on a phishing attack?”[6]).  Once it’s complete, however, for all the various parts of the business you can think of – and get other people from different departments in to help, as they’ll think of different risks, I can 100% guarantee – then you have an idea of what needs the most attention.  (For now: because you need to repeat this exercise on a regular basis, considering changes to risk, your business and the threats themselves.)
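To make this a little more concrete, here’s a minimal sketch – in Python, with entirely invented risks and scores – of the sort of calculation the formula implies.  A real risk register is far richer than this, but the principle of scoring and ranking is the same.

```python
# A tiny, illustrative risk register.  Likelihood and impact are rough
# scores from 1 (low) to 5 (high) - the entries and numbers are made up.
risks = [
    {"name": "unpatched systems",            "likelihood": 4, "impact": 4},
    {"name": "executives ignoring IT rules", "likelihood": 3, "impact": 5},
    {"name": "unofficial wifi points",       "likelihood": 2, "impact": 3},
]

# Risk = Likelihood x Impact
for risk in risks:
    risk["score"] = risk["likelihood"] * risk["impact"]

# Rank the register so the highest risks get attention (and resources) first.
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f"{risk['name']:30} likelihood={risk['likelihood']} "
          f"impact={risk['impact']} risk={risk['score']}")
```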

2 – Identify and apply measures

You have a list of risks.  What to do?  Well, a group of people – and this is important, as one person won’t have a good enough view of everything – needs to sit[7] down and work out what measures to put in place to try to reduce or at least mitigate the various risks.  The amount of resources that the organisation should be willing to apply to this will vary from risk to risk, and should generally be proportional to the risk being addressed, but won’t always be of the same kind.  This is another reason why having different people involved is important.  For example, one risk that you might be able to mitigate by spending £50,000 (about the same amount in US dollars) on a software solution might be equally well addressed by a physical barrier and a sign for a few hundred pounds.  On the other hand, the business may decide that some risks should not be mitigated against directly, but rather insured against.  Others may require training regimes and investment in t-shirts.

Once you’ve identified what measures are appropriate, and how much they are going to cost, somebody’s going to need to find money to apply them.  Again, it may be that not all risks are mitigated: it may just be too expensive.  But the person who makes that decision should be someone senior – someone senior enough to take the flak should the risk come home to roost[8].

Then you apply your measures, and, wherever possible, you automate them and their reporting.  If something is triggered, or logged, you then know:

  1. that you need to pay attention, and maybe apply some more measures;
  2. that the measure was at least partially effective;
  3. that you should report to the business how good a job you – and all those involved – have done.
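By way of illustration, here’s a minimal sketch – in Python, with an invented measure, threshold and log file name – of what an automated measure with automated reporting might look like.  The point is simply that the measure runs without a human watching it, and that the log entry gives you something to report.

```python
import logging

# Invented names and numbers: pick your own, based on your risk analysis.
logging.basicConfig(filename="measures.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

FAILED_LOGIN_THRESHOLD = 20  # made-up figure

def check_failed_logins(count_in_last_hour: int) -> bool:
    """Apply the measure, log the outcome either way, and say whether it triggered."""
    triggered = count_in_last_hour > FAILED_LOGIN_THRESHOLD
    logging.info("failed-login check: count=%d triggered=%s",
                 count_in_last_hour, triggered)
    return triggered

if check_failed_logins(count_in_last_hour=37):
    # Point 1 above: pay attention, and consider whether further measures are needed.
    # Points 2 and 3: the log entry is evidence that the measure was (at least
    # partially) effective, and something concrete to report to the business.
    print("Measure triggered - investigate and report")
```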

3 – Don’t get distracted

My final point is where I’ve been trying to go with this article all along: don’t get distracted.  Distractions come in many flavours, but here are three of the most dangerous.

  1. A measure was triggered, and you start paying all of your attention to that measure, or the system(s) that it’s defending.  If you do this, you will miss all of the other attacks that are going on.  In fact, here’s your opportunity to look more broadly and work out whether there are risks that you’d not considered, and attacks that are coming in right now, masked by the one you have noticed.
  2. You assume that the most expensive measures are the ones that require the most attention, and ignore the others.  Remember: the amount of resources you should be ready to apply to each risk should be proportional to the risk, but the amount actually applied may not be.  Check that the barrier you installed still works and that the sign is still legible – and if not, then consider whether you need to spend that £50,000 on software after all.  Also remember that just because a risk is small, that doesn’t mean that it’s zero, or that the impact won’t be high if it does happen.
  3. Executive fashions change – and not just whether shoulder-pads are “in” or whether the key to the boardroom bathroom is now electronic.  The point is that executives (like everybody else) are bombarded with information.  The latest concern that your C-levels read about in the business section, or hear about from their buddies on the golf course[9], may require consideration, but you need to ensure that it’s considered in exactly the same way as all of the other risks that you addressed in the first step.  You need to be firm about this – both with the executive(s) and with yourself, because although I identified this as an executive risk, the same goes for the rest of us.  Humans are generally much better at keeping their focus on the new, shiny thing in front of them than on the familiar and the mundane.

Conclusion

You can’t know everything, and you probably won’t be able to cover everything, either, but having a good understanding of risk – and maintaining your focus in the event of distractions – means that at least you’ll be covering and managing what you can know, and will be ready to address new risks as they arrive.


1 – let’s be honest: there are lots if you’re a private individual, too, but that’s for another day.

2 – I did this on purpose, annoying as it may be to some readers. Stick with it.

3 – not to mention the continued employment of those tasked with stopping these issues.

4 – note that I didn’t write “everything has been made secure”: there is no “secure”.

5 – and, to be picky, this isn’t as simple as it looks either: does that likelihood increase or decrease over time, for instance?

6 – did I say “extremely difficult”?  I meant to say…

7 – you can try standing, but you’re going to get tired: this is not a short process.

8 – now that, ladles and gentlespoons, is a nicely mixed metaphor, though I did stick with an aerial theme.

9 – this is a gross generalisation, I know: not all executives play golf.  Some of them play squash/racketball instead.

5 (Professional) development tips for security folks

… write a review of “Sneakers” or “Hackers”…

To my wife’s surprise[1], I’m a manager these days.  I only have one report, true, but he hasn’t quit[2], so I assume that I’ve not messed this management thing up completely[2].  One of the “joys” of management is that you get to perform performance and development (“P&D”) reviews, and it’s that time of year at the wonderful Red Hat (my employer).  In my department, we’re being encouraged (Red Hat generally isn’t in favour of actually forcing people to do things) to move to “OKRs”, which are “Objectives and Key Results”.  Like any management tool, they’re imperfect, but they’re better than some.  You’re supposed to choose a small number of objectives (“learn a (specific) new language”), and then have some key results for each objective that can be measured somehow (“be able to check into a hotel”, “be able to order a round of drinks”) after a period of time (“by the end of the quarter”).  I’m simplifying slightly, but that’s the general idea.

Anyway, I sometimes get asked by people looking to move into security for pointers on how to get into the field.  My background and route to where I am now are fairly atypical, so I’m very sensitive to the fact that some people won’t have taken Computer Science at university or college, and may be pursuing alternative tracks into the profession[3].  As a service to those people, here are a few suggestions as to what they can do which take a more “OKR” approach than I provided in my previous article Getting started in IT security – an in/outsider’s view.

1. Learn a new language

And do it with security in mind.  I’m not going to be horribly prescriptive about this: although there’s a lot to be said for languages which are aimed at security use cases (Rust is an obvious example), learning any new programming language, and thinking about how it handles (or fails to handle) security, is going to benefit you (there’s a small illustration after the list below).  You’re going to want to choose key results that:

  • show that you understand what’s going on with key language constructs to do with security;
  • show that you understand some of the advantages and disadvantages of the language;
  • (advanced) show how to misuse the language (so that you can spot similar mistakes in future).
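Here’s the sort of thing I mean – a minimal sketch in Python (chosen purely because it’s widely readable: the same exercise works in whichever language you pick).  Comparing secrets with == is easy to write and easy to misuse, because the comparison can stop at the first differing byte and so leak timing information; the standard library offers a constant-time alternative.

```python
import hmac

def naive_check(supplied: str, stored: str) -> bool:
    # Easy to misuse: == may short-circuit on the first differing character,
    # so an attacker can potentially learn something from response timing.
    return supplied == stored

def better_check(supplied: str, stored: str) -> bool:
    # hmac.compare_digest compares in a way designed not to leak timing
    # information for equal-length inputs.
    return hmac.compare_digest(supplied.encode(), stored.encode())
```

Understanding why the second version exists – and when the first is (and isn’t) good enough – is exactly the kind of key result I have in mind.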

2. Learn a new language (2)

This isn’t a typo.  This time, I mean learn about how other functions within your organisation talk.  All of these are useful:

  • risk and compliance
  • legal (contracts)
  • legal (Intellectual Property Rights)
  • marketing
  • strategy
  • human resources
  • sales
  • development
  • testing
  • UX (User Experience)
  • IT
  • workplace services

Who am I kidding?  They’re all useful.  You’re learning somebody else’s mode of thinking, what matters to them, and what makes them tick.  Next time you design something, make a decision which touches on their world, or consider installing a new app, you’ll have another point of view to consider, and that’s got to be good.  Key results might include:

  • giving a 15 minute presentation to the other group about your work;
  • arranging a 15 minute presentation to your group about the other group’s work;
  • (advanced) giving a 15 minute presentation yourself to your group about the other group’s work.

3. Learn more about cryptography

So much of what we do as security people comes down to, or includes, some cryptography.  Understanding how it should be used is important but, equally, we should all be able to recognise how it shouldn’t be used.  Most important, from my point of view, however, is to know the limits of your knowledge, and to be wise enough to call in a real cryptographic expert when you’re approaching those limits.  Different people’s interests and abilities (in mathematics, apart from anything else) vary widely, so here is a broad list of different possible key results to consider:

  • learn when to use asymmetric cryptography, and when to use symmetric cryptography;
  • understand the basics of public key infrastructure (PKI);
  • understand what one-way functions are, and why they’re important (there’s a small illustration of this one after the list);
  • understand the mathematics behind public key cryptography;
  • understand the various expiry and revocation options for certificates, their advantages and disadvantages;
  • (advanced) design a protocol using cryptographic primitives AND GET IT TORN APART BY AN EXPERT[4].
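For the one-way function point in particular, a small illustration may help – again in Python, using only the standard library.  Computing a digest from a message is quick; there is no corresponding function to get the message back from the digest, and even a tiny change to the input produces a completely different result.

```python
import hashlib

message = b"pay Alice 100 pounds"

# The easy direction: message -> digest is fast and deterministic.
digest = hashlib.sha256(message).hexdigest()
print(digest)

# There is no digest -> message direction: the only general approach is to
# guess inputs and compare digests.  Note how a one-character change to the
# input gives an entirely unrelated digest.
tampered = hashlib.sha256(b"pay Alice 1000 pounds").hexdigest()
print(digest == tampered)  # False
```

This property is why one-way functions turn up everywhere from integrity checks to password storage.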

4. Learn to think about systems

Nothing that we manage, write, design or test exists on its own: it’s all part of a larger system.  That system involves nasty awkwardnesses like managers, users, attackers, backhoes and tornadoes.  Think about the larger context of what you’re doing, and you’ll be a better security person for it.  Here are some suggestions for key results:

  • read a book about systems, e.g.:
    • Security Engineering: A Guide to Building Dependable Distributed Systems, by Ross Anderson;
    • Beautiful Architecture: Leading Thinkers Reveal the Hidden Beauty in Software Design, ed. Diomidis Spinellis and Georgios Gousios;
    • Building Evolutionary Architectures: Support Constant Change by Neal Ford, Rebecca Parsons & Patrick Kua[5].
  • arrange for the operations folks in your organisation to give a 15 minute presentation to your group (I can pretty much guarantee that they think about security differently to you – unless you’re in the operations group already, of course);
  • map out a system you think you know well, and then consider all the different “external” factors that could negatively impact its security;
  • write a review of “Sneakers” or “Hackers”, highlighting how unrealistic the film[6] is and, equally, how right on the money it is.

5. Read a blog regularly

THIS blog, of course, would be my preference (I try to post every Tuesday), but getting into the habit of reading something security-related[7] on a regular basis means that you’re going to keep thinking about security from a point of view other than your own (which is a bit of a theme for this article).  Alternatively, you can listen to a podcast, but as I don’t have a podcast myself, I clearly can’t endorse that[8].  Key results might include:

  • read a security blog once a week;
  • listen to a security podcast once a month;
  • write an article for a site such as (the brilliant) OpenSource.com[9].

Conclusion

I’m aware that I’ve abused the OKR approach somewhat by making a number of the key results non-measurable: sorry.  Exactly what you choose will depend on you, your situation, how long the objectives last for, and a multitude of other factors, so adjust accordingly.  Remember – you’re trying to develop yourself and your knowledge.


1 – and mine.

2 – yet.

3 – yes, I called it a profession.  Feel free to chortle.

4 – the bit in CAPS is vitally, vitally important.  If you ignore that, you’re missing the point.

5 – I’m currently reading this after hearing Dr Parsons speak at a conference.  It’s good.

6 – movie.

7 – this blog is supposed to meet that criterion, and quite often does…

8 – smiley face.  Ish.

9 – if you’re interested, please contact me – I’m a community moderator there.

What is a password for, anyway?

Which of my children should I use as my password?

This may look like it’s going to be one of those really short articles, because we all know what a password is for, right?  Well, I’m not sure we do.  Or, more accurately, I’m not sure that the answer is always the same, or has always been the same, so I think it’s worth spending some time looking at what passwords are used for, particularly as I’ve just seen (another) set of articles espousing the view that either a) passwords are dead; or b) multi-factor authentication is dead, and passwords are here to stay.

History

Passwords (or, as Wikipedia points out, “watchwords”) have been used in military contexts for centuries.  If you wish to pass the guard, you need to give them a word or phrase that matches what they’re expecting (“Who goes there?” “Friend” doesn’t really cut it).  Sometimes there’s a challenge and response, which allows both parties to have some level of assurance that they’re on the same side.  Whether one party is involved, or two, this is an authentication process – one side is verifying the identity of another.

Actually, it’s not quite that simple.  One side is verifying that the other party is a member of a group of people who have a particular set of knowledge (the password) in order to authorise them access to a particular area (that is being guarded).  Anyone without the password is assumed not to be in that group, and will be denied access (and may also be subject to other measures).

Let’s step forward to the first computer recorded as having had a password.  This was the Compatible Time-Sharing System (CTSS) at MIT around 1961, and its name gives you a clue as to the reason it needed a password: different people could use the computer at the same time, so it was necessary to provide a way to identify them and the jobs they were running.

Here, the reason for having a password seems a little different to our first use case.  Authentication is there not to deny or allow access to a physical area – or even a virtual area – but to allow one party to discriminate[1] between different parties.

Getting more modern

I have no knowledge of how military uses of passwords developed, other than to note that by 1983, use of passwords on military systems was well-known enough to make it into the film[2] Wargames.  Here, the use of a password is much closer to our earlier example: though the area is virtual, the idea is to restrict access to it based on a verification of the party logging on.  There are two differences, however:

  1. it is not so much access to the area that is important, and more access to the processes available within the area;
  2. each user has a different password, it seems: the ability to guess the correct password gives instant access to a particular account.

Now, it’s not clear whether the particular account is hardwired to the telephone number that’s called in the film, but there are clearly different accounts for different users.  This is what you’d expect for a system where you have different users with different types of access.

It’s worth noting that there’s no sign that the school computer accessed in Wargames has multiple users: it seems that logging in at all gives you access to a single account – which is why auditing the system to spot unauthorised usage is, well, problematic.  The school system is also more about access to data and the ability to change it, rather than specific processes[3 – SPOILER ALERT].
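As an aside, here’s a minimal sketch – in Python, with an invented account and password – of what per-user password verification typically looks like on a modern multi-user system: a salted, deliberately slow hash stored for each account, compared in constant time, so that guessing one user’s password tells you nothing about anyone else’s.

```python
import hashlib
import os
import secrets

def store_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash) for storage - the password itself is never stored."""
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, pw_hash

def verify_password(password: str, salt: bytes, stored_hash: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return secrets.compare_digest(candidate, stored_hash)

# An invented account (with a nod to the film): each user gets their own
# salt and hash, so accounts are verified - and compromised - independently.
salt, stored = store_password("joshua")
print(verify_password("joshua", salt, stored))   # True
print(verify_password("rachel", salt, stored))   # False
```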

Things start getting interesting

In the first few decades of computing, most systems were arguably mainly occupied with creating or manipulating data associated with the organisations that owned it[4].  That could be sales data, stock data, logistics data, design data, or personnel data, for instance.  It then also started to be intellectual property data such as legal documents, patent applications and the texts of books.  Passwords allowed the owners of the systems to decide who should have access to that data, and the processes to make changes.  And then something new happened.

People started getting their own computers.  You could do your own accounts on them, write your own books.  As long as they weren’t connected to any sort of network, the only passwords you really needed were to stop your family from accessing and changing data that wasn’t theirs.  What got really interesting, though, was when those computers started getting connected to networks, which meant that they could talk to other computers, and other computers could talk to them.  People started getting involved in chatrooms and shared spaces, and putting their views and opinions on them.

It turns out (and this should be of little surprise to regular readers of this blog) that not all people are good people.  Some of them are bad.  Some of them, given the chance, would pretend to be other people, and misrepresent their views.  Passwords were needed to allow you to protect your identity in a particular area, as well as to decide who was allowed into that area in the first place.  This is new: this is about protection of the party associated with the password, rather than the party whose resources are being used.

Our data now

What does the phrase above, “protect your identity”, really mean, though?  What is your identity?  It’s data that you’ve created and, increasingly, data that’s been created about you and is associated with you.  That may be tax accounts data that you’ve generated for your own use, but it may equally well be your bank balance – and the ability to pay and receive money from and to an account.  It may be your exercise data, your general health data, your fertility cycle, the assignments you’ve written for your university course, your novel or pictures of your family.  Whereas passwords used to protect data associated with an organisation, they’re now increasingly protecting data associated with us, and that’s a big change.  We don’t always have control over that data – GDPR and similar legal instruments are attempts to help with that problem – but each password that is leaked gives away a bit of our identity.  Sometimes being able to change that data is what is valuable – think of a bank account – sometimes just having access to it – think of your criminal record[5] – is enough, but control over that access is important to us, and not just to the organisations with which we interact.

This is part of the reason that ideas such as self-sovereign identity (where you get to decide who sees what of the data associated with you) are of interest to many people, of course, but they are likely to use passwords, too (at least as one method of authentication).  Neither am I arguing that passwords are a bad thing – they’re easy to understand, and people know how to use them – but I think it’s important for us to realise that they’re not performing the task they were originally intended to fulfil – or even the task they were first used for in a computing context.  There’s a responsibility on the security community to educate people about why they need to be in control of their passwords (or other authentication mechanisms), rather than relying on those who provide services to us to care about them.  In the end, it’s our data, and we’re the ones who need to care.

Now, which of my children should I use as my password: Joshua or Rache…?[6]


1 – that is “tell the difference”, rather than make prejudice-based choices.

2 – “movie”.

3 – such as, say, the ability to start a global thermonuclear war.

4 – or “them”, if you prefer your data plural.

5 – sorry – obviously nobody who reads this blog has ever run a red light.

6 – spot the popular culture references!

Don’t talk security: talk risk

We rush to implement the latest, greatest AI-enhanced, post-quantum container-based blockchain security solution.

We don’t do security because it’s fun. No: let me qualify that. Most of us don’t do security because it’s fun, but none of us get paid to do security because it’s fun[1]. Security isn’t a thing in itself, it’s a means to an end, and that end is to reduce risk.  This was a notable change in theme in and around the RSA Conference last week.  I’d love to say that it was reflected in the Expo, but although it got some lip service, selling point solutions still seemed to be the approach for most vendors.  We’re way overdue some industry consolidation, given the number of vendors advertising solutions which, to me, seemed almost indistinguishable.

In some of the sessions, however, and certainly in many of the conversations that I had in the “hallway track” or the more focused birds-of-a-feather type after-show meetings, risk is beginning to feature large.  I ended up spending quite a lot of time with CISO folks and similar – CSO (Chief Security Officer) and CPSO (Chief Product Security Officer) were two of the other favoured titles – and risk is top of mind as we see the security landscape develop.  The reason this has happened, of course, is that we didn’t win.

What didn’t we win?  Well, any of it, really.  It’s become clear that the “it’s not if, it’s when” approach to security breaches is correct.  Given some of the huge, and long-term, breaches across some huge organisations from British Airways to the Marriott group to Citrix, and the continued experience of the industry after Sony and Equifax, nobody is confident that they can plug all of the breaches, and everybody is aware that it just takes one breach, in a part of the attack surface that you weren’t even thinking about, for you to be exposed, and to be exposed big time.

There are a variety of ways to try to manage this problem, all of which I heard expressed at the conference.  They include:

  • cultural approaches (making security everybody’s responsibility/problem, training more staff in different ways, more or less often);
  • process approaches (“shifting left” so that security is visible earlier in your projects);
  • technical approaches (too many to list, let alone understand or implement fully, and ranging from hardware to firmware to software, using Machine Learning, not using Machine Learning, relying on hardware, not relying on hardware, and pretty much everything in between);
  • design approaches (using serverless, selecting security-friendly languages, using smart contracts, not using smart contracts);
  • cryptographic approaches (trusting existing, tested, peer-reviewed primitives, combining established but underused techniques such as threshold signatures, embracing quantum-resistant algorithms, ensuring that you use “quantum-generated” entropy);
  • architectural approaches (placing all of your sensitive data in the cloud, placing none of your sensitive data in the cloud).

In the end, none of these is going to work.  Not singly, not in concert.  We must use as many of them as make sense in our environment, and ensure that we’re espousing a “defence in depth” philosophy such that no vulnerability will lay our entire estate or stack open if it is compromised.  But it’s not going to be enough.

Businesses and organisations exist to run, not to be weighed down by the encumbrance of security measure after security measure.  Hence the “as make sense in our environment” above, because there will always come a point where the balance of security measures outweighs the ability of the business to function effectively.

And that’s fine, actually.  Security people have always managed risk.  We may have forgotten this, as we rush to implement the latest, greatest AI-enhanced, post-quantum container-based blockchain security solution[2], but we’re always making a balance.  Too often that balance is “if we lose data, I’ll get fired”, though, rather than a different conversation entirely.

The people who pay our salaries are not our customers, despite what your manager and SVP of Sales may tell you.  They are the members of the Board.  Whether the relevant person on the Board is the CFO, the CISO, the CSO, the CTO or the CRO[3], they need to be able to talk to their colleagues about risk, because that’s the language that the rest of them will understand.  In fact, it’s what they talk about every day.  Whether it’s fraud risk, currency exchange risk, economic risk, terrorist risk, hostile take-over risk, reputational risk, competitive risk or one of the dozens of other types, risk is what they want to hear about.  And not security.  Security should be a way to measure, monitor and mitigate risk.  They know by now – and if they don’t, it’s the C[F|IS|S|T|R]O’s job to explain to them – that there’s always a likelihood that the security of your core product/network/sales system/whatever won’t be sufficient.  What they need to know is what risks that exposes.  Is it risk that:

  • the organisation’s intellectual property will be stolen;
  • customers’ private information will be exposed to the Internet;
  • merger and acquisition information will go to competitors;
  • payroll information will be leaked to the press – and employees;
  • sales won’t be able to take any orders for a week;
  • employees won’t be paid for a month;
  • or something completely different?

The answer (or, more likely, answers) will depend on the organisation and sector, but the risks will be there.  And the Board will be happy to hear about them.  Well, maybe that’s an overstatement, but they’ll be happier hearing about them in advance than after an attack has happened.  Because if they hear about them in advance, they can plan mitigations, whether that’s insurance, changes in systems, increased security or something else.

So we, as a security profession, need to get better at presenting the risk, and also at presenting options to the Board, so that they can make informed decisions.  We don’t always have all the information, and neither will anybody else, but the more understanding there is of what we do, and why we do it, the more we will be valued.  And there’s little risk in that.


1 – if I’m wrong about this, and you do get paid to do security because it’s fun, please contact me privately. I’m interested, but I don’t think we should share the secret too widely.

2 – if this buzzphrase-compliant clickbait doesn’t get me page views, I don’t know what will.

3 – Chief [Financial|Information Security|Security|Technology|Risk] Officer.

Oh, how I love my TEE (or do I?)

Trusted Execution Environments use chip-level instructions to allow you to create enclaves of higher security

I realised just recently that I’ve not written yet about Trusted Execution Environments (TEEs) on this blog.  This is a surprise, honestly, because TEEs are fascinating, and I spend quite a lot of my professional time thinking – and sometimes worrying – about them.  So what, you may ask, is a TEE?

Let’s look at one of the key use cases first, and then get to what a Trusted Execution Environment is.  A good place to start is the “Cloud”, which, as we all know, is just somebody else’s computer.  What this means is that if you’re running an application (let’s call it a “workload”) in the Cloud – AWS, Azure, whatever – then what you’re doing is trusting somebody else to take the constituent parts of that workload – its code and its data – and run them on their computer.  “Yay”, you may be thinking, “that means that I don’t have to run it in my computer: it’s all good.”  I’m going to take issue with the “all good” bit of that statement.  The problem is that the company – or people within that company – who run your workload on their computer (let’s call it a “host”) can, if they so wish, look inside it, change it, and stop it running.  In other words, they can break all three classic “CIA” properties of security: confidentiality (by looking inside it); integrity (by changing it); and availability (by stopping it running).  This is because the way that workloads run on hosts – whether in hardware-mediated virtual machines, within containers or on bare-metal – all allow somebody with sufficient privilege on that machine to do all of the bad things I’ve just mentioned.

And these are bad things.  We don’t tend to care about them too much as individuals – because the amount of value a cloud provider would get from bothering to look at our information is low – but as businesses, we really should be worried.

I’m afraid that the problem doesn’t go away if you run your systems internally.  Remember that anybody with sufficient access to hosts can look inside and tamper with your workloads?  Well, are you happy that your sysadmins should all have access to your financial results?  Merger and acquisition details?  Payroll?  Because if you have this kind of data running on your machines on your own premises, then they do have access to all of those.

Now, there are a number of controls that you can put in place to help with this – not least background checks and Acceptable Use Policies – but TEEs aim to solve this problem with technology.  Actually, they only really aim to solve the confidentiality and integrity pieces, so we’ll just have to assume for now that you’re going to be in a position to notice if your sales order process fails to run due to malicious activity (for instance).  Trusted Execution Environments use chip-level instructions to allow you to create enclaves of higher security where processes can execute (and data can be processed) in ways that mean that even privileged users of the host cannot attack their confidentiality or integrity.  To get a little bit technical, these enclaves are memory pages with particular controls on them such that they are always encrypted except when they are actually being processed by the chip.

The two best-known TEE implementations so far are Intel’s SGX and AMD’s SEV (though other silicon vendors are beginning to talk about their alternatives).  Both Intel and AMD are aiming to put these into server hardware and create an ecosystem around their version to make it easy for people to run workloads (or components of workloads) within them.  And the security community is doing what it normally does (and, to be clear, absolutely should be doing), and looking for vulnerabilities in the implementation.  So far, most of the vulnerabilities that have been identified are within Intel’s SGX – though I’m not in a position to say whether that’s because the design and implementation is weaker, or just because the researchers have concentrated on the market leader in terms of server hardware.  It looks like we need to go through a cycle or two of the technologies before the industry is convinced that we have a working design and implementation that provides the levels of security that are worth deploying.  There’s also work to be done to provide sufficiently high quality open source software and drivers to support TEEs for wide deployment.

Despite the hopes of the silicon vendors, it may be some time before TEEs are in common usage, but people are beginning to sit up and take notice, partly because there’s so much interest in moving workloads to the Cloud, but still serious concerns about the security of your sensitive processes and data when they’re there.  This has got to be a good thing, and I think it’s really worth considering how you might start designing and deploying workloads in new ways once TEEs actually do become commonly available.

The “invisible” trade-off? Security.

“For twenty years, people have been leaving security till last.”

Colleague (in a meeting): “For twenty years, people have been leaving security till last.”

Me (in response): “You could have left out those last two words.”

This article will be a short one, and it’s a plea.  It’s also not aimed at my regular readership, because if you’re part of my regular readership, then you don’t need telling.  Many of the articles on this blog, however, are written with the express intention of meeting two criteria:

  1. they should be technically credible[1].
  2. you should be able to show them to your parents or to your manager[2].

I suspect that it’s your manager, this time round, who I’ll be targeting, but I don’t want to make assumptions about your parents’ roles or influence, so let’s leave it open.

The issue I want to address this week is the impact of not placing security firmly at the beginning, middle and end of any system or application design process.  As we all know, security isn’t something that you can bolt onto the end of a project and hope that you’ll be OK.  Equally, if you think about it only at the beginning, you’ll find that by the end, your requirements, use cases, infrastructure or personae will have changed[3], and what you planned at the beginning is no longer fit for purpose.  After all, if you know that your functional requirements will change (and everybody knows this), then why wouldn’t your non-functional requirements be subject to the same drift?

The problem is that security, being a non-functional requirement[4], doesn’t get the up-front visibility that it needs.  And, because it’s difficult to do well, and it’s often the responsibility of a non-core team member “flown in” as a consultant or expert for a small percentage of design meetings, security is the area that it’s easy to decide to let slide a bit.  Or a lot.  Or completely.

If there’s a trade-off around features, functionality or resource location, it’s likely to be security, and often, nobody even raises the point that there has been a trade-off: it’s completely invisible (this is one of the reasons Why I love technical debt).  This is also the reason that whenever I look at a system, I try to think “what were the decisions made about security?”, because, too often, no decisions were made about security at all.

So, if you’re a manager[6], and you’re involved with designing a system or application, don’t let security be the invisible trade-off.  I’m not saying that it needs to be the be-all and end-all of the project, but at least ensure that you think about it.  Thank you.


1 – they should be accurate, to be honest, but I also try not to dive deeper into technical topics than is absolutely required for context.

2 – to be clear, this isn’t about making them work- and parent-safe, but about presenting the topics in a manner that is approachable by non-experts.

3 – or, equally likely, all of them.

4 – I don’t mean that security doesn’t function correctly[5], but rather that it’s not one of the key functions of the system or application that’s being designed.

5 – though, now you mention it…

6 – or parent – see above.