Stop reading, start patching

… in order to correct this problem, BOTH the client AND the router must be patched

This is an emergency post: normal* service will resume next week**.

So, over the past 48 hours or so, news of the KRACK vulnerability for Wifi has started spreading.  This vulnerability makes it pretty trivially easy to snoop on information sent between a device (mobile phone, laptop, etc.) and a wifi router, in some cases allowing changes to that information.  This is not a bug in code, but a mis-design in the crypto algorithm that’s used by the vast majority of Wifi connections: WPA2.

Some key facts:

  • WPA2 personal and WPA2 enterprise are vulnerable
  • the vulnerability is in the design of the code, not the implementation
    • however, Linux and Android 6.0+ implementations (which use wpa_supplicant 2.4 or higher) are even more easily attacked – there’s a quick version check sketched just after this list
  • in order to correct this problem, BOTH the client AND the router must be patched.
    • this means that it’s not good enough just to update your laptop, but also the router in your house, business, etc.
  • Android phones typically take a long time to get patches (if at all)
    • unless you have evidence to the contrary, assume that your phone is vulnerable
  • many hotels, businesses, etc., rarely update or patch their routers
    • assume that any wifi connection that you use from now on is vulnerable unless you know that it’s been patched
  • you can continue to rely on VPNs
  • you can continue to rely on website encryption***
    • but remember that you may be betraying lots of metadata, including, of course, the address of the website that you’re visiting, to any snoopers
  • the security of IoT devices is going to continue to be a problem
    • unless their firmware can easily be patched, it’s difficult to believe that they will be safe
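
If you want a quick way of checking which wpa_supplicant version your Linux box is running, something like the following rough sketch will do it (this is my sketch, not an official tool: it assumes Python 3.7+, and that wpa_supplicant is installed and on your path – adjust for your distribution):

```python
import re
import subprocess

# A rough check only: ALL wpa_supplicant versions need the KRACK fixes, but
# unpatched 2.4+ is the "even more easily attacked" range noted above.
out = subprocess.run(["wpa_supplicant", "-v"],
                     capture_output=True, text=True).stdout
match = re.search(r"v(\d+)\.(\d+)", out)
if match:
    major, minor = int(match.group(1)), int(match.group(2))
    print(f"wpa_supplicant {major}.{minor} detected")
    if (major, minor) >= (2, 4):
        print("This is in the worst-affected range: "
              "check your distribution for KRACK patches")
else:
    print("Couldn't determine the version - check manually")
```

Whatever it reports, remember the point above: the client alone isn’t enough – the router needs patching too.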

For my money, it’s worth investing in that VPN solution you were always promising yourself, and starting to accept the latency hit that it may cost you.

For more information in an easily readable form, I suggest heading over to The Register, which is pretty much always a good place to start.

For information from the initial publishers of this attack, visit https://www.krackattacks.com/


*well, as normal as it ever was

**hopefully

***as much as you ever could before…

Wow: autonomous agents!

The problem is not the autonomy. The problem isn’t even particularly with the intelligence…

Autonomous, intelligent agents offer some great opportunities for our digital lives*.  There, look, I said it.  They will book meetings for us, negotiate cheap holidays, order our children’s complete school outfit for the beginning of term, and let us know when it’s time to go to the nurse for our check-up.  Our business lives, our personal lives, our family relationships – they’ll all be revolutionised by autonomous agents.  Autonomous agents will learn our preferences, have access to our diaries, pay for items, be able to send messages to our friends.

This is all fantastic, and I’m very excited about it.  The problem is that I’ve been excited about it for nearly 20 years, ever since I was involved in a project around autonomous agents in Java.  It was very neat then, and it’s still very neat now***.

Of course, technology has moved on.  Some of the underlying capabilities are much more advanced now than then.  General availability of APIs, consistency of data formats, better Machine Learning (or Artificial Intelligence, if you must), less computationally expensive cryptography, and the rise of blockchains and distributed ledgers: they all bring the ability for us to build autonomous agents closer than ever before.  We talked about disintermediation back in the day, and that looked plausible.  We really can build scalable marketplaces now in ways which just weren’t as feasible two decades ago.

The problem, though, isn’t the technology.  It was never the technology.  We could have made the technology work 20 years ago, even if it wasn’t as fast, secure or wide-ranging as it could be today.  It isn’t even vested interests from the large platform players, who arguably own much of this space at the moment – though these interests are much more consolidated than they were when I was first looking at this issue.

The problem is not the autonomy.  The problem isn’t even particularly with the intelligence: you can program as much or as little in as you want, or as the technology allows.  The problem is with the agency.

How much of my life do I want to hand over to what’s basically a ‘bot?  Ignore***** the fact that these things will get hacked******, and assume we’re talking about normal, intended usage.  What does “agency” mean?  It means acting for someone: being their agent – think of what actors’ agents do, for example.  When I engage a lawyer or a builder or an accountant to do something for me, or when an actor employs an agent for that matter, we’re very clear about what they’ll be doing.  This is to protect both me and them from unintended consequences.  There’s a huge legal corpus around defining, in different fields, exactly the scope of work to be carried out by a person or a company who is acting as an agent.  There are contracts, and agreed restitutions – basically punishments – for when things go wrong.  Say that an accountant buys 500 shares in a bank, and then I turn round and say that she never had the authority to do so: if we’ve set up the relationship correctly, it should be entirely clear whether or not she did, and whose responsibility it is to deal with any fall-out from that purchase.

Now think about that in terms of autonomous, intelligent agents.  Write me that contract, and make it equivalent in software and the legal system.  Tell me what happens when things go wrong with the software.  Show me how to prove that I didn’t tell the agent to buy those shares.  Explain to me where the restitution lies.

And these are arguably the simple problems.  How do I rebuild the business reputation that I’ve built up over the past 15 years when my agent posts a tweet about how I use a competitor’s products, when I’m just trialling them for interest?  How does an agent know not to let my wife see the diary entry for my meeting with that divorce lawyer********?  What aspects of my browsing profile are appropriate for suggesting – or even buying – online products or services with my personal or business credit card*********?  And there’s the classic “buying flowers for the mistress and having them sent to the wife” problem**********.

I don’t think we have an answer to these questions: not even close.  You know that virtual admin assistant we’ve been promised in sci-fi movies for decades now: the one with the futuristic haircut who appears as a hologram outside our office?  Holograms – nearly.  Technology behind it – pretty much.  Trust, reputation and agency?  Nowhere near.

 


*I hate this word: “digital”.  Well, not really, but it’s used far too much as a shorthand for “newest technology”**.

**”Digital businesses”.  You mean, unlike all the analogue ones?  Come on.

***this is one of those words that my kids hate me using.  There are two types of word that come into this category: old words and new words.  Either I’m showing how old I am, or I’m trying to be hip****, which is arguably worse.  I can’t win.

****yeah, they don’t say hip.  That’s one of the “old person words”.

*****for now, at least.  Let’s not forget it.

******_everything_  gets hacked*******.

*******I could say “cracked”, but some of it won’t be malicious, and hacking might be positive.

********I’m not.  This is an example.

*********this isn’t even about “dodgy” things I might have been browsing on home time.  I may have been browsing for analyst services, with the intent to buy a subscription: how sure am I that the agent won’t decide to charge these to my personal credit card when it knows that I perform other “business-like” actions like pay for business-related books myself sometimes?

**********how many times do I have to tell you, darling…?

Isolationism

… what’s the fun in having an Internet if you can’t, well, “net” on it?

Sometimes – and I hope this doesn’t come as too much of a surprise to my readers – sometimes, there are bad people, and they do bad things with computers.  These bad things are often about stopping the good things that computers are supposed to be doing* from happening properly.  This is generally considered not to be what you want to happen**.

For this reason, when we architect and design systems, we often try to enforce isolation between components.  I’ve had a couple of very interesting discussions over the past week about how to isolate various processes from each other, using different types of isolation, so I thought it might be interesting to go through some of the different types of isolation that we see out there.  For the record, I’m not an expert on all different types of system, so I’m going to talk some history****, and then I’m going to concentrate on Linux*****, because that’s what I know best.

In the beginning

In the beginning, computers didn’t talk to one another.  It was relatively difficult, therefore, for the bad people to do their bad things unless they physically had access to the computers themselves, and even if they did the bad things, the repercussions weren’t very widespread because there was no easy way for them to spread to other computers.  This was good.

Much of the conversation below will focus on how individual computers act as hosts for a variety of different processes, so I’m going to refer to individual computers as “hosts” for the purposes of this post.  Isolation at this level – host isolation – is still arguably the strongest type available to us.  We typically talk about “air-gapping”, where there is literally an air gap – no physical network connection – between one host and another, but we also mean no wireless connection either.  You might think that this is irrelevant in the modern networking world, but there are classes of usage where it is still very useful, the most obvious being for Certificate Authorities, where the root certificate is so rarely accessed – and so sensitive – that there is good reason not to connect the host on which it is stored to any other computer, and to use other means, such as smart-cards, a printer, or good old pen and paper to transfer information from it.

And then…

And then came networks.  These allow hosts to talk to each other.  In fact, by dint of the Internet, pretty much any host can talk to any other host, given a gateway or two.  So along came network isolation to try to stop that.  Network isolation is basically trying to re-apply host isolation, after people messed it up by allowing hosts to talk to each other******.

Later, some smart alec came up with the idea of allowing multiple processes to be on the same host at the same time.  The OS and kernel were trusted to keep these separate, but sometimes that wasn’t enough, so then virtualisation came along, to try to convince these different processes that they weren’t actually executing alongside anything else, but had their own environment to do their own thing.  Sadly, the bad processes realised this wasn’t always true and found ways to get around this, so hardware virtualisation came along, where the actual chips running the hosts were recruited to try to convince the bad processes that they were all alone in the world.  This should work, only a) people don’t always program the chips – or the software running on them – properly, and b) people decided that despite wanting to let these processes run as if they were on separate hosts, they also wanted them to be able to talk to processes which really were on other hosts.  This meant that networking isolation needed to be applied not just at the host level, but at the virtual host level, as well*******.

A step backwards?

Now, in a move which may seem retrograde, it occurred to some people that although hardware virtualisation seemed like a great plan, it was also somewhat of a pain to administer, and introduced inefficiencies that they didn’t like: e.g. using up lots of RAM and lots of compute cycles.  These were often the same people who were of the opinion that processes ought to be able to talk to each other – what’s the fun in having an Internet if you can’t, well, “net” on it?  Now we, as security folks, realise how foolish this sounds – allowing processes to talk to each other just encourages the bad people, right? – but they won the day, and containers came along. Containers allow lots of processes to be run on a host in a lightweight way, and rely on kernel controls – mainly namespaces – to ensure isolation********.  In fact, there’s more you can do: you can use techniques like system call trapping to intercept the things that processes are attempting and stop them if they look like the sort of things they shouldn’t be attempting*********.
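
If you’d like to see namespace isolation in action, here’s a hedged little sketch – Python driving util-linux’s unshare(1), which is an assumption about your tooling, and you’ll need root, Linux and iproute2.  The shell it launches believes it’s PID 1 on a machine whose only network interface is a downed loopback device:

```python
import subprocess

# Minimal sketch of namespace-based isolation.  Requires root and unshare(1)
# from util-linux.  The child shell gets fresh PID, mount and network
# namespaces: it sees itself as PID 1, with an empty network stack.
subprocess.run([
    "unshare",
    "--fork", "--pid", "--mount-proc",  # new PID namespace, with its own /proc
    "--net",                            # new, empty network namespace
    "sh", "-c", 'echo "my PID appears to be $$"; ip link show',
])
```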

And, of course, you can write frameworks at the application layer to try to control what the different components of an application system can do – that’s basically the highest layer, and you’re just layering applications on applications at this point.

Systems thinking

So here’s where I get the chance to mention one of my favourite topics: systems.  As I’ve said before, by “system” here I don’t mean an individual computer (hence my definition of host, above), but a set of components that work together.  The thing about isolation is that it works best when applied to a system.

Let me explain.  A system, at least as I’d define it for the purposes of this post, is a set of components that work together but don’t have knowledge of external pieces.  Most importantly, they don’t have knowledge of the different layers below them.  Systems may impose isolation on applications at higher layers, because they provide abstractions which allow those higher layers to ignore them, but by virtue of that, systems aren’t – or shouldn’t be – aware of the layers below them.

A simple description of the layers – and it doesn’t always hold, partly because networks are tricky things, and partly because there are various ways to assemble the stack – may look like this.

Application (top layer)
Container
System call trapping
Kernel
Hardware virtualisation
Networking
Host (bottom layer)

As I intimated above, this is a (gross) simplification, but the basic rule holds: you can enforce isolation upwards in the layers of the stack, but you can’t enforce it downwards.  Lower layer isolation is therefore generally stronger than higher layer isolation.  This shouldn’t come as a huge surprise to anyone who’s used to considering network stacks – the principle is the same – but it’s helpful to lay out and explain the principles from time to time, along with the implications for when you’re designing and architecting.
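
To make the rule concrete, here’s a toy sketch – nothing more than the stack above rendered as code, and certainly not a real policy engine:

```python
# Toy model of the (grossly simplified) stack above: index 0 is the top layer.
STACK = [
    "application",
    "container",
    "system call trapping",
    "kernel",
    "hardware virtualisation",
    "networking",
    "host",
]

def can_enforce_isolation(enforcer: str, target: str) -> bool:
    """A layer can only enforce isolation on layers above it in the stack."""
    return STACK.index(enforcer) > STACK.index(target)

assert can_enforce_isolation("kernel", "container")      # kernel isolates containers
assert not can_enforce_isolation("container", "kernel")  # but never the reverse
```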

Because if you are considering trust models and are defining trust domains, you need to be very, very careful about defining whether – and how – these domains spread across the layer boundaries.  If you miss a boundary out when considering trust domains, you’ve almost certainly messed up, and need to start again.  Trust domains are important in this sort of conversation because the boundaries between trust domains are typically where you want to be able to enforce and police isolation.

The conversations I’ve had recently basically ran into problems because what people really wanted to do was apply lower layer isolation from layers above which had no knowledge of the bottom layers, and no way to reach into the control plane for those layers.  We had to remodel, and I think that we came up with some sensible approaches.  It was as I was discussing these approaches that it occurred to me that it would have been a whole lot easier to discuss them if we’d started out with a discussion of layers: hence this blog post.  I hope it’s useful.

 


*although they may well not be, because, as I’m pretty sure I’ve mentioned before on this blog, the people trying to make the computers do the good things quite often get it wrong.

**unless you’re one of the bad people. But I’m pretty sure they don’t read this blog, so we’re OK***.

***if you are a bad person, and you read this blog,  would you please mind pretending, just for now, that you’re a good person?  Thank you.  It’ll help us all sleep much better in our beds.

****which I’m absolutely going to present in an order that suits me, and generally neglect to check properly.  Tough.

*****s/Linux/GNU Linux/g; Natch.

******for some reason, this seemed like a good idea at the time.

*******for those of you who are paying attention, we’ve got to techniques like VXLAN and SR-IOV.

********kernel purists will try to convince you that there’s no mention of containers in the Linux kernel, and that they “don’t really exist” as a concept.  Try downloading the kernel source and doing a search for “container” if you want some ammunition to counter such arguments.

*********this is how SELinux works, for instance.

Staying on topic – speaking at conferences

Just to be entirely clear: I hate product pitches.

As I mentioned last week, I’ve recently attended the Open Source Summit and Linux Security Summit.  I’m also currently submitting various speaking sessions to various different upcoming events, and will be travelling to at least one more this year*.  So conferences are on my mind at the moment.  There seem to be four main types of conference:

  1. industry – often combined with large exhibitions, the most obvious of these in the security space would be Black Hat and RSA.  Sometimes, the exhibition is the lead partner: InfoSec has a number of conference sessions, but the main draw for most people seems to be the exhibition.
  2. project/language – often associated with Open Source, examples would be Linux Plumbers Conference or the Openstack Summit.
  3. company – many companies hold their own conferences, inviting customers, partners and employees to speak.  The Red Hat Summit is a classic in this vein, but Palo Alto has Ignite, and companies like Gartner run focussed conferences through the year.  The RSA Conference may have started out like this, but it’s now so generically security that it doesn’t seem to fit**.
  4. academic – mainly a chance for academics to present papers, and some of these overlap with industry events as well.

I’ve not been to many of the academic type, but I get to a smattering of the other types each year, and there’s something that annoys me about them.

Before I continue, though, a little question: why do people go to conferences?  Here are the main reasons*** that I’ve noticed****:

  • they’re a speaker
  • they’re an exhibitor with a conference pass (rare, but it happens, particularly for sponsors)
  • they want to find out more about particular technologies (e.g. containers or VM orchestrators)
  • they want to find out more about particular issues and approaches (e.g. vulnerabilities)
  • they want to get career advice
  • they fancy some travel, managed to convince their manager that this conference was vaguely relevant and got the travel approval in before the budget collapsed*****
  • they want to find out more about specific products
  • the “hallway track”.

A bit more about the last two of these – in reverse order.

The “hallway track”

I’m becoming more and more convinced that this is often the most fruitful reason for attending a conference.  Many conferences have various “tracks” to help attendees decide what’s most relevant to them.  You know the sort of thing: “DevOps”, “Strategy”, “Tropical Fish”, “Poisonous Fungi”.  Well, the hallway track isn’t really a track: it’s just what goes on in hallways: you meet someone – maybe at the coffee stand, maybe at a vendor’s booth, maybe asking questions after a session, maybe waiting in the queue for conference swag – and you start talking.  I used to feel guilty when this sort of conversation led me to miss a session that I’d flagged as “possibly of some vague interest” or “might take some notes for a colleague”, but frankly, if you’re making good technical or business contacts, and increasing your network in a way which is beneficial to your organisation and/or career, then knock yourself out*******: and I know that my boss agrees.

Finding out about specific products

Let’s be clear.  The best place to do this is usually at a type 2 or a type 3 conference.  Type 3 conferences are often designed largely to allow customers and partners to find out the latest and greatest details about products, services and offerings, and I know that these can be very beneficial.  Bootcamp-type days, workshops and hands-on labs are invaluable for people who want to get first-hand, quick and detailed access to product details in a context outside of their normal work pattern, where they can concentrate on just this topic for a day or two.  In the Open Source world, it’s more likely to be a project, rather than a specific vendor’s product, because the Open Source community is generally not overly enamoured of commercial product pitches.  Which leads me to my main point: product pitches.

Product pitches – I hate product pitches

Just to be entirely clear: I hate product pitches.  I really, really do.  Now, as I pointed out in the preceding paragraph, there’s a place for learning about products.  But it’s absolutely not in a type 1 conference.  But that’s what everybody does – even (and this is truly horrible) in keynotes.  Now, I really don’t mind too much if a session title reads something like “Using Gutamaya’s Frobnitz for token ring network termination” – because then I can ignore it if it’s irrelevant to me.  And, frankly, most conference organisers outside type 3 conferences actively discourage that sort of thing, as they know that most people don’t come to those types of conferences to hear them.

So why do people insist on writing session titles like “The problem of token ring network termination – new approaches” and then pitching their product?  They may spend the first 10 minutes (if you’re really, really lucky) talking about token ring network termination, but the problem is that they’re almost certain to spend just one slide on the various approaches out there before launching into a commercial pitch for Frobnitzes********* for the entirety of the remaining time.  Sometimes this is thinly veiled as a discussion of a Proof of Concept or customer deployment, but is a product pitch nevertheless: “we solved this problem by using three flavours of Frobnitzem, and the customer was entirely happy, with a 98.37% reduction in carpet damage due to token ring leakage.”

Now, I realise that vendors need to sell products and/or services.  But I’m convinced that the way to do this is not to pitch products and pretend that you’re not.  Conference attendees aren’t stupid**********: they know what you’re doing.  Don’t be so obvious.  How about actually talking about the various approaches to token ring network termination, with the pluses and minuses, and a slide at the end in which you point out that Gutamaya’s solution, Frobnitz, takes approach y, and has these capabilities?  People will gain useful technical knowledge!  Why not talk about that Proof of Concept, what was difficult and how there were lessons to be learned from your project – and then have a slide explaining how Frobnitz fitted quite well?  People will take away lessons that they can apply to their project, and might even consider Gutamaya’s Frobnitz range for it.  Even better, you could tell people how it wasn’t a perfect fit (nothing ever is, not really), but you’ve learned some useful lessons, and plan to make some improvements in the next release (“come and talk to me after the session if you’d like to know more”).

For me, at least, being able to show that your company has the sort of technical experts who can really explain and delve into issues which are, of course, relevant to your industry space – and for which you have a pretty good product fit – is much, much more likely to get real interest in you, your product and your company.  I want to learn: not about your product, but about the industry, the technologies and maybe, if you’re lucky, about why I might consider your product next time I’m looking at a problem.  Thank you.


*Openstack Summit in Sydney.  Already getting quite excited: the last Openstack Summit I attended was interesting, and it’s been a few years since I was in Sydney.  Nice time of the year…

**which is excellent – as I’ll explain.

***and any particular person going to any particular conference may hit more than one of these.

****I’d certainly be interested about what I’ve missed.  I considered adding “they want to collect lots of swag”, but I really hope that’s not one.

*****to be entirely clear: I don’t condone this particular one******.

******particularly as my boss has been known to read this blog.

*******don’t, actually.  I’ve concussed myself before – not at a conference, to be clear – and it’s not to be recommended.  I remember it as feeling like being very, very jetlagged and having to think extra hard about things that normally would come to me immediately********.

********my wife tells me I just become very, very vague.  About everything.

*********I’ve looked it up: apparently the plural should be “Frobnitzem”.  You have my apologies.

**********though if they’ve been concussed, they may be acting that way temporarily.

Next generation … people

… security as a topic is one which is interesting, fast-moving and undeniably sexy…

DISCLAIMER/STATEMENT OF IGNORANCE: a number of regular readers have asked why I insist on using asterisks for footnotes, and whether I could move to actual links, instead.  The official reason I give for sticking with asterisks is that I think it’s a bit quirky and I like that, but the real reason is that I don’t know how to add internal links in WordPress, and can’t be bothered to find out.  Apologies.

I don’t know if you’ve noticed, but pretty much everything out there is “next generation”.  Or, if you’re really lucky “Next gen”.  What I’d like to talk about this week, however, is the actual next generation – that’s people.  IT people.  IT security people.  I was enormously chuffed* to be referred to on an IRC channel a couple of months ago as a “greybeard”***, suggesting, I suppose, that I’m an established expert in the field.  Or maybe just that I’m an old fuddy-duddy***** who ought to be put out to pasture.  Either way, it was nice to come across young(er) folks with an interest in IT security******.

So, you, dear reader, and I, your beloved protagonist, both know that security as a topic is one which is interesting, fast-moving and undeniably******** sexy – as are all its proponents.  However, it seems that this news has not yet spread as widely as we would like – there is a worldwide shortage of IT security professionals, as a quick check on your search engine of choice for “shortage of it security professionals” will tell you.

Last week, I attended the Open Source Summit and Linux Security Summit in LA, and one of the keynotes, as it always seems to be, was Jim Zemlin (head of the Linux Foundation) chatting to Linus Torvalds (inventor of, oh, I don’t know).  Linus doesn’t have an entirely positive track record in talking about security, so it was interesting that Jim specifically asked him about it.  Part of Linus’ reply was “We need to try to get as many of those smart people before they go to the dark side [sic: I took this from an article in The Register, and they didn’t bother to capitalise.  I mean: really?] and improve security that way by having a lot of developers.”  Apart from the fact that anyone who references Star Wars in front of a bunch of geeks is onto a winner, Linus had a pretty much captive audience just by nature of who he is, but even given that, this got a positive reaction.  And he’s right: we do need to make sure that we catch these smart people early, and get them working on our side.

Later that week, at the Linux Security Summit, one of the speakers asked for a show of hands to find out the number of first-time attendees.  I was astonished to note that maybe half of the people there had not come before.  And heartened.  I was also pleased to note that a good number of them appeared fairly young*********.  On the other hand, the number of women and other under-represented demographics seemed worse than in the main Open Source Summit, which was a pity – as I’ve argued in previous posts, I think that diversity is vital for our industry.

This post is wobbling to an end without any great insights, so let me try to come up with a couple which are, if not great, then at least slightly insightful:

  1. we’ve got a job to do.  The industry needs more young (and diverse) talent: if you’re in the biz, then go out, be enthusiastic, show what fun it can be.
  2. while showing people how much fun security can be, encourage them also to do a search for “IT security median salaries comparison”.  It’s amazing how a pay cheque********** can motivate.

*note to non-British readers: this means “flattered”**.

**but with an extra helping of smugness.

***they may have written “graybeard”, but I translate****.

****or even “gr4yb34rd”: it was one of those sorts of IRC channels.

*****if I translate each of these, we’ll be here for ever.  Look it up.

******I managed to convince myself******* that their interest was entirely benign though, as I mentioned above, it was one of those sorts of IRC channels.

*******the glass of whisky may have helped.

********well, maybe a bit deniably.

*********to me, at least.  Which, if you listen to my kids, isn’t that hard.

**********who actually gets paid by cheque (or check) any more?

Blockchain: should we all play?

Don’t increase the technical complexity of a process just because you’ve got a cool technology that you could throw at it.

I’m attending Open Source Summit 2017* this week in L.A., and went to an interesting “fireside chat” on blockchain moderated by Brian Behlendorf of Hyperledger, with Jairo*** of Wipro and Siva Kannan of Gem.  It was a good discussion – fairly basic in terms of the technical side, and some discussion of identity in blockchain – but there was one particular part of the session that I found interesting and which I thought was worth some further thought.  As in my previous post on this topic, I’m going to conflate blockchain with Distributed Ledger Technologies (DLTs) for simplicity.

Siva presented three questions to ask when considering whether a process is a good candidate for moving to the blockchain.  There’s far too much bandwagon-jumping around blockchain: people assume that all processes should be blockchained.  I was therefore very pleased to see this come up as a topic.  I think it’s important to spend some time looking at when it makes sense to use blockchains, and when it doesn’t.  To paraphrase Siva’s points:

  1. is the process time-consuming?
  2. is the process multi-partite, made up of multiple steps?
  3. is there a trust problem within the process?

I liked these as a starting point, and I was glad that there was a good conversation around what a trust problem might mean.  I’m not quite sure it went far enough, but there was time pressure, and it wasn’t the main thrust of the conversation.  Let’s spend some time looking at why I think the points above are helpful as tests, and then I’m going to add another.

Is the process time-consuming?

The examples that were presented were two of the classic ones used when we’re talking about blockchain: inter-bank transfer reconciliation and healthcare payments.  In both cases, there are multiple parties involved, and the time it takes for completion seems completely insane for those of us used to automated processes: in the order of days.  This is largely because the processes are run by central authorities when, from the point of view of the process itself, the transactions are actually between specific parties, and don’t need to be executed by those authorities, as long as everybody trusts that the transactions have been performed fairly.  More about the trust part below.

Is the process multi-partite?

If the process is simple, and requires a single step or transaction, there’s very little point in applying blockchain technologies to it.  The general expectation for multi-partite processes is that they involve multiple parties, as well as multiple parts.  If there are only a few steps in a transaction, or very few parties involved, then there are probably easier technological solutions for it.  Don’t increase the technical complexity of a process just because you’ve got a cool technology that you can throw at it*****.

Is there a trust problem within the process?

Above, I used the phrase “as long as everybody trusts that the transactions have been performed fairly”******.  There are three interesting words in this phrase*******: “everybody”, “trusts” and “fairly”.  I’m going to go through them one by one:

  • everybody: this might imply full transparency of all transactions to all actors in the system, but we don’t need to assume that – that’s part of the point of permissioned blockchains.  It may be that only the actors involved in the particular process can see the details, whereas all other actors are happy that they have been completed correctly.  In fact, we don’t even need to assume that the actors involved can see all the details: secure multi-party computation means that only restricted amounts of information need to be exposed********.
  • trusts: I’ve posted on the topic of trust before, and this usage is a little less tight than I’d usually like.  However, the main point is to ensure sufficient belief that the process meets expectations to be able to accept it.
  • fairly: as anyone with children knows, this is a loaded word.  In this context, I mean “according to the rules agreed by the parties involved – which may include parties not included in the transaction, such as a regulatory body – and encoded into the process”.

This point about encoding rules into a process is a really, really major one, to which I intend to return at a later date, but for now let’s assume (somewhat naively, admittedly) that this is doable and risk-free.

One more rule: is there benefit to all the stakeholders?

This was a rule that I suggested, and which caused some discussion.  It seems to me that there are some processes where a move to a blockchain may benefit certain parties, but not others.  For example, the amount of additional work required by a small supplier of parts to a large automotive manufacturer might be such that there’s no obvious benefit to the supplier, however much benefit is manifestly applicable to the manufacturer.  At least one of the panellists was strongly of the view that there will always be benefit to all parties, but I’m not convinced that the impact of implementation will always outweigh such benefit.
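
Purely to make the tests explicit – this is my own sketch, not anything the panel endorsed – the four questions boil down to a simple checklist:

```python
# Toy checklist: a process is a candidate for blockchain/DLT only if it passes
# all four tests from this post.  Real analysis is, of course, rather harder.
def blockchain_candidate(time_consuming: bool,
                         multi_partite: bool,
                         trust_problem: bool,
                         benefits_all_stakeholders: bool) -> bool:
    return all((time_consuming, multi_partite,
                trust_problem, benefits_all_stakeholders))

# Inter-bank transfer reconciliation arguably ticks every box...
assert blockchain_candidate(True, True, True, True)
# ...a quick, two-party transaction over an existing trusted channel does not.
assert not blockchain_candidate(False, False, False, True)
```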

Conclusion: blockchain is cool, but…

… it’s not a perfect fit for every process.  Organisations – and collections of organisations – should always carefully consider how good a fit blockchain or DLT may be before jumping to a decision which may be more costly and less effective than they might expect from the hype.


*was “LinuxCon and ContainerCon”**.

**was “LinuxCon”.

***he has a surname, but I didn’t capture it in my notes****.

****yes, I write notes!

*****this is sometimes referred to as the “hammer problem” (if all you’ve got is a hammer, then everything looks like a nail)

******actually, I didn’t: I missed out the second “as”, and so had to correct it in the first appearance of the phrase.

*******in the context of this post.  They’re all interesting in their own way, I know.

********this may sound like magic.  Given my grasp of the mathematics involved, I have to admit that it might as well be*********.

*********thank you Arthur C. Clarke.

Thinking like a (systems) architect

“…I know it when I see it, and the motion picture involved in this case is not that.” – Mr Justice Stewart.

My very first post on this blog, some six months ago*, was entitled “Systems security – why it matters”, and it turns out that posts where I talk** about architecture are popular, so I thought I’d go a bit further into what I think being an architect is about.  First of all, I’d like to reference two books which helped me come to some sort of understanding about the art of being an architect.  I read them a long time ago****, but I still dip into them from time to time.  I’m going to link to the publisher’s***** website, rather than to any particular bookseller:

What’s interesting about them is that they both have multiple points of view expressed in them: some contradictory – even within each book.  And this rather reflects the fact that I believe that being a systems architect is an art, or a discipline.  Different practitioners will have different views about it.  You can talk about Computer Science being a hard science, and there are parts of it which are, but much of software engineering (lower case intentional) goes beyond that.  The same, I think, is even more true for systems architecture: you may be able to grok what it is once you know it, but it’s very difficult to point to something – even a set of principles – and say, “that is systems architecture”.  Sometimes, the easiest way to define something is by defining what it’s not: search for “I know it when I see it, and the motion picture involved in this case is not that.”******

Let me, however, try to give some examples of the sort of things you should expect to see when someone (or a group of people) is doing good systems architecture:

  • pictures: if you can’t show the different components of a system in a picture, I don’t believe that you can fully describe what each does, or how they interact.  If you can’t separate them out, you don’t have a properly described system, so you have no architecture.  I know that I’m heavily visually oriented, but for me this feels like a sine qua non********.
  • a data description: if you don’t know what data is in your system, you don’t know what it does.
  • an entity description: components, users, printers, whatever: you need to know what’s doing what so that you can describe what the what is that’s being done to it, and what for*********.
  • an awareness of time: this may sound like a weird one, but all systems (of any use) process data through time.  If you don’t think about what will change, you won’t understand what will do the changing, and you won’t be able to consider what might go wrong if things get changed in ways you don’t expect, or by components that shouldn’t be doing the changing in the first place.
  • some thinking on failure modes: I’ve said it before, and I’ll say it again: “things will go wrong.”  You can’t be expected to imagine all the things that might go wrong, but you have a responsibility to consider what might happen to different components and data – and therefore to the operation of the system as a whole – if********** they fall over.
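
None of this needs fancy tooling, by the way.  As a purely hypothetical sketch – all the names below are invented – most of the list above can even be captured as a structure in code: fields that you can’t forget to fill in.

```python
# Hypothetical sketch only: the checklist above as a data structure, so that
# a component description with a missing field simply won't construct.
from dataclasses import dataclass

@dataclass
class Component:
    name: str                # entity description: what's doing what
    data_handled: list       # data description: what data is in the system
    talks_to: list           # enough information to draw the picture
    changes_over_time: str   # awareness of time: what will change, and when
    failure_modes: list      # things WILL go wrong: what happens when they do

web_frontend = Component(
    name="web-frontend",                                  # invented example
    data_handled=["session tokens", "user preferences"],
    talks_to=["auth-service", "content-store"],
    changes_over_time="session keys rotate; cached content expires",
    failure_modes=["auth-service unreachable", "stale cache served"],
)
```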

There are, of course, some very useful tools and methodologies (the use of UML views is a great example) which can help you with all of these.  But you don’t need to be an expert in all of them – or even any one of them – to be a good systems architect.

One last thing I’d add, though, and I’m going to call it the “bus and amnesiac dictum”***********:

  • In six months’ time, you’ll have forgotten the details or been hit by a bus: document it.  All of it.

You know it makes sense.


*this note is for nothing other than to catch those people who go straight to this section, hoping for a summary of the main article.  You know who you are, and you’re busted.

**well, write, I guess, but it feels like I’m chatting at people, so that’s how I think of it***

***yes, I’m going out of my way to make the notes even less info-filled than usual.  Deal with it.

****seven years is a long time, right?

*****almost “of course”, though I believe they’re getting out of the paper book publishing biz, sadly.

******this is a famous comment from a U.S. case called “Jacobellis v. Ohio”, which I’m absolutely not going to quote in full here, because although it might generate quite a lot of traffic, it’s not the sort of traffic I want on this blog*******.

*******I did some searching and found the word: it’s apophasis.  I love that word.  Discovered it during some study once, forgot it.  Glad to have re-found it.

********I know: Greek and Latin in one post.  Sehr gut, ja?

*********I realise that this is a complete mess of a sentence.  But it does have a charm to it, yes?  And you know what it means, you really do.

**********when.  I meant “when”.

***********more Latin.

Why I love technical debt

… if technical debt can be named, then it can be documented.

Not necessarily a title you’d expect for a blog post, I guess*, but I’m a fan of technical debt.  There are two reasons for this: a Bad Reason[tm] and a Good Reason[tm].  I’ll be up front about the Bad Reason first, and then explain why even that isn’t really a reason to love it.  I’ll then tackle the Good Reason, and you’ll nod along in agreement.

The Bad Reason I love technical debt

We’ll get this out of the way, then, shall we?  The Bad Reason is that, well, there’s just lots of it, it’s interesting, it keeps me in a job, and it always provides a reason, as a security architect, for me to get involved in** projects that might give me something new to look at.  I suppose those aren’t all bad things, and it can also be a bit depressing, because there’s always so much of it, it’s not always interesting, and sometimes I need to get involved even when I might have better things to do.

And what’s worse is that it almost always seems to be security-related, and it’s always there.  That’s the bad part.

Security, we all know, is the piece that so often gets left out, or tacked on at the end, or done in half the time that it deserves, or done by people who have half an idea, but don’t quite fully grasp it.  I should be clear at this point: I’m not saying that this last reason is those people’s fault.  That people know they need security is fantastic.  If we (the security folks) or we (the organisation) haven’t done a good enough job in making security resources – whether people, training or sufficient visibility – available to those people who need it, then the fact that they’re trying is great, and something we can work on.  Let’s call that a positive.  Or at least a reason for hope***.

The Good Reason I love technical debt

So let’s get on to the other reason: the legitimate reason.  I love technical debt when it’s named.

What does that mean?

So, we all get that technical debt is a bad thing.  It’s what happens when you make decisions for pragmatic reasons which are likely to come back and bite you later in a project’s lifecycle.  Here are a few classic examples that relate to security:

  • not getting round to applying authentication or authorisation controls on APIs which might at some point be public;
  • lumping capabilities together so that it’s difficult to separate out appropriate roles later on;
  • hard-coding roles in ways which don’t allow for customisation by people who may use your application in different ways to those that you initially considered;
  • hard-coding cipher suites for cryptographic protocols, rather than putting them in a config file where they can be changed or selected later.

There are lots more, of course, but those are just a few which jump out at me, and which I’ve seen over the years.  Technical debt means making decisions which will mean more work later on to fix them.  And that can’t be good, can it?

Well, there are two words in the preceding paragraph or two which should make us happy: they are “decisions” and “pragmatic”.  Because in order for something to be named technical debt, I’d argue, it needs to have been subject to conscious decision-making, and for trade-offs to have been made – hopefully for rational reasons.  Those reasons may be many and various – lack of qualified resources; project deadlines; lack of sufficient requirement definition – but if they’ve been made consciously, then the technical debt can be named, and if technical debt can be named, then it can be documented.

And if they’re documented, then we’re half-way there.  As a security guy, I know that I can’t force everything that goes out of the door to meet all the requirements I’d like – but the same goes for the High Availability gal, the UX team, the performance folks, et al.  What we need – what we all need – is for there to exist documentation about why decisions were made, because then, when we return to the problem later on, we’ll know that it was thought about.  And, what’s more, the recording of that information might even make it into product documentation, too.  “This API is designed to be used in a protected environment, and should not be exposed on the public Internet” is a great piece of documentation.  It may not be what a customer is looking for, but at least they know how to deploy the product now, and, crucially, it’s an opportunity for them to come back to the product manager and say, “we’d really like to deploy that particular API in this way: could you please add this as a feature request?”.  Product managers like that.  Very much****.

The best thing, though, is not just that named technical debt is visible technical debt, but that if you encourage your developers to document the decisions in code*****, then there’s a halfway-decent chance that they’ll record some ideas about how this should be done in the future.  If you’re really lucky, they might even add some hooks in the code to make it easier (an “auth” parameter on the API which is unused in the current version, but will make API compatibility so much simpler in new releases; or a cipher entry in the config file which only accepts one option now, but is at least checked by the code).
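
To make that concrete, here’s a hedged sketch of what such hooks might look like – every name in it is invented for illustration:

```python
# Hypothetical sketch of "named debt with hooks": the decisions are documented
# where developers will actually see them, and the extension points already
# exist.  All names are invented for illustration.

# TECH DEBT: only one cipher suite is supported in this release (deadline-
# driven decision).  The config entry and the check below mean that widening
# the set later is a config change, not a redesign.
SUPPORTED_CIPHER_SUITES = {"TLS_AES_256_GCM_SHA384"}

def load_cipher_suite(config: dict) -> str:
    suite = config.get("cipher_suite", "TLS_AES_256_GCM_SHA384")
    if suite not in SUPPORTED_CIPHER_SUITES:
        raise ValueError(f"unsupported cipher suite: {suite}")
    return suite

def call_api(payload: dict, auth=None):
    # TECH DEBT: "auth" is accepted but unused.  This API is designed for a
    # protected environment and should not be exposed on the public Internet;
    # keeping the parameter now makes API compatibility much simpler when
    # authentication lands in a future release.
    if auth is not None:
        raise NotImplementedError("authentication is planned for a future release")
    ...
```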

I’ve been a bit disingenuous, I know, by defining technical debt as named technical debt.  But honestly, if it’s not named, then you can’t know what it is, and until you know what it is, you can’t fix it*******.  My advice is this: when you’re doing a release close-down (or in your weekly stand-up: EVERY weekly stand-up), have an item to record technical debt.  Name it, document it, be proud, sleep at night.

 


*well, apart from the obvious clickbait reason – for which I’m (a little) sorry.

**I nearly wrote “poke my nose into”.

***work with me here.

****if you’re a software engineer/coder/hacker – here’s a piece of advice: learn to talk to product managers like real people, and treat them nicely.  They (the better ones, at least) are invaluable allies when you need to prioritise features or have tricky trade-offs to make.

*****do this.  Just do it.  Documentation which isn’t at least mirrored in code isn’t real documentation******.

******don’t believe me?  Talk to developers.  “Who reads product documentation?”  “Oh, the spec?  I skimmed it.  A few releases back.  I think.”  “I looked in the header file: couldn’t see it there.”

*******or decide not to fix it, which may also be an entirely appropriate decision.

Security for the rest of us (them…)

“I think I’ve just been got by a phishing email…”

I attended Black Hat USA a few weeks ago in Las Vegas*.  I also spent some time at B-Sides LV and DEFCON.  These were my first visits to all of them, and they were interesting experiences.  There were some seriously clever people talking about some seriously complex things, some of which were way beyond my level of knowledge.   There were also some seriously odd things and odd people**.  There was one particular speaker who did a great job, and whose keynote made me think: Alex Stamos, CSO of Facebook.

The reason that I want to talk about Stamos’ talk is that I got a phone call a few minutes back from a member of my family.  It was about his iCloud account, which he was having problems accessing.  Now: I don’t use Apple products***, so I wasn’t able to help.  But the background was the interesting point.  I’d had a call from the same family member last week.  He’s not … techno-savvy.  You know the one I’m talking about: that family member.  He phoned me in something of a fluster.

“I think I’ve just been got by a phishing email,” he started.

Now: this is a win.  Somebody – whether me or the media – has got him to understand what a phishing email is.  I’m not saying he could spell it correctly, mind you – or that he’s not going to get hit by one – but at least he knows.

“OK,” I said.

“It said that it was from Apple, and if I didn’t change my password within 72 hours, I’d lose all of my data,” he explained.

Ah, I thought, one of those.

“So I clicked on the link and changed my password.  But I realised after about 5 minutes and changed it again,” he continued.

“Where did you change it that time?” I asked.

“On the Apple site.”

“apple.com?”

“Yes.”

“Then you’re probably OK.”  I gave him some advice on things to check, and suggested ringing Apple and maybe his bank to let them know.  I also gave him the Stern Talk[tm] that we’ve all given users – the one about never clicking through a link on an email, and always entering it by hand.*****  He called me back a few hours later to tell me that the guy he’d spoken to at Apple had reassured him that his bank details weren’t in danger, and that a subsequent notification he’d got that someone was trying to use his account from an unidentified device was a good sign, because it meant that the extra layers of security that Apple had put in place were doing their job.  He was significantly (and rightly) relieved.

“So what has this to do with Stamos’ keynote?” you’re probably asking.  Well, Stamos talked about how many of the attacks and vulnerabilities that we worry about much of the time – zero days, privilege escalations, network segment isolation – make up the tiniest tip of the huge pyramid of security issues that affect our users.  Most of the problems are around misuse of accounts or services.  And most of the users in the world aren’t uberhackers or even script kiddies – they’re not even people like those in the audience****** – but people with sub-$100******* smartphones.

And he’s right.  We need to think about these people, too.  I’m not saying that we shouldn’t worry about all the complex and scary technical issues that we’re paid to understand, fix and mitigate.  They are important, and if we don’t fix them, we’re in for a world of pain.  But our jobs – what we get paid for – should also include thinking about the other people: people who Facebook and Apple make a great deal of money from, and who they quite rightly care about.  The question is: do we, the rest of the industry?  And how are we going to know that we’re thinking like a 68-year-old woman in India or a 15-year-old boy in Brazil?  (Hint: part of the answer is around diversity in our industry.)

Apple didn’t do too bad a job, I think – though my family member is still struggling with the impact of the password reset.  And the organisation I talked about in my previous post on the simple things we should do absolutely didn’t.  So, from now on, I’m going to try to think a little harder about what impact the recommendations, architectures and designs I come up with might have on the “hidden users” – not the sysadmins, not the developers, not the expert users, but people like my family members.  We need to think about security for them just as much as for security for people like us.


*weird place, right?  And hot.  Too hot.

**I walked out of one session at DEFCON after six minutes as it was getting more and more difficult to resist the temptation to approach the speaker at the podium and punch him on the nose.

***no, I’m not going to explain.  I just don’t: let’s leave it at that for now, OK?  I’m not judging you if you do.****

****of course I’m judging you.  But you’ll be fine.

*****clearly whoever had explained about phishing attacks hadn’t done quite as good a job as I’d hoped.

******who, he seemed to assume, were mainly Good Guys & Gals[tm].

*******approximately sub-€85 or sub-£80 at time of going to press********: please substitute your favoured currency here and convert as required.

********I’m guessing around 0.0000000000000001 bitcoins.  I don’t follow the conversion rate, to be brutally honest.

 

 

The simple things, sometimes…

I (re-)learned an important lesson this week: if you’re an attacker, start at the front door.

This week I’ve had an interesting conversation with an organisation with which I’m involved*.  My involvement is as a volunteer, and has nothing to do with my day job – in other words, I have nothing to do with the organisation’s security.  However, I got an email from them telling me that in order to perform a particular action, I should now fill in an online form, which would then record the information that they needed.

So this week’s blog entry will be about entering information on an online form.  One of the simplest tasks that you might want to design – and secure – for any website.  I wish I could say that it’s going to be a happy tale.

I had a look at this form, and then I looked at the URL they’d given me.  It wasn’t a fully qualified URL, in that it had no protocol component, so I copied and pasted it into a browser to find out what would happen.  I had a hope that it might automatically redirect to an https-served page.  It didn’t.  It was an http-served page.

Well, not necessarily so bad, except that … it wanted some personal information.  Ah.

So, I cheated: I changed the http:// … to an https:// and tried again**.  And got an error.  The certificate was invalid.  So even if they changed the URL, it wasn’t going to help.
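
For what it’s worth, the whole check can be automated in a few lines.  A rough sketch of what I did by hand, assuming the third-party requests library and an invented URL:

```python
# Rough sketch of the checks described above: is the form served over HTTPS,
# and does the certificate actually validate?  The URL is invented, and this
# assumes the third-party requests library (pip install requests).
import requests

url = "http://example.org/membership-form"  # hypothetical form address

resp = requests.get(url, allow_redirects=True, timeout=10)
if not resp.url.startswith("https://"):
    print("Form is served over plain HTTP: submissions transit in the clear")

try:
    # requests verifies server certificates by default
    requests.get(url.replace("http://", "https://", 1), timeout=10)
    print("HTTPS endpoint exists and its certificate validated")
except requests.exceptions.SSLError as err:
    print(f"HTTPS is there, but the certificate is invalid: {err}")
except requests.exceptions.ConnectionError:
    print("No HTTPS endpoint at all")
```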

So what did I do?  I got in touch with my contact at the organisation, advising them that there was a possibility that they might be in breach of their obligations under Data Protection legislation.

I got a phone call a little later.  Not from a technical person – though there was a techie in the background.  They said that they’d spoken with the IT and security departments, and that there wasn’t a problem.  I disagreed, and tried to explain.

The first argument was whether there was any confidential information being entered.  They said that there was no linkage between the information being entered and the confidential information held in a separate system (I’m assuming database). So I stepped back, and asked about the first piece of information requested on the form: my name.  I tried a question: “Could the fact that I’m a member of this organisation be considered confidential in any situation?”

“Yes, it could.”

So, that’s one issue out of the way.

But it turns out that the information is stored encrypted on the organisation’s systems.  “Great,” I said, “but while it’s in transit, while it’s being transmitted to those systems, then somebody could read it.”

And this is where communication stopped.  I tried to explain that unless the information from the form is transmitted over https, then people could read it.  I tried to explain that if I sent it over my phone, then people at my mobile provider could read it.  I tried a simple example: I tried to explain that if I transmitted it from a laptop in a Starbucks, then people who run the Starbucks systems – or even possibly other Starbucks customers – could see it.  But I couldn’t get through.

In the end, I gave up.  It turns out that I can avoid using the form if I want to.  And the organisation is of the firm opinion that it’s not at risk: that all the data that is collected is safe.  It was quite clear that I wasn’t going to have an opportunity to argue this with their IT or security people: although I did try to explain that this is an area in which I have some expertise, they’re not going to let any Tom, Dick or Harry*** bother their IT people****.

There’s no real end to this story, other than to say that sometimes it’s the small stuff we need to worry about.  The issues that, as security professionals, we feel are cut and dried, are sometimes the places where there’s still lots of work to be done.  I wish it weren’t the case, because frankly, I’d like to spend my time educating people on the really tricky things, and explaining complex concepts around cryptographic protocols, trust domains and identity, but I (re-)learned an important lesson this week: if you’re an attacker, start at the front door.  It’s probably not even closed: let alone locked.


*I’m not going to identify the organisation: it wouldn’t be fair or appropriate.  Suffice to say that they should know about this sort of issue.

**I know: all the skillz!

***Or “J. Random User”.  Insert your preferred non-specific identifier here.

****I have some sympathy with this point of view: you don’t want all of their time taken up by random “experts”.  The problem comes when there really _are_ problems – and when the people calling actually do know their thing.