No, really: what’s the point of it? I’m currently at the OpenStack Summit in Sydney, and was just thrown out of the exhibition area as it’s closed for preparations for an event. A fairly large gentleman in a dark-coloured uniform made it clear that if I didn’t have a particular sticker on my badge, then I wasn’t allowed to stay. Now, as Red Hat* is an exhibitor, and our booth manager is a buddy** of mine, she immediately offered me the relevant sticker, but as I needed a sit down and a cup of tea***, I made my way to the exit, feeling only slightly disgruntled.
“What’s the point of this story?” you’re probably asking. Well, the point is this: I’m 100% certain that if I’d asked the security guard why he was throwing me out, he wouldn’t have had a good reason. Oh, he’d probably have given me a reason, and it might have seemed good to him, but I’m really pretty sure that it wouldn’t have satisfied me. And I don’t mean the “slightly annoyed, jet-lagged me who doesn’t want to move”, but the rational, security-aware me who can hopefully reason sensibly about such things. There may even have been such a reason, but I doubt that he was privy to it.
My point, then, really comes down to this. We need to be able to explain the reasons for security policies in ways which:
- Can be expressed by those enforcing them;
- Can be understood by uninformed users;
- Can be understood and queried by informed users.
In the first point, I don’t even necessarily mean people: sometimes – often – it’s systems that are doing the enforcing. This makes things even more difficult in some ways: you need to think about UX***** in ways that are appropriate for two very, very different constituencies.
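To make that a little more concrete, here’s a rough sketch (in Python, with entirely made-up names – `PolicyDecision`, `deny`, the policy ID and URL are all hypothetical) of what a denial that carries its own explanation might look like: one field the enforcement point can express, one a completely uninformed user can understand, and one an informed user can follow up and query.

```python
from dataclasses import dataclass

@dataclass
class PolicyDecision:
    """A denial that explains itself at three levels (illustrative only)."""
    policy_id: str       # machine-readable: what the enforcement point can express
    summary: str         # plain language: what an uninformed user needs
    rationale_url: str   # full policy text: what an informed user can query

def deny(decision: PolicyDecision) -> dict:
    # Hypothetical response shape: every refusal ships with its reasons.
    return {
        "allowed": False,
        "policy": decision.policy_id,
        "reason": decision.summary,
        "details": decision.rationale_url,
    }

badge_check = PolicyDecision(
    policy_id="EXPO-ACCESS-017",
    summary="The exhibition area is closed to attendees without an "
            "exhibitor sticker while the event is being prepared.",
    rationale_url="https://example.com/policies/expo-access",
)

print(deny(badge_check))
```

The point of the shape isn’t the code itself: it’s that the explanation is attached to the control at design time, so neither the security guard nor the system has to improvise a reason on the spot.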
I’ve written before about how frustrating it can be when you can’t engage with the people who have come up with a security policy, or implemented a particular control. If there’s no explanation of why a control is in place – the policy behind it – and it’s an impediment to the user******, then it’s highly likely that people will either work around it, or stop using the system.
Unless. Unless you can prove that the security control is relevant, proportionate and beneficial to the user. This is going to be difficult in many cases. But I have seen some good examples, which means that I’m hopeful that we can generally do it if we try hard enough. The problem is that most designers of systems don’t bother trying. Usually they don’t bother to provide any description at all, and as for making that description show the relevance, proportionality and benefit to the user? Rare, very rare.
This is another argument for security folks being more embedded in systems design because, like it or not, your users are part of your system, and they need to be included in the design. Which means that you need to educate your UX people, too, because if you can’t convince them of the need to do security, you’re sure as heck not going to be able to convince your users.
And when you work with them on the personae and use case modelling, make sure that you include users who are like you: knowledgeable security experts who want to know why they should submit themselves to the controls that you’re including in the system.
Now, they’ve opened up the exhibitor hall again: I’ve got to go and grab myself a drink before they start kicking people out again.
*my employer.
**she’s American.
***I’m British****.
****and the conference is in Australia, which meant that I was actually able to get a decent cup of the same.
*****this, I believe, stands for “User eXperience”. I find it almost unbearably ironic that the people tasked with communicating clearly to us can’t fully grasp the idea of initial letters standing for something.
******and most security controls are.