Learn to hack online – h4x0rz and pros

Removing these videos hinders defenders much more significantly than it impairs the attackers.

Over the past week, there has been a minor furore[1] over YouTube’s decision to block certain “hacking” videos.  According to The Register, the policy first appeared on the 5th April 2019:

“Instructional hacking and phishing: Showing users how to bypass secure computer systems or steal user credentials and personal data.”

Now, I can see why they’ve done this: it’s basic backside-covering.  YouTube – and many of the other social media outlets – come under lots of pressure from governments and other groups for failing to regulate certain content.  The sort of content to which such groups most typically object is fake news, certain pornography or child abuse material: and quite rightly.  I sympathise, sometimes, with the social media giants as they try to regulate a tidal wave of this sort of material – and I have great respect for those employees who have to view some of it.  Having written policies to ban this sort of thing may not deter many people from posting it, but it does mean that the social media companies have a cast-iron excuse for excising it when they come across it.

Having a similar policy to ban these types of video feels, at first blush, like the same sort of thing: you can point to your policy when groups complain that you’re hosting material that they don’t like – “those dangerous hacking videos”.

Hacking[3] videos are different, though.  The most important point is that they have a legitimate didactic function: in other words, they’re useful for teaching.  Nor do I think that there’s a public outcry from groups wanting them banned.  In fact, they’re vital for teaching and learning about IT security, and most IT security professionals and organisations get that.  Many cybersecurity techniques are difficult to understand properly when presented as theoretical attacks and, more importantly, they are difficult to defend against without detailed explanation and knowledge.  This is the point: these instructional videos are indispensable tools to allow people not just to understand, but to invent, apply and maintain defences and mitigations against known attacks.  IT security is hard, and we need access to knowledge to help us defeat the Bad Folks[tm] who we know are out there.
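To make the didactic point concrete, here’s the sort of thing a good instructional video walks through (a minimal sketch of my own, not taken from any particular video): the classic SQL injection attack, followed by the parameterised-query defence which only really makes sense once you’ve watched the attack succeed.

```python
import sqlite3

# A toy database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # The attack surface: building the query by string formatting.
    query = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return conn.execute(query).fetchone() is not None

def login_defended(name, password):
    # The defence: parameterised queries, so input is treated as data,
    # never as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

# The attacker doesn't know the password, but supplies a crafted one...
assert login_vulnerable("alice", "' OR '1'='1")      # succeeds: bad
assert not login_defended("alice", "' OR '1'='1")    # fails: good
```

Watching the first function fall over is precisely what equips a defender to recognise – and fix – the same flaw in their own code.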

“But these same Bad Folks[tm] will see these videos online and use them against us!” certain people will protest.  Well, yes, some will.  But if we ban and wipe them from widely available social media platforms, where they are available for legitimate users to study, they will be pushed underground, and although fewer people may find them, the nature of our digital infrastructure means that the reach of those few people is still enormous.

And there is an imbalance between attackers and defenders: this move exacerbates it.  Most defenders look after small numbers of systems, but most serious attackers have the ability to go after many, many systems.  By pushing these videos away from places where many defenders can learn from them, we have removed the opportunity for those who most need access to this information, whilst, at the most, raising the bar for those against whom we are trying to protect ourselves.

I’m sure there are a number of “script-kiddy” type attackers who may be deterred or have their access to these videos denied, but they are significantly less of a worry than motivated, resourced attackers: the ones that haunt many IT security folks’ worst dreams.  We shouldn’t use a mitigation against (relatively) low-risk attackers to remove our ability to defend against higher risk attackers.

We know that sharing of information can be risky, but this is one of those cases in which the risks can be understood and measured against others, and it seems like a pretty simple calculation this time round.  To be clear: the (good) many need access to these videos to protect against the (malicious) few.  Removing these videos hinders the good much more significantly than it impairs the malicious, and we, as a community, should push back against this trend.


1 – it’s pronounced “few-ROAR-ray”.  And “NEEsh”.  And “CLEEK”[2].

2 – yes, I should probably calm down.

3 – I’d much rather refer to these as “cracking” videos, but I feel that we probably lost that particular battle about 20 years ago now.

Why Chatham House Rulez for security

Security sometimes requires sharing – but not attribution

In June 1927, someone had a brilliant idea.  Or, at least, that’s when the idea was first codified, at a meeting of the Royal Institute of International Affairs at Chatham House in London.  The idea was this: all attendees of the meeting could quote comments made at the meeting, but they weren’t allowed to say who had made the comment.

This became known as the Chatham House Rule, and the most recent incarnation is defined thus:

When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.

This is brilliantly clever.  It allows at least two things:

  1. for the sharing of information which might be sensitive to a particular entity when associated with that entity, but which is still useful when applied without that attribution;
  2. for the sharing of views or opinions which, when associated with a particular person or organisation, might cause wider issues or problems.

The upshot of this is that if somebody (say, Person A) values the expertise, opinion and experience of another person (say, Person B), then they can share that other person’s views with people who may not know Person B, or whose views on Person B may be biased by their background or associations.  This is a form of transitive trust, and situations where transitive trust is made explicit are, in my opinion, to be lauded (such trust relationships are too often implicit, rather than explicit).
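To make that explicit transitive trust concrete, here’s a minimal sketch (entirely invented names and numbers): a third party who trusts Person A can derive some weaker – but explicit – trust in the views that Person A relays from an unnamed Person B.

```python
# Direct trust relationships, made explicit (invented values in [0, 1]).
trust = {
    ("carol", "alice"): 0.9,   # Carol trusts Alice directly
    ("alice", "bob"): 0.8,     # Alice trusts Bob, the unnamed source
}

def transitive_trust(truster, relayer, source):
    """Trust derived via a relayer: explicit, and deliberately weaker
    than either direct link (modelled here by simple multiplication)."""
    return (trust.get((truster, relayer), 0.0)
            * trust.get((relayer, source), 0.0))

# Carol has never met Bob, but can still place a measured value
# on his relayed, unattributed views.
print(transitive_trust("carol", "alice", "bob"))  # 0.72
```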

The Chatham House Rule and open source

What has this got to do with open source, though?  My answer is: a lot.

Security is one of those areas which can have an interesting relationship with open source.  I’m passionately devoted to the principle that openness is vital to security, but there are times when this is difficult.  The first difficulty is to do with data, and the second is to do with perceived expertise.

Why data is difficult

While we all (hopefully) want to ensure that all our security-related code is open source, the same cannot be said for data.  There is absolutely a place for open data – citizen-related data is the most obvious, e.g. bus timetables, town planning information – and there’s data that we’d like to be more open, but not if it can be traced to particular entities – aggregated health information is great, but people aren’t happy about their personal health records being exposed.  The same goes for financial data – aggregated information about people’s spending and saving habits is extremely useful, but I, for one, don’t want my bank records revealed to all and sundry.

Moving specifically to security, what about data such as the number of cyber-attacks – successful and unsuccessful – against companies?  The types that were most successful?  The techniques that were used to mitigate?  All of these are vastly useful to the wider community, and there’s a need to share them more widely.  We’re seeing some initiatives to allow this already, and aggregation of this data is really important.
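By way of illustration, here’s a sketch of the sort of aggregation I mean (invented field names and data): the community-useful signal is kept, and the attribution never leaves the function.

```python
from collections import Counter

# Invented incident reports: the raw data identifies the organisation,
# which is exactly what nobody wants published.
reports = [
    {"org": "AcmeCorp", "attack": "phishing",   "mitigated": True},
    {"org": "AcmeCorp", "attack": "ransomware", "mitigated": False},
    {"org": "BetaBank", "attack": "phishing",   "mitigated": True},
    {"org": "BetaBank", "attack": "phishing",   "mitigated": False},
]

def aggregate(reports):
    """Keep what the wider community needs (attack types, mitigation
    counts); drop the attribution (the 'org' field is never used)."""
    seen = Counter(r["attack"] for r in reports)
    mitigated = Counter(r["attack"] for r in reports if r["mitigated"])
    return {attack: {"seen": n, "mitigated": mitigated[attack]}
            for attack, n in seen.items()}

print(aggregate(reports))
# {'phishing': {'seen': 3, 'mitigated': 2},
#  'ransomware': {'seen': 1, 'mitigated': 0}}
```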

There comes a time, however, when particular examples are needed.  And as soon as you have somebody stand up and say “This is what happened to us”, then they’re likely to be in trouble from a number of directions, which may include: their own organisation, their lawyers, their board, their customers and future attackers, who can use that information to their advantage.  This is where the Chatham House Rule can help: it allows experts to give their view and be listened to without so much danger from the parties listed above.

It also allows for other people to say “we hadn’t thought of that”, or “we’re not ready for that” or similar without putting their organisations – or their reputations – on the line.  Open source needs this, and there are times when those involved in open source security, in particular, need to be able to share the information they know in a way which doesn’t put their organisations in danger.

Why expertise is difficult

Another area of difficulty is expertise, or more specifically, trust in expertise.  Most organisations aim for a meritocratic approach – or say they do – at least within that organisation.  But the world is full of bias, particularly between organisations.  I may be biased against views held or expressed by a particular organisation, just because of their past history and my interactions with that company, but it is quite possible that there are views held and expressed by individuals from that company which, if separated from their attribution, I might take seriously.  I may be biased against a particular person, based on my previous interactions with him/her, or just on my underlying prejudices.  It only takes one person who does not hold my biases – someone who personally trusts the organisation, or even just the person, expressing those views – to represent them in a way that allows me to process and value them myself, gaining valuable insight from them.  The Chatham House Rule can allow that to happen.

In fact, the same goes for intra-organisation biases: maybe product management isn’t interested in the views of marketing, but what if there are important things to learn from within that department, that product management can’t hear because of that bias?  The Chatham House Rule allows an opportunity to get past that.

To return to open source, many contributors are employed by a particular organisation, and it can be very difficult for them to express opinions around open source when that organisation may not hold the same views, however carefully they try to separate themselves from the official line.  Even more important, in terms of security, it may very well be that they can bring insights which are relevant to a particular security issue which their company is not happy about being publicly known, but which could benefit one or more open source projects.  To be clear: I’m not talking, here, about exposing information which is specifically confidential, but about sharing information with the permission of the organisation, but within specific constraints.

More on open source

There are all sorts of biases within society, and open source is, alas, not without its own.  When a group of people gets to know each other well, however, it is often the case that members of that group can forge a respect for each other which goes beyond gender, age, academic expertise, sexuality, race or the like.  This is a perfect opportunity for meetings under the Chatham House Rule: it gives this group the chance to discuss and form opinions which can be represented to their peers – or the rest of the world – without having to worry so much about any prejudices or biases that might be aimed at particular members.

Finally – a note of caution

The Chatham House Rule provides a great opportunity to share expertise and knowledge, but there is also a danger that it can allow undue weight to be given to anecdotes.  Stories are a great way of imparting information, but without data to back them up, they are not as trustworthy as they might be.  The fact that the Chatham House Rule inhibits external attribution does not mean that due diligence should not be applied within such a meeting to ensure that information is backed up by data.

Helping our governments – differently

… we may live in a new security and terrorism landscape

Two weeks ago, I didn’t write a full post, because the Manchester Arena bombing was too raw.  We are only a few days on from the London Bridge attack, and I could make the same decision, but I think it’s time to recognise that we have a new reality to face in Britain: that we may live in a new security and terrorism landscape.  The sorts of attacks – atrocities – that have been perpetrated over the past few weeks (and the police and security services say that despite three succeeding, they’ve foiled another five) are likely to keep happening.

And they’re difficult to predict, which means that they’re difficult to stop.  There are already renewed calls for tech companies* to provide tools to allow the Good Guys[tm**] to read the correspondence of the people who are going to commit terrorist acts.  The problem is that the preferred approach requested/demanded by governments seems to be backdoors in encryption and/or communications software, which just doesn’t work – see my post The Backdoor Fallacy – explaining it slowly for governments.  I understand that “reasonable people” believe that this is a solution, but it really isn’t, for all sorts of reasons, most of which aren’t really that technical at all.

So what can we do?  Three things spring to mind, and before I go into them, I’d like to make something clear, and it’s that I have a huge amount of respect for the men and women who make up our security services and intelligence community.  All those who I’ve met have a strong desire to perform their job to the best of their ability, and to help protect us from people and threats which could damage us, our property, and our way of life.  Many of these people and threats we know nothing about, and neither do we need to.  The job that the people in the security services do is vital, and I really don’t see any conspiracy to harm us or take huge amounts of power just because it’s there for the taking.  I’m all for helping them, but not at the expense of the rights and freedoms that we hold dear.  So back to the question of what we can do.  And by “we” I mean the nebulous Security Community****.  Please treat these people with respect, and be aware that they work very, very hard, and often in difficult and stressful jobs*****.

The first is to be more aware of our environment.  We’re encouraged to do this in our daily lives (“Report unaccompanied luggage”…), but what more could we do in our professional lives?  Or what could we do in our daily lives by applying our professional capabilities and expertise to everyday activities?  What suspicious activities – from traffic on networks from unexpected places to new malware – might be a precursor to something else?  I’m not saying that we’re likely to spot the next terrorist attack – though we might – but helping to combat other crime more effectively both reduces the attack surface for terrorists and increases the available resourcing for counter-intelligence.
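By way of illustration, here’s the sort of everyday check I have in mind (entirely invented log format and “expected” networks): flagging traffic from unexpected places for a second look.

```python
import ipaddress

# Networks we expect to see traffic from (invented for this sketch).
EXPECTED = [ipaddress.ip_network("10.0.0.0/8"),
            ipaddress.ip_network("192.168.0.0/16")]

# Invented connection log: (source address, destination port).
connections = [
    ("10.1.2.3", 443),
    ("192.168.7.9", 22),
    ("203.0.113.50", 3389),   # remote desktop, from an unexpected network
]

def unexpected(connections):
    """Yield connections whose source is outside every expected network:
    not proof of an attack, but worth a second look."""
    for src, port in connections:
        addr = ipaddress.ip_address(src)
        if not any(addr in net for net in EXPECTED):
            yield src, port

for src, port in unexpected(connections):
    print(f"suspicious: {src} -> port {port}")
```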

Second: there are, I’m sure, many techniques that are available to the intelligence community that we don’t know about.  But there is a great deal of innovation within enterprise, health and telco (to choose three sectors that I happen to know quite well******) that could well benefit our security services.  Maybe your new network analysis tool, intrusion detector or data aggregator has some clever smarts in it, or creates information which might be of interest to the security community.  I think we need to be more open to the idea of sharing these projects, products and skills – proactively.

The third is information sharing.  I work for Red Hat, an Open Source company which also tries to foster open thinking and open management styles.  We’re used to sharing, and industry, in general, is getting better about sharing information with other organisations, government and the security services.  We need to get better at sharing both active data from systems which are running as designed and bad data from systems that are failing, under attack or compromised.  Open, I firmly believe, should be our default state*******.

If we get better at sharing information and expertise which can help the intelligence services in ways which don’t impinge negatively on our existing freedoms, maybe we can reduce the calls for laws that will do so.  And maybe we can help stop more injuries, maimings and deaths.  Stand tall, stand proud.  We will win.


*who isn’t a tech company, these days, though?  If you sell home-made birthday cards on Etsy, or send invoices via email, are you a tech company?  Who knows.

**this is an ironic tm***

***not that I don’t think that there are good guys – and gals – but just that it’s difficult to define them.  Read on: you’ll see.

****I’ve talked about this before – some day I’ll define it.

*****and most likely for less money than most of the rest of us.

******feel free to add or substitute your own.

*******OK, DROP for firewall and authorisation rules, but you get my point.