Who do you trust: your data, or your enemy’s?

Our security logs define our organisational memory.

Imagine, just imagine, that you’re head of an organisation, and you suspect that there’s been some sort of attack on your systems[1].  You don’t know for sure, but a bunch of your cleverest folks are pretty sure that there are sufficient signs to merit an investigation.  So, you agree to allow that investigation, and it’s pretty clear from the outset – and becomes yet clearer as the investigation unfolds – that there was an attack on your organisation.  And as this investigation draws close to completion, you happen to meet the head of another organisation: not only your competitor, but also the party that your investigators are certain was behind the attack.  That person – the leader of your competitor – tells you that they absolutely didn’t perform an attack: no, sirree.  Who do you believe?  Your people, who have been laboring[2] away for months, or your competitor?

What a ridiculous question: of course you’d believe your own people over your competitor, right?

So, having set up such an absurd scenario[4], let’s look at a scenario which is actually much more likely.  Your systems have been acting strangely, and there seems to be something going on.  Based on the available information, you believe that you’ve been attacked, but you’re not sure.  Your experts think it’s pretty likely, so you approve an investigation.  And then one of your investigatory team comes to you to tell you some really bad news about the data.  “What’s the problem?” you ask.  “Is there no data?”

“No,” they reply, “it’s worse than that.  We’ve got loads of data, but we don’t know which is real.”

Our logs are our memory

There is a literary trope – one of my favourite examples is Margery Allingham’s The Traitor’s Purse – where a character realises that he or she is somebody other than who they think they are when they start to question their memories.  We are defined by our memories, and the same goes for organisational security.  Our security logs define our organisational memory.  If you cannot prove that the data you are looking at is[5] correct, then you cannot be sure what led to the state you are in now.

Let’s give a quick example.  In my organisation, I am careful to log every time I upgrade a piece of software, but I begin to wonder whether a particular application is behaving as expected.  Maybe I see some traffic to an external IP address which I’ve never seen before, for instance.  Well, the first thing I’m going to do is to check the logs to see whether somebody has updated the software with a malicious version.  Assuming that somebody has installed a malicious version, there are three likely outcomes at this point:

  1. you check the logs, and it’s clear that a non-authorised version was installed.  This isn’t good, because you now know that somebody has unauthorised access to your system, but at least you can do something about it.
  2. at least some of the logs are missing.  This is less good, because you really can’t be sure what’s gone on, but you now have a pretty strong suspicion that somebody with unauthorised access has deleted some logs to cover up their tracks, which means that you have a starting point for remediation.
  3. there’s nothing wrong.  All the logs look fine.  You’re now really concerned, as you can’t be sure of your own data – your organisation’s memories.  You may be looking at correct data, or you may be looking at incorrect data: data which has been written by your enemy.  Attackers can – and do – go into log files and change the data in them to obscure the fact that they have performed particular actions.  It’s actually one of the first steps that a competent attacker will perform.

In the scenario as defined, things probably aren’t too bad: you can check the checksum or hash of the installed software, and confirm that it’s the version you expect.  But what if the attacker has also changed the version of your checksum- or hash-checker so that, for packages associated with this particular application, it always returns what you expect to see?  This is not a theoretical attack, and nor is it the only approach that attackers have to try to muddy the waters as to what they’ve done.  And if it has happened, then you have no way of confirming that you have the correct version of the application.  You can try updating the checksum- or hash-checker, but what if the attacker has meddled with the software installer to ensure that it always installs their version…?
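To make that first check concrete, here’s a minimal sketch in Python of the sort of verification described above – the path and known-good value are hypothetical placeholders, and, as the rest of the paragraph argues, the check is only as trustworthy as the tool performing it and the source of the reference hash, so both should live somewhere the attacker can’t reach.

```python
# Minimal sketch: verify an installed binary against a known-good SHA-256.
# Both the path and the reference hash below are hypothetical examples.
import hashlib

def sha256_of(path):
    # Stream the file in chunks so large binaries needn't fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Ideally fetched out-of-band (vendor announcement, offline copy),
# not read from the possibly-compromised system itself.
KNOWN_GOOD = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

if sha256_of("/usr/bin/myapp") != KNOWN_GOOD:
    print("WARNING: installed binary does not match the expected hash")
```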

It’s a slippery slope, and bar wiping the entire system and reinstalling[6], there’s little you can do to be certain that you’ve cleared things up properly.  And in some cases, it may be too late: what if the attacker’s main aim was to exfiltrate some of your proprietary data, for example?

Lessons to learn, actions to take

Well, the key lesson is that your logs are important, and they need to be protected.  If you can’t trust your logs, then it can be very, very difficult not only to identify the extent of an attack, but also to remediate it or, in the worst case, even to know that you’ve been attacked at all.

And what can you do?  Well, there are many techniques that you can employ, and the best combination will depend on a number of factors, including your regulatory regime, your security posture, and which attackers you decide to defend against.  I’d start with a fairly simple combination:

  • move your most important logs off-system.  Where possible, host logs on different systems to the ones that are doing the reporting.  If you’re an attacker, it’s more difficult to jump from one system to another than it is to make changes to logs on a system which you’ve already compromised (there’s a sketch of this after the list);
  • make your logs write-only.  This sounds crazy – how are you supposed to check logs if they can’t be read?  What this really means is that you separate read and write privileges on your logs so that only those with a need to read them can do so.  Writing is less worrisome – though there are attacks here, including filesystem starvation – because if you can’t see what you need to change, then it’s almost impossible to do so.  If you’re an attacker, you might be able to wipe some logs – see our case 2 above – but obscuring what you’ve actually done is more difficult.
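To make the first of these concrete, here’s a minimal sketch of shipping log records to a separate collector using Python’s standard syslog handler – the host name is a placeholder for whatever collector you run; the point is simply that records leave the reporting system as soon as they’re written.

```python
# Minimal sketch: send security-relevant events to a separate log host,
# so that compromising this system isn't enough to rewrite its history.
# "loghost.example.com" is a placeholder for your own collector.
import logging
import logging.handlers

logger = logging.getLogger("security")
logger.setLevel(logging.INFO)

# Plain UDP syslog for brevity; prefer TCP (and TLS at the collector)
# in production, since UDP offers no delivery guarantees.
handler = logging.handlers.SysLogHandler(address=("loghost.example.com", 514))
handler.setFormatter(logging.Formatter("%(name)s: %(message)s"))
logger.addHandler(handler)

logger.info("software upgraded: package=%s version=%s", "myapp", "2.4.1")
```

On the collector, the second point then applies: the account that receives these records needs no read access to them, and very few people should have it.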

Exactly which steps you take is up to you, but remember: if you can’t trust your logs, you can’t trust your data, and if you can’t trust your data, you don’t know what has happened.  That’s what your enemy wants: confusion.
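If “trustworthy logs” sounds abstract, here’s one further technique – not covered above, and only sketched here – that makes tampering detectable: hash-chaining, where each entry commits to everything before it.  An attacker who rewrites one line must rewrite every later line too, and still can’t match a copy of the latest hash held off-system.

```python
# Sketch of a tamper-evident, hash-chained log: each entry stores the
# hash of the previous entry, so any edit to history breaks the chain.
import hashlib

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(log, message):
    prev = log[-1]["hash"] if log else GENESIS
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    log.append({"message": message, "prev": prev, "hash": digest})

def verify(log):
    prev = GENESIS
    for entry in log:
        expected = hashlib.sha256((prev + entry["message"]).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "upgraded myapp to 2.4.1")  # hypothetical events
append_entry(log, "restarted myapp")
assert verify(log)

log[0]["message"] = "upgraded myapp to 2.4.0"  # an attacker edits history...
assert not verify(log)                         # ...and the chain breaks
```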


1 – like that’s ever going to happen.

2 – on this, very rare, occasion, I’m going to countenance[3] a US spelling.  I think you can guess why.

3 – contenance?

4 – I know, I know.

5 – are?

6 – you checked the firmware, right?  Hmm – maybe safer just to buy completely new hardware.


What are they attacking me for?

There are three main types of motivations: advantages to them; disadvantages to us; resources.

I wrote an article a few weeks ago called What’s a State Actor, and should I care?, and a number of readers asked if I could pull apart some of the pieces that I presented there into separate discussions[1].  One of those pieces was the question of who is actually likely to attack me.

I presented a brief list thus:

  • insiders
  • script-kiddies
  • competitors
  • trouble-makers
  • hacktivists
  • … and more.

One specific “more” that I mentioned was State Actors.  If you look around, you’ll find all manner of lists.  Other attacker types that I didn’t mention in my initial list include:

  • members of organised crime groups
  • terrorists
  • “mercenary” hackers.

I suspect that you could come up with more supersets or subsets if you tried hard enough.

This is all very well, but what’s the value in knowing who’s likely to attack you in the first place[3]?  There’s a useful dictum: “No system is secure against a sufficiently resourced and motivated attacker.”[5]  This gives us a starting point, because it causes us to ask the question

  • what motivates the attacker?

In other words: what do they want to achieve?  What, in fact, are they trying to do or get when they attack us?  This is the core theme of this article.

There are three main types of motivations:

  1. advantages to them
  2. disadvantages to us
  3. resources.

There is overlap between the three, but I think that they are sufficiently separate to warrant separate discussion.

Advantages to them

Any successful attack is arguably a disadvantage to us, the attacked, but that does not mean that the primary motivation of an attacker is necessarily to cause harm.  There are a number of other common motivations, including:

  • reputation or “bragging rights” – a successful attack may well be used to prove the skills of an attacker to other parties.
  • information to share – sometimes attackers wish to gain information about our systems to share with others, whether for gain or to enhance their reputation (see above).  Such attacks may be painted as security research, but if they occur outside an ethical framework (such as that provided by academic institutions) and without consent, it is difficult to consider them anything other than hostile.
  • information to keep – attackers may gain information and keep it for themselves for later use, either against our systems or against similarly configured systems elsewhere.
  • practice/challenge – there are attacks which are undertaken solely to practice techniques or as a personal challenge (where an external challenge is made, I would categorise them under “reputation”).  Harmless as this motivation may seem to some parts of the community, such attacks still cause damage and require mitigation, and should be considered hostile.
  • for money – some attacks are undertaken at the request of others, with the primary motivation of the attacker being money or other material recompense (though the motivation of the party commissioning the attack is likely to be one of the others listed here)[6].

Disadvantages to us

Attacks which focus on causing negative impact to the individual or organisation attacked fall into the following categories:

  • business impact – impact to the normal functioning of the organisation or individual attacked: causing orders to be disrupted, processes to be slowed, etc.
  • financial impact – direct impact to the financial functioning of the attacked party: fraud, for instance.
  • reputational impact – there have been many attacks where the intention has clearly been to damage the reputation of the attacked party.  Whether it is leaking information about someone’s use of a dating website, disseminating customer information or simply replacing text or images on a corporate website, the intention is the same: to damage the standing of those being attacked.  Such damage may be indirect – for instance if an attacker were to cause the failure of an oil pipeline, affecting the reputation of the owner or operator of that pipeline.
  • personal impact – subtly different from reputational or business impact, this is where the attack intends to damage the self-esteem of an individual, or their ability to function professionally, physically, personally or emotionally.  This could cover a wide range of attacks such as “doxxing” or use of vulnerabilities in insulin pumps.
  • ecosystem impact – this type of motivation is less about affecting the ability of the individual or organisation to function normally, and more about affecting the ecosystem that exists around it.  Impacting the quality control checks of a company that makes batteries might affect the ability of a mobile phone company to function, for instance, or attacking a water supply might impact the ability of a fire service to respond to incidents.

Resources

The motivations for some attacks may be partly or solely to get access to resources.  These resources might include:

  • financial resources – by getting access to company accounts, attackers might be able to purchase items for themselves or others or otherwise defraud the company.
  • compute resources – access to compute resources can lead to further attacks or be used for purposes such as cryptocurrency mining.
  • storage resources – attackers may wish to store illegal or compromising material on others’ systems.
  • network resources – access to network resources allows attackers to launch attacks elsewhere or to stream information with little traceability.
  • human resources – access to some systems may allow human resources to be deployed in ways unintended by the party being attacked: deploying police officers to a scene a long distance away from a planned physical attack, for instance.
  • physical resources – access to some systems may also allow physical resources to be deployed in ways unintended by the party being attacked: sending ammunition to the wrong front in a war might, for example, lead to a military force becoming weakened.

Conclusion

It may seem unimportant to consider the motivations of those attacking us, but if we can understand what it is that they are looking for, we can decide what we should defend, and sometimes what types of defence we should put in place.  As always, I welcome comments on this article: I’m sure that I’ve missed out some points, or misrepresented others, so please do get in touch and let me know your thoughts.


1 – I considered this a kind and polite way of saying “you stuffed too much into a single article: what were you thinking?”[2]

2 – and I don’t necessarily disagree.

3 – unless you’re just trying to scare senior management[4].

4 – which may be enjoyable, but is ultimately likely to backfire if you’re doing it without evidence or a good reason.

5 – I made a (brief) attempt to track the origins of this phrase: I’m happy to attribute if someone can find the original.

6 – hat-tip to Reddit user poopin for spotting that I’d missed this one out.