Merry “sorting out relatives’ IT problems” Day

Today’s the day – or the season – when your mother-in-law asks you to fix her five-year-old laptop, unclog the wifi (it’s usually her husband, “stealing it all”) or explain why her mouse mat is actually easily large enough – she just needs to lift the mouse up and place it back in the middle if she can’t get the cursor to go any further right.

Lucky me: I didn’t even have to wait till Christmas Day this year: my m-in-law called us at home a couple of days ago to complain that “the email thingy isn’t working on my tablet and the Chrome has gone”. After establishing that her Chromebook (upstairs) was fine, and that she simply couldn’t be bothered to ascend the stairs to use it for the two days before we came to visit – when I could have debugged her tablet problem in person – I proceeded to debug the problem over the crackly wireless DECT phone they keep attached to their land line instead[1].

Despite the difficulty of making out approximately 25% of the words down the line, I became more and more convinced that, even if her tablet was having problems, a reboot of her router was probably due.

Me: so you know which one the router is?

Her: umm…

Me: it’s the little box where the Internet comes in.

Her: is it in the hall?

Me, to the wife, who’s smirking, since she managed to offload this call to me: could it be in the hall?

Wife: yes, it’s in the hall.

Me: yes, it’s in the hall.

Mother-in-law: OK.

Me: there should be a power button on the front or the back, or you can just pull the power lead out if that’s easiest.

Her, clearly bending over to look at it: shall I just turn it off at the wall? That might be simplest?

Me: well, OK.

Her: Right, I’m doing that n… BEEEEEEEEEEEEEEP.

It turns out that her DECT phone hub is plugged into the same socket. Of course. This is my life. This is OUR life.

Merry Christmas, and a Happy New Year to you all.


1 – this, folks, is how to stay married for 23 and a half years[2].

2 – and counting.

The Twelve Days of Work-Life Balance

12 tips for working at home over the holidays

Disclaimer: the author refuses to take any blame for any resulting disciplinary or legal action taken against readers who follow any of the suggestions in this article.

There’s a good chance that things slow down for the holiday season at your organisation or company[1], and as they do, there’s a corresponding[2] chance that you may end up not going into the office for some of the upcoming days.  Some workplaces expect you to turn up to work in the office unless you’re officially on holiday, while others allow or encourage workers to be based at home and perform their duties there for all or some of the period.  It’s those people who will be spending time at home who are targeted by this article.

Working from home is an opportunity to bunk off and a complete wheeze – sorry, a privilege and responsibility to be taken seriously.  There are, however, some important techniques that you should take on board to ensure not only that you continue to be productive but, even more importantly, that you continue to be seen to be productive.  I’ve split my tips up into helpful headings for your ease of use.

Video calls

Tip 1 (for those who shave): you don’t need to.  Yes, the resolution on webcams has increased significantly, but who’s going to care if it looks like you’ve just rolled out of bed?  The fact that you’ve even bothered shows your commitment to the meeting you’re attending.

Tip 2 (for those who wear make-up): far be it from me to dictate whether you wear make-up or not to meetings.  But if you choose to, there’s no need to refresh last night’s make-up in the morning.  If you’ve staggered home late, you may not have got round to removing your party lipstick and mascara, and it may even have smudged or run a bit: don’t worry.  It’ll look “festive” in the morning, and will encourage a relaxed atmosphere at the meeting.

Tip 3 (for coffee drinkers): you may need an extra cup of coffee in the afternoon, to get you through the day.  Who’s going to know if you add a shot to it?  It’ll keep you warm, and possibly upright.  My wife swears by Baileys.  Or Irish Whiskey.  Or gin.  Pretty much anything, in fact.

Tip 4 (for non-coffee drinkers): cocktails are a no-no for video conferences, unless approved by management (sorry).  However, there are a number of other options to explore.  A Long Island Iced Tea looks like, well, an iced tea.  Whisky (or whiskey) can look like normal tea, and my personal favourite, cherry brandy, looks like a child’s fruit cordial that you didn’t dilute sufficiently.  And tastes fairly similar.

Tip 5 (for clothes wearers): few organisations (of which I’m aware, anyway) have adopted a non-clothing policy for video-calls.  Clothes are required.  My favourite option is to wear a festive jumper, but this is only fun if it has woven-in flashing lights which can distract your fellow participants[3].  Coordinating the periodicity of the flashing with colleagues gets you bonus points.

Tip 6 (also for clothes wearers): wear something on your bottom half.  I know you think you may not be getting up during the call, but when a small child vomits in the background, the postal worker arrives at the door, or you just need another cup of coffee or “beverage” (see tips 3 & 4) to get you through the next two hours, you’ll be grateful for this advice.

Teleconferences (non-video)

Tip 7: use a chat channel to have a side conversation with your peers. You can have hilarious discussions about the intellectual capacity and likely parentage of your management, or even better, play a game of meeting bingo.

Tip 8: the mute button is for cowards.  Yes, wind can be a problem after an over-indulgence at the pub, club or party the night before, and microphones can be quite sensitive these days, but who’s to know it’s you[4]?

Emails

Tip 9: the best time to respond to an email is when you receive it, right?  This will show everybody how devoted you are to your job.  So if it arrives after you’ve already partaken of a brew or two down the pub, or sampled the herbal opportunities recently decriminalised in your state in your bedroom, then replying immediately will almost certainly be considered responsible and professional.  And auto-correct will almost certainly act in your favour: after all, your boss really is a complete duck, yes?

Tip 10: pepper your emails with poor festive puns[5].  It’s just what you do.

Family

Tip 11: you may have agreed to “work” over this period as an excuse to avoid spending too much time with the family[6], but there’s always the chance that they will barge into your office, throw up in the hall (see tip 6), or just fall asleep on your keyboard[7].  Invest in a lock on your office door, or work somewhere out of range[8].  Your work is important, and you must guard against unwanted interruptions, such as being awoken from an important doze.

Productivity

Tip 12: it’s your responsibility, when working from home, to ensure that you maintain your productivity.  But breaks are important.  There’s a tricky balance here between protecting your time from the family (you don’t want them to notice that you’re not online 100% of the time) and taking a sensible number of breaks.  Assuming that you’ve taken my advice about locking your office door, placing an Xbox or similar gaming console on your desk next to your work computer is a great way of allowing yourself some downtime without risking the wrath of your family (assuming careful monitor placement and controller handling).

Tip 12a (extra tip!): if you’re not careful, too much time hidden away from the family will get you in trouble.  My piece of advice here is to offer to help.  But on your own terms.  Rushing out of your office[9], looking harried and then announcing “I’ve got ten minutes until my next call, and I’m feeling guilty: is there anything I can do?” can gain you useful credit without the risk of your having to do anything too taxing.

Summary[10]

You can maintain a productive and professional workplace at home if called to do so by your organisation.  It is your responsibility to balance the needs of work with your needs and, of course, the needs of your family.

Have a Merry Christmas (or other festival) and a Happy New Year (whenever it falls for you)!


1 – this is generally a Judaeo-Christian set of holidays, but I hope that this article is relevant to most holidays: religious, national, regional or secular.

2 – and possibly correlatable, though check out one of my favourite XKCD comics: https://xkcd.com/552/.

3 – owners of luxuriant beards or heads of hair may prefer to weave flashing fairy lights into their hair for a similar effect.

4 – except for the small matter of the little indicator against each participant’s name which shows who’s “talking”.

5 – “Your presents is requested.”  “But wait—there’s myrrh.” You get the idea.

6 – “Sorry, darling, I know your parents are here, but a really critical bug came in, and I’m the only one who can look at it in time…”

7 – mainly a problem for owners of cats or teenagers.

8 – try searching for “Where’s a pub near me that’s open now?”.

9 – don’t forget to lock it again, in case a child notices and purloins that gaming console.

10 – in the Southern Hemisphere, at least.  Sorry: see [5].


What’s a certificate?

For want of a nail, the kingdom was lost.

There was a huge story in the UK last week about how an expired certificate basically brought an entire mobile phone network (O2) to its knees. But what is a certificate, why do they expire, and why would that have such a big impact? In order to understand, let’s step back a bit and look at why you need certificates in the first place.

Let’s assume that two people, Alice and Bob, want to exchange some secret information. Let’s go further, and say that Bob is actually Bobcorp, Alice’s bank, and she wants to be able to send and receive her bank statements in encrypted form. There are well-established ways to do this, and the easiest is for them to agree on a shared key that they use to both encrypt and decrypt each other’s messages. How do they agree on this key? Luckily, there are some clever ways in which they can manage a “handshake” between the two of them, even if they’ve not communicated before, which ends in their both having a copy of the key, without the chance of anybody else getting hold of it.
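As an illustration – a minimal sketch, not any real bank’s protocol – here’s roughly what such a handshake can look like, using the pyca/cryptography library: each party generates a key pair, only the public halves cross the wire, and both sides independently derive the same symmetric key, which can then protect the statements themselves. All the names, labels and messages here are invented.

```python
# A minimal sketch of a key-agreement "handshake" plus shared-key
# encryption, using the pyca/cryptography library. Names, labels and
# messages are illustrative only.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_shared_key(own_private, peer_public):
    # Each side combines its own private key with the other's public
    # key; both arrive at the same secret, which a KDF turns into a
    # usable symmetric key.
    secret = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"alice-bobcorp-session").derive(secret)

alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Only the public keys cross the wire; an eavesdropper learns nothing useful.
alice_key = derive_shared_key(alice_private, bob_private.public_key())
bob_key = derive_shared_key(bob_private, alice_private.public_key())
assert alice_key == bob_key

# Bobcorp can now send a statement that only Alice can decrypt.
nonce = os.urandom(12)
ciphertext = AESGCM(bob_key).encrypt(nonce, b"statement: ...", None)
assert AESGCM(alice_key).decrypt(nonce, ciphertext, None) == b"statement: ..."
```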

The problem is that Alice can’t actually be sure that she’s talking to Bobcorp (or vice versa). Bobcorp probably doesn’t mind, at this point, because he can ask Alice to provide her login credentials, which will allow him to authenticate her. But Alice really does care: she certainly shouldn’t be handing her login details to somebody – let’s call her “Eve” – who’s just pretending to be Bob.

The solution to this problem comes in two parts: certificates and Certificate Authorities (CAs). A CA is a well-known and trusted party with whom Bobcorp has already established a relationship: typically by providing company details, website details and the like. Bobcorp also creates and sends the CA a special key and very specific information about itself (maybe including the business name, address and website information). The CA, having established Bobcorp’s bona fides, creates a certificate for Bobcorp, incorporating the information that was requested – in fact, some of the information that Bobcorp sends the CA is usually in the form of a “self-signed certificate”, so pretty much all that the CA needs to do is provide its own signature.
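To make that step a little more concrete, here’s a hedged sketch of Bobcorp’s side, using the pyca/cryptography library to build a certificate signing request (CSR). The company details are invented; a real request would carry Bobcorp’s registered information, which the CA would verify before adding its signature.

```python
# A sketch (with invented company details) of building a certificate
# signing request (CSR) for Bobcorp to send to a CA.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Bobcorp's key pair: the private half never leaves Bobcorp.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Bobcorp"),
        x509.NameAttribute(NameOID.COMMON_NAME, "bobcorp.example"),
    ]))
    .sign(key, hashes.SHA256())  # signed with Bobcorp's own key
)

# This is what goes to the CA, which adds its own signature if
# Bobcorp's bona fides check out.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```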

Astute readers will be asking themselves: “How did this help? Alice still needs to trust the CA, right?” The answer is that she does. But there will typically be a very small number of CAs in comparison to Bobcorp-type companies, so all Alice needs to do is ensure that she can trust a few CAs, and she’s now good to go. In a Web-browsing scenario, Alice will usually have downloaded a browser which already has appropriate trust relationships with the main CAs built in. She can now perform safe handshakes with lots of companies, and as long as she (or her browser) checks that they provide certificates signed by a CA that she trusts, she’s relatively safe.

But there’s a twist. The certificates that the CA issues to Bobcorp (and others) typically have an expiration date on them. This isn’t just to provide the CA with a recurring revenue stream – though I’m sure that’s a nice benefit – but also in case Bobcorp’s situation changes: what if it has gone bankrupt, for instance, or changed its country of business?[1] So after a period of time (typically in the time frame of a year or two, but maybe less or more), Bobcorp needs to reapply to get a new certificate.

What if Bobcorp forgets? Well, when Alice visits Bobcorp’s site and the browser notices an expired certificate, it should warn her not to proceed, and she shouldn’t give them any information until it’s renewed. This sounds like a pain, and it is: Bobcorp and its customers are going to be severely inconvenienced. Somebody within Bobcorp whose job it was to renew the certificate is going to be in trouble.

Life is even worse in the case where no actual people are involved. If, instead of Alice, we have an automated system A, and instead of Bob, we have an automated system B, then A still needs to trust that it’s talking to the real B, in case an evil system E is pretending to be B, so certificates are still required. In this case, if B’s certificate expires, A should quite rightly refuse to connect to it. This seems to have been what happened to cause the mobile data outage that O2 is blaming on Ericsson, one of its suppliers. There was no easy way to fix the problem, or to tell the many, many A-type systems that may have been trying to communicate with the B system(s) to carry on regardless. And so, for want of a nail, the kingdom was lost.
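To give a feel for what that refusal looks like in practice, here’s a minimal sketch using Python’s standard ssl module, which checks the server’s certificate chain – including its expiry date – as part of the handshake. The host name is, of course, made up.

```python
# A minimal sketch of an A-type client refusing to talk to a B-type
# server presenting a bad (e.g. expired) certificate. The host name
# is illustrative only.
import socket
import ssl

HOST = "bobcorp.example"  # hypothetical server

context = ssl.create_default_context()  # trusts the system's CA bundle

try:
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print("Connected; certificate valid until",
                  tls.getpeercert()["notAfter"])
except ssl.SSLCertVerificationError as err:
    # An expired certificate lands here: the connection is refused.
    print("Refusing to connect:", err)
```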

The lesson? Avoid single points of failure, think about fall-back modes. And be ready to move to remedy unexpected errors. Quickly.


1 – there are also various mechanisms to revoke, or cancel, certificates, but they are typically complex, ill-implemented in many cases, and consequently little-used.

Playing a game

I’m at an EU Cybercrime Summit today and yesterday.  This may not sound very exciting, and, perhaps with this in mind, the organisers arranged a game for us to play yesterday.  It was a simulation of a couple of different, but connected, scenarios, and there was a web interface via which we could all interact with the engine.

The first scenario revolved around managing an attack on a piece of critical national infrastructure: we were acting as the CISO, and trying to work out our best course of action in terms of managing responses and communicating with various external agencies.  The second scenario had us as the head of a national cybersecurity agency, watching and trying to manage an emerging set of issues, including a Meltdown/Spectre-type vulnerability and various piggy-backed attacks.

At each stage, there was some explanation, and then a series of options for us to choose from, along with a countdown, adding an element of tension to the process as we had to submit our answers[1].  Sometimes we had to choose a single option – “How serious is this issue? Not an issue; minor; substantial; significant; major; national crisis; international crisis” – and sometimes we could choose two or more – “Who will you inform about this?  Internal only; trusted parties; national cybersecurity bodies; national intelligence; international parties.”  We were encouraged to discuss our choices with our neighbours before we made them, and once all the answers were in, bar charts were displayed on the screen in front of us (and on our devices) showing how everybody had voted.

At the beginning of the process, we had been asked to enter some basic information about ourselves, such as sector (public, industry, academic, etc.) and expertise (security, policy, justice, etc.), and for some of the bar charts a further breakdown was given, showing how the voting had gone by sector and expertise.  Experts at the front gave their opinions of the “correct” answers, and their reactions to how people had voted.

I’d not participated in a game/simulation like this before, and had no idea either how it would go or how beneficial it might be.  To my surprise, I enjoyed it and found it both interesting and educational.  The scenarios were broad enough that there were unlikely to be many people in the room with expertise across all of the different issues.  I enjoyed discussing my thoughts with a neighbour whom I’d only met earlier that day, and then discussing with him whether we agreed with the experts’ views.

In short, it was a very useful exercise, and I’m wondering how I could apply it to my work in different contexts.  It was educational and fun, provided opportunities for forming relationships – we found ourselves discussing scenarios and issues with others around us – and allowed for further analysis after the event.


1 – I’m not often one to link to external products, but this one seemed good, and has a free pricing tier: it was called Mentimeter. The game was run by AIT (Austrian Institute of Technology), Center for Digital Safety & Security.

Encryption “backdoor”? No, it’s a gaping archway.

Backdoors are just a non-starter.

Note: this is probably one of those posts where I should point out that the views expressed in this article aren’t necessarily those of my employer, Red Hat.  Though I hope that they are.

I understand that governments don’t like encryption. Well, to be fair, they like encryption for their stuff, but they don’t want criminals, or people who might be criminals, to have it. The problem is that “people who might be criminals” means you and me[1]. I need encryption, and you need encryption. For banking, for business, for health records, for lots of things. This isn’t the first time I’ve blogged on this issue, and I actually compiled a list in a previous post, giving some examples of perfectly legal, perfectly appropriate reasons for “us” to be using encryption. I’ve even written about the importance of helping governments go about their business.

Unluckily, it seems that the Government of Australia has been paying insufficient attention to the points that I have[2] been making. It seems that they are hell-bent on passing a law that would require relevant organisations (the types of organisations listed are broad and ill-defined, in the coverage that I’ve seen) to provide a backdoor into individuals’ encrypted messages. Only for individuals, you’ll note, not blanket decryption.  Well, that’s a relief. And that was sarcasm.

The problem? Mathematics. Cryptography is based on mathematics. Much of it is actually quite simple, though some of it is admittedly complex. But you don’t argue with mathematics, and the mathematics say that you can’t just create a backdoor and have the rest of the scheme continue to be as secure.

Most existing encryption/decryption schemes[3] allow one party to send encrypted data to another with a single shared key. To decrypt, you need that key. In order to get that key, you either need to be one of the two parties (typically referred to as “Alice” and “Bob”), or hope that, as a malicious[5] third party (typically referred to as “Eve”[6]), you can do one of the following:

  1. get Alice or Bob to give you their key;
  2. get access to the key by looking at some or all of the encrypted messages;
  3. use a weakness in the encryption process to decrypt the messages.

Now, number 1 isn’t great if you don’t want Alice or Bob to know that you’re snooping on their messages. Number 2 is a protocol weakness, and designers of cryptographic protocols try very, very hard to avoid them. Number 3 is an implementation weakness, and reputable application developers will try very, very hard to avoid those. What’s more, for applications which are open source, anyone can have a look at them, so a weakness put in on purpose isn’t likely to last for long.

Both 2 and 3 can lead to backdoors. But they’re not single-use backdoors, they’re gaping archways that anyone can find out about and exploit.
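To make the single-shared-key scheme above concrete, here’s a minimal sketch – mine, not any vendor’s or government’s actual protocol – using AES-GCM from the pyca/cryptography library. With the key, decryption is trivial; without it, Eve gets an error and no partial information, which is why there’s no mathematical room for a backdoor that only lets the “right” people in.

```python
# A minimal sketch of the single-shared-key scheme described above,
# using AES-GCM. Keys and messages are illustrative only.
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

shared_key = AESGCM.generate_key(bit_length=256)  # known only to Alice and Bob
nonce = os.urandom(12)

ciphertext = AESGCM(shared_key).encrypt(nonce, b"our secret plans", None)
assert AESGCM(shared_key).decrypt(nonce, ciphertext, None) == b"our secret plans"

# Eve's position: without the key, decryption fails outright, with no
# partial leak of the plaintext.
try:
    AESGCM(AESGCM.generate_key(bit_length=256)).decrypt(nonce, ciphertext, None)
except InvalidTag:
    print("Eve's attempt failed")
```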

Would it be possible to design protocols that allowed a third party to hold a key for each encryption session, allowing individual sessions to be decrypted by a “trusted party” such as law enforcement? Yes, it would. But a) no-one with half a brain would knowingly use such a scheme[7]; b) the operational overhead of running such a scheme would be unmanageable; and c) it would be only a matter of time before untrusted parties got access to the systems behind the scheme and misused it.

Backdoors are just a non-starter. Governments need to find sensible ways to perform legally approved surveillance, but encryption backdoors are not one of them.


1 – and I’m not even intentionally addressing any criminals who might be reading this article.

2 – quite eloquently, in my humble opinion.

3 – the two tend to go together as there isn’t much point in one without the other[4].

4 – in most cases. You’d be surprised, though.

5 – at least as far as Alice and Bob are concerned.

6 – guess where the name of this blog originated?

7 – hint: not me, not you, and certainly not criminals.

I’m turning off your security.

“Don’t worry, I know what I’m doing.”

Today’s security story is people turning security off.  For me, the fact that it’s even a story is the story.  This particular story is covered in The Register, who explain (to nobody’s surprise) that some of the patches to fix issues identified in CPUs (think Spectre, Meltdown, etc.) can actually slow down the applications running on them.  The problem is that, in some cases, they don’t slow them down a little bit, but rather a lot.  By which I mean up to 50%.  And if you’ve bought expensive hardware – or rented it[1] – then you’d generally prefer it if it runs your applications/programs/workloads quickly, rather than just half as fast as they might run.

And so you turn off the security patches.  Your decision: fine.

No, stop: this isn’t what has happened.

The mythical “you”, the person running the workload, isn’t the person who makes the decision, in most cases, because it’s been made for you.  This is the real story.

Linus Torvalds, and a bunch of other experts in the Linux kernel[2], have decided that although the patch that could make your workloads secure is available, the functionality that does it should be “off” by default.  They reason – quite correctly, in my opinion – that the vast majority of people running workloads won’t easily be able to turn this functionality on themselves.

They also reason – again, correctly, in my opinion – that most people will care more about how quickly their workloads run than about how secure they are.  I’m not happy about this, but that’s the way it is.

What I worry about is the final step in the logic to making the decision.  I’m going to quote Linus:

“Have you seen any actual realistic attacks for normal human users?” he asked. “Things where the kernel should actually care? The JavaScript thing is for the browser to fix up, not for the kernel to say ‘now everything should run up to 50 per cent slower.'”

I get the reasoning behind this, but I don’t like it.  To give some context, somebody came up with an example attack which could compromise certain workloads, and Linus points out that there are better ways to fix this attack than fixing it in the kernel. My concerns are two-fold:

  1. although there may be better places to fix that particular attack, a kernel-level fix is likely to fix an entire class of attacks, meaning better protection for users who are using any application which might include an attack vector.
  2. pointing out that there haven’t been any attacks yet not only ignores the fact that there is a future out there[3] but also points malicious actors in the direction of a likely attack vector.

Now, I know that the more dedicated malicious actors are already looking for these things, but do we really need to advertise?

What’s my fix?

I don’t have one, or at least not an easy one.

Somebody, somewhere, needs to decide whether security is turned on or off.  What I’d honestly like to see is an easier set of controls to allow people to turn security on or off, and to understand the trade-offs when they do so.  The problems with that are:

  • the trade-offs are often much more complex than just “fast and insecure” or “slow and secure”, and are really difficult to explain.
  • in order to make a sensible decision about trade-offs, people need to understand risk.  And people are awful at understanding risk.

And there’s a “chicken and egg problem”[7] here: people won’t understand risk until they are offered the chance to make decisions, but there’s little incentive to offer them complex decisions unless they understand risk.

My plea?  Where possible, expose risk, and explain what it is.  And if you’re turning off security-related functionality, make it easy to turn back on for those who need it.
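As a small, concrete example of what “exposing risk” can look like: recent Linux kernels publish the status of each CPU-vulnerability mitigation under sysfs, so you (or a script) can at least see what’s on and what’s off before making a decision.  A minimal sketch, assuming a kernel new enough to provide the interface:

```python
# A minimal sketch: print the kernel's view of known CPU vulnerabilities
# and the state of their mitigations. Requires a Linux kernel recent
# enough to expose /sys/devices/system/cpu/vulnerabilities.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")

if vuln_dir.is_dir():
    for entry in sorted(vuln_dir.iterdir()):
        # Each file holds a one-line status, e.g. "Mitigation: PTI"
        # or "Vulnerable".
        print(f"{entry.name}: {entry.read_text().strip()}")
else:
    print("This kernel doesn't expose vulnerability status via sysfs.")
```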


1 – a quick heads-up: this is what “deploying to the cloud” actually is.

2 – what sits at the bottom of many of the workloads that are running in servers.

3 – hopefully.  If the Three Minute Warning[4] sounds while you’re reading this, you may wish to duck and cover.  You can come back to it later[6].

4 – “… sounds like this …”[5].

5 – 80s reference.

6 – or not.  See [3].

7 – for non-native English readers, this means “a problem where the solution requires two pieces, both of which are dependent on each other”.

Why security policies are worthless

A policy, to have any utility at all, needs to exist in a larger context.

“We need a policy on that.” This is a phrase that seems to act as a universal panacea to too many managers. When a problem is identified, and the blame game has been exhausted, the way to sort things out, they believe, is to create a policy. If you’ve got a policy, then everything will be fine, because everything will be clear, and everyone will obey the policy, and nothing can go wrong.

Right[1].

The problem is that policy, on its own, is worthless.

A policy, to have any utility at all, needs to exist in a larger context, or, to think of it in a different way, to sit in a chain of artefacts.  It is its place in this chain that actually gives it meaning.  Let me explain.

When that manager said that they wanted a policy, what did that actually mean?  That rather depends on how wise the manager is[2].  Hopefully, the reason that the manager identified the need for a policy was because:

  1. they noticed that something had gone wrong that shouldn’t have done; and
  2. they wanted to have a way to make sure it didn’t happen again.

The problem with policy on its own is that it doesn’t actually help with either of those points.  What use does it have, then?

Governance

Let’s look at those pieces separately.  When we say that “something had gone wrong that shouldn’t have done”, what we’re saying is that there is some sort of model for what the world should look like, often giving us general advice on our preferred state.  Sometimes this is a legal framework, sometimes it’s a regulatory framework, and sometimes it’s a looser governance model.  The sort of thing I’d expect to see at this level would be statements like:

  • patient data must be secured against theft;
  • details of unannounced mergers must not be made available to employees who are not directors of the company;
  • only authorised persons may access the military base[4].

These are high level requirements, and are statements of intent.  They don’t tell you what to do in order to make these things happen (or not happen), they just tell you that you have to do them (or not do them).  I’m going to call collections of these types of requirements “governance models”.

Processes

At the other end of the spectrum, you’ve got the actual processes (in the broader sense of the term) required to make the general intent happen.  For the examples above, these might include:

  • AES-256 encryption using OpenSSL version 1.1.1 FIPS[5], with key patient-sym-current for all data on database patients-20162019;
  • documents Indigo-1, Indigo-3, Indigo-4 to be collected after meeting and locked in cabinet Socrates by CEO or CFO;
  • guards on duty at post Alpha must report any expired passes to the base commander and refuse entry to all those producing them.

These are concrete processes that must be followed, which will hopefully allow the statements of intent that we noted above to be carried out.

Policies and audit

The problem with both the governance statements and the processes identified is that they’re both insufficient.  The governance statements don’t tell you how to do what needs to be done, and the processes are context-less, which means that you can’t tell what they relate to, or how they’re helping.  It’s also difficult to know what to do if they fail: what happens, for example, if the base commander turns up with an expired pass?

Policies are what sit in the middle.  They provide guidance as to how to implement the governance model, and provide context for the processes.  They should give you enough detail to cover all eventualities, even if that’s to say “consult the Legal Department when unsure[6]”.  What’s more, they should be auditable.  If you can’t audit your security measures, then how can you be sure that your governance model is being followed?  But governance models, as we’ve already discovered, are at the level of intent – not the sort of thing that can be audited.

Policies, then, should provide enough detail that they can be auditable, but they should also preferably be separated enough from implementation that rules can be written that are applicable in different circumstances.  Here are a few policies that we might create to follow the first governance model examples above:

  • patient data must be secured against theft;
    • Policy 1: all data at rest must be encrypted by a symmetric key of at least 256 bits or equivalent;
    • Policy 2: all storage keys must be rotated at least once a month;
    • Policy 3: in the event of a suspected key compromise, all keys at the same level in that department must be rotated.

You can argue that these policies are too detailed, or you can argue that they’re not detailed enough: both arguments are fair, and level of detail (or granularity) should depend on the context and the use to which they are being put.  However, though I’m sure that all of the example policies I’ve given could be improved, I hope that they are all:

  • auditable;
  • able to be implemented by one or more well-defined processes;
  • understandable both by those concerned with the governance level and by those involved at the process implementation and operations level.
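On the first of those points, a policy pitched at this level is concrete enough to start checking mechanically.  Here’s a hypothetical sketch of an automated audit of Policy 2 above; the key names and rotation dates are invented purely for illustration.

```python
# A hypothetical, minimal audit of Policy 2 ("all storage keys must be
# rotated at least once a month"). Key names and dates are invented.
from datetime import date, timedelta

MAX_KEY_AGE = timedelta(days=31)

# In a real audit, this would come from the key management system.
last_rotated = {
    "patient-sym-current": date(2018, 12, 1),
    "patient-sym-previous": date(2018, 9, 14),
}

def audit_key_rotation(keys, today):
    """Return the keys whose age breaches the rotation policy."""
    return {name: rotated for name, rotated in keys.items()
            if today - rotated > MAX_KEY_AGE}

for name, rotated in audit_key_rotation(last_rotated, date.today()).items():
    print(f"POLICY VIOLATION: {name} last rotated {rotated}")
```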

The value of auditing

I’ve written about auditing before (Confessions of an auditor).  I think it’s really important.  Done well, it should allow you to discover whether:

  1. your processes are covering all of the eventualities of your policies;
  2. your policies are actually being implemented correctly.

Auditing may also address whether your policies fully meet your governance model.  Auditing well is a skill, but in order to help your auditor – whether they are good at it or bad at it – having a clearly defined set of policies is a must.  But, as I pointed out at the beginning of this article, policies for policies’ sake are worthless: put them together with governance and processes, however, and they provide technical and business value.


1 – this is sarcasm.

2 – yes, I know.  But let’s not be rude if we can avoid it.  We want to help managers, because the more clue we deliver to them, the easier our lives will be[3].

3 – and you never know: one day, even you might be a manager.

4 – which is probably not a US military base, given the spelling of “authorised”.

5 – example only…

6 – this, in my experience, is the correct answer to many questions.