Resolutions for this New Year:
- DNS (preferably DNSSEC)
- 1600dpi (mouse)
- WUXGA (1920×1200)
- I was planning to add an audio resolution, but I’m dithering a bit on this one
I’ll get my coat.
Happy New Year!
Normal people generally just want things to work.
Most people don’t realise quite how much fun security is, or exactly how sexy security expertise makes you to other people. We know that it’s engrossing, engaging and cool; they don’t. For this reason, when security people go to the other people (let’s just call them “normal people” for the purposes of this article) and tell them that they’re doing something wrong, that they can’t launch their product or deploy their application, or that they must stop taking sales orders immediately – and probably for the next couple of days until this is fixed – those normal people don’t always react with the levels of gratefulness that we feel are appropriate.
Sometimes, in fact, they will exhibit negative responses – even quite personal negative responses – to these suggestions.
The problem is this: security folks know how things should be, and that’s secure. They’ve taken the training, they’ve attended the sessions, they’ve read the articles, they’ve skimmed the heavy books, and all of these sources are quite clear: everything must be secure. And secure generally means “closed” – particularly if the security folks weren’t sufficiently involved in the design, implementation and operations processes. Normal people, on the other hand, generally just want things to work. There’s a fundamental disjoint between those two points of view that we’re not going to get fixed until security is the very top requirement for any project from its inception to its ending.
Now, normal people aren’t stupid. They know that things can’t always work perfectly: but they would like them to work as well as they can. This is the gap that we need to cross. I’ve talked about managed degradation as a concept before, in the post Service degradation: actually a good thing, and this is part of the story. One of the things that we security people should be ready to do is explain that there are risks to be mitigated. For security people, those risks should be mitigated by “failing closed”. It’s easy to stop risk: you just stop system operation, and there’s no risk that it can be misused. But for many people, there are other risks: an example being that the organisation may in fact go completely out of business because some ** security person turned the ordering system off. If they’d offered me the choice to balance the risk of stopping taking orders against the risk of losing some internal company data, would I have taken it? Well yes, I might have. But if I’m not offered the option, and the risk isn’t explained, then I have no choice. These are the sorts of words that I’d like to hear if I’m running a business.
It’s not just this type of risk, though. Coming to a project meeting two weeks before launch and announcing that the project can’t be deployed because the calls against this API aren’t being authenticated is no good at all. To anybody. As a developer, though, I have a different vocabulary – and different concerns – to those of the business owner. How about, instead of saying “you need to use authentication on this API, or you can’t proceed”, the security person asks “what would happen if data that was provided on this API was incorrect, or was provided by someone who wanted to disrupt system operation?” In my experience, most developers are interested – are invested – in the correct operation of the system that they’re running, and of the data that it processes. Asking questions that show the possible impact of a lack of security is much more likely to garner positive reactions than an initial “discussion” that basically amounts to a “no”.
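To make that question concrete, here’s a minimal sketch (in Python, and with every name – handle_order, ORDER_API_TOKEN, the payload fields – invented for illustration rather than taken from any real project) of what “authentication on this API” might boil down to: checking a shared token before trusting any of the data in the request.

```python
import hmac
import os

# Hypothetical shared secret; a real deployment would fetch this from a
# proper secrets store rather than an environment variable with a default.
API_TOKEN = os.environ.get("ORDER_API_TOKEN", "change-me")

def handle_order(presented_token: str, order_payload: dict) -> str:
    """Process an order only if the caller has authenticated with a valid token."""
    # Constant-time comparison avoids leaking information through timing.
    if not hmac.compare_digest(presented_token, API_TOKEN):
        # Unauthenticated caller: refuse to trust, or act on, the data.
        return "401 Unauthorized"
    # ... normal order processing would go here ...
    return "200 OK: order accepted for " + str(order_payload.get("item"))
```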
Don’t get me wrong; there are times when we, as security people, need to be firm, and stick to our guns, but in the end, it’s the owners – of systems, of organisations, of business units, of resources – who get to make the final decision. It’s our job to talk to them in words they can understand and to ensure that they are as well informed as we can possibly make them. Without just saying “no”.
1. by which I mean “those poor unfortunate souls who don’t read these posts, unlike you, dear and intelligent reader”.
2. my wife, sadly, seems to fall into this category.
3. which usually have a picture of a lock on the cover.
4. and good luck with that.
5. while we’ve all met our fair share of stupid normal people, I’m betting that you’ve met your fair share of stupid security people, too, so it balances out.
6. probably more than balances out. Let’s leave it there.
8. insert your favourite adjectival expletive here.
9. figuratively: I don’t condone bringing any weapons, including firearms, to your place of work.
Who is saying “hello world?”: you, or the computer?
I don’t yet have one of those Google or Amazon talking speaker thingies in my house or office. A large part of this is that I’m just not happy about the security side: I know that the respective companies swear that they’re only “listening” when you say the device’s trigger word, but even if that’s the case, I like to pretend that I have at least some semblance of privacy in my life. Another reason, however, is that I’m not sure that I like what happens to people when they pretend that there’s a person listening to them, but it’s really just a machine.
It’s not just Alexa and the OK, Google persona, however. When I connect to an automated phone-answering service, I worry when I hear “I’ll direct your call” from a non-human. Who is “I”? “We’ll direct your call” is better – “we” could be the organisation with whom I’m interacting. But “I”? “I” is the pronoun that people use. When I hear “I”, I’m expecting sentience: if it’s a machine I’m expecting AI – preferably fully Turing-compliant.
There’s a more important point here, though. I’m entirely aware that there’s no sentience behind that “I”, but there’s an important issue about agency that we should unpack.
What, then, is “agency”? I’m talking about the ability of an entity to act on its own or another’s behalf, and I touched on this in a previous post, “Wow: autonomous agents!“. When somebody writes some code, what they’re doing is giving the system that will run that code the ability to do something – that’s the first part. But the agency doesn’t really occur, I’d say, until that code is run/instantiated/executed. At this point, I would argue, the software instance has agency.
But whose agency, exactly? For whom is this software acting?
Here are some answers. I honestly don’t think that any of them is right.
Another way to think of this problem is to ask: when you write and execute a program, who is saying “hello world?”: you, or the computer?
There are some really interesting questions that come out of this. Here are a couple that come to mind, which seem to me to be closely connected.
Why does this all matter? Well, one of the more pressing reasons is because of self-driving cars. Whose fault is it when one goes wrong and injures or kills someone? What about autonomous defence systems?
And here’s the question that really interests – and vexes – me: is this different when the program which is executing can learn? I don’t even mean strong AI: just that it can change what it does based on the behaviour it “sees”, “hears” or otherwise senses. It feels to me that there’s a substantive difference between:
a) actions carried out at the explicit (asynchronous) request of a human operator, or according to sets of rules coded into a program
b) actions carried out in response to rules that have been formed by the operation of the program itself. There is what I’d call synchronous intent within the program.
You can argue that b) has pretty much always been around, in basic forms, but it seems to me to be different when programs are being created with the expectation that humans will not necessarily be able to decode the rules, and where the intent of the human designers is to allow rulesets to be created in this way.
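To make the a)/b) distinction concrete, here’s a toy sketch – entirely mine, and deliberately trivial: the thermostat, the names and the update rule are all invented – of the difference between a rule a human coded and a rule the program forms for itself:

```python
# a) Asynchronous intent: a human wrote the rule; the program just applies it.
def coded_rule(temperature: float) -> str:
    return "open vent" if temperature > 25.0 else "do nothing"

# b) Synchronous intent: the program adjusts its own rule from what it "sees".
class LearnedRule:
    def __init__(self) -> None:
        self.threshold = 25.0  # a starting point, soon overridden by observation

    def observe(self, temperature: float, vent_was_opened: bool) -> None:
        # Nudge the threshold towards the behaviour actually observed, so the
        # effective rule is no longer one that any human wrote down.
        if vent_was_opened and temperature < self.threshold:
            self.threshold = temperature
        elif not vent_was_opened and temperature > self.threshold:
            self.threshold = temperature

    def decide(self, temperature: float) -> str:
        return "open vent" if temperature > self.threshold else "do nothing"
```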
There is some discussion at the moment as to how and/or whether rulesets generated by open source projects should be shared. I think the general feeling is that there’s no requirement for them to be – in the same way that material I write using an open source text editor shouldn’t automatically be considered open source – but open data is valuable, and finding ways to share it is a good idea, IMHO.
In WarGames, that is the key difference between the system as originally planned and what it ends up doing: Joshua has synchronous intent.
I really don’t think this is all bad: we need these systems, and they’re going to improve our lives significantly. But I do feel that it’s important that you and I start thinking hard about what is acting for whom, and how.
Now, if you wouldn’t mind opening the Pod bay doors, HAL…
1. and yes, I know it’s a pretense.
3. go on – re-watch it: you know you want to.
4. and if you’ve never watched it, then stop reading this article and go and watch it NOW.
5. I think you know the problem just as well as I do, Dave.
This isn’t a password problem. It’s a misunderstanding-of-what-accounts-are-for problem.
Once a year or so, one of the big UK tech magazines or websites does a survey where they send a group of people to one of the big London train stations and ask travellers for their password. The deal is that every traveller who gives up their password gets a pencil, a chocolate bar or similar.
I’ve always been sad that I’ve never managed to be at the requisite station for one of these polls. I would love to get a free pencil – or even better a chocolate bar – for lying about a password. Or even, frankly, for giving them one of my actual passwords, which would be completely useless to them without some identifying information about me. Which I obviously wouldn’t give them. Or again, would pretend to give them, but lie.
The point of this exercise is supposed to be to expose the fact that people are very bad about protecting their passwords. What it actually identifies is that a good percentage of the British travelling public are either very bad about protecting their passwords, or are entirely capable of making informed (or false) statements in order to get a free pencil or chocolate bar. Good on the British travelling public, say I.
Now, everybody agrees that passwords are on their way out, as they have been on their way out for a good 15-20 years, so that’s nice. People misuse them, reuse them, don’t change them often enough, etc., etc. But it turns out that it’s not the passwords that are the real problem. This week, more than one British MP admitted – seemingly without any realisation that they were doing anything wrong – that they share their passwords with their staff, or just leave their machines unlocked so that anyone on their staff can answer their email or perform other actions on their behalf.
This isn’t a password problem. It’s a misunderstanding-of-what-accounts-are-for problem.
People seem to think that, in a corporate or government setting, the point of passwords is to stop people looking at things they shouldn’t.
That’s wrong. The point of passwords is to allow different accounts for different people, so that the appropriate people can exercise the appropriate actions, and be audited as having done so. It is, basically, a matching of authority to responsibility – as I discussed in last week’s post Explained: five misused security words – with a bit of auditing thrown in.
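As a throwaway illustration – a toy of my own, not anything any real organisation runs – the value of individual accounts shows up the moment you try to audit anything:

```python
import datetime
import getpass

def audit(action: str) -> None:
    # Record who did what, and when. A real system would use centralised,
    # tamper-evident logging rather than a local file.
    who = getpass.getuser()
    when = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open("audit.log", "a") as log:
        log.write(f"{when} {who} {action}\n")

# If everybody shares one account, every entry names the same user, and the
# matching of authority to responsibility described above is lost.
audit("read constituent email #4521")
```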
Now, looking at particular things is certainly one action that a person may have responsibility for, but it’s not the main thing. But if you misuse accounts in the way that has been exposed in the UK parliament, then worse things are going to happen. If you willingly bypass accounts, you are removing the ability of those who have a responsibility to ensure correct responsibility-authority pairings to track and audit actions. You are, in fact, setting yourself up with excuses before the fact, but also making it very difficult to prove wrongdoing by other people who may misuse an account. A culture that allows such behaviour is one which doesn’t allow misuse to be tracked. This is bad enough in a company or a school – but in our seat of government? Unacceptable. You just have to hope that there are free pencils. Or chocolate bars.
1. I can’t remember which, and I’m not going to do them the service of referencing them, or even looking them up, for reasons that should be clear once you read the main text.
2. I’m trialling a new form of footnote referencing. Please let me know whether you like it.
3. I guess their email password, but again, I can’t remember and I’m not going to look it up.
4. Or similar.
5. I say “point”…
Untangling responsibility, authority, authorisation, authentication and identification.
I took them out of the title, because otherwise it was going to be huge, with lots of polysyllabic words. You might, therefore, expect a complicated post – but that’s not my intention*. What I’d like to do is try to explain these five important concepts in security, as they’re often confused or bound up with one another. They are, however, separate concepts, and it’s important to be able to disentangle what each means, and how they might be applied in a system. Today’s words are:
- responsibility
- authority
- authorisation
- authentication
- identification
Let’s start with responsibility.
Responsibility
Confused with: function; authority.
If you’re responsible for something, it means that you need to make it happen, and that you answer for it if something goes wrong. You can be responsible for a product launching on time, or for the smooth functioning of a team. If we’re going to be really clear about it, I’d suggest using it only for people. It’s not usually a formal description of a role in a system, though it’s sometimes used as short-hand for describing what a role does. This short-hand can be confusing. “The storage module is responsible for ensuring that writes complete transactionally” or “the crypto here is responsible for encrypting this set of bytes” is just a description of the function of the component, and doesn’t truly denote responsibility.
Also, just because you’re responsible for something doesn’t mean that you can make it happen. One of the most frequent confusions, then, is with authority. If you can’t ensure that something happens, but it’s your responsibility to make it happen, you have responsibility without authority***.
Authority
Confused with: responsibility; authorisation.
If you have authority over something, then you can make it happen****. This is another word which is best restricted to use about people. As noted above, it is possible to have authority but no responsibility*****.
Once we start talking about systems, phrases like “this component has the authority to kill these processes” really means “has sufficient privilege within the system”, and should best be avoided. What we may need to check, however, is whether a component should be given authorisation to hold a particular level of privilege, or to perform certain tasks.
Authorisation
Confused with: authority; authentication.
If a component has authorisation to perform a certain task or set of tasks, then it has been granted power within the system to do those things. It can be useful to think of roles and personae in this case. If you are modelling a system on personae, then you will wish to grant a particular role authorisation to perform tasks that, in real life, the person modelled by that role has the authority to do. Authorisation is an instantiation or realisation of that authority. A component is granted the authorisation appropriate to the person it represents. Not all authorisations can be so easily mapped, however, and may be more granular. You may have a file manager which has authorisation to change a read-only permission to read-write: something you might struggle to map to a specific role or persona.
If authorisation is the granting of power or capability to a component representing a person, the question that precedes it is “how do I know that I should grant that power or capability to this person or component?”. That process is authentication – authorisation should be the result of a successful authentication.
Authentication
Confused with: authorisation; identification.
If I’ve checked that you’re allowed to perform an action, then I’ve authenticated you: this process is authentication. A system, then, before granting authorisation to a person or component, must check that they should be allowed the power or capability that comes with that authorisation – that it is appropriate to that role. Successful authentication leads to authorisation. Unsuccessful authentication leads to blocking of authorisation******.
With the exception of anonymous roles, the core of an authentication process is checking that the person or component is who he, she or it says they are, or claims to be (although anonymous roles can be appropriate for some capabilities within some systems). This checking of who or what a person or component is constitutes authentication, whereas identification is the claim itself, and the mapping of an identity to a role.
Identification
Confused with: authentication.
I can identify that a particular person exists without being sure that the specific person in front of me is that person. They may identify themselves to me – this is identification – and the checking that they are who they profess to be is the authentication step. In systems, we need to map a known identity to the appropriate capabilities, and the presentation of a component with identity allows us to apply the appropriate checks to instantiate that mapping.
Just because you know who I am doesn’t mean that you’re going to let me do something. I can identify my children over the telephone*******, but that doesn’t mean that I’m going to authorise them to use my credit card********. Let’s say, however, that I might give my wife my online video account password over the phone, but not my children. How might the steps in this play out?
First of all, I have responsibility to ensure that my account isn’t abused. I also have authority to use it, as granted by the Terms and Conditions of the providing company (I’ve decided not to mention a particular service here, mainly in case I misrepresent their Ts&Cs).
“Hi, darling, it’s me, your darling wife**********. I need the video account password.” Identification – she has told me who she claims to be, and I know that such a person exists.
“Is it really you, and not one of the kids? You’ve got a cold, and sound a bit odd.” This is my trying to do authentication.
“Don’t be an idiot, of course it’s me. Give it to me or I’ll pour your best whisky down the drain.” It’s her. Definitely her.
“OK, darling, here’s the password: it’s il0v3myw1fe.” By giving her the password, I’ve performed authorisation.
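If you prefer code to dialogue, here’s a toy sketch – mine, not a real protocol, with all the names and checks invented – mapping that phone call to the three steps:

```python
from typing import Optional

KNOWN_PEOPLE = {"wife", "child"}     # identities I know exist
VIDEO_PASSWORD_HOLDERS = {"wife"}    # identities I'm prepared to authorise

def identify(claimed_identity: str) -> bool:
    # Identification: a claim is made, and it maps to someone I know exists.
    return claimed_identity in KNOWN_PEOPLE

def authenticate(claimed_identity: str, sounds_right: bool,
                 threatens_the_whisky: bool) -> bool:
    # Authentication: checking the claimant really is who they claim to be.
    return identify(claimed_identity) and (sounds_right or threatens_the_whisky)

def authorise(identity: str) -> Optional[str]:
    # Authorisation: granting the capability, and only after authentication.
    return "il0v3myw1fe" if identity in VIDEO_PASSWORD_HOLDERS else None

if authenticate("wife", sounds_right=False, threatens_the_whisky=True):
    print(authorise("wife"))   # authorisation granted: the password is released
```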
It’s important to understand these different concepts, as they’re often conflated or confused, but if you can’t separate them, it’s difficult not only to design systems to function correctly, but also to log and audit the different processes as they occur.
*we’ll have to see how well I manage, however. I know that I’m prone to long-windedness**
**ask my wife. Or don’t.
***and a significant problem.
****in a perfect world. Sometimes people don’t do what they ought to.
*****this is much, much nicer than responsibility without authority.
******and logging. In both cases. Lots of logging. And possibly flashing lights, security guards and sirens on failure, if you’re into that sort of thing.
*******most of the time: sometimes they sound like my wife. This is confusing.
********neither should you assume that I’m going to let my wife use it, either.*********
*********not to suggest that she can’t use a credit card: it’s just that we have separate ones, mainly for logging purposes.
**********we don’t usually talk like this on the phone.
… algorithms, we know, are not always correctly implemented …
Imagine that you’re about to play a boardgame which involves using dice. I don’t know: Monopoly, Yahtzee, Cluedo, Dungeons & Dragons*. In most cases, at least where you’re interested in playing a fair game, you want to be pretty sure that there’s a random distribution of the dice roll results. In other words, for a 6-sided dice, you’d hope that, for each roll, there’s an equal chance that any of the numbers 1 through 6 will appear. This seems like a fairly simple thing to want to define, and, like many things which seem to be simple when you first look at them, mathematicians have managed to conjure an entire field of study around it, making it vastly complicated in the process****.
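If you want to see what that “equal chance” looks like in practice, a few lines of Python will do it (note that Python’s random module is perfectly good for boardgames, but – as we’ll see below – not for cryptographic keys):

```python
import random
from collections import Counter

# Roll a six-sided dice many times: each face should turn up roughly
# one sixth of the time.
rolls = Counter(random.randint(1, 6) for _ in range(60_000))
for face in range(1, 7):
    print(face, rolls[face])
```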
Let’s move to computers. As opposed to boardgames, you generally want computers to do the same thing every time you ask them to do it, assuming that you give them the same inputs: you want their behaviour to be deterministic when presented with the same initial conditions. Random behaviour is generally not a good thing for computers. There are, of course, exceptions to this rule, and the first is when you want to use computers to play games, as things get very boring very quickly if there’s no variation in gameplay.
There’s another big exception: cryptography. In fact, it’s not all of cryptography: you definitely want a single plaintext to be encrypted to a single ciphertext under the same key in almost all cases. But there is one area where randomness is important: and that’s in the creation of the cryptographic key(s) you’re going to be using to perform those operations. It turns out that you need to have quite a lot of randomness available to create a key which is truly unique – and keys really need to be truly unique – and that if you don’t have enough randomness, then not only will you possibly generate the same key (or set of them) repeatedly, but other people may do so as well, allowing them to guess what keys you’re using, and thereby be able to do things like read your messages or pretend to be you.
Given that these are exactly the sorts of things that cryptography tries to stop, it is clearly very important that you do have lots of randomness.
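Jumping ahead a little: in practice, what you end up doing is asking the operating system for cryptographically strong random bytes, and letting it worry about the entropy gathering described below. A minimal Python sketch (the 32-byte length is just an illustrative choice):

```python
import secrets

# Draw key material from the OS's cryptographically secure randomness source;
# 32 bytes gives 256 bits of key material.
key = secrets.token_bytes(32)
print(key.hex())
```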
Luckily, mathematicians and physicists have come to our rescue. Their word for randomness is “entropy”. In fact, what mathematicians and physicists mean when they talk about entropy is – as far as my understanding goes – a much deeper and more complex issue than just randomness. But if we can find a good source of entropy, and convert it into something that computers can use, then we should have enough randomness to do all the things that we want to do with cryptographic key generation*****. The problem in the last sentence is the “if” and the “should”.
First, we need to find a good source of entropy, and prove that it is good. The good thing about this is that there are, in fact, lots of natural sources of entropy. The airflow around computers is often random enough that the temperature variances it causes can be measured and will provide good enough entropy. Human interactions with peripherals, such as mouse movements or keyboard strokes, can provide more entropy. In the past, variances between network packet receive times were used, but there’s been some concern that these are actually less random than previously thought, and may be measurable by outside parties******. There are algorithms that allow us to measure quite how random entropy sources are – though they can’t make predictions about future randomness, of course.
Let’s assume, though, that we have a good source of entropy. Or let’s not: let’s assume that we’ve got several pretty good sources of entropy, and that we believe that when we combine them, they’ll be good enough as a group.
And this is what computers – and the Operating Systems that run on them – generally do. They gather data from various entropy sources, and then convert it to a stream of bits – your computer’s favourite language of 1s and 0s – that can then be used to provide random numbers. The problem arises when they don’t do it well enough.
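As a purely illustrative sketch – this is not how any particular Operating System actually does it, and you should never roll your own entropy collection for real keys – “gathering and combining” might look like samples from several sources being mixed through a hash function:

```python
import hashlib
import os
import time

def sample_sources() -> bytes:
    # Samples from sources of varying quality: everything goes into the pool.
    return b"".join([
        str(time.perf_counter_ns()).encode(),  # timing jitter
        os.urandom(8),                         # whatever the OS already provides
        str(os.getpid()).encode(),             # very low entropy, but harmless to add
    ])

# "Combination": mix the samples through a hash function so that the output
# doesn't depend on any single source being perfect.
pool = hashlib.sha256()
for _ in range(1000):
    pool.update(sample_sources())
seed = pool.digest()   # 32 bytes of conditioned output
print(seed.hex())
```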
Things can go wrong for a variety of reasons, the main two being bad sampling and bad combination. Even if your sources of entropy are good, if you don’t sample them in an appropriate manner, then what you actually get won’t reflect the “goodness” of that entropy source: that’s a sampling problem. This is bad enough, but the combination algorithms are supposed to smooth out this sort of issue, assuming it’s not too bad and you have enough sources of entropy. However, when you have an algorithm which isn’t actually doing that, or isn’t combining even well-sampled, good sources correctly, then you have a real issue. And algorithms, we know, are not always correctly implemented – and there have even been allegations that some government security services have managed to introduce weakened algorithms – with weaknesses that only they know about, and can exploit – into systems around the world. There have been some very high profile examples of poor implementation in both the proprietary and open source worlds, which have led to real problems in actual deployments. At least, when you have an open source implementation, you have the chance to fix it.
That problem is compounded when – as is often the case – these algorithms are embedded in hardware such as a chip on a motherboard. In this case, it’s very difficult to fix, as you generally can’t just replace all the affected chips, and the problem may also be difficult to trace. Whether you are operating in hardware or software, however, the impact of a bad algorithm which isn’t spotted – at least by the Good Guys and Gals[tm] – for quite a while is that you may have many millions of weak keys out there, which are doing a very bad job of protecting identities or private data. Even if you manage to replace these keys, what about all of the historical encryptions which, if recorded, can now be read? What if I could forge the identity of the person who signed a transaction buying a house several years ago, to make it look like I now owned it, for instance?
Entropy, then, can be difficult to manage, and when we have a problem, the impact of that problem can be much larger than we might immediately imagine.
*I’m sure that there are trademarks associated with these games**
**I’m also aware that Dungeons & Dragons*** isn’t really a boardgame
***I used to be a Dungeon Master!
****for an example, try reading just the first paragraph of the entry for stochastic process on Wikipedia.
******another good source of entropy is gained by measuring radioactive decay, but you generally don’t want to be insisting that computers – or their human operators – require a radioactive source near enough to them to be useful.
The first thing to know about blockchain smart contracts is they’re not contracts, smart or necessarily on a blockchain.
The first thing to know about blockchain smart contracts is they’re not contracts, smart or necessarily on a blockchain. They are, in fact, singularly ill-named*. Let’s address these issues in reverse order, and we should find out exactly what a smart contract actually is along the way. First, an introduction to what transactions are, and to things which aren’t transactions.
The best known blockchains are those underpinning crypto-currencies like bitcoin**. The thing about currencies – virtual or not – is that what you mainly want to do is buy or sell things using them. What you want is a simple transaction model: “once I provide you with this service, you’ll give me this amount of currency.” We know how this works, because every time we buy something in a shop or online, that’s what happens: the starting state is that I have x amount, and the state after completion of the transaction is that I have x-y amount, and you have y amount****. It’s the moving from one state to another that you care about before you complete the transaction. Most crypto-currencies are set up to support this type of construct.
This is great, but some clever people realised that there are actually many different ways to do this. Ethereum was where non-transactional constructs made it big time, and Solidity is the best known example. Both, I’m pleased to say, are open source projects. Why not have a more complex set of conditions that need to be met before I hand over whatever it is that I’m handing over? And – here’s the clever bit – why not write those in code that can be executed by computers? You might want the currency – or whatever – only to be released after a certain amount of time, or if a stock price keeps within a particular set of boundaries, or if a certain person continues to be prime minister*****, or if there’s no unexpected eclipse within the next five days******. You could have complex dependencies, too: only complete if I write a new post three weeks in a row and nobody writes unpleasant comments about any of them*******. Write this code, and if the conditions are met, then you move to the next state.
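Here’s a toy sketch of that idea – deliberately in Python rather than a real contract language such as Solidity, and with the conditions, bounds and function name all invented for illustration – of a state change that only happens once the coded conditions are met:

```python
import datetime

def maybe_release_payment(balance: float, release_after: datetime.date,
                          stock_price: float) -> float:
    # Move to the next state only if all of the coded conditions are met;
    # otherwise the state stays exactly as it was.
    conditions_met = (
        datetime.date.today() >= release_after   # enough time has passed
        and 90.0 <= stock_price <= 110.0         # price stayed within agreed bounds
    )
    return 0.0 if conditions_met else balance    # 0.0 means the funds were released

new_balance = maybe_release_payment(
    balance=100.0,
    release_after=datetime.date(2018, 1, 1),
    stock_price=101.5,
)
```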
Let’s start addressing those “not” statements.
Now, in a blockchain, the important thing is that once the state has changed, you then ensure that it’s recorded on the blockchain so that it’s public and nobody can change or challenge it. But there are other uses for blockchain technology, as I explained in Is blockchain a security topic? Permissioned systems, often referred to as DLTs, or “Distributed Ledger Technologies”, are a great fit for non-transactional state models, largely because the sort of people who are interested in them are closed groups of organisations who want to have complex sets of conditions met before they move to the next state. These aren’t, by the tightest definition, blockchains. Banks and other financial institutions may be the most obvious examples where DLTs are gaining traction, but they are very useful in supply chain sectors, for instance, where you may have conditions around changing market rates, availability and shipping times or costs which may all play into the final price of the commodity or service being provided.
Smart contracts could, I suppose, be smart, but for me, that means complex and able to react to unexpected or unlikely situations. I think that people call them “smart” because they’re embodied in code, not for the reasons I’ve suggested above.
That’s actually a very good thing, I think, because I don’t think we want them to mean what I was talking about. Most of the usages that I’m aware of for “smart contracts” are where two or more organisations agree on a set of possible outcomes of a system based on a set of known and well-constrained conditions. This is what contracts are generally about, and although I’m about to argue with that part of the nomenclature as well, in this context it’s fairly apposite. What you want, generally, is not unexpected or unlikely situations and smart processing in an AI/ML type way, because if you get those, then the parties involved are likely to get outcomes that at least one or more of them are surprised, and likely unhappy, about. Simple – or at least easily defined – is a key behaviour that you’re going to want built into the system. The Solidity project, for example, seems aware of at least some of these pitfalls, and suggests that people using smart contracts employ formal verification, but as we’ll see below, that just scratches the surface of the problem.
Of course, there are some contracts – IRL contracts – that exist to manage complex and unexpected conditions. And they exist within a clear legal jurisdiction. The words and phrases that make them up are subject to specific and well-defined processes, with known sanctions and punishments where the conditions of the contract are not met or are broken. There are often instances where these are challenged, but again, clear mechanisms exist for such challenges.
For now, “smart contracts” just don’t fit this description of a contract. Just mapping legal contractual wording to computer code is a very complex process, and the types of errors to which processing of code is prone don’t have a good analogue within the justice system. There’s the question, as well, of which jurisdiction is relevant. This is usually described in the contract terms, but what if the processing of the “smart contract” takes place in a different jurisdiction to that of the parties involved, or even in an unknown jurisdiction? Should this matter? Could this matter? I don’t know, and I also don’t know what other types of issue are going to crawl out of the woodwork once people start relying on these constructs in legally-enforceable ways, but I doubt they’re going to be welcome.
We’re not helped, as well, by the fact that when IT people talk about software contracts, they’re talking about something completely different: it’s the advertised behaviour of a system in the context of known inputs and starting conditions.
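For what it’s worth, that software sense of “contract” is easy to sketch – here’s a toy example (the function and its conditions are invented) where the advertised behaviour is pinned down as pre- and post-conditions:

```python
def transfer(balance: int, amount: int) -> int:
    """Advertised behaviour: given 0 <= amount <= balance, return the new
    balance, which is never negative."""
    assert 0 <= amount <= balance, "precondition violated"
    new_balance = balance - amount
    assert new_balance >= 0, "postcondition violated"
    return new_balance
```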
Once a transaction – or “smart contract” – has completed, and made its way onto the blockchain or distributed ledger, it is immutable, pretty much by definition. But what about before then? Well, simple transactions of the type described at the beginning of this post are atomic – they happen or they don’t, and they are “indivisible and irreducible”, to use the jargon. They are, for most purposes, instantaneous.
The same is not true for “smart contracts”. They require processing, and therefore exist over time. This means that while they are being processed, they are subject to all the sorts of attacks to which any system may be vulnerable. The standard list is:
This post started with what may have seemed to be a pedantic attack on a naming convention. As I think will probably be clear********, I’m not comfortable with the phrase “smart contract”, and that’s mainly because I think it has caused some people to think that these constructs are things that they’re not. This, in turn, is likely to mean that people will use them in contexts where they’re not appropriate.
I also worry that, because words bring baggage with them, this will lead to people not fully thinking through the impact of security on these constructs. And I think that the impact can be very significant. So, if you’re looking into these constructs, please do so with your eyes open. I’ve not talked much in this article about mitigations, but some exist: keep an eye on future posts for more.
*I like to think that the late, lamented authors Terry Pratchett and Douglas Adams would both appreciate smart contracts for exactly this reason.
**the first thing you’ll find many bitcoin commentators saying is “I wish I’d bought in early: I’d be a multi-millionaire by now.”***
***I wish I’d bought in early: I’d be a multi-millionaire by now.
****less taxes or house cut. Sorry – that’s just the way the world works.
******I’m not expecting one. I’d tell you, honest.
*******this is not an invitation.
********if it isn’t by now, either you’ve not read this carefully enough, or I’ve done a bad job of explaining. Try reading it again, and if that doesn’t help, write a comment and I’ll try to explain better.