In a recent post, I talked about how most people are loath to discuss collapse, some even believing that talking about it makes it more likely to happen. After all, Business As Usual is working just fine and everything is going to be alright. Right?
In contrast to this sort of head-in-the-sand optimism there are people (admittedly a much smaller number of us) who like to focus on existential threats—things that promise, at the very least, to wipe out a large chunk of our human population and, at the worst, to bring an end to life on earth. When you start looking into this you'll find that there are quite a variety of such threats. In my next two posts, I'll take a look at a selection of them. I'll explain why I think that the kind of collapse that I've been talking about is the threat most worthy of most of our attention. And in the process we'll get a clearer picture of what kind of collapse that is.
This post, though, is about the virtues of worrying and how to evaluate existential threats.
What "virtues of worrying", you ask? Worry certainly isn't in fashion these days. On social media one frequently sees a little flowchart about when to worry. All paths seem to lead to the same conclusion—"don't worry".
Now, I admit to being somewhat of a worrywart. Perhaps because of that I can see several things wrong with this flowchart. First, when you don't know, you need to find out. While you are finding out, worry serves as an incentive. And at the bottom of the chart the possibility that there is a problem, and that you can do something about it, is very much underemphasized. Worry serves as an incentive to find out what you can do, make a plan and then execute it. When you've set that in motion, I guess maybe you could quit worrying, but instead I would swing back around to the top left of the chart and see if there is anything else to worry about.
On the other hand, it is true that you can waste a great deal of time and mental anguish worrying about things over which you have no real influence. You have to identify problems that you can actually do something about and concentrate your efforts there. Of course, different people will reach different conclusions—it's a big world and there is lots of room for disagreement. We can't really determine what the right thing to do is without a lot of trial and error, so a diversity of response is a good thing in that it makes it more likely that some of those responses will be more or less successful.
I'll borrow a "new word" from John Michael Greer—"dissensus". The opposite of consensus, dissensus means agreeing to disagree and wishing the other guy all the best even if you think his ideas are outright crazy or stupid. Provided, of course, that he extends a similar courtesy to you. I've noticed that when people are willing to do this, and then find themselves faced with a serious threat, it often turns out that on important points like "what the heck do we do next" there is a remarkable degree of agreement. Ideological differences can be set aside when we are dealing with more immediate problems.
What I am expressing here on this blog is my own point of view, which you are free to disagree with. I do wish you all the best in pursuing your own point of view. And if we find ourselves coming up with similar plans, it may be that we can help each other to put them into action.
What is my point of view? Well, I have a great deal of faith in the scientific consensus—we really don't have any better way than science of finding out about the world around us, and in the last few hundred years science has built up a pretty useful picture of that world.
Some will no doubt ask, "How can you question BAU and expect it to collapse and yet still be in favour of the scientific consensus?"
It is a common error to conflate the scientific consensus with the "official stories" that are the basic myths of Business As Usual. You can hardly blame anyone for jumping to the conclusion that BAU and science are on the same side, since every effort is made to use science to legitimize the ideas of BAU. Those myths are pushed by politicians, economists and business. They are dressed up in the kind of pseudoscientific costumes that make them hard to distinguish from reality. The "Biggest Lie" that I talked about recently, the idea that our population and consumption can go on growing forever on a finite planet, is at the heart of this false worldview.
There are lots of people who don't completely buy into BAU. And there are multi-billion dollar per year businesses (organic farming, health food, and alternative medicine, to name just a few) that take advantage of that, spending a great deal on propaganda and doing a good job of positioning themselves as being in opposition to Business as Usual. There is money to be made in that business, but the pseudoscience they are selling is just as bad as the myths from regular BAU. The people pushing both of these ideologies are very adept at finding the parts of science that happen to agree with their positions and flogging them for all they are worth to further their cause.
The idea of these two conflicting ideologies, both of which are wrong, is central to what I am talking about on this blog and you'll find it coming up again and again. Last year I wrote a series of posts on the subject:
- Business as Usual, Crunchiness and Woo, Part 1
- Business as Usual, Crunchiness and Woo, Part 2: BAU and The Religion of Progress
- Business as Usual, Crunchiness and Woo, Part 2b: More on what's wrong with Business as Usual
- Business as Usual, Crunchiness and Woo, Part 3: Focusing on the Woo in Crunchiness
- Business as Usual, Crunchiness and Woo, Part 4: A Reality Based Approach
If anybody can suggest a better term than "Crunchy", something less pejorative and more mellifluous, I'd sure be happy to use it. Setting aside all the pseudoscience for a moment, Crunchiness, in its opposition to BAU, is on the right track.
Anyway, if you actually take the time and make the effort to understand how science works and what the current scientific consensus is, you'll realize that it does not particularly support either of these ideologies. But for a great many people, who don't have any real background in science, the combination of conflicting ideologies and pseudoscience is extremely misleading.
One unfortunate side effect of this is that a great deal of worry and effort is wasted on problems about which nothing needs to be done (because the risk is vanishingly small), or about which nothing can be done (because solutions are beyond our reach). Risk assessment is the key to avoiding this sort of waste.
Ask yourself four things when considering any particular problem or threat:
- Risk: what is the likelihood of this happening?
- Severity: what are the consequences if this does happen?
- Difficulty: how hard will it be to do something about this?
- Timescale: how soon will this happen?
If you study up on any existential threat, you'll find reliable experts who have already considered the problem and have a lot of wisdom to offer.
Based on the answers you find to each of those questions, you will decide to worry or not:
- If risk is small, there isn't much to worry about and little need to plan a response.
- If the severity is small, same conclusion.
- If it would be easy to do something about the threat, you may want to take some action even if the risk and/or severity are small. If the response is difficult, it will require detailed planning and the mobilization of forces beyond yourself. And you will need time to mount a response.
- Sometimes it is simply not possible to stop a threat from happening, so the action we can realistically take consists of preparing to cope with its effects.
- If the timescale is short, you'll want to plan and act immediately, preferably drawing on resources you already have in place.
- If the timescale is long, then you may use that time to plan and mobilize your response, or you may decide to just watch and wait until it is clearer what's going to happen and when. Of course, ignoring threats that are on a long timeline is a tempting but dangerous approach. Eventually that timeline will get a lot shorter.
You'll plan a response based on the nature of the threat and follow up with action, or go and look for something else to worry about. After the first few times you run through your list of threats, you will already have made plans and started to implement them, so the time for worry is over. Of course, you'll always want to keep a "weather eye" out for trouble that you haven't anticipated, or established threats that have changed and now require a different response.
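The decision rules above can be sketched as a simple triage function. This is only a toy model: the thresholds, scales, and category names are my own illustrative assumptions, not a formal method from the post.

```python
# Toy triage of a threat using the four questions above.
# All thresholds and scales here are illustrative assumptions.

def triage(risk, severity, difficulty, years_until):
    """Suggest a stance toward a threat.

    risk:        probability of the threat occurring (0.0 - 1.0)
    severity:    rough fraction of what you value that is at stake (0.0 - 1.0)
    difficulty:  how hard a response is, 0 (trivial) to 1 (near impossible)
    years_until: rough time before the threat materializes
    """
    if risk < 0.01 or severity < 0.01:
        # Low risk or low severity: little need to plan a response,
        # unless the precaution is nearly free anyway.
        return "cheap precaution" if difficulty <= 0.1 else "monitor"
    if difficulty > 0.9:
        # The threat can't realistically be stopped:
        # prepare to cope with its effects instead.
        return "prepare to cope"
    if years_until < 1:
        # Short timescale: act immediately with resources in place.
        return "act now"
    # Long timescale: plan and mobilize, or watch and wait --
    # remembering the timeline will eventually get shorter.
    return "plan and mobilize"

print(triage(risk=0.5, severity=0.8, difficulty=0.5, years_until=0.5))
```

The point of the sketch is only that each of the four questions prunes the decision before the next one is asked; real threats, of course, resist being reduced to four numbers.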
There are a few challenges involved with this approach that we should consider here.
How to identify a reliable expert is certainly one of those challenges. Unfortunately the letters "Dr." in front of a name is no guarantee that someone is either an expert or reliable. I can recommend only skepticism, critical thinking and learning to identify the many types of bias and the sort of dirty tricks used by those producing pseudoscience. After a while you will develop a sort of "BS" detector that goes off when you are confronted with pseudoscience. A big part of that is knowing what the current scientific consensus says and being skeptical about claims that contradict that consensus. Extraordinary claims require extraordinary proof. And it is reassuring to find many researchers turning up the same findings, and interpreting them in similar ways.
Some will be eager to point out that scientists working for business concerns are certainly not reliable and their work just can't be trusted, as it will be biased to the advantage of the company. Sometimes this is true, but just as often it is not true. Your evaluation of that work must be based on the evidence, not on ideology—theirs, or yours.
Evaluating risk can also be quite challenging. In my experience there are a couple of particular pitfalls that people encounter. There are probably more, but these are the ones I know personally.
People often look at risk as being "monotonic". That is, if something is dangerous in large quantities, it must also be dangerous in small quantities—it may take longer for the harm to become evident, but there is still harm. This certainly sounds reasonable, but in most cases it is simply not true. Take radiation as an example. There is no doubt that ionizing radiation can kill in large quantities. This makes it frightening and since it can't be seen and is poorly understood, many people don't want to have anything to do with it, assuming that any release of radiation will affect them negatively.
But life on earth has been dealing with small quantities of radiation since day one and has evolved mechanisms for coping with the "background radiation". Most releases of radiation result in a barely detectable increase in the background and are not a serious concern. Of course, if you work in the nuclear industry, where there is the chance of exposure to significant amounts of radiation, you should take safety procedures seriously. And that brings me to my second risk evaluation pitfall.
The majority of the people I worked with during my career in the electrical transmission and distribution industry were quite brave. We often worked in close proximity to serious hazards and while a healthy respect was vital, outright fear would have been crippling. So far, so good. But some of the hazards we encountered were less straightforward. If there is a one in ten chance of serious injury, essentially everyone will take the appropriate precautions. But at some level of decreasing risk, many people will decide to just accept the risk, rather than do much about it. Especially if the precautions they are expected to take (procedures and protective equipment) are rather onerous.
At what level of risk is that a reasonable response? One in a million? Probably. But what about one chance in a thousand? My experience is that there is a range of risk that is significant, but that many people find hard to take seriously. The trouble is, if a large population is exposed to the risk on a regular basis, the odds are good that someone is going to get hurt fairly soon. That's why the safety rules are written and why supervisors (me at one point) have to enforce them, even if they didn't take them so seriously when they were workers. Something to keep in mind when evaluating risks.
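A little arithmetic makes the point about large populations concrete. If each exposure carries a small chance p of injury, the chance that somebody gets hurt over n independent exposures is 1 − (1 − p)^n. The crew sizes and exposure counts below are hypothetical numbers chosen only for illustration:

```python
# Probability that at least one injury occurs across many exposures,
# assuming each exposure is an independent event with probability p.
# Crew sizes and the number of working days are hypothetical.

p = 1 / 1000     # chance of injury per exposure
days = 250       # exposures per worker per year

for crew in (1, 10, 100):
    n = crew * days                 # total exposures in a year
    p_any = 1 - (1 - p) ** n        # chance somebody gets hurt
    print(f"crew of {crew:3d}: {p_any:.1%} chance of an injury this year")
```

A one-in-a-thousand risk feels negligible to an individual worker, but spread over a whole crew for a year it becomes a near certainty, which is exactly why the rules get written and enforced.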
One final thing I should point out—it seems to me that mankind as a whole is at or just past the peak of our ability to respond to large existential threats. From here on in, as collapse proceeds, it's all a bumpy downhill ride. The best we will be able to do, in a great many cases, is to mitigate the effects of what is coming. And since it appears that governments aren't interested in, and increasingly don't have the resources for, organizing such a response, this will have to be done on an individual, family, or at most, community level.
So, what are some of these threats? I've divided them into two groups: non-anthropogenic (not manmade) and anthropogenic (manmade), and I'll be covering them in my next two posts.