Sunday, May 08, 2016

Why a "modern" can't understand the risks we face

In my previous piece, I discussed why it is useless to argue with a person clinging to what I called the "religion" of modernism. I summarized four main tenets of the modern outlook as follows:

  1. Humans are in one category and nature is in another.
  2. Scale doesn't matter.
  3. History can be safely ignored since modern society has seen through the delusions of the past.
  4. Science is a unified, coherent field that explains the rational principles by which we can manage the physical world.

These assumptions make modern humans particularly susceptible to becoming captives of the bell curve. Our understanding of risk is mediated by a misleading picture of regularity in the physical world and in human society. Moderns believe that nearly all risks--and certainly the nontrivial ones relating to our survival as a species--can be easily calculated and managed.

The truth about risk is actually much more disturbing. The generator of events in the universe is hidden from us humans. We see the results and make up theories about the causes and the processes. Some theories work well--those predicting the orbits of planets, for example. But others have a spotty track record. Economist John Kenneth Galbraith, remarking on his own profession, once said: "The only function of economic forecasting is to make astrology look respectable."

The idea that the study of human psychology, sociology and economics would yield theories as powerful as those we have for predicting the orbits of planets has long since been abandoned (except by economists, it seems). Humans remain quite unpredictable. And, the trends in the societies in which we live are all the more difficult to perceive and forecast since there are so many people interacting with each other using our worldwide communications and logistics system, each pursuing their individual aims.

Now let's return to the bell curve, a famous statistical construct. Many phenomena in nature when tallied on a graph result in a bell curve. Such a curve can be quite useful for understanding distributions of physical characteristics that are constrained by the laws of physics and biology. For example, we can reasonably predict that a distribution of human height will fall along something resembling a bell curve. The constraints of biology and gravity imply a range for the stature of humans. We might expect to see very few adult humans who are either 3 feet tall or 7 feet tall, but many in between. We would, however, expect to see none who are 100 feet tall. And we could easily arrive at an average--say, 5 feet--that would not be far from any individual.

Social phenomena, such as wealth distribution, are not governed by the laws of physics in the usual sense. While one might find quite a few people at a social gathering who are near 5 feet in height, there would be no one who is 5,000 feet tall. On the other hand, it is quite possible for one person in a room to have a net worth of $50,000 and another to have 1,000 times that or $50 million. There is no physical constraint on the creation of money other than the energy required by a clerk to type instructions into a computer at a central bank.

While social phenomena such as wealth distribution do not follow the same pattern as physical phenomena, they can still be quantified and illustrated--but their distributions tend to be fat-tailed, with extreme outliers that a bell curve would rule out.
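The contrast can be sketched in a few lines of code. All the numbers below--a 66-inch mean height, a $50,000 wealth floor, a Pareto shape parameter of 1.16 (a common stand-in for the "80/20" pattern)--are illustrative assumptions, not data from this article:

```python
import random
import statistics

random.seed(42)

# Heights (in inches): physically constrained, so roughly bell-shaped.
# Mean 66, standard deviation 4 -- illustrative numbers only.
heights = [random.gauss(66, 4) for _ in range(100_000)]

# Wealth: no physical ceiling, so a fat-tailed (Pareto) distribution
# is a common modeling choice. alpha = 1.16 is an assumed shape.
wealth = [50_000 * random.paretovariate(1.16) for _ in range(100_000)]

# For heights, the average sits close to every individual...
print(f"height  mean={statistics.mean(heights):.1f}  max={max(heights):.1f}")

# ...but for wealth, a single outlier can dwarf the average.
print(f"wealth  mean={statistics.mean(wealth):,.0f}  max={max(wealth):,.0f}")
```

Run it and the tallest of 100,000 simulated people is only a foot or so above the mean, while the richest holds many thousands of times the baseline fortune--exactly the asymmetry the bell curve cannot represent.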

So far, we've been talking about things which we can readily measure, and we have said nothing about the future. This is where things get sticky. Risk is all about judging the likelihood of something happening in the future--and we can know nothing about the future for certain. (Even the orbit of a planet might be altered by its collision with a comet or a rogue planet. This is unlikely in a short time frame, but grows ever more likely with time--admittedly long spans of time.)

Now, it is one thing to say that in the future adult humans are very likely to remain mostly between 3 feet and 7 feet tall with a few outliers, but none 100 or 1,000 feet tall (unless the laws of biology and physics change). It is quite another to predict the stock market, predict world oil supplies 40 years from now, predict the date of the next world war (which we'd have to define since there are wars going on all the time) or predict human population 1,000 years from now.

There are so many variables which affect predictions such as these that all we might do is hazard a guess. If we end up being right, it will be more a matter of luck than method.

But a "modern" might make generalized but confident predictions about some of these. The stock market will go up in the long run, say, over the next 50 years, because economic growth will continue apace during that time--growth resulting from the deployment of many new technologies and new abundant, cheap energy sources.

A modern might predict that oil supplies will be irrelevant 40 years from now, or that they will keep growing over that period because of--you guessed it--new technologies.

A modern might predict that human population will be larger in 1,000 years as the human ability to provide for greater populations with much higher efficiency continues to develop.

Part of what is lacking in these pronouncements is an understanding or even acknowledgement of the risks inherent in the technology that will allow these felicitous (depending on your point of view) outcomes.

Since we cannot view the generator of events in the world, we can only theorize about causes and effects, never know them for certain. While the interactions among unpredictable humans make social forecasting very difficult, adding that unpredictability to human interactions with the physical environment makes long-term forecasting in human affairs as a practical matter impossible.

And here we must acknowledge that our understanding of the physical world is very limited, however much we may think it is comprehensive. Scientists in all disciplines continue to discover relationships and processes which challenge long held views. If such revelations happen over just one lifetime, and we are basing our projections on our current understanding, then we simply cannot fathom how perceptions of the world around us will change over long periods--or whether those new perceptions will tell us that we are getting ever closer to a complete picture of the universe or that we will never arrive at one.

The modern seems unaware of what I've called the chief intellectual challenge of our age, namely, that we live in complex systems, but we don't understand complexity. I alluded to complexity as a double-edged sword in my previous piece, both a tool for adaptation and a barrier to it.

The failure to understand how little we know about the world we live in and the inability to see that the world cannot be reduced to an engineering problem have led us to deploy inventions the consequences of which we cannot know--and more important, which threaten systemic ruin for human civilization.

A friend of mine calls this the Midgley Effect after the mechanical engineer and chemist Thomas Midgley Jr. Midgley was responsible for two major inventions which are no longer in use because they proved so injurious.

One, lead in gasoline, has had myriad well-documented public health effects. Yet, at the time of its invention, lead was heralded as an innocuous additive to gasoline to improve engine performance. Almost no thought was given to where the lead would go once it exited the tailpipes of the world's gasoline-powered transportation fleet.

This theme carried over into Midgley's other now infamous invention, chlorofluorocarbons, known by the trade name Freon. The world needed a liquid that would be highly volatile and chemically inert to aid the spread of refrigeration. Early refrigerators used toxic, flammable and corrosive liquids to transfer heat from the inside to the outside of the refrigerator. Chlorofluorocarbons as a nontoxic and nonflammable refrigerant seemed an ideal solution.

The problem, of course, was that no one thought about the systemic risks of releasing chlorofluorocarbons into the environment, substances which were designed to persist over decades.

If it were not for the efforts of one curious scientist, F. Sherwood Rowland, in the early 1970s, we might not have learned about the emerging catastrophic interaction between chlorofluorocarbons and the ozone layer. Rowland asked a simple question: Where do chlorofluorocarbons go after they are released into the environment?

The answer was shocking. They were reaching the ozone layer and destroying it thereby threatening all life on Earth, life which had evolved under the ozone layer's protection from the sun's ultraviolet radiation. This was really a case of potential catastrophic ruin that might have gone undetected until the damage was far more advanced.

Rowland's research led to the Montreal Protocol in 1987, a worldwide agreement to phase out the use of ozone-destroying chemicals.

But the inventor of chlorofluorocarbons was widely lauded during his lifetime, winning several top awards for his achievements in chemistry and even serving as president of the American Chemical Society.

Since then, we have had many examples of worldwide systemic releases of dangerous chemicals which were thought to be innocuous or at least "safe" by the standards of the day.

Ignoring all this, the modern pretends that we've learned our lessons and now couldn't possibly do things which could bring down civilization, that is, pose the risk of systemic ruin.

Everyone feared the destruction which a nuclear war might bring. But it wasn't until computer modelers suggested that total nuclear war between the United States and the Soviet Union could bring on dramatic summer cooling of 20 to 35 degrees C that the full systemic consequences of such a war were understood. The pall of smoke that would envelop the sky--the phenomenon known as nuclear winter--would initially block out 99 percent of incoming sunlight. It would mean a wipeout for the world's food supply and the end of civilization and possibly of many species, perhaps including our own.

Such a nuclear exchange seems unlikely today. But it is still possible.

We humans continue to flirt with systemic ruin by touting the benefits of those things which could cause it. Genetically engineered crops (often called genetically modified organisms or GMOs) have been introduced worldwide with virtually no testing on how such novel genes might interact with the natural environment. As Nassim Nicholas Taleb, who writes on risk, has explained, where there is repeated use of a technology with a nonzero risk of systemic ruin, that ruin over time becomes almost certain.

If you do something which has a 1 in 10,000 chance of killing you and you do it only one time, you will probably survive. But if you do it 10,000 times, the odds are nearly two to one that you will end up in your grave--and with continued repetition, ruin becomes a near certainty. That is the problem with GMOs, and we have no way of even calculating the risk. We face the possibility of a wipeout of the food system for reasons which we cannot anticipate--risks hidden in the worldwide spread of novel interspecies gene transfer without any understanding of the dynamics of such transfers once released. If we stopped now, perhaps we would avoid such a wipeout. But if we continue, we are only playing a more elaborate version of Russian roulette with gene-splicing technology.
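The arithmetic behind this can be checked directly. Using the 1-in-10,000 figure purely as an illustration, the chance of ruin after n independent exposures is 1 - (1 - p)^n:

```python
def ruin_probability(p_per_trial: float, n_trials: int) -> float:
    """Chance of at least one ruinous outcome in n independent exposures."""
    return 1.0 - (1.0 - p_per_trial) ** n_trials

p = 1 / 10_000  # the illustrative per-exposure risk from the text

# A single exposure is nearly harmless, but repetition drives ruin
# toward certainty: roughly 63 percent at 10,000 exposures (1 - 1/e),
# and all but certain at 100,000.
for n in (1, 10_000, 100_000):
    print(f"{n:>7} exposures: P(ruin) = {ruin_probability(p, n):.4f}")
```

The lesson is not the exact numbers but the shape of the curve: any fixed nonzero chance of systemic ruin compounds with every repetition.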

Others have noted the systemic dangers of creating self-replicating nanobots, possibly leading to the so-called gray goo problem in which nanobots consume significant portions of the biosphere in order to feed and replicate.

Some systemic risks are more passive. We've created a worldwide electrical system which we now know is vulnerable to solar storms. It is only a matter of time before one capable of shutting down much of the world's electric power generation hits. So critical is electricity to the daily functioning of our global communications and logistics systems and to everyday systems such as water purification and wastewater treatment, that a denial of electricity to much of the world for more than a few weeks might very well lead to mass death and the end of modern technical civilization. Yet, we as a species have done little to prepare for this event.

What the modern believes is that such scenarios are so unlikely that we should ignore them. He or she believes that the bell curve (normal distribution) of outcomes applies to such risks, when, in fact, we cannot calculate their probability since we cannot quantify what might cause them in the first place.

The point about systemic risk is not that any one of these scenarios is likely. It is that any one of a thousand unlikely systemic risks could seriously endanger all of society. We don't need all of them to take place to experience catastrophe. We just need one. Climate change comes to mind.

And so, as we pile risk of systemic ruin upon risk, we are doing nothing more than whistling past the graveyard, lost in modernist denial--obliviously believing that we know far more about and have far more control over our environment than we do.

Kurt Cobb is an author, speaker, and columnist focusing on energy and the environment. He is a regular contributor to the Energy Voices section of The Christian Science Monitor and author of the peak-oil-themed novel Prelude. In addition, he has written columns for the Paris-based science news site Scitizen, and his work has been featured on Energy Bulletin (now Resilience.org), The Oil Drum, Econ Matters, Peak Oil Review, 321energy, Common Dreams, Le Monde Diplomatique and many other sites. He maintains a blog called Resource Insights and can be contacted at


blackTom said...

We are the gray goo! No need to imagine nanobots consuming the Earth, Homo sapiens is already doing it.

Anonymous said...

The guy who identified the ozone layer erosion is actually F. Sherwood Rowland, rather than Rowland Sherwood.

Kurt Cobb said...

Thanks for catching the reversal of Rowland's name. I've corrected it.

James R. Martin said...

That your four item list doesn't include anthropocentrism makes it an inadequate list, I think. You almost touched upon it in saying "Humans are in one category and nature is in another." But that's actually the description of another problem.