Criticising Taleb’s Precautionary Principle Paper
November 16, 2019
Nassim Nicholas Taleb has written an essay about his own variant of the precautionary principle (PP). I'm going to point out some problems with the essay and with Taleb's variant of the PP, and then criticise his argument against genetically modified organisms (GMOs).
In Section 1 Taleb writes:
The PP states that if an action or policy has a suspected risk of causing severe harm to the public domain (such as general health or the environment), and in the absence of scientific near-certainty about the safety of the action, the burden of proof about absence of harm falls on those proposing the action. It is meant to deal with effects of absence of evidence and the incompleteness of scientific knowledge in some risky domains.
In Section 2.2 Taleb writes:
The purpose of the PP is to avoid a certain class of what, in probability and insurance, is called “ruin” problems [1]. A ruin problem is one where outcomes of risks have a non zero probability of resulting in unrecoverable losses. An often-cited illustrative case is that of a gambler who loses his entire fortune and so cannot return to the game. In biology, an example would be a species that has gone extinct. For nature, “ruin” is ecocide: an irreversible termination of life at some scale, which could be planetwide. The large majority of variations that occur within a system, even drastic ones, fundamentally differ from ruin problems: a system that achieves ruin cannot recover. As long as the instance is bounded, e.g. a gambler can work to gain additional resources, there may be some hope of reversing the misfortune. This is not the case when it is global.
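The distinctive feature of a ruin problem is that ruin is an absorbing state: one occurrence ends the process, so repeated exposure compounds the risk, and even a tiny per-exposure probability of ruin approaches certainty over enough trials. Here is a minimal sketch of that arithmetic in Python (the per-exposure probability and trial counts are illustrative numbers of my own, not figures from Taleb's paper):

```python
# Probability of eventual ruin after n independent exposures,
# each carrying a small probability p of an unrecoverable loss.
# Ruin is absorbing: a single occurrence ends the process for good.

def ruin_probability(p: float, n: int) -> float:
    """P(at least one ruin event in n trials) = 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

# A one-in-a-thousand risk per exposure approaches certainty
# when the exposure is repeated enough times.
for n in (1, 100, 1_000, 10_000):
    print(f"n={n:>6}: P(ruin) = {ruin_probability(0.001, n):.4f}")
```

An ordinary recoverable loss can be averaged out over repeated trials; an absorbing loss cannot, which is why Taleb separates ruin problems from the "large majority of variations" he mentions above.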
In Chapter 9 of The Beginning of Infinity (BoI), David Deutsch writes:
Blind optimism is a stance towards the future. It consists of proceeding as if one knows that the bad outcomes will not happen. The opposite approach, blind pessimism, often called the precautionary principle, seeks to ward off disaster by avoiding everything not known to be safe. No one seriously advocates either of these two as a universal policy, but their assumptions and their arguments are common, and often creep into people’s planning.
Deutsch then criticises the PP at some length. I'm not going to reproduce the entire criticism, but I'll explain the basic point. The PP assumes that new innovations may make the world worse, and therefore implicitly assumes that our current knowledge is basically okay: not riddled with flaws that might lead to the destruction of civilisation. But our knowledge is riddled with such flaws. Human beings are fallible, so any piece of knowledge we have might be mistaken. And those mistakes can be arbitrarily large in their consequences, for otherwise any decision whose consequences exceeded the maximum mistake size would be guaranteed to be right, and no decision is guaranteed to be right. In addition, we can be mistaken about the consequences of a decision, so a mistake we think is small might turn out to be large. The only way to deal with the fact that our knowledge might be wrong is to improve our ability to invent and criticise new ideas so we can solve problems faster. Taleb doesn't address any of these points in his paper. He doesn't refer to BoI, and none of the arguments in his paper address Deutsch's criticisms of the PP.
Taleb also makes an argument criticising the use of GMOs (Section 10.3):
The systemic global impacts of GMOs arise from a combination of (1) engineered genetic modifications, (2) monoculture—the use of single crops over large areas. Global monoculture itself is of concern for potential global harm, but the evolutionary context of traditional crops provides important assurances (see Figure 8). Invasive species are frequently a problem but one might at least argue that the long term evolutionary testing of harmful impacts of organisms on local ecological systems mitigates if not eliminates the largest potential risks. Monoculture in combination with genetic engineering dramatically increases the risks being taken. Instead of a long history of evolutionary selection, these modifications rely not just on naive engineering strategies that do not appropriately consider risk in complex environments, but also explicitly reductionist approaches that ignore unintended consequences and employ very limited empirical testing.
Biological evolution doesn't limit the harmful impacts of species. Genetic variations arise by mutation, and any particular gene either manages to copy itself or it doesn't: the process has no foresight. The knowledge created in genes is just as fallible as the knowledge created by human beings, so there is no particular reason why a species that would cause a disaster should not evolve. This has happened in the past: the Black Death killed somewhere between 30 and 60 per cent of Europe's population. We should develop the knowledge of how to manipulate genes partly so that we can try to stop events like that from happening in the future.