In one of the best commencement speeches ever, the Australian composer, lyricist, and comedian Tim Minchin told students at the University of Western Australia, “We must think critically and not just about the ideas of others. Be hard on your beliefs. Take them out onto the veranda and hit them with a cricket bat.”
I was reminded of that advice when I participated in the San Francisco March for Science back in April. There were great signs and inspiring speakers, but on many topics — including climate change — there was also no shortage of strong claims made by people who were not following Minchin’s rule: claims that we could easily switch to 100% renewable energy, that such a switch would “create jobs”, that nuclear power must not (or must) be part of a climate strategy, and that there is a known threshold of atmospheric CO2 concentration beyond which catastrophic effects would follow.
None of these claims stands up to close scrutiny. Not that they are clearly wrong, but that they are not supported by existing research. Working on the economics of energy and climate change, I encounter similar wince-inducing statements of “fact” everywhere:
- The CEO of a multinational oil company who claims that “the market” will solve climate change without government regulation. Evidence: cheaper natural gas from fracking has reduced coal burning and lowered GHGs. Problem: Even if all coal were displaced by natural gas, GHG levels would continue to rise. And what if the market finds a much cheaper way to extract coal (as it did in the mid-1900s with mountaintop mining)? There is no market for a pollution externality, which is why the market can’t solve the problem.
- The leaders of some environmental groups who have argued that restricting refinery GHG emissions will prevent asthma in the surrounding neighborhoods, even though CO2 and methane (the major GHGs) don’t cause asthma. Evidence: Local pollutants that do cause asthma are correlated with GHG emissions. Problem: They are also correlated with the presence of freeways and heavy trucks, low-quality housing, and poor healthcare.
Such advocacy presented as “facts” raises at least three different concerns:
The first one is obvious, a simple correlate of Minchin’s rule: if you only believe in science when it supports your prior position, you don’t believe in science. If research doesn’t regularly cause you to adjust your beliefs, either you are omniscient (nope, I checked, you aren’t), or you are not really interested in evidence-based arguments.
The second concern is perhaps more subtle, or at least frequently lost in media coverage: One research study almost never establishes a fact. That’s not how knowledge advances. Rather, new research contributes to the larger body of knowledge on a subject and changes the probability of something being true. Over time, as research confirms or conflicts with existing beliefs, our understanding evolves closer to certainty, that is, facts.
Believing in science (or economics, or any empirical inquiry) means taking those probabilities seriously even when we are well short of certainty; not asserting that unlikely conclusions hold simply because they haven’t been 100% ruled out; and not asserting that likely conclusions are already indisputable “facts.”
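The idea that accumulating research moves a claim’s probability toward certainty can be sketched with Bayes’ rule. The numbers below are purely illustrative assumptions, not drawn from any actual study; they just show how a string of supportive results shifts belief without any single one “establishing a fact”:

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: revise the probability that a claim is true
    after seeing one new piece of evidence (e.g., a study result)."""
    numer = p_evidence_if_true * prior
    denom = numer + p_evidence_if_false * (1 - prior)
    return numer / denom

# Hypothetical numbers: start agnostic (50%), then fold in three
# independent studies, each assumed more likely to appear if the
# claim is true (0.8) than if it is false (0.3).
belief = 0.5
for _ in range(3):
    belief = update(belief, 0.8, 0.3)

print(round(belief, 3))  # roughly 0.95 -- likely, but still not certain
```

Three supportive studies take an even-odds claim to about 95 percent, which is exactly the in-between state the paragraph above describes: strong enough to act on, not yet an indisputable “fact.”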
For a few horrible years, we had no idea what caused AIDS. Then some scientists linked it to HIV, and now with more information experts view the probability that HIV causes AIDS as near certainty. But the recognition of a high probability in the 1980s made it possible to harness research and public policy to create effective AIDS treatments before near certainty had been established. Today it has even led to an early-stage vaccine. Waiting until there was certainty on the HIV-AIDS link before making policy would have permitted many more tragic deaths than have occurred.
Which leads to my third point: Every decision to take action or not is made under uncertainty. We are constantly trading off the costs and benefits of waiting for additional information. If you wait for near certainty on any important issue, you have almost surely waited too long, because the costs of waiting are seldom trivial.
If you or a loved one has faced cancer, you know this. You know that the treatment options are all about probabilities. New information on the effectiveness and the side effects of treatment options arrives every year, and preferred treatments change. But when the diagnosis arrives, you can’t wait until you are certain of how you will respond to each possible treatment before making a decision.
Which connects back to my first point. Even after we have made a decision, we have to be open to new information that suggests it’s not the right decision. It’s way too easy to become committed to the view that the cost of energy storage technology will inexorably decline, or nuclear power is (or isn’t) the cheapest way to reduce GHGs, or intermittent renewables will (or won’t) be costly to integrate into the grid. It’s hard not to want our best (or most optimistic) guesses to be fulfilled, so much so that we ignore conflicting evidence (as famously happened before the explosion of the Space Shuttle Challenger). But as new evidence appears, we have to be willing to reconsider the best path forward.
Obviously, the basic discussion of the existence and cause of climate change has been a victim of all three of these problems. Early on, a segment of the population locked into the view that it wasn’t real, or that we weren’t certain it’s real, and ever since they have been selecting data and studies to confirm that belief. On the other side, though, there are the anti-science claims that every weather catastrophe is caused by climate change, that we must support every possible carbon-reducing technology no matter how expensive or implausible, and that we know with great certainty the exact relationship between atmospheric CO2 concentrations and climate change.
These black and white views of climate change make better bumper stickers than, say, “keep the 95% confidence interval of temperature change below 2 degrees Celsius”. I get that. Making the effects of climate change salient to the average voter is critical to mobilizing support for action. But when climate activists make unsupported claims of certainty about the damage or low-cost solutions, serious researchers get queasy and start to back away.
And such claims that stretch beyond the research often end up backfiring. Assertions based on cherry-picked or misinterpreted data get discredited; overconfident predictions fail to materialize; or advocates reject new findings simply because they don’t support their world view. Some in the do-nothing camp then compile these biased analyses as evidence that all research on climate change is flawed and misleading.
So, how do we stay true to real scientific method – openness to new research results, recognition of uncertainty, and understanding that imperfect knowledge shouldn’t be an excuse for inaction – while still communicating the (high probability) seriousness of the climate change challenge and building political support? I don’t have the answer, but I think we need a more serious discussion. And we need to spend more time swinging cricket bats at our own ideas.