What a ‘Reproducibility Crisis’ Committee Found When It Looked at Climate Science

The scientific community is working to make its predictions more accurate, but there’s still a long way to go.

As debate in Washington heats up over climate change and transparency in science, the National Academies of Sciences, Engineering, and Medicine held a quiet meeting last week to discuss just how consistent the results are across climate studies.

The verdict, for those who follow the science, wasn’t too surprising. There’s broad agreement among climate studies that global warming is happening and human-driven. But as scientists work to zero in on exact forecasts of future temperatures and precipitation under a given amount of greenhouse gas emissions, they are still seeing a wide range of results. “The spread has gotten tighter, but it hasn’t gotten super tight,” is how NASA climate researcher Gavin Schmidt puts it.

During the meeting, a panel of experts provided updates on what the scientific community is doing to make its predictions more accurate. This was part of a larger project examining reproducibility in different fields of science. As mandated by the American Innovation and Competitiveness Act of 2017, the National Academies will produce a report on the dependability of scientific findings after 18 months. They held their first meeting in December.

Scientists have several ongoing strategies for improving climate change predictions. They compare results from different methods of estimating the Earth's prehistoric climates, including present-day clues such as tree rings and deep cores of ice taken from the Arctic or Antarctic. They also share the data and computer programs used in climate studies online so other research groups can verify the findings and spot bugs. And they test the computer models built to predict the planet's future against past climate data, because a model that can accurately recreate the climate already observed earns more confidence in its forecasts.
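That last strategy, often called hindcasting, can be sketched as a held-out-history check. Everything below is illustrative: the "observed" temperatures are synthetic, and the "model" is just a fitted line, not an actual climate model.

```python
import numpy as np

# Synthetic "observed" global temperature anomalies, 1900-2020 (toy data):
# a linear warming trend plus noise, purely for illustration.
rng = np.random.default_rng(0)
years = np.arange(1900, 2021)
observed = 0.01 * (years - 1900) + rng.normal(0, 0.05, years.size)

# Calibrate a trivial "model" (a least-squares line) on 1950-2020 only...
calib = years >= 1950
slope, intercept = np.polyfit(years[calib], observed[calib], 1)

# ...then hindcast 1900-1949 and compare against the held-out record.
hindcast = slope * years[~calib] + intercept
rmse = np.sqrt(np.mean((hindcast - observed[~calib]) ** 2))
print(f"hindcast RMSE over 1900-1949: {rmse:.3f} deg C")
```

A model that fails to reproduce the held-out past shows a large error here; climate modeling groups apply the same logic at vastly greater scale and complexity.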

One major ongoing challenge for verifying climate work is that climate data sets can be massive, requiring a supercomputer to process. So, technically speaking, sure, a data set is freely available online. But in practical terms, how many people will be able to use it? “Maybe you have a petabyte—10 to the 15th bytes—of information standing behind your conclusion. Reproducing that ain’t gonna be cheap,” says Rich Loft, one of the panelists and a chief technology officer at the National Center for Atmospheric Research in Colorado. The community has to work on fixes that help groups more easily assess each other’s work, Schmidt says.
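To get a feel for the scale Loft describes, here is a back-of-envelope sketch. The 1 Gbit/s link speed and $20-per-terabyte-month storage price are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope: how long to download a petabyte, and what storing it costs.
# The network and storage figures below are assumed for illustration only.
petabyte_bytes = 10**15
link_bits_per_s = 1e9                        # assumed 1 Gbit/s connection
seconds = petabyte_bytes * 8 / link_bits_per_s
days = seconds / 86400
storage_cost = (petabyte_bytes / 1e12) * 20  # assumed $20 per TB per month
print(f"~{days:.0f} days to transfer, ~${storage_cost:,.0f}/month to store")
```

Even under generous assumptions, merely obtaining and holding the data behind one conclusion is a months-long, five-figure proposition, which is why practical verification tools matter as much as open access.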

The National Academies meeting comes at a time when scientists, critics, and the public alike are deep in discussions about reproducing findings in science. There’s been vigorous debate among scientists about how well results in social science and clinical medicine hold up when others try to duplicate them. Distressing findings—one effort to reproduce 100 top psychology studies found less than half of them worked a second time—have led to calls for fixing the system.

Meanwhile, critics of the idea of human-driven climate change have long called for more transparent data as a kind of distraction technique, Schmidt says. This, some argue, is what’s happening now with a hotly contested proposed Environmental Protection Agency rule. The rule says the science used in EPA policy-making must be publicly available, purportedly so that others can reproduce it. But The Atlantic reports that the rule was written to target a foundational study that established the dangers of a certain kind of air pollution, but whose raw data cannot be made available because it involves study volunteers’ private health information. By forbidding the EPA from considering that and other health studies in regulations, the rule would give the agency far less power to prevent companies from polluting the environment with pesticides, soot, lead, and other harmful substances.

When it comes to climate science, the current state of transparency—“almost all” climate data is now public, Schmidt says—grew in part out of a real scandal and change of heart. In 2009, hackers stole and posted online a series of emails between leading climate scientists, which had been saved on servers at the University of East Anglia in the United Kingdom. The emails revealed conflicts of interest among climate scientists conducting peer review on papers that contradicted their own work, and a reluctance among climate researchers to share their data, the Guardian reported.

Although nothing in the “Climategate” emails suggested the basic conclusions of mainstream climate science are wrong, they provided fodder for skeptical politicians and likely shook the public’s trust in climate scientists. In response, members of the community became far more open and transparent about their data and techniques as well as when they were uncertain about results.

This did little to settle skeptics; the basic findings of climate science are just as politically controversial today as they were a decade ago. Yet it opened the door for important re-analysis by some unconventional actors, Schmidt says. He points to a 2013 paper by a biochemist—someone who would have been unlikely to have access to climate data in the old regime—that helped reveal flaws in a commonly used data set. “It clearly is the case that having data out there helps people who are interested get involved and do good things,” Schmidt says.

At last week’s meeting, Andrea Dutton, a scientist at the University of Florida who studies the Earth’s past climates, noted a silver lining to the hostile attention the field has received from folks who deny the reality of human-driven climate change: “This public scrutiny has, I think, helped us to up our game in all these areas and be better about being transparent.”
