Why Trust in Science Is Critical: Five Questions for Naomi Oreskes

“Scientists are our designated experts for studying the world,” says Naomi Oreskes, a science historian at Harvard University, in her new book, “Why Trust Science?” And trusting them can be a matter of life and death. “If we cannot answer the question of why we should trust science,” she writes, “then we stand little chance of convincing our fellow citizens, much less our political leaders, that they should get their children vaccinated, floss their teeth, and act to prevent climate change.”

“Why Trust Science?” by Naomi Oreskes (Princeton University Press, 376 pages).

Based on the 2016 Tanner Lectures on Human Values she delivered at Princeton University, Oreskes’ book offers a history of scientific thought, presenting a reasoned case for how science has evolved and how its collaborative nature and diversity provide a foundation for trust. That foundation, she argues, rests on the process for vetting claims, which results in accepted scientific knowledge that is “fundamentally consensual” and has “survived critical scrutiny.”

Trust in experts should not be limited to science, she emphasizes. “In the modern world we have to trust experts,” she says. “Society would come to a crashing halt if we didn’t – if we didn’t have trust in our car mechanic or dentist.” But trust in science is particularly critical if we want to ensure our survival, as well as the survival of the planet.

For this installment of the Undark Five, I spoke with Oreskes about how to trust science that may conflict with our moral or religious values and what we can do to prevent bias in scientific communities, among other topics. Here is our conversation, edited for length and clarity.

Undark: Why, especially at this moment in our history, is trust in science critical?

Naomi Oreskes: We have a number of issues that are truly life and death that hinge upon our understanding and taking to heart scientific evidence. The two obvious ones on the global scale? Climate change and vaccinations.

If we don’t address climate change in a very rigorous and vigorous way in the next decade or so, we’re likely to see massive social, economic, political, and environmental dislocations that will lead to loss of property, loss of life. We’re seeing it already in the recent events in the Bahamas, [and] with respect to deadly heat waves.

If we think about the problem of vaccine rejection, this is also a matter of life and death. People who don’t vaccinate their children put their own children at risk for serious childhood illnesses and put other people’s children at risk as well.

There are these particular issues where significant numbers of people have rejected scientific evidence. And what we know about those particular issues – and some others as well, like evolutionary biology – is that people reject scientific claims that they fear threaten their personal, political, or religious interests.

We also know that there have been organized campaigns designed to generate this distrust of science. That’s what I’m particularly concerned about.

We certainly had people rejecting science in the 19th century. Vaccination skepticism is a very old phenomenon – we can track it back pretty much as far as there have been vaccinations. But this phenomenon of organized professional disinformation designed to generate distrust in science by people whose interests are threatened is a relatively recent phenomenon, and a very frightening one – because of how cynical it is, how amoral it is, and how effective it has been.

UD: You argue that we don’t actually have a “scientific method.” Can you explain?

NO: Well, it’s a remarkably popular and persistent myth that there is a scientific method. And there are historical reasons why we think that there is a scientific method, but it’s not really true.

The thing that many people think is the scientific method is what philosophers call the hypothetico-deductive model. We develop a hypothesis, we deduce its consequences logically, and we do some kind of test to see if those consequences are true. Some kind of experiment, observation, clinical trial, et cetera.

But from a philosophical standpoint, that model doesn’t hold up – because we find, logically speaking, even if the prediction of the theory comes true, it doesn’t actually prove that that theory is correct. So if we think of the scientific method as necessarily demonstrating the truth about claims in a logical way, that model does not stand up to scrutiny.

Moreover, if we stepped back from logic and theory, and simply asked, “What do scientists actually do?” we find that scientists do a lot of different things. Some scientists are following the hypothetico-deductive model. But we can also find many cases where they’re doing other things.

So this tells us that if scientific knowledge is reliable, it’s not because scientists are following a unique, logically bulletproof method. It has to be something else. That led me to ask: What is that something else? And to conclude that that “something else” was not the methods by which claims are generated, or even tested by individuals, but this collective process by which claims are vetted.

UD: Science is an inherently social endeavor, and scientists are affected by the world they live in. In the past, women and minority groups have been left out, both in terms of being part of the scientific community, but also in the research itself. How can we trust science when we know it’s biased? Can we prevent bias?

NO: We don’t prevent it – we identify it and we try to weed it out. Science isn’t biased because it’s a social process – science is biased because it’s human beings. All human beings bring their biases, their preferences, their predilections, their “priors,” as statisticians call them, to any question. That cannot be eliminated. There’s simply no way for any human being to expunge all of their priors.

But we don’t rely on science [based] on the views of an individual. We have this collective process. So the social aspect of science is actually a potential remedy for bias.

What I argue, drawing on the work of a number of philosophers of science, including a number of important feminist philosophers of science, is that if the social process is working correctly and the community is diverse, so people are looking at the question from a number of different angles, then you have the opportunity to identify bias and correct for it. To say, “Hey, but maybe you didn’t look at this, or maybe you didn’t fully consider that, or maybe you discounted this body of evidence that I think you really need to pay attention to.”

So we’re not eliminating bias – that’s an impossible dream – but we’re finding a mechanism to correct for it.

UD: How can we trust science if it conflicts with our moral beliefs, or our religious teachings? For instance, why should we trust the science of evolution over creationism?

NO: How we speak about it and the words we choose are very important. I don’t believe that evolutionary biology conflicts with religious faith. I’m not alone in that – there are plenty of religious people who agree with me. The Pope agrees. The leaders of most modern religious organizations agree.

One of the things we know about evolutionary biology is that many evangelical Christians in the United States think or perceive that evolutionary biology conflicts with evangelical Christianity. And in part, they feel that way because they interpret evolutionary biology to say that life is meaningless – that because it’s random, life is therefore purposeless. But that doesn’t necessarily follow.

There are many wonderful theologians and biologists who have pointed out that that is actually a non sequitur. Whether my life has meaning is a separate and different question than how the biological function of my organism came to be today. So it’s possible to unpack those questions.

UD: What are your biggest concerns when it comes to the ways that scientists, or the scientific community, may not be impartial?

NO: There are a few things I worry about. One is methodological fetishism. I think that many scientists get fixated on the idea that there is a particular method – I don’t mean the scientific method, that hypothetico-deductive method – but some method that is uniquely powerful or useful. We see this with the contraceptive pill and also with dental flossing: This privileging of the double-blind clinical trial [where neither the participants nor the researchers know who is getting a treatment].

There’s no question that double-blind clinical trials are an excellent tool when you can do them. But there are many problems for which they’re simply not possible.

This came up recently with the claim that there’s no good evidence that eating less meat makes you healthier. This is a very clear example of methodological fetishism in action. We have huge amounts of evidence that eating less meat is better for almost all of us. But it’s impossible to do a double-blind clinical trial of meat-eating, because people know what they eat, and people lie about what they eat.

This is the main reason why nutrition is so difficult. It’s a very challenging science in which to get robust evidence. So we have to rely on patient studies or animal studies, and we have to look at the body of evidence. But when we just cherry-pick evidence of a particular type and insist that that evidence is the only evidence that counts, then I think we will make serious mistakes.

I also worry about what I call performative diversity – communities that invite in people of color or women but treat them in a tokenistic way. So, yes, they’re there in the room, but they’re not being listened to. That happens and requires more attention.

Also, we have enormous amounts of evidence that funding influences outcomes, but a lot of scientists are reluctant to accept that because they feel that it’s an attack on their own personal objectivity. But it’s wrong to view this as an issue of whether I am an objective person. The question is, are the processes of the scientific community functioning well?

We have evidence that when a lot of funding has come from a vested party, that distorts the scientific research. The most well-documented example is tobacco. We have very robust studies that show that when research was funded by the tobacco industry, it was much less likely to find that tobacco caused cancer or other diseases than independently funded research. It contributed to the delay in implementing tobacco control – to the sense, for a long time, that the science wasn’t really settled.

People died. We can’t guarantee that those people would have quit smoking, but if we had had better information sooner, governments and other organizations might have taken steps sooner to discourage tobacco use. Tobacco taxes and bans on smoking in certain places are effective. So if governments had taken action, there’s every likelihood that we would have reduced mortality and morbidity from tobacco-related disease.

UPDATE: An earlier version of this article carried a headline suggesting that Oreskes was arguing for “faith” in science. Rather, she argues in her book that trust and faith are not the same. The headline has been updated to better reflect her argument.

Hope Reese is a writer and editor in Louisville, Kentucky. Her writing has appeared in Undark, The Atlantic, The Boston Globe, The Chicago Tribune, Playboy, Vox, and other publications.