Explaining basic science is an important job for a journalist. But it's not enough.

Just Add Science?

Earlier this month, we learned that President Donald Trump was considering creating a panel on vaccine safety. The likely chair of that panel, it seemed, would be Robert F. Kennedy Jr., the environmental attorney and activist. Kennedy was educated at Harvard and the University of Virginia, and he is a clinical professor of environmental law at Pace University. He is also a notorious believer in the repeatedly debunked theory that vaccines cause autism.


The Trump team has since clarified that no official decision on the commission has been made, but the first reaction of the media was to collectively facepalm. “Can you BELIEVE this madness??” wrote Rowan Hooper, the managing editor of New Scientist, on Twitter. Science writer Ed Yong tweeted his response this way: “<screams into fist>”.

The second reaction was to nip any potential misconceptions in the bud. Science journalists in particular immediately set about the unglamorous business of explaining the science behind vaccines, and the quackery that led to the supposed autism link. This is how our field works: We identify harmful misunderstandings that may plague the public, then we seek to rectify them with evidence and logic.

Unfortunately, as many of us are coming to know, this well-intentioned approach rarely changes hearts or minds. If Election 2016 has taught us anything, it’s that journalists need to reconsider how they communicate science.

Think of it this way: When you come down with a cold, common wisdom recommends a healthy dose of Vitamin C. Eat oranges, sip orange juice, or take a supplement — the more, the better. The problem is, multiple peer-reviewed studies have found this essential nutrient to be essentially useless when it comes to curing the common cold. Vitamin C is no more effective than a placebo — and overdoing it on the supplements can actually be harmful to your health. Plus, if you’re focusing on administering Vitamin C, you’re probably not addressing the root cause behind your malady.

In science journalism, we face a similar predicament. The media has diagnosed America as suffering from a severe plague of anti-science attitudes; all around us are voices that deny the profound implications of science, from vaccination opponents and climate change deniers to Evangelicals who challenge the teaching of evolution in schools. Common wisdom suggests that the remedy for these incorrect beliefs is to simply provide more reporting on the science. If we could just explain the science better, louder, and with more authority, then everyone would agree. Or so the thinking goes.

The problem is, studies have shown time and time again that this strategy doesn’t work — and like Vitamin C, it can be counterproductive or even harmful when used in excess.

In academia, this strategy is known as the science deficit model, says Michael Cacciatore, a researcher who studies science and risk communication at the University of Georgia. It relies on the assumption that your audience is suffering from a cognitive flaw — a fundamental lack of understanding about scientific ideas, or an intellectual inability to grasp the science and its implications. This void, we think, can only be filled with more science.

“A lot of people think that, if you just educate people, well of course they’ll see it in the most rational way possible, and they’ll think the same way as experts in that space,” says Cacciatore.

Unfortunately, that isn’t actually what happens. Labeling those who oppose vaccinations, climate change science, or the teaching of evolution as “anti-science” is just too simplistic, as science writer Sarah Zhang recently argued in The Atlantic. This thinking fails to capture the intricate web of social and cultural factors that shape people’s convictions about how the world works.

In general, scientific knowledge is a poor predictor of where individuals come down on specific scientific questions. An in-depth analysis by the Pew Research Center last October on the politics of climate change, for instance, found that scientific literacy usually doesn’t change one’s views on scientific matters.

The Pew survey echoes a 2015 study by Yale University’s Dan Kahan, which also looked at attitudes toward climate change. Kahan concluded that:

[T]here is in fact little disagreement among culturally diverse citizens on what science knows about climate change. The source of the climate-change controversy and like disputes over societal risks is the contamination of the science-communication environment with forms of cultural status competition that make it impossible for diverse citizens to express their reason as both collective-knowledge acquirers and cultural-identity protectors at the same time.

The implications of these and other analyses are clear: No matter the grounds on which an audience objects to certain scientific evidence — be they religious, cultural, moral, economic, or political — more scientific explanation is unlikely to persuade them. And in fact, it can make things worse.

Heidi Lawrence, an assistant professor at George Mason University who studies the rhetoric of science and technology, says journalists might think of that instinctive need to “just add science” as their own kind of cognitive flaw. She says she learned this quickly in her own sociological research on the effectiveness of rhetoric used in vaccination debates, which involves interviewing those who choose not to vaccinate themselves or their families.

“I can’t walk up to them and say, ‘I think you’re a terrible community member, will you talk to me?’” Lawrence says. “As a rhetorician, I look at language. If the language going around is one of blame, is one of punishment, is one of people saying ‘you’re stupid, you’re wrong, look at what you’ve caused,’ then that’s not going to make the people who are most fervently anti-vaccination more forthcoming.”

“In fact, it can send people further underground,” she adds. “Stigma causes silence.”

Cacciatore agrees. “It’s easy to dismiss when an outsider shows up and says here are all the reasons you’re living your life wrong and here are all the things you should change,” he says. “No one wants an outsider coming in and saying this is why you should pay attention to this issue and here is the solution.”

So what is the solution for science writers? A study by Cacciatore and colleagues offers some insight. Published last July in the journal Public Understanding of Science, the study surveyed almost 3,000 randomly selected Americans on where they turn for scientific information. Among other things, Cacciatore and his co-authors write that we are entering a time when scientific advances increasingly raise real moral and ethical questions for many Americans, from CRISPR gene editing and synthetic biology to GMO foods and new technologies for addressing climate change.

The thing is, people often don’t consider these to be “science” questions at all.

“A lot of the science we’re talking about isn’t pure science,” says Cacciatore. “This is science with economic impacts, this is science with moral impacts, this is science that affects the United States’ standing in the world … For a lot of people, I would argue, it’s anything but a scientific question.” And as science continues to move into these “gray areas of morality,” Cacciatore adds, these tensions will only get worse.

With those findings in mind, science journalists might start by looking inward and accepting that blind adherence to the Vitamin C solution is a bad habit of our own. Explaining the basics is important, of course, but we also need to diversify how we cover science, particularly where it intersects with the cultural, religious, social, and political values of our readers.

We might also consider that some of our own core assumptions may be wrong.

“What one ought to do as a journalist is recognize that, for all our efforts to be objective as scientists, no scientists really are objective,” says Ian Hutchinson, a professor of nuclear science and engineering at the Massachusetts Institute of Technology and the author of the 2011 book “Monopolizing Knowledge,” which argues against the view that science encompasses all the real knowledge there is. “Science is never a view from nowhere,” Hutchinson says, adding that before we look down on a community for being too trusting of their own favored authorities, journalists should remember that we all have our own biases and blind spots in need of correction.

As Lawrence says: “Communication always goes better if people come from a place of mutual understanding.”

A final step would then be to look outward. We need to do what journalists do best: Listen, be curious, and consider the non-science factors that shape people’s beliefs — because people’s beliefs shape policy, our society, and the world. This can mean wading into what for many of us is uncomfortable territory: spiritual beliefs, superstitions, emotions, tradition. Including these viewpoints in our articles doesn’t make us care less about science. It means we are acknowledging the many complex forces that shape our world today. And that makes us more effective communicators in the communities we serve.

“The assumption is that if someone thinks that vaccines cause autism, it’s because they’re deficient in scientific literacy. And if we just correct that deficiency, then they will all of a sudden be persuaded and start vaccinating their children,” says Lawrence. “Unfortunately, that hasn’t proven to be an effective form of persuasion.”

Rachel E. Gross is the online science editor at Smithsonian Magazine in Washington, DC. Until recently she was a writer for Slate Magazine, where she covered science and food, and her work has also appeared in The Atlantic, National Geographic News, New Scientist, Science, California Magazine, and The Jewish Daily Forward, among other publications. This essay was written after taking part in a journalistic ethics program called the Fellowships at Auschwitz for the Study of Professional Ethics.

An earlier version of this article misspelled the name of the managing editor of New Scientist magazine, Rowan Hooper. The story has been updated.