In 2021, anti-vaccine protesters gather to advocate for medical freedom and health choice in Minnesota.

Paul M. Sutter Thinks We’re Doing Science (and Journalism) Wrong


Astrophysicist Paul M. Sutter studies some of the largest features of the universe, including cosmic voids — vast, nearly empty chasms that separate clusters of galaxies from one another. “I enjoy becoming more and more of an expert on absolutely nothing,” joked Sutter, a visiting professor at Barnard College, Columbia University and a research professor at Stony Brook University.

“Rescuing Science: Restoring Trust in an Age of Doubt,” by Paul M. Sutter (Rowman & Littlefield, 248 pages).

While Sutter loves science, he believes there are deep problems with the way science is actually done. He began to notice some of those problems before the Covid-19 pandemic, but it was Covid that brought many of them to the fore. “I was watching, in real time, the erosion of trust in science as an institution,” he said, “and the difficulty scientists had in communicating with the public about a very urgent, very important matter that we were learning about as we were speaking about it.”

“I was watching just one by one,” he continued, as “people stopped trusting science.”

Sutter takes issue with the hyper-competitiveness of science, with peer review, with the journals, with the way scientists interact with the public, with the politicization of science, and more. But his new book, “Rescuing Science: Restoring Trust in an Age of Doubt,” published this month by Rowman & Littlefield, is more than a laundry list of grievances — it’s also filled with ideas about how science might be improved.

Our interview was conducted over Zoom and has been edited for length and clarity.

Undark: What’s the biggest problem with science today?

Paul Sutter: Honestly, I think it’s an inability for scientists to meaningfully engage with the public.

UD: Can you expand on that?

PS: I think scientists don’t appreciate the value that they have in the popular imagination and popular press and in political decisions, and how respected they are. And so we as scientists tend to take it for granted. And then we don’t really work to connect with people, to share — not just the results of science, but the actual methodology, and how we arrive at our decisions.

We don’t meaningfully engage with politicians and policymakers. I find that the voices of scientists are actually frequently taken from them and used by other people, and often for anti-science means. And so I want scientists to be out front and center as the representatives of science in the public sphere.

UD: Who is taking the words of scientists away from them?

PS: Science popularizers who are not trained in science are taking the results of science and communicating them for themselves. Journalists who are not trained in science are taking the words of science and communicating them to the public. Politicians. Big companies are taking the words of science and using them for their own ends — whether to make a buck or to sell a certain political program. And scientists themselves, I believe, aren’t taking an active enough part in all of those conversations.

UD: What’s the downside of having scientists focus on getting papers published?

PS: We, as a community of scientists, are so obsessed with publishing papers — there is this mantra, “publish or perish,” and the number one thing taught to you as a young scientist is that you must publish a lot in very high-profile journals. That is your number one goal in life.

And what this is causing is an environment where scientific fraud can flourish unchecked. Because we are not doing our job, as scientists. We don’t have time to cross-check each other, we don’t have time to take our time, we don’t have time to be very slow and patient with our own research, because we are so focused on publishing as many papers as possible.

So we have seen, over the past few years, an explosion of fraud. And different kinds of fraud. There is outright fabrication — the creating of data out of whole cloth. And then there’s also what I call “soft fraud” — lazy science, poorly done science. Massaging your results a little bit just so you can achieve a publishable result. That leads to a flood of junk, poorly done science.

And this connects to the public because the public sees the results of scientific research, including all the fraudulent stuff, including all the lazy stuff, including all the poorly done stuff. And so when they see contradictory result after contradictory result, I don’t blame the public for losing trust in science, because we have not worked to build our own trustability, because we are not policing fraud enough.

UD: You’re not a fan of “impact factors” and “h-indexes.” What are they and what do you find troubling about them?

PS: Starting a decade or two ago, scientists decided that we need to be able to measure each other’s success — you know, what sets a good scientist apart from a bad scientist? And we came up with some measures, like the h-index: you have an h-index of h if h of your papers have each been cited at least h times.

So a high number means you have a lot of papers that are getting a lot of citations. A low h-index means you’re not writing a lot of papers and/or they’re not getting a lot of citations. So therefore, according to this logic, someone with a high h-index is a better scientist; they are more likely to get into graduate school, to get research positions, to get faculty appointments, to get tenure, to get grants, to get promotions. A lot of scientific career advancement is tied to this number, this metric.
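(For readers unfamiliar with the metric, here is a minimal sketch of the calculation Sutter describes; the citation counts below are invented for illustration.)

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Five papers with these citation counts give an h-index of 3,
# because three of them have been cited at least 3 times each.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```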

What we have done is, by focusing so much on a metric like this — a single number, or a single set of numbers — we are incentivizing getting that number as high as possible. And when you incentivize getting your h-index as high as possible, that means publishing as much as possible. And publishing what you believe to be really giant, impactful research, game-changing research. So that makes you more inclined to, say, stretch your results beyond what the evidence can really support.

And we’re all doing it. We’re not really safeguarding — we’re not actually generating good [science] or good scientists, we’re just generating scientists who have high scores.

UD: And impact factors?

PS: The journal with a higher impact factor has more papers that have more citations over a certain time frame, typically two years. And so — aha! — if I want my research to have a big impact, to be read by lots of people, to have a higher chance of citation, I’m going to submit my article to that journal, because I believe it will pay off, it’ll be more visible, it’ll be more powerful.
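(As a rough illustration of the two-year calculation Sutter refers to, here is a simplified sketch; the figures are invented, and real publishers apply more detailed rules about which items count as citable.)

```python
# Simplified two-year journal impact factor (invented numbers):
# citations received this year to articles published in the two
# prior years, divided by the number of articles published in
# those two years.
citations_to_recent_articles = 12_000  # hypothetical citation count
recent_articles_published = 2_400      # hypothetical article count

impact_factor = citations_to_recent_articles / recent_articles_published
print(impact_factor)  # -> 5.0
```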

The highest impact journals are the ones we hear about all the time. You have to make a big statement, you have to make a bold claim, you have to do something big — and more likely than not, big results are wrong, because that’s the nature of scientific investigation.

UD: What is p-hacking, and why is it problematic?

PS: There’s a statistical measure that’s commonly used called the p-value, which is a rough measure of how confident you are in your results. Of course, I’m skipping over a lot of statistics and mathematics. And the lower the p-value is, the more confident you are in this result.

P-hacking happens when you do some survey or you perform some study. And you’re looking for: does this cause this; does x cause y? And you run through your data, you do your analysis, and you get a very high p-value: It says, oh, x does not cause y. But I can’t publish that; I can’t write an article about that. No journal will accept that.

So instead, you look around your data — maybe as you were doing your data collection or your observation, you found some other variables, you found some other things. Oh, x doesn’t cause y, but look, x may cause z. You know, we weren’t looking for that; but look, it’s right there in the data, there’s a low p-value, I can just publish that.

The problem with that is, if you collect enough data, then you have a very high chance of two variables just randomly being correlated and having a low p-value out of pure statistical luck, simply because you’ve tested enough variables.
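(A toy simulation of the effect Sutter describes: screen one outcome against many unrelated variables, and some will clear the conventional p < 0.05 bar by chance alone. This sketch uses the standard NumPy and SciPy libraries, and every variable in it is pure random noise.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_samples, n_variables = 100, 50

x = rng.normal(size=n_samples)  # the outcome of interest
candidates = rng.normal(size=(n_variables, n_samples))  # 50 unrelated variables

spurious = 0
for z in candidates:
    _, p_value = stats.pearsonr(x, z)  # correlation test between x and z
    if p_value < 0.05:
        spurious += 1

# With 50 independent tests at the 0.05 level, we expect roughly
# 50 * 0.05 = 2.5 "significant" correlations from luck alone.
print(f"{spurious} of {n_variables} unrelated variables look significant")
```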

UD: You say the process of peer review is fundamentally broken. What’s wrong with it? And is it fixable?

PS: It’s treated as the gold standard, like, “Oh, this paper isn’t meaningful unless it’s gone through peer review.” But having been a peer reviewer and been subjected to peer review, dozens if not hundreds of times, what it really is, is: It’s a crapshoot, you don’t know what you’re gonna get.

Your peer, who is reviewing your work, may take a long time to read your paper, understand your analysis, look for flaws or mistakes, help you out, and give very useful suggestions that actually improve your paper.

That is, in my personal experience and the experience of people I’ve spoken to, the minority of the time. Most of the time, your peer reviewer is saying: “Oh, yeah, looks great. You missed a couple of citations here. But otherwise, it’s great.”

You don’t know if your paper is actually great or not from that peer review. Other times the peer reviewer is a rival of yours, or from a rival group, or believes in a different philosophy given this particular question, and will just use the opportunity to absolutely sink you and tear you down bit by bit.

The process of peer review — because we’re all so busy, we are writing so many papers of our own, peer review is not paid for, we are all volunteers — no one has the time to actually properly check these claims, to walk through the process of the paper, especially since most modern papers are based on so much computation and so much data analysis. It’s hard to wade through. And so peer review has come to be relatively meaningless in the 21st century.

UD: You’re not a fan of the tenure system either. Why not?

PS: I’m not a fan of the tenure system, from two perspectives. One is, the actual process of getting tenure in almost all fields of science is long and grueling, and not exactly rewarding. And actually destabilizing to any kind of sane life. You go through graduate school, you get a postdoctoral research position that typically lasts two to five years, and then, in many fields, you probably do another one. And then you’re considered a potential candidate for faculty.

Then you have a five-year period, if you’re at a top-level research university. By the time you have actually secured a long-term position in science — what we would think of as tenure at a research-intensive university — you are now in your mid-30s, and you have hopped around the country, if not the world, up to half a dozen times, often with very low pay, especially compared to your colleagues who got a Ph.D. and then went outside of academia.

So if you’re trying to have a family, own a home, create some stability in your life — which most people in their mid-30s are trying to do — the path to tenure makes all of that extremely difficult.

And so we pretend it’s a meritocracy, where only the smartest researcher comes out and gets tenure — really, we’re selecting for people who are compatible with that kind of lifestyle.

And then, once you get tenure, you are more likely to get grants, you are more likely to win awards. You know, there’s plenty of data from the National Science Foundation that shows that senior people get awards at a higher rate than younger people.

I believe this leads to an ossification of science, where, once you’re established in your research line, you’re just going to continue on your research line until you retire. Instead, what I believe science needs is a constant fresh infusion of new ideas. We actually need to favor young people and their ideas, because they’re the ones who are thinking of new solutions for long-standing problems.

UD: You write about the politicization of science, and you cite the March for Science, which has been held each year since 2017, as an example. What’s wrong with marching for science?

PS: This is such an interesting case study, because on the surface, the March for Science is just simple, like: “Hey, everyone, please, support scientists. We need public funding. We do cool stuff. So please celebrate science.” That, I absolutely support. We need more scientists out there speaking to the public and advocating for the institution of science and the public funding of science.

The initial March for Science was very much a reaction to the Trump presidency and the inauguration of Donald Trump. And absolutely, Donald Trump, during his campaign, spoke very poorly of many aspects of science. Not all aspects of science, but many aspects of science, especially climate science. And his initial presidential budget requests to Congress did seek to defund many, many scientific institutions.

So that’s absolutely true; Republicans in recent times have an established history of reducing funding for science, or wanting reduced funding for science. But the initial March for Science was very, very much an anti-Trump march, and less so a pro-science demonstration.

So what I believe happened is, anyone who was watching that, who was already leaning, or inclined to be against science, or distrusting of science, saw that as a pure political act, not as an act of, “Hey, let’s get more scientists out there. Let’s get robust funding for science.”

I believe that many Republicans were actually turned off by the March for Science, in that they were convinced: “Aha, science is just a Democrat thing. It’s not a Republican thing. Got it. These people are all Team Blue, and I’m Team Red, therefore I’m not going to support it.” I actually think the March for Science backfired.

What I wish is that science were robustly funded, regardless of who was in the White House or who was in Congress.

Demonstrators hold signs during the 2017 March for Science in Washington, D.C. Visual: Paul Morigi/Getty Images

UD: But you also say scientists should be more political, or at least stop trying to be apolitical.

PS: There is this deep undercurrent in the halls of academia that “politics is dirty, politics is messy; we don’t get involved with that.” I think that’s a fiction. I think that’s a lie. Because almost all fundamental research in the United States — I’m talking like basic research, like basic physics, basic astronomy, basic chemistry — is funded by public institutions.

And if you’re funded by a public institution, guess what: you are political; you are a political entity. There is no divorcing of politics away from science. And so to pretend that it’s otherwise is to our detriment as scientists.

So I encourage scientists to be political, to have a political voice, to speak their minds, speak their views. And especially speak about the power and importance of science in everyday life. To go find people who are anti-science and try to understand where they’re coming from, empathize with them, and find ways to bring science to them, and show them how science is important.

But I think scientists need to be very careful when they do this. I believe scientists, when they speak about matters of important public concern, need to recognize that we are one voice at the table, where we have a unique perspective and a very powerful perspective on the world, and we absolutely need to bring that voice and that expertise to the table — but actually crafting policy, actually making decisions about climate change or pandemics, requires more than a scientific viewpoint, because we are communities of hundreds of millions or billions of people.

We’re trying to all collectively make a decision. And science is an important part of that, but not the sole part of that. And scientists need to stay humble in our approach with the public.

You know, science is a very slow, contentious, deliberate process, where it’s very messy, and we fight with each other, and we argue with each other, and there are conflicting lines of evidence. And then slowly, over time, we arrive at a conclusion and a consensus and a solid understanding of what’s happening.

We need to be aware of that slowness of science as we speak to the public, so that when we are participating in the political process, we can say, “I don’t know yet. Here’s what we have so far, here’s where the evidence is leading. But it’s going to change because our minds are going to change.”

UD: You say you’re against “scientism” — what is that, and why do you take issue with it?

PS: Scientism is one of these made-up words that has many different definitions, depending on the context. My take on scientism is that [it] is a belief that science is the superior way of viewing the world, of approaching the world, and is better than other ways of approaching the world, like faith, or philosophy, or the humanities, or history, or art and music. And there is this strain, especially among popular communicators of science, that if you don’t look at the world through a scientific lens, then what’s the point; you’re just fooling yourself; you’re living in a world of delusions.

But my pushback to that is, I’m a professional scientist; the vast majority of decisions that I make in my everyday life are not based on the scientific method. When I’m trying to pick what to have for dinner tonight, or who to fall in love with, I’m not using the scientific method, I’m just following my gut — literally, when it comes to dinner. I’m just using other tools than the scientific method to arrive at conclusions and decisions.

There are many, many questions that science does not have a solid answer on, and may not ever have a solid answer on. And it’s perfectly legitimate for people to turn to other modes of inquiry and investigation into this beautiful, messy world that we live in, to seek answers and comfort from that.

UD: At the same time, you warn against the dangers of pseudoscience.

PS: Absolutely, because science is really good at some things. The scientific method, this philosophical enterprise, is fantastic at exploring many aspects of the natural world and coming up with explanations for how many aspects of the natural world work.

And there is this surge in pseudoscience, beliefs or practices that look like science on the outside — they ape or mimic many of the qualities of science — but miss the central components of science that make it so powerful. Science isn’t about the jargon. It’s not about the mathematics. It’s not about the lab coats and the experiments and the orbiting observatories.

Science is about curiosity. It’s about rigor. It’s about doubting yourself. It’s about doubting your peers. It’s about applying a strict methodology to problem solving, to arrive at results. That’s the soul of science. That’s what science is really all about. And that’s what many, or all, pseudoscientific beliefs lack.

UD: In the book, you suggest that people stop reading news stories, and also that people should follow the scientists themselves on social media, blogs, etc. Isn’t there something to be said for a curated selection of science news stories rather than expecting the public to choose some specific group of scientists to follow?

PS: There can be a place for, like you said, curated science stories to bring science out to the public. But we have to be very careful. And that’s because the goal of science, and the goal of science communication, is not necessarily aligned with the goal of a journalist. The goal of science is to understand how the natural world works. The goal of science communication is to share those results with the general public.

The goal of a journalist might align with that sometimes, [or] might not. We see a lot of really awful click-baity headlines; we see a lot of misquoted scientists; we see a lot of distortions of the results of a study in order to sensationalize it, to sell more advertising, to get more eyeballs.

Scientists should be the face of science. How do we increase diversity and representation within science? How about showing people what scientists actually look like. How do people actually understand the scientific method? What if scientists actually explained how they’re applying it in their everyday job.

People don’t understand why scientists are doing what they’re doing? They’re doing it because they love it, because they’re passionate about it, because they’re excited about it. People can connect with those kinds of human emotions, they can connect with passion and excitement. And if there’s a scientist out there on social media, or on a podcast, or a blog, talking with excitement about the most obscure random topic — how many people are going to fall in love, not just with that topic but with science itself? People connect to people.


Dan Falk (@danfalk) is a science journalist based in Toronto and a senior contributor to Undark. His books include “The Science of Shakespeare” and “In Search of Time.”