In 1979, there was a partial meltdown at a nuclear plant on Three Mile Island, in Dauphin County, Pennsylvania. I was a young newspaper editor at the time, and I was caught up in coverage of the resulting debate about whether nuclear power could ever be safe. I have long forgotten the details of that episode, except for one troubling thought that occurred to me in the middle of it: The experts we relied on to tell us whether a given design was safe, or indeed whether nuclear power generally was safe, were people with advanced degrees in nuclear engineering and experience running nuclear plants. That is, we were relying on people who made their living from nuclear power to tell us if nuclear power was safe. If they started saying out loud that anything about the nuclear enterprise was iffy, they risked putting themselves out of business.
I mention this not because I think the engineers lied to the public. I don’t. Nor do I think nuclear power is so dangerous it should be rejected as an energy source. I mention it because it shows how hard it can be to make sense of information from experts.
Here’s another example: Men with prostate cancer are often asked to choose between surgical treatment and radiation. Quite often, they find the surgeons they consult recommend an operation and the radiation oncologists suggest radiation. This real-life example of the old adage “Don’t ask the barber if you need a haircut” is just one of the reasons that dealing with prostate cancer can be so difficult.
And finally, the bedbug. In recent years there has been, it is alleged, an epidemic of bedbug infestations in New York City apartments. They have even been reported in offices and theaters. But the evidence comes largely from companies you can hire to deploy bedbug-sniffing dogs that react when the creatures are around. Other pest control experts have often found no evidence of infestation. The bedbug folks attribute false positives to poor training of the dogs or their handlers, or to the possibility that the dogs were picking up scents brought in on clothing or linens or wafting through ventilation systems from other apartments. Who knows who is right? One thing we do know: It is in the companies’ interest for people to believe bedbugs are on the march in Manhattan.
Having said that, I must add that it is unwise to reject information solely because the person offering it has a financial stake in it. This approach has provided a useful weapon for disingenuous people who want to discredit one idea or another but find the facts are against them. They argue that the experts are speaking from positions of vested interest.
Climate change deniers routinely put this canard forward in their arguments against action on global warming, describing climate research generally as a closed shop whose members somehow profit from a flow of research funds from government agencies that have drunk their particular brand of Kool-Aid.
Creationists raise it in connection with the teaching of evolution — biology as an institution, they argue, has some kind of vested interest in evolution and will quash anyone who challenges it. I doubt it. Most scientists I know would love to be the one who pulls on the thread that causes climate worries to unravel, or frays the warp and woof of biology — assuming such threads exist in the real world.
Anyway, conflict of interest is just one of many factors to weigh when you are considering opinions, supposedly expert, on one issue or another.
One of the first things to consider is, who is making the claim? Are they people we should take seriously? In other words, are they experts? If you don’t at least try to answer this question, you may waste time or even delude yourself. As the sociologists of science Harry Collins and Robert Evans put it in their analysis of expertise, “Other things being equal, we ought to prefer the judgments of those who ‘know what they are talking about.’ ”
For example, at the Institute for Advanced Study, Freeman Dyson researched solid-state physics and related fields. Today, more people know him as a critic of climate change theories. Among other things, he asserts that ecological conditions on Earth are getting better, not worse, an assessment many people would contest. Dyson has a distinguished record in physics, but he is not doing the kind of work that would put him in the front rank of researchers on climate or the environment generally. As Kenneth Brower put it in a 2010 article for The Atlantic, “Many of Dyson’s facts on global warming are wrong.”
Plenty of other so-called experts have won fame by speaking out on subjects outside their fields. The occurrence I remember most vividly involved a 1998 story about the medical researcher Judah Folkman and his theory that a way to thwart cancerous tumors would be to prevent the growth of blood vessels they need to survive. The growth process is called angiogenesis, and Folkman was working on anti-angiogenesis drugs.
When I was science editor at The New York Times, one of our reporters heard about the work, which was exciting a lot of interest among cancer researchers and other scientists. The result was an article that ran on Page 1. The story was very carefully written. Folkman himself was not making great claims: As he said in the article, “The only thing we can tell you is if you have cancer, and you are a mouse, we can take very good care of you.”
But someone else we quoted was not so reticent. He was James Watson, the biologist who with Francis Crick elucidated the structure of DNA in 1953. “Judah Folkman will cure cancer in two years,” Watson declared. That really caught people’s attention. But tumor growth was not Watson’s field. He was not someone we should have sought out for comment on the work. And of course, he was wrong. Though anti-angiogenesis drugs have found wide use against several cancers, cancer remains unvanquished. Today, when people ask me if there is any story I regretted doing at The Times, I can answer truthfully that there is not. But I wish we had not cited Watson as an expert in that story.
But who is an expert? Massimo Pigliucci, a philosopher at the City University of New York, puts the question this way: “How is the average intelligent person (Socrates’ ‘wise man’) supposed to be able to distinguish between science and pseudoscience without becoming an expert in both?”
Even researchers struggle for ways to define leadership in their fields. Over the years, a number of metrics have come into use, but all of them have deficiencies, and they are not the kind of information the average person has (or necessarily wants). Still, you might want to know about at least a few of them, if only to get a sense of how difficult it is to judge.
The first of these metrics is the number of times a researcher or research paper is cited by other researchers. The problem here is that those who invent useful research techniques may find themselves cited repeatedly, even if they do not use the techniques to make stunning findings themselves. That is, their achievement is one of tool-making, not discovery-making. Tool-making is not trivial — the development of techniques for culturing kidney cells, in which poliovirus could be grown, was key to the development of the polio vaccine, for example — but it does not alter our fundamental understanding of things.
Another such metric is the “impact factor” of the journal itself — the frequency with which articles in the journal are cited subsequently. This factor speaks to the quality of the journal, though, and not necessarily to the quality of any particular paper in it. A version of this measure called “evaluative informatics” gives greater weight to citations from papers that are themselves widely cited.
A final metric deserves mention here: the h-index. This measure, introduced in 2005, is the largest number h such that a researcher has h papers that have each been cited at least h times. A researcher with 50 articles, each cited 50 times, has an h-index of 50. More recent variants give added weight to newer or more widely cited articles.
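For readers who want to see the arithmetic behind the definition, here is a minimal sketch of the h-index calculation; the uneven citation record in the second example is invented purely for illustration.

```python
def h_index(citations):
    """Return the h-index: the largest h such that the researcher
    has at least h papers cited at least h times each."""
    counts = sorted(citations, reverse=True)
    h = 0
    # Walk down the ranked list; the index holds as long as the
    # paper at rank r has at least r citations.
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# The case from the text: 50 articles, each cited 50 times.
print(h_index([50] * 50))  # 50

# A more typical, uneven record (hypothetical numbers):
# only the top 5 papers have 5 or more citations each.
print(h_index([100, 44, 17, 9, 6, 3, 2, 1, 0]))  # 5
```

Note how the second example shows why the h-index resists inflation by a single blockbuster paper: one article with 100 citations moves the index no more than any other paper that clears the rank threshold.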
Most scientific papers have multiple authors, however, and these methods do not necessarily tease out who was the intellectual engine of the report and who was merely tagging along in the lab.
What does all this mean to you? Only this: When you are trying to make sense of new scientific or technical claims, especially if they fall well outside the mainstream of accepted knowledge, you should consider whether the person making the claim has a track record of findings in the field that hold up, or that fellow researchers consider important and reliable. If you cannot make this judgment, just file the question away in your mental folder of things you’d prefer to know before making up your mind.
Does the person have any credentials? It’s good to know. But credentials can be deceiving. I wrote about such a case when I reported on Marcus Ross, a young earth creationist who earned a doctorate in geosciences from the University of Rhode Island. Ross, who said he believed that Earth was only a few thousand years old, wrote his thesis on a long-extinct creature that had lived millions of years ago. He reconciled the obvious discrepancy between his religious beliefs and his scientific work (“impeccable,” his notoriously demanding thesis adviser called it) by viewing them as representing different paradigms. But even before he earned his degree, he was featured on DVDs identifying himself (truthfully) as a young earth creationist trained in paleontology at URI. As Pigliucci notes, “it is always possible to find academics with sound credentials who will espouse all sorts of bizarre ideas, often with genuine conviction.”
Are the expert’s credentials relevant? Often people with legitimate credentials in one field begin opining about matters in other fields, in which their expertise may not be particularly helpful. “It is unfortunately not uncommon,” Pigliucci writes, “for people who win the Nobel (in whatever area, but especially in the sciences) to begin to pontificate about all sorts of subjects about which they know next to nothing.” This was our problem with James Watson.
Next, ask yourself where the expert fits in the field being discussed. Is this expert an outlier? That does not by itself invalidate anyone’s opinion. The history of science is full of outliers who came to be triumphantly proved right. Usually, though, outliers are wrong. What biases or conflicts of interest does the expert bring to the table? As we have seen, it can be hard to find out and, once you know, hard to calculate how big an effect, if any, these biases might have. But the issue should be on your mind when you consider whether to trust someone’s expertise.
Perhaps your supposed expert is actually a crank. When I was science editor of The Times, I often heard from people who wrote to say that they had disproved the germ theory or the atomic theory or some other gold-standard landmark of science. I dismissed them as cranks. And each time I worried that I was consigning a brilliant insight to the dustbin. Still, there are a few signals that someone may be, shall we say, somewhat loosely wrapped. Approach with caution those who claim that they do battle against the entire scientific establishment, or that enemies of some kind conspire to deprive them of the recognition they deserve.
Cornelia Dean is a science writer, former science editor for The New York Times, and a lecturer at Brown University. She is also the author of “Am I Making Myself Clear? A Scientist’s Guide to Talking to the Public.”