In 1979, there was a partial meltdown at a nuclear plant on Three Mile Island, in Dauphin County, Pennsylvania. I was a young newspaper editor at the time, and I was caught up in coverage of the resulting debate about whether nuclear power could ever be safe. I have long forgotten the details of that episode, except for one troubling thought that occurred to me in the middle of it: The experts we relied on to tell us whether a given design was safe, or indeed whether nuclear power generally was safe, were people with advanced degrees in nuclear engineering and experience running nuclear plants. That is, we were relying on people who made their living from nuclear power to tell us if nuclear power was safe. If they started saying out loud that anything about the nuclear enterprise was iffy, they risked putting themselves out of business.
I mention this not because I think the engineers lied to the public. I don’t. Nor do I think nuclear power is so dangerous it should be rejected as an energy source. I mention it because it shows how hard it can be to make sense of information from experts.
Here’s another example: Men with prostate cancer are often asked to choose between surgical treatment and radiation. Quite often, they find that the surgeons they consult recommend an operation and the radiation oncologists recommend radiation. This real-life example of the old adage “Don’t ask the barber if you need a haircut” is just one of the reasons that dealing with prostate cancer can be so difficult.
And finally, the bedbug. In recent years there has been, it is alleged, an epidemic of bedbug infestations in New York City apartments. They have even been reported in offices and theaters. But the evidence comes largely from companies you can hire to deploy bedbug-sniffing dogs that react when the creatures are around. Other pest control experts often found no evidence of infestation. The bedbug folks attributed false positives to poor training of the dogs or their handlers, or the possibility that the dogs were picking up scents brought in on clothing or linens or wafting through ventilation systems from other apartments. Who knows who is right? One thing we do know: It is in the companies’ interest for people to believe bedbugs are on the march in Manhattan.
Having said that, I must add that it is unwise to reject information solely because the person offering it has a financial stake in it. This approach has provided a useful weapon for the disingenuous people who want to discredit one idea or another but find the facts are against them. They argue that the experts are speaking from positions of vested interest.
Climate change deniers routinely put this canard forward in their arguments against action on global warming, describing climate research generally as a closed shop whose members somehow profit from a flow of research funds from government agencies that have drunk their particular brand of Kool-Aid.
Creationists raise it in connection with the teaching of evolution — biology as an institution, they argue, has some kind of vested interest in evolution and will quash anyone who challenges it. I doubt it. Most scientists I know would love to be the one who pulls on the thread that causes climate worries to unravel, or frays the warp and woof of biology — assuming such threads exist in the real world.
Anyway, conflict of interest is just one of many factors to weigh when you are considering opinions, supposedly expert, on one issue or another.
One of the first things to consider is, who is making the claim? Are they people we should take seriously? In other words, are they experts? If you don’t at least try to answer this question, you may waste time or even delude yourself. As the sociologists of science Harry Collins and Robert Evans put it in their analysis of expertise, “Other things being equal, we ought to prefer the judgments of those who ‘know what they are talking about.’ ”
For example, at the Institute for Advanced Study, Freeman Dyson researched solid-state physics and related fields. Today, more people know him as a critic of climate change theories. Among other things, he asserts that ecological conditions on Earth are getting better, not worse, an assessment many people would contest. Dyson has a distinguished record in physics, but he is not doing the kind of work that would put him in the front rank of researchers on climate or the environment generally. As Kenneth Brower put it in a 2010 article for The Atlantic, “Many of Dyson’s facts on global warming are wrong.”
Plenty of other so-called experts have won fame by speaking out on subjects outside their fields. The occurrence I remember most vividly involved a 1998 story about the medical researcher Judah Folkman and his theory that a way to thwart cancerous tumors would be to prevent the growth of blood vessels they need to survive. The growth process is called angiogenesis, and Folkman was working on anti-angiogenesis drugs.
When I was science editor at The New York Times, one of our reporters heard about the work, which was exciting a lot of interest among cancer researchers and other scientists. The result was an article that ran on Page 1. The story was very carefully written. Folkman himself was not making great claims: As he said in the article, “The only thing we can tell you is if you have cancer, and you are a mouse, we can take very good care of you.”
But someone else we quoted was not so reticent. He was James Watson, the biologist who with Francis Crick elucidated the structure of DNA in 1953. “Judah Folkman will cure cancer in two years,” Watson declared. That really caught people’s attention. But tumor growth was not Watson’s field. He was not someone we should have sought out for comment on the work. And of course, he was wrong. Though anti-angiogenesis drugs have found wide use against several cancers, cancer remains unvanquished. Today, when people ask me if there is any story I regretted doing at The Times, I can answer truthfully that there is not. But I wish we had not cited Watson as an expert in that story.
But who is an expert? Massimo Pigliucci, a philosopher at the City University of New York, puts the question this way: “How is the average intelligent person (Socrates’ ‘wise man’) supposed to be able to distinguish between science and pseudoscience without becoming an expert in both?”
Even researchers struggle for ways to define leadership in their fields. Over the years, a number of metrics have come into use, but all of them have deficiencies, and they are not the kind of information the average person has (or necessarily wants). Still, you might want to know about at least a few of them, if only to get a sense of how difficult it is to judge.
The first of these metrics is the number of times a researcher or research paper is cited by other researchers. The problem here is that those who invent useful research techniques may find themselves cited repeatedly, even if they do not use the techniques to make stunning findings themselves. That is, their achievement is one of tool-making, not discovery-making. Tool-making is not trivial — the development of a way to culture kidney cells as a growing medium was key to the development of the polio vaccine, for example — but it does not alter our fundamental understanding of things.
Another such metric is the “impact factor” of the journal itself — the frequency with which articles in the journal are cited subsequently. This factor speaks to the quality of the journal, though, and not necessarily to the quality of any particular paper in it. A version of this measure called “evaluative informatics” gives greater weight to citations from papers that are themselves widely cited.
A final metric deserves mention here: the h-index. This measure, introduced in 2005, is the largest number h such that a researcher has h papers each cited at least h times; a researcher with 50 articles, each cited 50 times, has an h-index of 50. More recent variants give added weight to newer or more widely cited articles.
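For concreteness, the h-index is simple enough to compute in a few lines. Here is a minimal sketch (the function name and sample citation counts are mine, not from the article):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    # Walk down the ranked list; the h-index is the last rank at which
    # the paper's citation count still meets or exceeds its rank.
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# The article's example: 50 papers, each cited 50 times.
print(h_index([50] * 50))  # 50

# A more typical record: five papers cited 10, 8, 5, 4, and 3 times.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how the metric flattens a career into a single number: the two records above differ enormously in total citations, yet each reduces to one integer, which is part of why no single metric settles who counts as a leader in a field.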
Most scientific papers have multiple authors, however, and these methods do not necessarily tease out who was the intellectual engine of the report and who is merely tagging along in the lab.
What does all this mean to you? Only this: When you are trying to make sense of new scientific or technical claims, especially if they fall well outside the mainstream of accepted knowledge, you should consider whether the person making the claim has a track record of findings in the field that hold up, or that fellow researchers consider important and reliable. If you cannot make this judgment, just file the question away in your mental folder of things you’d prefer to know before making up your mind.
Does the person have any credentials? It’s good to know. But credentials can be deceiving. I wrote about such a case when I reported on Marcus Ross, a young earth creationist who earned a doctorate in geosciences from the University of Rhode Island. Ross, who said he believed that Earth was only a few thousand years old, wrote his thesis on a long-extinct creature that had lived millions of years ago. He reconciled the obvious discrepancy between his religious beliefs and his scientific work (“impeccable,” his notoriously demanding thesis adviser called it) by viewing them as representing different paradigms. But even before he earned his degree, he was featured on DVDs identifying himself (truthfully) as a young earth creationist trained in paleontology at URI. As Pigliucci notes, “it is always possible to find academics with sound credentials who will espouse all sorts of bizarre ideas, often with genuine conviction.”
Are the expert’s credentials relevant? Often people with legitimate credentials in one field begin opining about matters in other fields, in which their expertise may not be particularly helpful. “It is unfortunately not uncommon,” Pigliucci writes, “for people who win the Nobel (in whatever area, but especially in the sciences) to begin to pontificate about all sorts of subjects about which they know next to nothing.” This was our problem with James Watson.
Next, ask yourself where the expert fits in the field being discussed. Is this expert an outlier? That does not by itself invalidate anyone’s opinion. The history of science is full of outliers who came to be triumphantly proved right. Usually, though, outliers are wrong. What biases or conflicts of interest does the expert bring to the table? As we have seen, it can be hard to find out and, once you know, hard to calculate how big an effect, if any, these biases might have. But the issue should be on your mind when you consider whether to trust someone’s expertise.
Perhaps your supposed expert is actually a crank. When I was science editor of The Times, I often heard from people who wrote to say that they had disproved the germ theory or the atomic theory or some other gold-standard landmark of science. I dismissed them as cranks. And each time I worried that I was consigning a brilliant insight to the dustbin. Still, there are a few signals that someone may be, shall we say, somewhat loosely wrapped. Approach with caution those who claim that they do battle against the entire scientific establishment, or that enemies of some kind conspire to deprive them of the recognition they deserve.
Cornelia Dean is a science writer, former science editor for The New York Times, and a lecturer at Brown University. She is also the author of “Am I Making Myself Clear? A Scientist’s Guide to Talking to the Public.”
Comments are automatically closed one year after article publication. Archived comments are below.
I do science and philosophy of science, and what is being said here is that unless you’re prepared to do a great deal of reading and thinking, you really can’t have an opinion about almost anything in science. This may be true, but it goes against human nature.
Good that Harvard is publishing her book, which I will read. They should still be ashamed of themselves for this one:
http://www.npr.org/sections/thetwo-way/2016/09/13/493739074/50-years-ago-sugar-industry-quietly-paid-scientists-to-point-blame-at-fat
Dean’s “troubling thought” — “The experts we relied on to tell us whether a given design was safe, or indeed whether nuclear power generally was safe, were people with advanced degrees in nuclear engineering and experience running nuclear plants. That is, we were relying on people who made their living from nuclear power to tell us if nuclear power was safe. If they started saying out loud that anything about the nuclear enterprise was iffy, they risked putting themselves out of business” — has certain flaws that should have been obvious at the time.
Many of the nuclear power authorities we relied on then, and rely on now, have no skin in the game. They included Edward Teller, who famously claimed to be Three Mile Island’s only casualty, and nowadays include James E. Hansen, author with Kharecha of “Prevented Mortality and Greenhouse Gas Emissions from Historical and Projected Nuclear Power”.
When authorities say things within their areas of expertise that you wish to reject, and you call upon the principle that purse strings make abject puppets of them, and they are nuclear *power* authorities, you are particularly silly, because the purse strings may well be trying to pull them in the direction of not saying those things.
Speaking *against* the financial interests of their paymasters is, in particular, something that is often done by government employees when they say fear-reducing things about nuclear power, because uranium replaces natural gas at about four cents on the dollar, and that dollar, had it been paid, would have included twenty cents or so for government.
Isn’t it interesting that this sparks emotive comments among self-proclaimed “thinkers”?
Really, let’s be real. James Watson is probably brilliant — in his field. That much is evident, though science means even his work must remain open to criticism. But his brilliance does not guarantee expertise in other areas.
The problem is this: who has the power to decide who is an “expert” and who is not? We have enough experts on this planet, but those in power do not always select the true experts.
When I saw the headline I thought of a certain Nobel winner and former Vice President. Another measure that Cornelia Dean might consider: if we are being told that there is a crisis and immediate action is required, is the “expert” acting like it is a crisis? For example, I’ve heard that I must reduce my carbon footprint because the Earth hangs in the balance from climate change — from experts who live in huge mansions and fly around the globe in private jets to accept environmental awards. I’ll be convinced that the seas are rising when I see housing prices in Malibu begin to tumble.
You might consider that if the Earth hangs in the balance, real estate prices in Malibu won’t tumble, because no other destination on Earth will be preferable. It doesn’t make sense for real estate prices to drop in Malibu or anywhere else on Earth unless real estate on some other planet opens up. Also, you’ve heard this from “experts who live in huge mansions and fly around the globe in private jets to accept environmental awards,” but also from so many of the country’s (and world’s) physicists and meteorologists who do not fly around the globe and who do not stand to gain economically from a public belief that climate change is occurring.
Idan, your initial logic doesn’t quite hold up. Even assuming uniform warming, it’s quite obvious that some locations and geographies would benefit more, or be harmed less, than others. For example, if global warming increases the incidence of large coastal storms, you would expect a differential to come into play that would tend to increase the value of non-coastal real estate. Only in the unlikely case that all places warm identically, and all warming causes precisely the same effects on human comfort, habitability, and expense, would you expect to see no relative impact on real estate prices.
Figures don’t lie, but liars figure. Of course, needless to say, one should not rush to a conclusion. Take everything with a pinch of salt! Given time, TRUTH speaks for itself.
At a DARPA Summit on 7/13/2017, I first heard about reverse engineering of the human brain. I would love to join the best research on this topic.
Humbly, could you host a seminar that educates us about the Nobel Organization, Stockholm, Sweden?
Another example: how the prosecution and defense can somehow find expert witnesses in forensics or psychology who contradict each other.
As a doctor of everything, I call BS on this article. Everything online is true.
… yes, this article immediately struck me as frivolous and biased.
never heard of Cornelia Dean before, but I see she’s a typical leftist academic and journalist. Of course, she is unable to see her faulty thinking/bias — but proudly displays it here. No doubt this stuff sells well at the Ivy League… where it never receives objective review or criticism.
This comment is written well enough or poorly enough that I can’t tell if it is joking or serious. Haha (smiley).
I fear it’s serious…
A superficial article which would have criticized Aristotle for the errors in his studies on animals, and Einstein for his comment on “God doesn’t play dice”. If we were to venture only where Cornelia Dean doesn’t cast her hooded eyes, we would be at a serious loss in this world.
Suffice it to say that James Watson’s penetrating brilliance — which doesn’t render him infallible — will ever be an inspiration and a beacon to thinkers, long after the tide has wiped lesser names from the sands. The hubris to slap down this genius is matched only by its spitefulness.
Another clue that an expert may be wrong is when the text is loaded with emotional appeals to their view — in other words, when it reads like propaganda. Well-written science can contain emotional terms, but the author should explain the reason for the emotional reaction.
I don’t know about experts, but what scientists do for a living is strive very, very hard to be wrong. It’s a seemingly odd mindset, but it achieves extraordinary things.
Much like some of the comments here: accusations of bias without any detail to back them up.