Revisiting the Pacific Standard Critique of Science Journalism
Earlier this week, Pacific Standard magazine published an article of mine that was critical of science journalism. I received warm responses from many science journalists. But others were upset.
On Twitter, some critics called my piece a “college-level gloss” that offered “nothing new or insightful.” Others called it “bad journalism,” while Pulitzer-winning New York University professor and Undark Advisory Board member Dan Fagin compared my article to a high school term paper. He also described the argument as “Trumpian.”
My article was far from perfect. But these negative responses seemed to underestimate the scope of the problem, and to ignore the piece’s most important points.
In Pacific Standard, I argued that science reporting remains stuck between the realms of journalism and public relations, a tension that dates back to the earliest days of the field. Journalists rarely focus on shoddy scientific practices, and they present a view of scientific inquiry that’s cleaner and more progress-oriented than the realities of research. As a result, few journalists do much to hold scientists accountable. Most present a misleading view of research to the public.
None of these claims are new. Versions of this argument have appeared many times, most recently from Brooke Borel in The Guardian, and in Paul Raeburn’s work for Nieman Reports.
Critics have identified two particular weaknesses in my argument. The first is that I lay the blame on individual journalists, rather than recognizing the pressures of a fast-moving, cash-strapped marketplace. The second is that the breadth of my piece sweeps up good reporters along with the bad.
“There’s probably no field of journalism that’s less skeptical, less critical, less given to investigative work, and less independent of its sources than science reporting,” I wrote. Understandably, many skilled, hardworking journalists were upset by that statement. It does paint the field with a broad brush, and it’s thinly sourced.
These are valid criticisms, but they don’t address the piece’s central concern, which was more about science than it was about science journalism. Right now, scientific communities are facing a series of crises related to publication, replication, and self-policing. Peer review can be a superb arbiter. But, like any watchdog, it cannot act alone. In far too many cases, peer reviewers are unable to detect fraud, identify conflicts of interest, or even catch major methodological errors. Meanwhile, there are systematic biases in selecting which results get published, and which don’t. Those biases can shape entire fields.
The question for science journalists seems clear: Will the field address these issues, or will it proceed as if they do not exist? That’s a broad way to frame the question, of course; many journalists will do the former, and others the latter. But I think it’s apparent that the field has skewed strongly in the direction of disengagement. Even readers of a nuanced, critical science section, such as that of The New York Times, may get only limited exposure to the politics and controversies within scientific communities.
Meanwhile, the basic model of reporting — one that treats individual studies as breaking news items, and that rarely follows up with studies months or years down the line — seems fundamentally out of sync with the slower, messier cycle of scientific consensus construction.
Again, I’m not the first person to make these points. But they’re worth reiterating.
Conversations about retraction, peer review, and results bias are not an unfortunate blemish on the larger story of science. Right now, they are central to that story. Journalism is an extraordinary watchdog, but too often it does not seem to be playing that role in any regular and serious way within the scientific world. There are many, many reasons why that may be the case, but it seems plausible that a culture of uncritical science boosterism is at least one of them.
As I discuss in my Pacific Standard piece, many people are already doing great investigative work. Some are exploring new models of collaboration between scientists, journalists, and academic journals. These collaborations can combine the best of peer review with the best of journalistic work, and they need not be collusive. Collaborating with a source, and collaborating with someone who has the same job as your source, are two very different things.
Work like this can help make both science and science journalism stronger, and there’s potential for much more of it to take place.
Michael Schulson is a freelance writer and co-editor of The Cubit, a blog examining the intersections of science, religion, technology, and ethics. The opinions expressed here are his own.
Archived comments are below.
When I was in math graduate school at MIT, I learned that anytime I was tempted to say, “it is clear that…”, I really meant, “I am quite certain that… However, I can’t think of how to make an argument to justify it right now.” Disciplining myself to never say those words facilitated both communication with others and correct thinking for myself.
The same applies to, “it’s apparent that the field has skewed strongly in the direction of disengagement.” You have to marshal some evidence.
That said, PACE is a strong case of exactly what Schulson is arguing about here. Science journalists — including high-level specialists — completely fell down in reporting on this, acting as PR people and not digging into the criticisms. Even now that Tuller has done heroic (and unpaid) work showing the shocking flaws in the trial (virology.ws/mecfs), very few journalists have reported on the problems. So, my fellow science journalists, if Schulson has pissed you off here, prove him wrong: Do some serious reporting on the PACE trial. You can start with this relatively quick and readable introduction to the issues that I wrote for Slate: bit.ly/slate-cfs-pace.
One example of the flawed reporting by top science journalists is Jo Marchant’s recent piece in the Guardian: http://www.theguardian.com/society/2016/feb/15/it-was-like-being-buried-alive-victim-of-chronic-fatigue-syndrome. The only critics she cited were random patients on Facebook, rather than any of those who wrote carefully reasoned letters to medical journals. She mentioned none of the very serious, major scientific flaws in this work, and she didn’t cite Tuller’s work. She justified this later by saying that the piece was a chapter excerpt from her book, and she wrote the book before Tuller’s pieces appeared. But Tuller drew on public criticism that she could easily have found had she looked.
Finally, Whipple’s responses below are a classic example of the dynamic. Mitchell gave carefully marshaled evidence to support his claims. Whipple disagreed with him, said he didn’t know anything about PACE, and said that “CFS is something people get very passionate about.”
One example of a serious breakdown in science journalism isn’t enough, of course, to make Schulson’s case as a whole. But it’d be a place to start.
My impression is that US science journalism is in better shape than what’s found in the UK. Here, we have the Science Media Centre heavily influencing the way science is reported, and most science journalists seem grateful for that, as it saves them time and trouble.
For example: Recently David Tuller, a US science journalist, has been picking apart some unusually influential, expensive, and flawed UK research, as well as the way the media covered it. His work was summarised and praised here (the piece includes links to Tuller’s articles): http://www.stats.org/pace-research-sparked-patient-rebellion-challenged-medicine/ Tuller’s work led to follow-up coverage in the Wall Street Journal, Science, Slate, NPR… but has largely been ignored in the UK media, where it does not fit the narrative promoted by the Science Media Centre.
As Seth Borenstein mentioned the Associated Press, here is a copy of their coverage when the study’s results were first released: http://www.meassociation.org.uk/2011/02/pace-study-results-story-in-los-angeles-times-17-february-2011/ And a copy of Reuters’ more recent coverage of a mediation paper: https://news.yahoo.com/helping-chronic-fatigue-patients-over-fears-eases-symptoms-000313414.html There doesn’t seem to have been any attempt to engage critically with the research.
To some extent, the fact that US science journalists worry that they may not be doing enough ‘real’ journalism shows that there is still a culture that aspires to engage critically with the evidence and to challenge the misrepresentations of those in authority. In the UK, it seems that most now see the highest aims of science journalism as being to promote, explain, and encourage people to trust ‘science’.
The limited resources available for ‘real’ journalism are an important part of the problem, but imo the response should be honesty about the limited work being done. If you only have time to summarise a press release, then instead of writing ‘scientists said’ or ‘new research concludes’, report that ‘a press release claims’, and explain that researchers have incentives to exaggerate the value of their work. Too much science coverage seems written in a way intended to gloss over the problems with science journalism, rather than reveal them to readers who can then make their own judgments about what they have read.
In response to Brandon Keim’s comment, I also just want to add that medical, psych, and social research are the areas where misleading claims can have the most immediate and harmful impact on the behavior of the general public, how they make decisions about their own lives, and how they treat other people (with the possible exception of matters relating to global warming). I take your point that other areas of research often have higher standards, and that does need to be recognised, but in a discussion about science journalism I think it is fair to focus on the areas where inaccurate reporting is likely to have the greatest influence on people’s lives, even if Schulson’s original piece was poorly phrased.
Tentatively, as a UK science writer, I’d like to respond to some of these points – in particular about the Science Media Centre, which always seems to find its way into these discussions.
First, there are some valid points. We don’t have the resources we may once have had. If you’re doing four stories in a day, you don’t always have time to contact secondary sources. For some of the smaller articles we do, though, I’m not sure that’s a bad thing. For what it’s worth, here are my rules – which I think are similar to most of my colleagues’. 1. Always read the paper. 2. Always try to reach the authors (generally, if I don’t speak to them it is because of time zone problems). 3. If it’s a 450-word article about monkeys liking alcohol, an outside expert is not that necessary. If it’s an article of any length about Alzheimer’s, it certainly is.
Regarding problems in peer review etc., this is something we are all very much aware of – and have covered. But it suffers from the formalism of news stories: covering it each time requires something new, rather than incremental, to say. It also has to be interesting. Apologies if this is an unfashionable point, but we have to be read. Writing weekly about procedural issues in peer review is not going to keep readers engaged – unless there is something new to say. Of course it is important, and of course we cover it, but I have no problem at all admitting to being guilty as charged in believing that a large part of my job is to entertain and explain.
Anyway, the main point of replying is this: what is it that riles people so much about the SMC?! It’s always brought up in discussions like this, as if people have peered behind the curtain and exposed our secret truth. About once a month, when they have a briefing about a paper I want to cover, I’ll go along. They have great cookies. A couple of times a week I’ll find their round-up of scientists’ views on a paper useful – normally because it helps me knock it down to my newsdesk. And yes, I’m aware of their pro-GM bias, although it’s also a bias shared by every scientist I’ve ever spoken to.
@ Tom Whipple: “Anyway, the main point of replying is this: what is it that riles people so much about the SMC?!”
I think that Fiona Fox is an untrustworthy propagandist, and that this affects the SMC’s work.
I think that the PACE trial is the piece of research that has received the greatest number of press briefings and expert reactions from the SMC, and the investigative piece I linked to above concludes: “It seems that the best we can glean from PACE is that study design is essential to good science, and the flaws in this design were enough to doom its results from the start.”
David Tuller introduced his investigative work on PACE with quotes from a range of experts:
Top researchers who have reviewed the study say it is fraught with indefensible methodological problems. Here is a sampling of their comments:
Dr. Bruce Levin, Columbia University: “To let participants know that interventions have been selected by a government committee ‘based on the best available evidence’ strikes me as the height of clinical trial amateurism.”
Dr. Ronald Davis, Stanford University: “I’m shocked that the Lancet published it…The PACE study has so many flaws and there are so many questions you’d want to ask about it that I don’t understand how it got through any kind of peer review.”
Dr. Arthur Reingold, University of California, Berkeley: “Under the circumstances, an independent review of the trial conducted by experts not involved in the design or conduct of the study would seem to be very much in order.”
Dr. Jonathan Edwards, University College London: “It’s a mass of un-interpretability to me…All the issues with the trial are extremely worrying, making interpretation of the clinical significance of the findings more or less impossible.”
Dr. Leonard Jason, DePaul University: “The PACE authors should have reduced the kind of blatant methodological lapses that can impugn the credibility of the research, such as having overlapping recovery and entry/disability criteria.”
http://www.virology.ws/2015/10/21/trial-by-error-i/
Now compare that to the range of expert views provided by the SMC:
http://www.sciencemediacentre.org/expert-reaction-to-lancet-study-looking-at-treatments-for-chronic-fatigue-syndromeme-2-2/
Even after Tuller’s work had come out, the only expert the SMC provided to comment on a new PACE paper was a fellow biopsychosocial researcher, and a friend of the researchers:
http://www.sciencemediacentre.org/expert-reaction-to-long-term-follow-up-study-from-the-pace-trial-on-rehabilitative-treatments-for-cfsme-and-accompanying-comment-piece/
Since the PACE paper was first published in 2011, the SMC have been pushing the view that patients’ concerns about this work, and about the biopsychosocial approach to their treatment and care, were unreasonable and shaped by their own prejudices about mental health; that Freedom of Information requests should be understood as a form of harassment by militants (albeit ‘militants’ with a body count of 0); and that this was an example of ‘science’ coming under attack from anti-scientific ideologues. I cannot think of a single UK science journalist who properly investigated these claims. Instead, we have had a stream of stories that simply trusted the claims of these researchers and their colleagues, and promoted a range of unpleasant prejudices against patients concerned about the quality of research taking place into their condition. That’s bad, and the SMC played an important role in it.
Also, PACE is the only medical trial ever to have received funding from the DWP, and it comes at a time when the DWP is using the biopsychosocial model of disability to justify controversial cuts and reforms to disability benefits. The substantial problems with the PACE trial have important implications for British politics, yet it is only American journalists who have taken the time to investigate patients’ concerns about this trial, and the way in which its results have been misrepresented. I think that reflects problems within British science journalism, and that the SMC is an important part of what has gone wrong.
This trial is clearly a very important issue for you, and I respect that. I know from experience of writing about chronic fatigue syndrome that it is something people get very passionate about. But…I have never been to an SMC briefing on it. It’s not something they have done much on – I can tell this by checking past emails. Since 2012 there have been five emails on it. (I wasn’t on their list in 2011.)
I don’t speak for them, and I don’t cover things because of them. I do my job, they do theirs. The idea of them as puppet masters on this or anything else is not something I recognise. They are just another (generally useful) source.
I did not cover the PACE study; my colleagues on the health desk deal with The Lancet (where I believe it was published).
Every time we cover health issues they affect a section of readers deeply. But, equally, the same is true of dozens of stories every week. If we have underplayed the importance of both the original results and the criticisms then it would be because we made a mistake, rather than because of anything malign or some sort of hidden agenda. We certainly would not cover criticisms of a study in greater depth than the original study.
@Tom Whipple – I feel that you are reading things into what I have said that just were not there. I made no mention of a ‘puppet-master’. Believing that the SMC has influence over how stories are reported is not the same as believing that they are controlling people. Actually – this slightly reminds me of a conversation between Andrew Marr and Noam Chomsky, in which Marr kept misinterpreting Chomsky’s views on the media as requiring a centralised conspiracy – could a similar misunderstanding have occurred here? Maybe your unfamiliarity with this topic makes my specific examples less useful?
A quick search of the SMC website found at least 22 entries on CFS since 2010, and that does not include much of the work they did trying to portray the FOI requests researchers received as campaigns of harassment. The SMC themselves give this as an example of them “Seizing the agenda”: http://www.sciencemediacentre.org/wp-content/uploads/2013/03/Review-of-the-first-three-years-of-the-mental-health-research-function-at-the-Science-Media-Centre.pdf [This pdf did not come up in my search of their site.]
The SMC had arranged a meeting on CFS ‘harassment’ at which biopsychosocial CFS researchers reported that FOI requests were the most serious form of harassment they faced. Since then, it has become increasingly widely accepted that the work that was attracting FOI requests was seriously flawed, and that the researchers are unreasonably fighting against legitimate requests for information: http://www.statnews.com/2015/12/23/sharing-data-science/
If politicians were complaining about being harassed by FOI requests related to a policy widely thought to be flawed and based upon spun data, would journalists be particularly sympathetic to their hurt feelings? Following the SMC’s involvement, science journalists were utterly credulous in reporting on militant CFS patients’ supposed campaign of harassment. Michael Hanlon provided this example for the Times on the topic: http://www.thesundaytimes.co.uk/sto/Magazine/Features/article1252529.ece I think it was a front-page feature for some supplement or other. It mentions that “a ground-breaking paper was published in the journal Psychological Medicine” – this was the PACE recovery paper: the most obviously flawed and spun of the PACE trial’s papers.
I realise that you may not have any particular understanding of, or interest in, this topic, so may be dismissive of my concerns. It could be that everything I’ve told you is false, and the sources I’ve linked to untrustworthy. But if I’m right, then that would reflect badly upon the UK science media, and the SMC’s role within it, would it not?
I do think that it’s the details and the evidence that matter, and that broad discussions about whether people’s motives are malign or virtuous are unlikely to get anywhere. For most groups there is a mess of unknowable motivations and desires guiding behaviour, and it’s more important to examine whether people are fairly representing the evidence or not. Are their claims accurate, or not? One major problem with much UK CFS coverage is that journalists seem uninterested in investigating the details, and instead seem to prefer revelling in irrelevant truisms about the neurological underpinnings of the mind or the importance of psychosocial factors to all human sensations. This is probably an easier way to write than going through a medical paper checking that its references support the claims being made.
“I know from experience of writing about chronic fatigue syndrome that it is something people get very passionate about.”
I am not saying that this has happened to you, but I have seen some reporters appear frustrated and bemused that their own poorly researched misrepresentations of the evidence lead to anger from patients. They think that they’re doing a good thing by unquestioningly passing on the views of researchers the SMC has said should be trusted, and assume it is patients who are unreasonable or misinformed for complaining about what they have written. I think that there is a history of quackery and prejudice surrounding CFS, particularly in the UK, and that many patients are upset by this. I don’t think that this dynamic is any different from similar cases in the past (linked by some researchers, like Isaac Marks), such as when nonblinded trials and subjective self-report outcomes were used to claim that behavioural interventions could treat homosexuality.
“This trial is clearly a very important issue for you, and I respect that.”
I don’t really know what that means. I don’t like spin or abuses of power, and wish that the UK media was better at fighting against them.
Every time we cover a health topic, it is extremely important for some people. But…since I’ve been doing science we have done very little on CFS; maybe that is a mistake. Either way, there is no agenda, imposed or otherwise.
Yes, there are problems within what’s regarded as mainstream science journalism, and also within what’s not. But the PS article was deeply problematic on a number of levels. I jotted down some thoughts on it — https://medium.com/@brandonkeim/whither-science-journalism-rant-98458e65f750#.b0n95y2gv — but am repeating here the most important problem:
The reproducibility crisis isn’t about the entirety of science; it’s mostly about areas of social science (especially psychology) and biomedical research (basic/preclinical research and molecular biology). And within the latter field, the problems are not evenly distributed, but are most manifest in a) all-star journals like Science and Nature, which reward novelty to a problematic degree and are so prestigious that the competition to publish there is destructively fierce, and b) low-end journals that nobody with any expertise really trusts anyway, for obvious reasons.
Pick up, say, Brain or Current Biology or NEJM, and “a substantial portion” of those findings are not going to be false — much less Conservation Biology or Mycology or Journal of Applied Physics. It’s just a bull$@#! notion. And getting it wrong to the degree the PS article did — to say something like, “A substantial portion — maybe the majority — of published scientific assertions are false” — is beyond careless. It’s inexcusable, even retraction-worthy.
I think it’s a mistake to think absence of discussion of reproducibility problems = absence of a reproducibility problem. Particular fields are currently discussing “crisis” around reproducibility, but that doesn’t mean there aren’t problems elsewhere. It’s hard to believe, for example, that these are the only fields where there are important data analysis issues. There are surely areas of science where the only reason there isn’t a crisis, is because there haven’t yet been enough attempts to validate or replicate findings, and there is not yet a tradition of systematic review and meta-research for the field.
Agreed. It’s not just social psychology. It’s oncology: the Amgen study in which only 6 of 53 landmark cancer papers could be replicated. It’s medicine (everything Ioannidis has published in the last two decades). It’s nutrition: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0076632, doi: http://dx.doi.org/10.1136/bmj.f6698. That suggests (but does not prove!) a systemic phenomenon. I think it’s reasonable to assume that many other fields that haven’t yet undertaken the difficult work of replication would find similar results.
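The arithmetic behind Ioannidis’s central claim is worth sketching, because it shows how a field’s prior odds and statistical power, not any individual paper’s quality, drive the share of false findings. Here is a minimal sketch in Python of the standard positive-predictive-value calculation from his 2005 paper; the scenario numbers are illustrative assumptions on my part, not measurements from any study:

```python
# Sketch of the positive-predictive-value argument from Ioannidis (2005),
# "Why Most Published Research Findings Are False." The formula is standard;
# the scenario numbers below are illustrative assumptions only.

def ppv(prior_odds, power, alpha=0.05):
    """Expected share of statistically significant findings that are true.

    prior_odds -- R, the ratio of true to false hypotheses a field tests
    power      -- 1 - beta, chance a real effect reaches significance
    alpha      -- chance a null effect reaches significance anyway
    """
    true_positives = power * prior_odds
    false_positives = alpha
    return true_positives / (true_positives + false_positives)

# Well-powered confirmatory research on plausible hypotheses:
print(f"{ppv(prior_odds=1.0, power=0.8):.2f}")   # 0.94 -- most findings true

# Underpowered exploratory research on long-shot hypotheses:
print(f"{ppv(prior_odds=0.1, power=0.4):.2f}")   # 0.44 -- most findings false
```

On those assumptions, nothing more sinister than long-shot hypotheses and modest power is needed to make a majority of significant findings false, which fits the field-by-field unevenness described above.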
@Kat @Hilda I think that’s fair (though in the spirit of being critical, that one author of the PLoS critique of nutrition & obesity research was funded by Coca-Cola does raise a few red flags ;)
And agreed that a big effort to reproduce findings in other fields would lead to some challenging & winnowing of conclusions now accepted. Not necessarily for the same reasons as in biomedicine/social science, though it’d be interesting to see. And, more subtly, lots of research — not just social science and biomedicine — reflects the limited knowledge and prevailing biases of the present moment.
That’s a good discussion to have. But it’s not a discussion fostered by the PS piece (and not to bag on that alone — there are other articles in the same vein). If anything, the PS piece clouds that discussion while conflating every field of science with social science and biomedicine. I think that’s what bugs me so much about it. Science spans the entirety of human inquiry, yet here it was used interchangeably with “social science and biomedicine.” That’s dangerous.
And if someone drinks 64 ounces of Coca-Cola a day, their diabetes risk goes up. This we do know….
As usual, Brandon, everything you say here is thoughtful and smart! (Although if you ever want a contrarian opinion on the number of findings that are likely overblown or misinterpreted in oceanography, I’ll give you the phone number of my 91-year-old marine biologist father, who will turn your ear red ranting about the number of prominent papers and analyses that he and his colleagues are convinced are bullS*&&T that would never stand up to replication.) I’m more open to the broader message of that piece tho…I agree that there’s good science reporting out there, but there’s a lot that’s credulous and shallow. (I’d also say that’s true of most fields of journalism.)
My views on the larger issues facing science journalism don’t condense well into 140 characters, but those who are sufficiently interested (and patient) may want to read my keynote at the World Conference of Science Journalists in Seoul last year: http://danfagin.com/website/wp-content/uploads/2015/08/WCSJSpeech.pdf
My problem with your Pacific Standard piece is that it’s way too broad-brush. You paint an entire journalism field with a few examples. First, that’s bad journalism. Quite bad. Second, it’s just not true. At The Associated Press, where I am a science writer (alongside the stellar but modest Malcolm Ritter), we always do our own version of peer review with scientific studies. We consult with outside experts. That’s just a must. So I don’t believe we wrote about any of your examples of bad studies. For example, I decided not to write about the latest study by James Hansen on catastrophic climate change because five of the six outside scientists (yes, I got six outside experts, but that’s only because I sought out nine) had problems with the study. That convinced me it wasn’t worth writing about. And Hansen has the big name and institution behind him.

And by the way, that embargo process from top journals you dismiss: that’s what allows people like me to get outside comment and do our due diligence (it’s not investigative, it’s just smart practice). Oh, and one other thing: your lead anecdote about This American Life is not science journalism (if you want that on public radio, listen to Radiolab).

There are many good science writers out there who aren’t what you describe: Carl Zimmer, Lee Hotz, Andrew Freedman, Kenneth Chang, Joel Achenbach, Shankar Vedantam, Nell Greenfieldboyce, Natalie Angier, Andy Revkin, Maggie Koerth-Baker, Joe Palca, and Christie Aschwanden, to name just some (there are many more).
What Seth said.
Also, Schulson, you wrote: “It does paint the field with a broad brush, and it’s thinly sourced.” You do understand by now, I hope, that you failed to investigate your own thesis with the critical and skeptical reporting you claim science journalists lack.
Look, we science journalists are more aware than anyone of the credulous hacks who occasionally cover science. It drives us nuts. But the list Seth started of science writers who undermine your thesis could go on and on and on. Don’t insult them with your “broad and thinly sourced” claims about an entire field of journalism.