Brian Wansink: Data Masseur, Media Villain, Emblem of a Thornier Problem

If you look into the archives of your favorite journalism outlet, there’s a good chance you’ll find stories about Cornell’s “Food Psychology and Consumer Behavior” lab, led by marketing researcher Brian Wansink. For years, his clickbait findings on food consumption received sensational media attention, including from Vox (we end up eating less marinara pasta when it’s served on a white plate than on a red plate); Science Friday (people enjoy wine more when they think it’s from California than from North Dakota); and Slate (drinking wine out of a tall, narrow glass will reduce consumption by 10 percent).

In the last year, however, Wansink has gone from media darling to media villain. Some of the same news outlets that for years uncritically reported his research findings are now breathlessly reporting on his sensational scientific misdeeds. Crowing about Wansink’s “bogus” and “rotten” research and his “shoddy” and “manipulated data,” the news coverage, led in recent months by BuzzFeed News, has reached a fever pitch and taken on an almost campaign-like quality. “A firestorm of criticism,” BuzzFeed noted late last month, “threatens the future of his three-decade career,” while Slate has announced “the death of a veggie salesman.”

The scientific community also appears to have turned against Wansink, with enterprising graduate students spending hundreds of hours picking apart his research and contacting Wansink — and journal editors — for explanations about the many discrepancies they’ve found. As journalists report out these findings in vivid color, however, it is difficult to view this story as a sterling example of the self-correcting role of science, or even the watchdog role of the fourth estate. It seems easier to process it as a failure on both fronts.

To be sure, Wansink’s mistakes appear grave and deserve scrutiny. He has not been accused of fraud or fabrication, but he has had more than a dozen of his published research articles corrected or retracted in the last year, a major black mark on his research. His misdeeds include self-plagiarism — publishing papers that contain passages he previously published — and very sloppy data reporting. His chief misdeed, however, concerns his apparent mining and massaging of data — essentially squeezing his studies until they showed results that were “statistically significant,” the almighty threshold for publication of scientific research.

And yet, not all scientists are sure his misdeeds are so unique. Some degree of data massaging is thought to be highly prevalent in science, and understandably so; it has long been tacitly encouraged by research institutions and academic journals. To succeed as a research scientist means publishing prolifically (“publish or perish”), and publishing means finding statistically significant results.
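To see why chasing that threshold is so treacherous, consider a rough, hypothetical simulation — it is not drawn from Wansink’s data or methods, and it assumes the standard numpy and scipy libraries with made-up parameters — showing how slicing a dataset with no real effect into many subgroups and outcomes makes it likely that at least one comparison clears the p < 0.05 bar by chance:

```python
# Illustrative sketch only: pure-noise data, many comparisons per "study."
# With 10 independent tests at alpha = 0.05, at least one test comes out
# "significant" in roughly 40 percent of studies, not 5 percent.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

n_studies = 2000    # simulated studies, each with no true effect
n_outcomes = 10     # hypothetical outcomes measured per study
n_per_group = 30    # participants per group

false_positives = 0
for _ in range(n_studies):
    # Both groups are drawn from the same distribution, so any "effect" is noise.
    group_a = rng.normal(size=(n_per_group, n_outcomes))
    group_b = rng.normal(size=(n_per_group, n_outcomes))
    pvals = [ttest_ind(group_a[:, k], group_b[:, k]).pvalue
             for k in range(n_outcomes)]
    if min(pvals) < 0.05:   # report whichever comparison "worked"
        false_positives += 1

print(f"Studies with at least one 'significant' result: "
      f"{false_positives / n_studies:.0%}")
```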

Further, if we scrutinized all research scientists (or science journalists for that matter) the same way we are scrutinizing Wansink, who knows what we would find.

BuzzFeed, for example, filed a public records request to gain access to eight years of his emails, pulling out dazzling quotes from Wansink and his collaborators about “data torturing” and “wizardry.” The publication also quoted other scientists passing judgment on the content of Wansink’s correspondence, a remarkable feat of reporting given how publicly academics have long chafed at public records requests for their professional correspondence. They have previously cited invasions of privacy, the potential chilling effect such requests can have on science, and the risk that scientists’ words will be taken out of context. But with Wansink, those concerns appear to have gone out the window.

University of Virginia psychologist Brian Nosek — also executive director of the Center for Open Science — is one of the three scientists who reviewed Wansink’s emails for BuzzFeed. He acknowledged to me he felt “discomfort” in being asked to read another scientist’s emails, but, knowing that they were going to be published anyway, he felt a duty to offer context and perspective.

When asked about the decided limitations in his review — only examining a subset of Wansink’s emails, not his entire correspondence — Nosek said he doubted that reviewing additional emails would change his assessment of scientific misconduct. Wansink had “basically said the same thing, with less dramatic language, in [previous] public blog posts about his research practices,” Nosek told me.

That raises a second point. If the emails aren’t adding substantive content to the Wansink saga, how much news value does this present? Is this breaking new ground, or just piling on — and at what cost? Does pillorying Wansink help to shore up the scientific discipline, or does it risk putting all scientists on the defensive — afraid to discuss and debate their own scientific and statistical methods for fear of being labeled as bad actors?


A couple of years ago, Harvard University’s Amy Cuddy became the poster child of scientific misconduct — or, at least, of some data massaging — when her famous “power pose” research about self-confidence was dismissed as a statistical fluke. A long, sympathetic profile of Cuddy in The New York Times Magazine last year examined the bloodthirsty bullying and public shaming that derailed her academic research career.

As a similar hue and cry engulfs Wansink, it’s hard to imagine this will be the last such scandal, or that future scandals will be interrogated differently. While pillorying and shaming data masseurs may offer a sense of catharsis, it’s also a facile response to a far more complicated problem in which there appear to be plenty of blameworthy actors.

Susan Wei, a biostatistics professor at the University of Minnesota and one of the scientists who (unfavorably) reviewed Wansink’s emails for BuzzFeed, offered thoughtful comments on this point: “I don’t think the emails point to any black-and-white research misconduct. This is more in a grey area,” she told me. “See, I think Wansink’s methods are emblematic of the way statistics is misused in practice, and I lay the blame for that partially at the feet of the statistical community. It is simply difficult to do data analysis in a principled manner. But still, Wansink should’ve known better than to commit such rookie mistakes.”

I wonder if we’d all be a little less scandalized by Wansink’s story if we always approached science as something other than sacrosanct, if we subjected science to scrutiny at all times, not simply when prevailing opinion makes it fashionable. Just because news outlets and peer reviewers are dusting off their watchdog hats and now taking a (cartoonishly) close look at Brian Wansink, I hardly think that’s reason to pile on the praise for a job well done.


Tim Schwab is a freelance writer based in Washington, D.C., and a former researcher with the consumer advocacy organization Food & Water Watch. He supports the use of public records requests in science journalism.