Facebook tried to manipulate user emotions for study: Journalists cite improper informed consent


Addendum: A couple of bloggers made the same observation I did: the study doesn’t actually demonstrate that Facebook can meaningfully manipulate emotions, and the mass-media claim that Facebook “made us sad” is misleading.

Here’s Tal Yarkoni, writing for a blog called [citation needed].

And SciLogs blogger Paige Brown weighs in with The Facebook Emotion Study in a Broader Context. She does something the other media outlets failed to do well: she puts the study in context by pointing out similar experiments done on social media users.

Outrage broke out over the weekend surrounding a scientific study in which people’s Facebook news feeds were altered in order to test “emotional contagion.” The source of the outrage: The revelation that the study failed to get informed consent. The researchers used unsuspecting FB users as guinea pigs.

The study is not new. It was published in early June in the Proceedings of the National Academy of Sciences (PNAS). Upon publication, it barely registered in the news media, perhaps because the conclusions seemed pretty obvious. Researchers from Facebook and Cornell University took 689,003 randomly selected Facebook users and altered their news feeds. When the researchers subtracted posts with happy words, the users themselves used fewer positive words in their own posts. Subtracting sad posts likewise was correlated with subjects posting more happy things and fewer sad ones. No big surprise there.

There was a little story on phys.org, Emotional Contagion Sweeps Facebook, by H. Roger Segelkin and Stacey Shackford.

The story took the research at face value and reiterated the conclusion that subtracting sad posts made people happier, and subtracting happy ones made them sadder. That’s the conclusion pushed in the press release.

But by taking a closer look at the paper, academic bloggers at The Conversation drew exactly the opposite conclusion in their version: Facebook emotions can be ‘viral’ but aren’t very contagious

The bloggers, Luke van Ryn and Robbie Fordyce, make an important observation:

Simply put, if you see less bad stuff, you tend to say fewer bad things. While this might seem innocuous, the prospect of broadly affecting the way literally billions of people think is a fairly scary thought.

Fortunately, the effect that the researchers find is small. In fact, the total change was found to be as small as 0.1%, much less than what we are accustomed to describing as significant.

It’s a good example of a common way statistics can mislead: a very large sample can make a tiny effect “statistically significant” even when it’s far too small to matter to people in any practical sense.
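To make that concrete, here’s a toy sketch of the phenomenon using a two-proportion z-test. The numbers are hypothetical stand-ins, not the study’s actual data: a 0.1-percentage-point difference in positive-word rates, with hundreds of thousands of users per group.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Z-statistic for the difference between two sample proportions."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 5.1% vs 5.0% positive-word rate, 690,000 users per arm.
n = 690_000
z = two_proportion_z(0.051, n, 0.050, n)
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"effect: 0.1 percentage points, z = {z:.2f}, p = {p_value:.4f}")
```

With these sample sizes the p-value comes out well below 0.05, so the difference is “statistically significant” — yet the same 0.1-point difference with a few thousand users per group wouldn’t be, and in neither case does it amount to a change anyone would notice.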

The whole thing seemed ready to be forgotten when over the weekend a handful of news organizations carried stories that noted Facebook didn’t get informed consent. There is some legal fine print that people acknowledge to get a Facebook account, but that hardly counts as the kind of informed consent normally required for studies. Now questions have surfaced about whether the researchers failed to secure necessary approval from an institutional review board.

The Atlantic ran two similar posts. Robinson Meyer wrote under the headline, Everything We Know About Facebook’s Secret Mood Manipulation Experiment.

Did an institutional review board (IRB)—an independent ethics committee that vets research that involves humans—approve the experiment?

Yes, according to Susan Fiske, the Princeton University psychology professor who edited the study for publication. It seems an IRB was only consulted about the methods of data analysis, though, and not those of data collection.

“I was concerned,” Fiske told The Atlantic on Saturday, “until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people’s News Feeds all the time.”

Also online at The Atlantic: Even the Editor of Facebook’s Mood Study Thought It Was Creepy, by Adrienne LaFrance.

Slate also posted something on Saturday: Facebook’s Unethical Experiment by Katy Waldman.

This gets into some interesting detail about how the study was done:

They tweaked the algorithm by which Facebook sweeps posts into members’ news feeds, using a program to analyze whether any given textual snippet contained positive or negative words. Some people were fed primarily neutral to happy information from their friends; others, primarily neutral to sad. Then everyone’s subsequent posts were evaluated for affective meanings.

The upshot? Yes, verily, social networks can propagate positive and negative feelings!

The other upshot: Facebook intentionally made thousands upon thousands of people sad.

That last statement is a leap of logic, and it ignores the minuscule size of the effect. What people post may not directly reflect their emotional state: out of sheer politeness, people might refrain from boasting about a promotion if a good friend has just posted that his beloved dog has died. Whether Facebook actually made people sad in any meaningful way is questionable.

But the fact remains that FB users were not told they were being used as research subjects. Even if the study conclusions were weak, the researchers didn’t know that going in.
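The mechanism the Slate excerpt describes — a program flagging posts that contain positive or negative words, then filtering the feed accordingly — can be sketched in a few lines. The study used the LIWC word lists for this; the tiny word lists below are hypothetical stand-ins.

```python
# Toy word lists standing in for the LIWC dictionaries the study used.
POSITIVE = {"happy", "great", "love", "excited", "wonderful"}
NEGATIVE = {"sad", "angry", "awful", "hate", "terrible"}

def classify_post(text: str) -> str:
    """Label a post by whether it contains any positive or negative word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & POSITIVE:
        return "positive"
    if words & NEGATIVE:
        return "negative"
    return "neutral"

def filter_feed(posts, suppress):
    """Drop posts whose emotional label matches the suppressed category."""
    return [p for p in posts if classify_post(p) != suppress]

feed = ["So happy about my new job!",
        "My dog died, feeling sad.",
        "Grocery run later."]
print(filter_feed(feed, suppress="negative"))
```

Even this crude sketch shows why the method is blunt: a post is tagged by the mere presence of a keyword, with no sense of context, sarcasm, or mixed feelings — one reason to be cautious about equating word counts with emotional states.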

Forbes blogger Kashmir Hill also weighed in over the weekend with: Facebook Manipulated 689,003 Users’ Emotions For Science.

All these stories are important. All point out a critical hole in the safety net that should protect people from being used as unwitting guinea pigs. Still, these stories implied that the manipulation of news feeds had a “significant” influence on users’ emotional states. Some were quite dramatic about it, but they were simply perpetuating a misunderstanding of statistical significance. The weakness of the effect is a critical part of the story too. It makes a difference that the study showed the manipulation did very little to alter people’s behavior.

A thorough discussion of the ethical and scientific problems with the study can be found at Science Based Medicine. The blogger there is a doctor named David Gorski.

Gorski clarifies the role of the paper’s “editor”, Princeton’s Dr. Fiske, who was the one quoted by others calling the study “creepy”. Gorski explained her role in the unique publication policy of the journal, PNAS.

These days, submission requirements for PNAS are more rigorous. The standard mode is now called Direct Submission, which is still unlike that of any other journal in that authors “must recommend three appropriate Editorial Board members, three NAS members who are expert in the paper’s scientific area, and five qualified reviewers.”

According to several stories, people are livid and are expressing their outrage on social media – in this case Twitter, of course. Gorski agrees that the ethics are questionable, but reminds readers that the results are tiny despite the so-called statistical significance. And the researchers may have committed another no-no: manipulating a graph so that people reading the PNAS paper would be led to believe the effect was big:

This is another thing the authors did that I can’t believe Dr. Fiske and PNAS let them get away with, as messing with where the y-axis of a graph starts in order to make a tiny effect look bigger is one of the most obvious tricks there are. In this case, given how tiny the effect is, even if it was a statistically significant effect, it’s highly unlikely to be what we call a clinically significant effect.
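The y-axis trick Gorski describes is easy to quantify. With made-up numbers (not the study’s actual values), here’s how starting the axis just below the smaller bar makes a trivial difference look dramatic:

```python
def apparent_ratio(a, b, axis_start):
    """How many times taller bar b looks than bar a
    when the y-axis starts at axis_start instead of 0."""
    return (b - axis_start) / (a - axis_start)

# Hypothetical per-post rates differing by a sliver.
control, treatment = 5.20, 5.25

honest = apparent_ratio(control, treatment, axis_start=0.0)
truncated = apparent_ratio(control, treatment, axis_start=5.15)

print(f"axis starting at 0:    bar looks {honest:.3f}x taller")
print(f"axis starting at 5.15: bar looks {truncated:.1f}x taller")
```

With the axis at zero, the two bars are visually almost identical; starting the axis at 5.15 makes the treatment bar appear twice the height of the control bar, even though nothing about the underlying numbers has changed.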

More stories and opinion pieces are likely to follow.