Visual: An illustration of a hand holding a flaming smartphone against a dark background.

Interview: Are We Misinformed About Misinformation?


In June, the journal Nature published a perspective suggesting that the harms of online misinformation have been misunderstood. The paper's authors, representing four universities and Microsoft, conducted a review of the behavioral science literature and identified what they characterize as three common misperceptions: that the average person's exposure to false and inflammatory content is high, that algorithms are driving this exposure, and that many broader problems in society are predominantly caused by social media.

“People who show up to YouTube to watch baking videos and end up at Nazi websites — this is very, very rare,” said David Rothschild, an economist at Microsoft Research who is also a researcher with the University of Pennsylvania’s Penn Media Accountability Project. That’s not to say that edge cases don’t matter, he and his colleagues wrote, but treating them as typical can contribute to misunderstandings — and divert attention away from more pressing issues.

David Rothschild is an economist at Microsoft Research and a researcher with the Penn Media Accountability Project.

Visual: Courtesy of David Rothschild

Rothschild spoke to Undark about the paper in a video call. Our conversation has been edited for length and clarity.

Undark: What motivated you and your co-authors to write this perspective?

David Rothschild: The five co-authors on this paper had all been doing a lot of different research in this space for years, trying to understand what it is that is happening on social media: What’s good, what’s bad, and especially understanding how it differs from the stories that we’re hearing from the mainstream media and from other researchers.

Specifically, we were zeroing in on these questions about what the experience of a typical consumer is, a typical person versus a more extreme example. A lot of what we saw and understood, which was referenced in a lot of research, really described a pretty extreme scenario.

The second part of that is the heavy emphasis on algorithms, a lot of concern about algorithms. What we're seeing is that a lot of harmful content is not coming from an algorithm pushing it on people. Actually, it's the exact opposite: the algorithm tends to pull you toward the center.

And then there are these questions about causation and correlation. A lot of research, and especially mainstream media, conflate the proximate cause of something with the underlying cause of it.

There are a lot of people saying: "Oh, these yellow vest riots are happening in France. They were organized on Facebook." Well, there have been riots in France for a couple hundred years. People find ways to organize even without the existence of social media.

The proximate cause, the proximate way in which people were organizing around [January 6], was certainly largely online. But then the question comes: Could these things have happened in an offline world? And these are tricky questions.

Writing a perspective in Nature really allows us to reach stakeholders outside of academia and address the broader discussion, because there are real-world consequences: research gets allocated, funding gets allocated, and platforms get pressure to solve the problems that people discuss.

UD: Can you talk about the example of the 2016 election: what you found about it, and also the role that the media perhaps played in putting forth information that was not entirely accurate?

DR: The bottom line is that what the Russians did in 2016 is certainly interesting and newsworthy. They invested pretty heavily in creating sleeper Facebook organizations that posted viral content and then slipped in fake news toward the end. It's certainly meaningful, and I understand why people were intrigued by it. But ultimately, what we wanted to ask is, "How much impact could that plausibly have?"

Impact is really hard [to measure], but at least we can put it in perspective relative to people's news diets and show that views of direct Russian misinformation were just a microscopic portion of people's consumption of news on Facebook, let alone their consumption of Facebook overall, let alone their consumption of news in general, of which Facebook is just a tiny portion. Especially in 2016, the vast majority of people, even younger people, were still consuming way more news on television than they were online, let alone on social media.

While we agree that any fake news is probably not good, there is ample research showing that repeated interaction with content is really what drives one's underlying causal understanding of the world, one's narratives, however you want to describe it. Getting occasionally hit by some fake news, at the very low rates typical consumers see, is just not the driving force.

UD: My impression from reading your Nature paper is that you found that journalists are spreading misinformation about the effects of misinformation. Is that accurate? And if so, why do you think this is happening?

DR: Ultimately, it’s a good story. And nuance is hard, very hard, and negative is popular.

UD: So what’s a good story, specifically?

DR: That social media is harming your children. That social media is the problem.

There's a general want to cover things in a more negative light. There is certainly a long history of people freaking out over new technology and ascribing all of society's ills to it, whether that was the internet, or television, or radio, or music, or books. You can just go back in time and see all of these types of concerns.

Ultimately, there are going to be people who benefit from social media, there are going to be people who are harmed by social media, and there are going to be many people who progress with it in the way that society continues to progress with new technology. That is just not as interesting a story as "social media is causing these problems," without anything to counterbalance it.

"Social media is the problem, and it's really the algorithms" provides a very simple and tractable solution: you fix the algorithms. And it avoids the harder question, the one we generally don't want to ask, about human nature.

A lot of the research that we cite here, including some that I think makes people uncomfortable, shows that some segment of the population demands horrible things. They demand things that are racist, degrading, violence-inducing. That demand is capable of being satiated on various social media platforms, just as it was satiated beforehand in other forms of media, whether that was books, movies, radio, or whatever else people were listening to or gaining information from in the past.

Ultimately, the various channels that we have available definitely shift the ease and the way in which these things are distributed. But the existence of these things is a human-nature question, well beyond my capacity as a researcher to solve, well beyond a lot of people's capacity: most people's, everyone's. I think that makes it tricky and also makes you uncomfortable. And I think that's why many journalists like to focus on "social media bad, algorithms the problem."

UD: On the same day that Nature published your piece, the journal also published a comment titled “Misinformation poses a bigger threat to democracy than you might think.” The authors suggest that “Concern about the expected blizzard of election-related misinformation is warranted, given the capacity of false information to boost polarization and undermine trust in electoral processes.” What’s the average person to make of these seemingly divergent views?

DR: We certainly do not want to give the impression that we tolerate any bit of misinformation or harmful content, or that we trivialize the impact it has, especially on the people it does affect. What we're saying is that it is concentrated away from the typical consumer into extreme pockets, and that addressing it takes a different approach, and a different allocation of resources, than the traditional research and the traditional questions that come up, which are aimed at the typical consumer and at mass impact.

I read that and I don't necessarily think it's wrong, as much as I don't see who they're yelling at, basically, in that piece. I don't think there is a huge movement to trivialize this, as much as to say, "Hey, we should actually fight it where it is, fight it where the problems are." I think we're talking past each other, in a sense.

UD: You’re an employee of Microsoft. How would you reassure potentially skeptical readers that your study is not an effort to downplay the negative effect of products that are profitable to the tech industry?

DR: This paper has four academic co-authors and went through an incredibly rigorous process. You may not have noticed on the front: We submitted this paper on Oct. 13, 2021, and it was finally accepted on April 11, 2024. I've had some crazy review processes in my time. This was intense.

We came in with ideas based on our own academic research. We supplemented it with the latest research, and we continue to supplement it with research coming in, especially some research that ran counter to our original conception.

The bottom line is that Microsoft Research is an extremely unique place. For those who are not familiar with it, it was founded under the Bell Labs model, in which there is no review process for publications coming out of Microsoft Research, because they believe that the integrity of the work rests on the fact that they are not censoring it as it comes through. The idea is to use this position to engage in discussions and understanding around the impact of some things that are near the company, and some things that have nothing to do with it.

In this case, I think it's pretty far afield. It's a really awesome place to be. A lot of work is joint-authored with academic collaborators, and that certainly helps ensure that there are very clear guidelines in the process and that the academic integrity of the work is maintained.

UD: I forgot to ask you about your team’s methods.

DR: It's obviously different from a traditional research piece. In this case, it definitely started with conversations among the co-authors about joint work and separate work we had been doing that we felt was still not breaking through into the right places. It really started by laying down a few theories we had about the differences between our academic work, the general body of academic work, and what we were seeing in the public discussion, and then an extremely thorough review of the literature.

As you'll see, we're somewhere in the 150-plus citations: 154, to be exact. And through this incredibly long review process at Nature, we went line by line to ensure that there wasn't anything that was undefended by the literature: either, where appropriate, the academic literature, or, where appropriate, things we were able to cite from the public record.

The idea was to really create, hopefully, a comprehensive piece that allowed people to really see what we think is a really important discussion — and this is why I’m so happy to talk to you today — about where the real harms are and where the push should be.

None of us are firm believers in staking out a stance and holding to it despite new evidence. There are shifting models of social media. What we have now with TikTok, Reels, and YouTube Shorts is a very different experience from the main social media consumption of a few years ago, with longer videos, or of a few years before that, with news feeds. These will continue to be something you want to monitor and understand.


Sara Talpos is a contributing editor at Undark.