
As Bots Spread Vaccine Misinformation, Some Social Platforms Fight Back

VIEWPOINTS: Partner content, op-eds, and Undark editorials.

Social media has become one of the preeminent ways of disseminating information about vaccines. However, much of the vaccine information propagated across social media in the United States has been inaccurate or misleading. At a time when vaccine-preventable diseases are on the rise, vaccine misinformation has become a cause of concern to public health officials.

A 2018 study showed that a lot of anti-vaccine information is generated by malicious automated programs — known as bots — and online trolls. In a striking parallel with the 2016 presidential campaign and the 2018 midterm elections, some vaccine misinformation on American social media has been traced back to Russia.

At Saint Louis University’s Center for Health Law Studies, I monitor legal and policy responses to vaccine misinformation. Now platforms like Twitter, Facebook, and Pinterest are developing strategies to address anti-vaccine bots and to try to reduce their reach in the United States.

Vaccine hesitancy” is what public health officials call the “delay in acceptance or refusal of vaccines” despite their availability. The World Health Organization has classified vaccine hesitancy as one of 10 big threats to global health in 2019, on the list with air pollution, heart disease, cancer, and pandemic outbreaks due to viruses like Ebola.

In recent years, social media platforms have become effective vehicles for conveying inaccurate information about vaccines, amplifying anti-vaccine movements, and giving greater visibility to scientifically unsound data.

In a 2019 experiment, several journalists searched the term “vaccine” on Facebook. What came back was predominantly anti-vaccine content, even though the vast majority of parents — 91 percent in one survey — are pro-vaccine.

One study by the Royal Society for Public Health in the U.K. found that 41 percent of parents using social media reported having encountered “negative messages” related to vaccination. The number increased to 50 percent among parents of children younger than 5.

Memes and other eye-catching visuals can also help propagate the idea that vaccines are unnecessary or harmful, without any reference to scientific or medical data.

Anyone with access to a computer can easily spread inaccurate information about vaccines through social media.

But bots trolling social media can accomplish this at a massive scale, as they have been doing in the United States since at least 2014.

Bots account for a large percentage of online activity overall. Calculations suggest that between 40 percent and 52 percent of all internet traffic is automated. A study analyzing online bot activity in 2018 estimated that 20.4 percent of bots were malicious. Researchers estimate that between 9 percent and 15 percent of active Twitter accounts, for instance, are run by bots, instead of people.

A 2018 study analyzing Twitter data examined the role of bots and Russian trolls in spreading vaccine misinformation. Researchers looked at over 1.7 million vaccine-related tweets between July 2014 and September 2017. Accounts associated with these two categories tweeted at a higher rate about vaccines than average users. While there are no published studies about other social media, researchers have warned of similar activity on Facebook and YouTube.

In the case of Twitter, there seem to be at least two separate goals behind spreading misleading news about vaccines. Most vaccine-focused bots are deployed with the direct goal of spreading vaccine misinformation, presumably with the purpose of amplifying anti-vaccine views.

But content originating in Russia conveys both pro- and anti-vaccine messages. This is part of a broader strategy aimed at sowing discord in the U.S. by stirring up conflict around divisive topics.

Some Russian tweets identified in the study used the Twitter #vaccinateUS hashtag. Of all the #vaccinateUS tweets that had Russian sources, 43 percent were pro-vaccine, 38 percent were anti-vaccine, and 19 percent were neutral. A pro-vaccine one asked: “Do you still treat your kids with leaves? No? And why don’t you #vaccinate them? It’s medicine!” An example of an anti-vaccine one read: “#vaccines are a parents choice. Choice of a color of a little coffin.”

The U.S. is not alone in facing increasing levels of vaccine misinformation on social media. Canada has also reported a rise in the number of online bots spreading vaccine misinformation. Moreover, as content from social media is consumed across borders, these issues are now turning into a global problem.

A 2015 study analyzing vaccine pins on Pinterest found that the majority were anti-vaccine. By early 2019, the company decided to block all vaccine content from the platform.

Initially, the ban was absolute, regardless of the accuracy or source of the information. In late August, Pinterest announced that it would start allowing content from public health organizations, including the U.S. Centers for Disease Control and Prevention, the American Academy of Pediatrics, and the World Health Organization.

In March 2019, Facebook announced that it would take steps to diminish anti-vaccine content. The company no longer allows anti-vaccine advertising, says it is considering removing fundraising tools from anti-vaccination Facebook pages, no longer "recommends" anti-vaccine content, and has reduced the rankings of groups and pages conveying vaccine misinformation. These groups and pages are less visible, but not banned; they remain on Facebook.

Also in 2019, YouTube prohibited advertising on channels and videos that run anti-vaccination content. Until then, most YouTube searches for “vaccine” served up misinformation at the top of the list results. Afterwards, John Oliver’s HBO episode on vaccines and similar content jumped to the top.

As I wrote this article, dozens of new tweets were added to the #vaccine hashtag on Twitter. Several were similar to one tweeted from an account with over 11,000 followers, which conveyed an anti-vaccine message under the guise of scientific information.

This account, which appears to be closely related to a previously suspended one, tweeted multiple times per hour. Less than an hour before that tweet, it had posted a visually blunter message asserting the false link between vaccines and autism.

Like most Twitter users, I have no idea whether this is a personal account or one operated by a bot. But for several hours, the vast majority of the tweets on the vaccine hashtag were spreading content that is not supported by current scientific consensus.

While the latest tweets were predominantly anti-vaccination, when I sorted results by “top tweets,” a tweet from the U.S. Department of Health and Human Services, pointing readers toward its own vaccine information page, appeared first.

But tweets four, five, and nine in the top 10 belonged to the same account with 11,000 followers I encountered repeatedly while monitoring the Twitter vaccine hashtag.

With outbreaks of vaccine-preventable diseases on the rise, public health institutions like the Centers for Disease Control and Prevention have been increasing their social media presence. Social media platforms can continue to help reduce misinformation that could further increase vaccine hesitancy in the United States and elsewhere. As Pinterest's approach suggests, these tech companies can increase the amount and visibility of vaccine content from reliable sources. While it is virtually impossible to eliminate all inaccurate posts, I believe social media can and should be redesigned to facilitate the promotion of accurate vaccine information.

The Conversation

Ana Santos Rutschman is an assistant professor of law at Saint Louis University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.