Pollsters Blew Last Year’s Election and This Year’s. Will Anything Change in Time for the Midterms?

The 2016 election cycle famously brought with it a flood of polls, poll analyses, poll aggregations, and poll-based forecasts — most of it suggesting, often forcefully, that Hillary Clinton was going to win. “It was a rough night for the number crunchers,” began one New York Times recap. A major post-mortem from the American Association of Public Opinion Research (AAPOR), published in May, described the event as a “black eye for the profession.”

Of course, the industry was facing challenges long before the 2016 election. Over the last decade, a rush of startup polling firms, many of them relying on questionable but low-cost surveying methods, had begun to crowd out old industry flagships like Gallup and Pew. The rise of cellphones — and an increasing unwillingness to answer any phone — had made it more difficult, and more expensive, to reach respondents. Many major surveys today purport to represent the entire American public, yet fail to get a response from even 10 percent of the people they call.

Meanwhile, even good polls don’t guarantee informed readers, and details about the response rate and the margin of error are often buried in the fine print — problems that are not unique to American media outlets. One 2015 study of hundreds of articles covering various elections in the Danish press, for example, found that “a large share of the interpretations made by the journalists [are] based on differences in numbers that are so small that they are most likely just statistical noise.” (There don’t seem to be comparable studies available on the American media, but it’s safe to say the problem wasn’t confined to Denmark.)

And yet, for all the handwringing that followed the missteps of 2016 — and a year ahead of much-discussed midterm elections — it’s not evident that major polling and media organizations have changed the way they handle opinion data. And it shows. As Vox pointed out this week, after strong showings by numerous Democrats in Tuesday’s elections, including in the Virginia governor’s race: “Pollsters missed Virginia by more than they missed Trump v. Hillary,” but this time they underestimated Democratic voters.

In other words, while it’s true that the ambitious May report from AAPOR, the industry’s flagship organization, offers a serious, accessible, 104-page dissection of the issue, it’s not clear that the report, or any other effort to date, has propelled meaningful change or even set in motion the necessary correctives.

“I don’t think it’s been a huge watershed moment,” said Kaiser Family Foundation senior survey analyst Ashley Kirzinger. “There are still large media organizations and news organizations that are reporting on polls [where] they know very little about their methodology.”

The AAPOR report did identify some sampling errors. It concluded that many polls overrepresented Hillary-friendly, college-educated voters, and failed to account adequately for last-minute vote switching. The report, though, also defended the big national polls (Clinton did, after all, actually win the national popular vote), and took a few shots at election forecasters who use polls to try to predict each candidate’s odds of victory (“the net benefit to the country is unclear,” the authors wrote).
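To make that concrete: the remedy the industry has been discussing is, at bottom, reweighting. If college-educated respondents make up a larger share of a sample than of the electorate, each of their answers should count for a little less. The sketch below is a minimal illustration with invented shares, not any pollster’s actual methodology.

```python
# Minimal sketch of post-stratification weighting by education.
# All shares below are invented for illustration; real benchmarks come from
# sources such as the Census Bureau's Current Population Survey.

population_share = {"college": 0.35, "non_college": 0.65}  # assumed electorate mix
sample_share = {"college": 0.50, "non_college": 0.50}       # who actually answered

# Each group's weight is the ratio of its population share to its sample share.
weights = {group: population_share[group] / sample_share[group]
           for group in population_share}
print(weights)  # college respondents weighted down (~0.7), non-college up (~1.3)

# Invented candidate support within each group, unweighted vs. weighted:
support = {"college": 0.55, "non_college": 0.42}
unweighted = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)
print(f"Unweighted: {unweighted:.1%}   Weighted: {weighted:.1%}")
```

Real pollsters adjust for many variables at once (age, race, region, and so on), usually through iterative raking rather than a single ratio, but the underlying idea is the same.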

Courtney Kennedy, director of survey research at the Pew Research Center, chaired the AAPOR committee that produced the report. “Within the polling and survey community, I feel like it had a good impact and was well-received,” she told me.

Whether that impact will go much beyond small methodological adjustments, though, is harder to judge. “Few if any of the public pollsters that conducted surveys ahead of Tuesday’s elections for governor in Virginia and New Jersey appear to have adopted significant methodological changes intended to better represent the rural, less educated white voters who pollsters believe were underrepresented in pre-election surveys,” a recent New York Times analysis found.

Nor is it apparent what kind of impact, if any, the 2016 polling errors will have on media coverage of opinion data. I asked editors and writers at The Times’ Upshot blog, CNN, The Washington Post, NBC News, NPR, and The Associated Press whether they had changed their poll reporting standards at all since the election. CNN did not respond to my request, but the other outlets did.

“That’s a great question and it’s one we’ll fully resolve before the midterms,” wrote the Upshot’s Nate Cohn in an email to Undark, “but I don’t think we’re ready to comment on it yet.” In an email sent through a spokesperson, the deputy managing editor for operations at The Associated Press, David Scott, quoted the AP Stylebook: “A poll’s existence is not enough to make it news.”

“We focus our survey research on voters and their opinions about the issues driving the campaign,” he added. “That’s the approach we took in 2016 and continue to take today.”

The only journalist who pointed to a concrete change at his organization was Scott Clement, the polling director at The Washington Post and one of the contributors to the AAPOR election post-mortem. When I called Clement, he sent me The Post’s coverage of a poll it had conducted earlier this year ahead of Virginia’s gubernatorial primary. The poll showed a two-point gap between the two Democratic frontrunners, Tom Perriello and Ralph Northam. But it also had a very large margin of error.

Results like this are easy to misread: They can be used to imply that someone is narrowly ahead in a race (Perriello is up by two!), even when it’s almost as likely, given the inherent fuzziness of error margins, that Northam was ahead. So Clement tried something that, to his knowledge, The Post has never done before: He printed the full range of sampling error right on the graphic of the poll results.
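The arithmetic behind that fuzziness is worth spelling out. The numbers in the sketch below are invented rather than taken from The Post’s actual survey, but they show how comfortably a two-point gap can sit inside the noise that sampling error alone produces.

```python
import math

# Hypothetical figures for illustration; the real survey's sample size
# and margin of error may differ.
n = 500  # assumed number of likely Democratic primary voters polled
perriello, northam = 0.40, 0.38

# Textbook 95 percent margin of error on a single candidate's share,
# using the worst case p = 0.5:
moe = 1.96 * math.sqrt(0.5 * 0.5 / n)
print(f"Margin of error per candidate: +/- {moe:.1%}")  # about +/- 4.4 points

# A common rule of thumb puts the uncertainty on the gap between two
# candidates at roughly twice the poll's stated margin of error.
lead = perriello - northam
print(f"Lead: {lead:.1%}, give or take roughly {2 * moe:.1%}")
```

On those assumptions the lead itself is uncertain by almost nine points, which dwarfs the two-point difference being reported.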

I asked him whether readers had still drawn inaccurate conclusions. “I did hear a radio story that cited the survey about three weeks later that didn’t refer to any caveats about this, and just said, ‘Perriello is up, 40-38,’” he admitted. When I searched for The Post survey, one of the first results was a news story from a Virginia NBC affiliate.

“Perriello leads the poll with 40 percent of the projected votes in his favor,” the write-up said. There was no mention of the margin of error, and the story included a statement that, while seemingly standard for poll coverage, was completely unsupported: “Northam trails by two points.”

These aren’t just abstract questions of statistical precision. Decades of research suggest that polls can actually shape citizens’ decision-making. “Polls have a huge impact, not just on what people think and feel, but what they do,” said Patricia Moy, a political communications scholar at the University of Washington, in an interview with Undark.

Polls, Moy points out, can make certain issues feel relevant, or give the sense that there’s momentum behind a particular cause. With elections, they may even influence whether people vote — or stay at home, because they feel as if the race is already decided. “How people perceive public opinion writ large, and how people perceive public opinion in their smaller groups, will change how they behave,” Moy said.

Polling experts often complain that journalists need to do a better job of interpreting their results, and that the public just doesn’t understand statistics. After a while, however, this argument can sound a bit like a vodka distributor griping about drunk driving: It’s true that the driver is at fault, but when a product is consistently abused by its target audience, the makers and distributors should probably share some responsibility, too.

In this light, it’s striking to realize that, more than 80 years after George Gallup published his first groundbreaking election poll, and a full year after the highest profile polling failure of the modern era, the industry still does not have — or even seem to be widely discussing — a useful metric for uncertainty.

The metric that’s currently in use, the margin of error, is usually buried in a footnote or a parenthetical. It’s not at all clear that readers understand what it means, and even worse, in most cases, it’s probably an underestimate. A recent historical analysis that compared polling estimates to final election results, for example, argued that the average survey error is close to double what usually appears on a poll.
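For readers who want the mechanics: in its textbook form, the published margin of error is purely a function of sample size, derived under the assumption of a clean random sample. For a poll of 1,000 respondents, the familiar “plus or minus 3 points” comes from:

$$\mathrm{MoE}_{95} \approx 1.96\,\sqrt{\frac{p\,(1-p)}{n}} = 1.96\,\sqrt{\frac{0.5 \times 0.5}{1000}} \approx 0.031$$

Nothing in that formula reflects the people who never answer the phone, the households a sampling frame misses, or voters who change their minds late, which is part of why the historical errors described above come out roughly double the published figure.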

I asked Kennedy, the Pew survey director and AAPOR committee chair, whether it might help if polling firms simply stopped reporting specific numbers. Instead of saying that, say, 34 percent of people like Candidate X, with a 3 percent margin of error, why not simply show a large cloud of probabilities clustered between 31 percent and 37 percent — which, after all, is what the data actually shows?

“I frankly think that’s an interesting idea. I think in practice it would be — not impossible, but it would [be] very hard to write up a news report,” she said, adding that “the only measure of uncertainty that has really penetrated the public consciousness is the idea of margin of error.”

But nobody seems to understand that, I said.

“Not only do people not understand it, the margin of error itself, I would say, is fundamentally flawed in the context of modern polling,” Kennedy responded, “because it assumes that the poll is essentially unbiased, in that all it reflects is that you did a sample instead of a census of the population.”
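For what it’s worth, the range-first presentation floated above would not require new data, just a different display of the numbers pollsters already publish. Here is a minimal sketch using the hypothetical 34-percent, 3-point example; it inherits every blind spot of the margin of error that Kennedy describes, but it at least makes the stated uncertainty harder to ignore.

```python
import numpy as np

# Hypothetical example from above: 34 percent support for Candidate X,
# with a 3-point margin of error at 95 percent confidence.
estimate, moe = 0.34, 0.03
std_err = moe / 1.96  # recover the implied standard error

# Draw a cloud of plausible true values consistent with the poll,
# instead of reporting only the point estimate.
rng = np.random.default_rng(0)
cloud = rng.normal(estimate, std_err, size=10_000)

lo, hi = np.percentile(cloud, [2.5, 97.5])
print(f"Plausible range: {lo:.0%} to {hi:.0%}")  # roughly 31% to 37%

# A crude text histogram hints at what a published graphic of the
# full distribution might look like.
counts, edges = np.histogram(cloud, bins=10)
for count, left_edge in zip(counts, edges):
    print(f"{left_edge:6.1%} {'#' * int(count // 100)}")
```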

For her part, Kirzinger said she does see some encouraging signs. She helps run an AAPOR initiative that rewards polling and media organizations that are transparent in reporting how their surveys are conducted. In the past two years, she noted, more than 80 organizations have joined the program.

Still, the pressures on pollsters and reporters don’t yet point toward a future rife with caution. Benjamin Toff, a political communication scholar at the University of Minnesota who surveyed dozens of pollsters and journalists in 2014 and 2015 about changes in the industry as part of his dissertation research, told me about the structural challenges facing the data market.

“I heard from a number of people who I interviewed … [about] the pressure to be first and to be involved, be in the mix online and on social media as new information is coming in,” he said. “That does create certain incentives around a certain style of coverage that’s maybe a little less reflective.

“So I guess I’m not super-optimistic,” he said, “that the next election will be dramatically different.”

CORRECTION: An earlier version of this piece listed Scott Clement as the polling manager at The Washington Post. He is the polling director.

Michael Schulson is an American freelance writer covering science, religion, technology, and ethics. His work has been published by Pacific Standard magazine, Aeon, New York magazine, and The Washington Post, among other outlets, and he writes the Matters of Fact and Tracker columns for Undark.
