Why election polls were so wrong in 2016 and 2020 — and what’s changing to fix that

2014 was the first year Lonna Atkeson remembers receiving hate mail.

Atkeson, a political scientist who researches election surveys and public opinion, has been conducting voter polls since 2004. She is currently a professor at Florida State University and has authored several books.

But a decade into her polling work, she said, the angry messages began rolling in.

“I started getting letters from people saying, ‘You’re part of the problem. You’re not part of the solution. I’m not going to answer your surveys anymore. You’re an evil academic trying to brainwash our children,'” Atkeson recalled in an interview with CNBC.

For Atkeson, those notes marked a shift: a more polarized electorate was losing faith in institutions like polling, and voters might no longer be as willing to talk to her.

At the same time, technology was changing, and landlines and mail were no longer reliable ways to reach survey respondents.

“People were not answering their phones,” Rachael Cobb, a political science professor at Suffolk University, told CNBC. “Even in the last 10 years, you might try 20 callers to get the one that you need. Now, it’s double: 40 callers to get what you need. So every poll takes longer and it’s more expensive.”

Polarization and technology are among the obstacles that pollsters cite as complicating the task of taking accurate voter surveys.

As a result, over the past several election cycles, polling organizations have made some major mistakes.

“If you look at some of the big misses, I mean, they’re pretty big,” Atkeson said.

Among the big misses that have scarred the polling industry is the 2016 presidential election, when headlines across the news claimed that Democratic nominee Hillary Clinton’s chances of defeating Republican nominee Donald Trump were around 90%.

An industry-wide post-mortem identified several key causes of the 2016 polling flop.

Certain factors were out of pollsters’ control.

For instance, according to the American Association for Public Opinion Research (AAPOR), some voters did not decide whose name to write on their ballot until the last minute, making them difficult to account for.

And some voters were shy about their support for Trump due to his controversial rhetoric during the 2016 campaign. As a result, they did not always admit their voting intentions to pollsters.

But other factors stemmed directly from methodological oversights.

“People didn’t factor in educational representation,” said Matin Mirramezani, chief operating officer at Generation Lab, a polling organization that specifically targets young voters. “Education is a lesson learned from 2016.”

White, non-college-educated voters, who made up a large part of Trump’s base, were undercounted in 2016 polls, in part because people with more education are “significantly more likely” to respond to surveys than those with less, according to AAPOR.

Despite those diagnoses, polls in the 2020 election produced the largest errors in 40 years, again underestimating Trump’s support, AAPOR found.

And during the 2022 midterm elections, the “red wave” that much of the media was convinced would sweep Republicans back into congressional control never came. Democrats kept their Senate majority and lost the House by only a slim margin.

Heading into the 2024 rematch between Trump and President Joe Biden, pollsters are trying a variety of strategies to avoid repeating history and to accurately capture the elusive Trump vote.

For one, pollsters have adjusted their approach to “weighting,” a method that assigns a multiplier to each respondent to change how much their answer sways the overall poll outcome.

Pollsters have always used weighting to construct survey samples that accurately reflect the electorate in terms of gender, age, race or income. But since 2016, they have taken particular care to weight by education.

Atkeson suggested pollsters go beyond education weighting for 2024 and factor in variables like how someone voted in 2020, whether they rent or own a home, or even whether they donate blood.

“You just start tagging to everything you can,” Atkeson said. “Anything that can tell us, ‘Well, what does the population really look like?'”
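To make the mechanics concrete, here is a minimal sketch of weighting a poll sample by education, the adjustment pollsters adopted after 2016. Everything in it (the respondents, the candidate labels and the population shares) is invented for illustration; real polls weight on many variables at once, as Atkeson describes.

```python
# Minimal sketch of weighting a poll sample by education.
# All respondent answers and population shares below are invented
# for illustration; they are not from any real survey.

# Each respondent: (education group, candidate preference)
respondents = [
    ("college", "A"), ("college", "A"), ("college", "B"),
    ("college", "A"), ("no_college", "B"), ("no_college", "B"),
]

# Assumed electorate: 40% college-educated, 60% not. College
# graduates are over-represented in the sample above (4 of 6),
# mirroring the response-rate gap AAPOR described.
population_share = {"college": 0.40, "no_college": 0.60}

# Share of each group in the raw sample
sample_share = {}
for group, _ in respondents:
    sample_share[group] = sample_share.get(group, 0) + 1 / len(respondents)

# Weight = population share / sample share: over-represented groups
# get a multiplier below 1, under-represented groups above 1.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Unweighted vs. weighted support for candidate B
raw_b = sum(pref == "B" for _, pref in respondents) / len(respondents)
total_weight = sum(weights[g] for g, _ in respondents)
weighted_b = sum(weights[g] for g, pref in respondents if pref == "B") / total_weight

print(f"Unweighted B support: {raw_b:.0%}")      # 50%
print(f"Weighted B support:   {weighted_b:.0%}")  # 70%
```

The weight is simply each group’s population share divided by its share of the sample, so a group that answers surveys too readily gets a multiplier below 1, and a hard-to-reach group gets one above 1.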

Along with weighting, pollsters are paying more attention to survey respondents they used to discount.

“Some people will start a poll, they’ll tell you who they’re going to vote for and then they say, ‘I’m done. I don’t want to talk to you anymore. Goodbye,’” Don Levy, director of the Siena College Research Institute (SCRI), which helps conduct polls for the New York Times, told CNBC. “In 2020 and 2022, we didn’t count those people.”

But this time around, Levy said, they are counting the “drop-offs.”

They found that if they had counted those impatient respondents in 2020 and 2022, their poll results would have moved “about a point and a quarter in the Trump direction,” Levy said, eliminating roughly 40% of their error.
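As a rough illustration of why counting drop-offs matters, the sketch below recomputes a topline with and without them. The counts are invented, chosen only so the shift lands near the point-and-a-quarter figure Levy described; they are not Siena’s data.

```python
# Hypothetical illustration of counting "drop-off" respondents:
# people who named a candidate, then hung up before finishing.
# All counts are invented for the example; they are not Siena data.

completes = {"Trump": 460, "Opponent": 500}  # finished the full survey
drop_offs = {"Trump": 63, "Opponent": 40}    # answered only the horse race

def trump_share(*groups):
    """Trump's percentage across one or more respondent groups."""
    trump = sum(g["Trump"] for g in groups)
    total = sum(sum(g.values()) for g in groups)
    return 100 * trump / total

before = trump_share(completes)
after = trump_share(completes, drop_offs)
print(f"Completes only:      Trump {before:.1f}%")
print(f"With drop-offs:      Trump {after:.1f}%")
print(f"Shift toward Trump:  {after - before:+.1f} points")
```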

Levy added that SCRI is also taking an extra step to target Trump voters by modeling their sample to include a higher survey quota for people who are considered “high-probability Trump voters in rural areas.”

“If you think of them as M&Ms, let’s say the Trump M&M vote is red,” Levy said. “We have a few extra red M&Ms in the jar.”
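In sampling terms, the extra red M&Ms amount to oversampling a subgroup and then applying a design weight so it still counts in proportion to its share of the electorate. Below is a minimal sketch under assumed, illustrative numbers; the article does not disclose SCRI’s actual quotas or model.

```python
# Sketch of the "extra red M&Ms" idea: oversample a hard-to-reach
# subgroup, then down-weight it so the topline stays in proportion.
# The population shares and quotas are illustrative assumptions,
# not SCRI's actual design.

POP_SHARE = {"rural_high_prob_trump": 0.20, "everyone_else": 0.80}
QUOTA = {"rural_high_prob_trump": 0.35, "everyone_else": 0.65}
N = 1000  # target sample size

# Fill the sample according to the quotas, not the population.
sample = [group for group, q in QUOTA.items() for _ in range(int(N * q))]

# Design weight = population share / quota share, so each extra
# "red M&M" counts for less than one full respondent.
weight = {g: POP_SHARE[g] / QUOTA[g] for g in QUOTA}

# Sanity check: weighted shares recover the population shares.
for g in QUOTA:
    weighted_share = sum(weight[g] for s in sample if s == g) / N
    print(f"{g}: sampled {QUOTA[g]:.0%} -> weighted {weighted_share:.0%}")
```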
