Can we trust the polls on Harris vs Trump — and how do they work?
It is now less than two weeks until election day, and the polls are no closer to showing who will be the next president of the United States.
In the past few days, polls have ranged from a 4-point Harris lead to a 3-point Trump lead overall.
With this much variation, can (and should) we really trust the polls?
The polls “failed” before
In 2016 and 2020, those following the polls may have been shocked to see the results roll in, on both a national and state level.
In both cases, the polls overstated support for the Democratic candidate when compared to the actual election results.
At this point in the 2016 race, ABC News polls showed Hillary Clinton 12 points ahead of Donald Trump, seemingly paving the way for a win. But in the end, Clinton finished just 2 points ahead in the popular vote, while Trump ultimately won the Electoral College.
Even though Joe Biden won comfortably in 2020, his 4.5-point margin over Donald Trump fell well short of Fox News poll estimates, which projected an 8-point Biden victory just days before the election.
This year, pollsters are attempting to correct some of their mistakes from 2020 — though the same was said about 2016. So how can it be that predictions were off, two cycles in a row?
“It’s not the most satisfying answer, but because we don’t know everything, it’s impossible to account for everything,” explains Dr Tristan Hightower, professor of political science at Bryant University.
Since unique factors affecting how people vote are constantly evolving, it can be difficult to identify all of them until after the fact. And the greatest test of accuracy is only possible on election day, when comparing real votes to the polls.
“In 2016, where Trump’s margins were largely underestimated, there was some talk of: Are people just afraid to say they’re voting for Trump?” says Dr Hightower. “In reality, we don’t really see that to be the case in social experiments. What is more likely is that Trump voters were less likely to respond to polls. That’s one example of a non-response bias — and it’s something that pollsters didn’t know about at the time, and they couldn’t account for it.”
The trouble with polling bias
When we talk about polling “bias”, this does not necessarily mean an intentional skew of the data, or a political angle.
“Slight changes in survey methodology can have substantial differences for the outcome of a particular poll. That’s for a variety of reasons,” explains Dr Hightower. “You can introduce bias very easily into a poll. Even when you change something small, you can have errors which are unaccounted for.”
When pollsters create their methodology, each parameter that is introduced can shift the final result. “Bias exists in any survey methodology,” Dr Hightower adds.
However, without these weightings, many polls would not paint an accurate, full picture. Polls need to be adjusted to take into account factors like education, race, and region.
“If a poll has just 20 percent female respondents, you can make those responses worth a bit more, because we know that female voters should account for roughly 50 percent of the population. So you can assign, literally, a heavier weight,” explains Dr Hightower.
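The weighting Dr Hightower describes can be sketched in a few lines of Python. The responses and population targets below are invented purely for illustration: a toy sample that is only 20 percent female gets reweighted toward a 50/50 population.

```python
# Hypothetical post-stratification weighting example.
# Sample data is invented; "A" and "B" stand in for the two candidates.

sample = [
    # (gender, candidate preference)
    ("female", "A"), ("female", "A"),
    ("male", "A"), ("male", "B"), ("male", "B"),
    ("male", "B"), ("male", "B"), ("male", "A"),
    ("male", "B"), ("male", "B"),
]

# Assumed population shares: roughly half female, half male.
target = {"female": 0.5, "male": 0.5}

# Observed shares in the sample (here: 20% female, 80% male).
counts = {}
for gender, _ in sample:
    counts[gender] = counts.get(gender, 0) + 1
observed = {g: c / len(sample) for g, c in counts.items()}

# Each respondent's weight is target share / observed share,
# so an underrepresented group literally counts for more.
weights = {g: target[g] / observed[g] for g in target}

# Weighted support for candidate A.
weighted_a = sum(weights[g] for g, choice in sample if choice == "A")
total_weight = sum(weights[g] for g, _ in sample)
print(round(weighted_a / total_weight, 3))  # → 0.625
```

In this toy example, candidate A polls at 40 percent unweighted but 62.5 percent once female responses are weighted up, which shows how a single weighting decision can move the headline number.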
Another example lies in state-level features that shape voting habits, which pollsters can adjust for if they have experience in that region.
“Residents in some states are particularly disposed to a certain factor that makes them unique. In Florida, there are several things that concern citizens there specifically — natural disasters by way of hurricane, as a recent example. Sometimes we know about those factors and we can account for them, and sometimes we don’t know that these things exist,” says Dr Hightower.
Yet these state-level quirks can also make it difficult to predict a national election result. Many leading national polls have a sample size of just 1,000 across the country, which is statistically sound, but tricky for the US electoral system.
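That sample size of 1,000 comes with a well-known statistical price tag: a margin of error of roughly plus or minus 3 points. A back-of-the-envelope calculation (using the standard 95 percent confidence formula, not any specific poll's methodology) shows why:

```python
# Standard margin-of-error estimate for a simple random sample.
import math

n = 1000   # sample size of a typical national poll
p = 0.5    # worst-case candidate share, which maximizes the error
z = 1.96   # z-score for a 95 percent confidence interval

margin = z * math.sqrt(p * (1 - p) / n)
print(f"{margin * 100:.1f} percentage points")  # prints: 3.1 percentage points
```

A roughly 3-point margin of error on each poll helps explain why results spanning a 4-point Harris lead to a 3-point Trump lead can all be statistically consistent with a near-tied race.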
“But of course, we have an electoral college system,” adds Dr Hightower. “So that doesn’t always directly translate to who would actually win an election. And it is very, very difficult to account for that.”
What about partisan pollsters?
Though most pollsters steer clear of political affiliation, in some cases, polls will be sponsored or carried out by partisan groups.
But just because a poll is carried out by a Democrat-leaning pollster, or a Republican Senate campaign, does not necessarily mean that its results are untrustworthy.
“As far as a malicious bias influencing the outcome of a poll, I don’t see much incentive for that for the pollsters. If you look at a lot of the national polls, most of them are pretty close. Where that difference is coming in is where people in good faith have made different decisions based on weighting,” says Dr Hightower.
“So I wouldn’t characterize the majority of polls as ‘biased’ [in the traditional sense], regardless of whether they’re run by more liberal or conservative leaning institutions,” he adds.
Most of the “spin” on the results comes in later, at the editorial or analysis stage (think: a headline saying that a candidate is winning by a mile, when in reality they are just 2 points ahead).
While it is mostly in pollsters’ best interests to be accurate, of course, there are instances where this is not the case. FiveThirtyEight provides a useful ranking of over 300 pollsters, based on their transparency and previous accuracy.
The polls are calling — is anybody home?
Even the format of the poll itself can impact the results, and, most importantly, determine who actually responds.
In this election cycle, polls have taken the form of texts, emails, online surveys, phone calls, and in-app experiences. These can yield different respondents, and therefore different responses.
“Traditionally, we think of polls as random phone number dialing. These days, we know that a lot of people don’t answer their phone, and the ones that do answer their phone are usually from a different demographic than the whole population,” says Dr Hightower. “So doing just random phone number dialing is not going to yield you a representative sample. When you reach out via mail or online, we can see differences there. There are also different forms of non-response, so people who choose not to engage in polls.”
In the past few years specifically, several major pollsters (including Ipsos and CNN/SSRS) have started recruiting for online probability panels, rather than random digit dialing.
There is also the question of who is taking the time to answer these polls, especially when several mainstream polls run to dozens of questions, making the full survey an hour-long endeavor.
In an evolving digital landscape, pollsters are adapting to new needs and expectations by making some polls shorter and more accessible.
“There’s not usually a huge economic gain by participating in these things. Pollsters can meet people where they are, though,” says Dr Hightower. “We’ve seen that these 30-question polls are really not connecting with younger voters. So we can do shorter polls. Pollsters are using different length polls, and modes of reaching out. We see a switch to online versions of polls versus talking on the phone.”
A poll is a poll, nothing more
What is clear from all of these factors is that differences in poll results begin before people have even had a chance to give their responses.
With an average margin of just 1.8 percentage points between Kamala Harris and Donald Trump, it becomes riskier to say that either one has a definitive chance of winning, warns Dr Hightower.
“Some people say: ‘We don’t trust polls.’ My response to that is: ‘You have to look at the poll for what it is.’ We’re looking at a representative sample, at a particular slice in time, about what people are thinking,” says Dr Hightower. “The big trouble, in my opinion, comes in when we try to forecast based on what’s happening in those polls at that moment in time.”
In less than two weeks, the American people will have their chance to vote, and the next US president will be decided. Only then will the polls be put to the test.