Blog | October 31, 2022
The Peril and Promise of Election Polls
Over the last couple of election cycles, election surveys have faced mounting criticism, much of it justified, for producing estimates that differed significantly from the eventual results. In both 2016 and 2020, the polls were biased in favor of Democratic candidates (although the same bias was not evident in 2018). The 2020 election was a particularly bad year for pollsters, who produced some of the largest errors of the past 40 years.
This year, Americans (and especially disappointed liberals) are much more circumspect about the accuracy of election polls and their ability to predict the outcomes of the midterm elections. And caution is warranted. Here are a few things to remember as we head down the home stretch:
Sources of Error
Because surveys are based on a sample of a given population, the estimates they produce may not exactly match the “true” population value. This uncertainty is typically captured by a survey’s margin of sampling error, which describes the range of outcomes we might expect from sampling alone. However, the margin of error is not the only source of error to be concerned about, even if it is the most familiar. Pollsters have long noted that surveys are subject to errors from framing or question wording, interviewer effects, and response bias. Election polls have additional potential problems. In a recent New York Times interview, Patrick Ruffini outlined how a poll’s margin of error is woefully insufficient at capturing the true extent of possible error, stating, “It doesn’t measure the totality of the error that could happen: people could change their minds or you could be surveying a completely different sampling universe of people that actually show up on Election Day.” These types of errors are very difficult to quantify and could easily double the size of the stated margin of error.
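To make that concrete, here is a rough back-of-the-envelope sketch in Python of the standard margin-of-sampling-error calculation. It assumes simple random sampling, a 95 percent confidence level, and a worst-case 50/50 split; the sample size is illustrative, not drawn from any particular poll.

import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of sampling error for a proportion,
    assuming simple random sampling (real survey designs add further error)."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of 1,000 respondents:
print(round(100 * margin_of_error(1000), 1))  # ~3.1 percentage points
# If, as Ruffini suggests, the total error can easily be double that,
# the realistic band is closer to +/- 6 points.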
Low-Quality Polls
The Internet lowered the barriers to entry in many professions. Suddenly, anyone with an Internet connection could fashion themselves into an authoritative voice or expert on a particular subject. In the pre-Internet era, some combination of experience, training, and education was required to conduct surveys. However, as the cost of conducting traditional telephone surveys rose, inexpensive alternatives flooded the market, and cheap data sources allowed low-budget operators to produce work of dubious quality. To make matters worse, random digit dialing (RDD) telephone surveys conducted by live interviewers, once considered the “gold standard” in survey research, were not immune to problems. Falling response rates and nonresponse bias raised legitimate questions about whether traditional methods could still produce more accurate results. Today, most pollsters recognize that there is no longer a “gold standard” for survey research methodology. Without a set standard for survey quality, it becomes more difficult to know which polls to trust.
Polling Alone: Reading Polls in Context
We have an abundance of polls to help us make sense of the evolving political landscape. This is good! Plentiful polling means we have more information, not less, to assess candidate quality, public interest in particular campaigns, and the issues that are most salient. But it also means that treating any particular poll in isolation is both unnecessary and unhelpful. Any new election poll should be viewed as an additional clue about what may unfold. It’s probably unrealistic to expect media companies and other public pollsters to highlight their competitors’ work, but analysts and commentators should help readers make sense of new findings by placing them in the context of previously published surveys. Polling aggregators are also helpful in this regard, but they are only as good as what goes into them, and they typically don’t offer much information about the voting preferences of important subgroups. This matters because most election polls are based on relatively small samples, so subgroup voting preferences can vary significantly from poll to poll.
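The same back-of-the-envelope arithmetic shows why subgroup estimates bounce around so much. The sample sizes below are hypothetical, and the simple-random-sampling assumption again understates the real error.

import math

# Simple-random-sampling approximation of the 95% margin of error.
moe = lambda n: 1.96 * math.sqrt(0.25 / n)

print(round(100 * moe(1000), 1))  # full sample of 1,000: ~3.1 points
print(round(100 * moe(200), 1))   # a subgroup of 200: ~6.9 points, more than double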
Errors abound in any endeavor that requires human beings to give precise and immutable answers about their future behavior. Although polls were not designed to predict future events, they nonetheless provide plenty of useful predictive data. Yet, like any tool, polls are most useful if you appreciate how they work and understand their limitations. A few things will help. First, don’t trust any pollster who is not completely transparent about their methods and results; that means publishing a full topline and a methodology statement. Second, polls are most useful when they are analyzed together: patterns that emerge across polls are more likely to reflect something real. And finally, remember that the margin of error is not the sole source of error, and adjust your expectations accordingly.
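As a closing illustration of the “analyze polls together” point, here is a minimal sketch of a sample-size-weighted polling average; the polls and margins below are entirely hypothetical.

# Each entry: (candidate's margin in points, sample size). Hypothetical data.
polls = [(+3, 800), (+1, 1200), (+4, 600), (-1, 1000)]

# Weight each poll's margin by its sample size, then average.
weighted_avg = sum(m * n for m, n in polls) / sum(n for _, n in polls)
print(round(weighted_avg, 1))  # ~1.4 points, steadier than any single poll

Even a crude average like this smooths out single-poll noise, though it cannot fix errors that all polls share, which is exactly what happened in 2016 and 2020.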