
The Ins and Outs of Polling

Tech & Innovation | October 24, 2016

Bernie Sanders was not supposed to win the Democratic Primary in Michigan. For weeks, poll after poll showed Hillary Clinton in the lead by double-digit margins. On the eve of the primary, FiveThirtyEight estimated that Clinton had a greater than 99 percent chance of winning the majority of the state’s delegates. But on March 8, 2016, the primary results shocked pollsters. Sanders, against all odds, eked out a victory by a margin of a little over 1 percentage point.

Deborah Schildkraut, a professor of political science at Tufts, stresses that these kinds of issues with polls are anomalies. “Part of the reason why those errors stick out to us so much is because they aren’t that common,” she said. While such upsets may only happen once per election cycle, they can be explained by understanding how polling is conducted, analyzed, and presented.

Cell phones and the Internet are two common ways of reaching poll respondents. Before these technologies were developed, polling was primarily conducted through random digit dialing of landline telephones. This method worked well because most households had one landline, and it was easy to pinpoint the respondent’s location based on their area code. This created a random sample of respondents whose answers were easily generalized to the whole population.

However, the widespread use of cell phones has presented a challenge to pollsters. Since people typically retain their local number even when they move out of state, it is hard to determine where a respondent will vote. Additionally, many households now have multiple cell phones in addition to, or instead of, a single landline. According to the Centers for Disease Control and Prevention, 41 percent of households have both a cell phone and a landline, and another 47 percent are cell-phone-only. Because landline-only polls miss a significant part of the population, Schildkraut cautions against them. “If a [telephone] poll does not include cell phones…then it’s really not going to be a good poll,” she said.

Unfortunately, polling through cell phones is expensive. While “robo-dialing” is legal for landlines, laws prohibit this randomized automated calling to cell phones. Because an actual person must be employed to dial each number, attaining a large random sample becomes more costly and more time-consuming. This reduces the accuracy of polling done on cell phones, because the smaller samples that result are less representative of the population as a whole.

Whether on cell phones or on landlines, it is becoming harder to reach respondents at all. As more and more businesses, politicians, and non-profits started to telemarket, people became wary of answering their phones. “Then came declining response rates because people were getting called for everything under the sun,” said Schildkraut. As telephone polling becomes more difficult and error-prone, pollsters are looking for new ways to reach respondents.

The next frontier of political polling is collecting responses on the Internet, which allows more organizations to cheaply collect a large number of responses. The majority of Internet polls are posted on social media and other relevant websites. Participants voluntarily respond to the questions, and the pollsters synthesize the information across these sites.

This method, according to Schildkraut, is not without its issues. “Firms that do this argue that they are able to approximate a representative population,” she explained. “The methodology is very complex and controversial.” Still, online firms are often able to closely predict election outcomes. According to the New York Times, online polls done by YouGov and Ipsos performed comparably to their telephone counterparts in predicting the actual results of the 2012 presidential election.

Beyond the issue of random sampling that makes it difficult to generalize to the entire population, all methods of polling are prone to errors that can impact the results. Two main types of error are at play. The first is statistical, which is incurred by the mathematical nature of sampling. This is seen as the margin of error of a poll, often reported as “plus-or-minus.” Generalizing a small group of responses to the entire populace results in a small loss of accuracy. This error is well understood and easily measured; it is present in every poll.
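The margin of error described above follows directly from the size of the sample. As a rough sketch, using the standard formula for a simple random sample (the sample sizes here are illustrative, not drawn from any particular poll):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a simple random sample.

    n: number of respondents
    p: observed proportion (0.5 is the worst case, so it is the
       conservative default pollsters often report)
    z: z-score for the confidence level (1.96 for roughly 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical 1,000-person poll carries a margin of error of about
# plus-or-minus 3 percentage points.
print(round(100 * margin_of_error(1000), 1))  # 3.1
```

Note that shrinking the margin of error gets expensive quickly: because of the square root, halving it requires quadrupling the number of respondents.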

The second main type is measurement error, where the pollsters’ questions may mislead respondents—this is often caused by wording and tone. For example, when asking voters about their opinions on healthcare policy, calling the current system Obamacare versus the Patient Protection and Affordable Care Act is likely to affect the way many individuals respond. Schildkraut explained, “Even if you are not trying to bias the results, the results will differ based on the wording that you use.” But for some polls, this bias isn’t seen as entirely problematic. Polls aim to assess public perception and opinion, which is often influenced by the presentation of an issue rather than its substance.

In order to correct for these issues, poll results are heavily processed before they are published. One of the most important adjustments is known as weighting. This method uses known demographics to correct the results for discrepancies in the demographic makeup of respondents. Gender, race, age, education, socioeconomic status, and location all contribute to weighting. According to the 2010 Census, 50.8 percent of the US population identifies as female. An individual poll, however, may only have 40 percent of respondents identify as female. In this case, female respondents’ answers would be weighted more heavily to accurately represent the demographics of the national population. Weighting helps to correct for some of the error introduced by a sample that is not entirely random.
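The weighting adjustment can be sketched in a few lines. The 50.8 percent figure comes from the Census example above; the 40/60 sample split and the support levels are hypothetical, and real pollsters weight on many demographic variables at once rather than gender alone:

```python
def weight_for(pop_share, sample_share):
    """Post-stratification weight: each respondent in a group counts
    pop_share / sample_share times toward the weighted result."""
    return pop_share / sample_share

# Census: 50.8% of the population is female; hypothetical sample: 40%.
w_female = weight_for(0.508, 0.40)  # ~1.27: female answers count more
w_male = weight_for(0.492, 0.60)    # ~0.82: male answers count less

# Illustrative support levels for a candidate, by group.
support_female, support_male = 0.60, 0.40

raw_result = 0.40 * support_female + 0.60 * support_male        # 0.48
weighted_result = (0.40 * w_female * support_female
                   + 0.60 * w_male * support_male)              # ~0.50
```

Here the unweighted poll would understate the candidate’s support because women, who favor the candidate in this hypothetical, are underrepresented in the sample; weighting nudges the result back toward what a demographically balanced sample would show.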

With so many challenges inherent in polling, it can be difficult to decide which polls are the most accurate and the most representative. While the methodology and the margin of error on a poll are important, polls do not exist in a vacuum. In order to get a better sense of a single poll’s accuracy, it is helpful to view it in the context of a multitude of other polls. “If you see that all of the polls that are out there being done this week show one result, and there’s one survey firm that tends to be different from all of the rest, that’s a red flag,” Schildkraut said.

Since individual polls can suffer from errors, Schildkraut advocates for poll consumers to turn to averages and aggregates such as those found at RealClearPolitics and Huffington Post’s “Pollster” to understand the current state of the race. “I think that’s helpful in this era of such a proliferation of polling with so many different methods being used…[Focusing] on trends is the best way to be a consumer of the polls rather than putting too much stock in any single poll.”
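The averaging idea Schildkraut describes can be sketched simply. The poll numbers below are hypothetical, and real aggregators such as RealClearPolitics and Pollster apply their own inclusion rules, recency windows, and weights rather than a plain mean:

```python
def polling_average(polls):
    """Unweighted mean across a set of recent polls. A single
    outlying poll moves the average far less than it differs
    from the pack, which is why trends beat any one poll."""
    return sum(polls) / len(polls)

# Hypothetical candidate shares from five recent polls; 52.0 is
# the outlier "red flag" poll that disagrees with the rest.
recent = [47.0, 49.0, 48.0, 52.0, 48.5]
print(polling_average(recent))  # 48.9
```

Even with the outlier included, the average lands within about a point of the other four polls, illustrating why aggregates are more stable than any individual survey.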