
EASP – European Association of Social Psychology

An inquiry on election polls

01.11.2017, by Sibylle Classen in opinion

By John Curtice, Professor of Politics at the University of Strathclyde, and President of the British Polling Council

Opinion polls have not had a good press of late – on either side of the Atlantic. In the UK, they are widely thought to have inaccurately anticipated that the 2015 general election would produce a hung parliament (the Conservatives secured a narrow overall majority), that the UK would vote to remain in the EU (it voted to leave) and that the Conservatives would win an (enhanced) overall majority in 2017 (they lost their majority entirely). Meanwhile in the US the polls are accused of forecasting that Hillary Clinton would win the 2016 presidential election when, in the event, Donald Trump pulled off what was widely regarded as a shock victory.

In truth, the record is not as bad as often portrayed. In the US, the final nationwide polls on average put Hillary Clinton three points ahead, when in the event she was ahead by two points. That is about as good as can be expected of any set of polls, which should certainly not be blamed for how the Electoral College system works. True, there was a tendency to overestimate Clinton's strength in the polls of individual key battleground states, but even in these instances the reported poll leads were sufficiently narrow not to justify the assumption that Mrs Clinton was home and dry.

In the UK, there is no disputing that all the polls underestimated the Conservative lead over Labour in the 2015 election, although they more or less got everything else right. But the polls did not point unambiguously to a victory for Remain in the EU referendum. Of the 34 polls that were published during the official campaign, slightly more (17) put Leave ahead than said Remain were ahead (14 – three reckoned it was a tie). Although four companies that published ‘final’ polls put Remain (narrowly) ahead, two others said Leave were ahead. Meanwhile, although most polls overestimated the Conservative lead over Labour in the 2017 general election, not all were wrong. One poll actually managed to underestimate the Conservative lead slightly, while one attempt at modelling a large swathe of polling data in order to estimate the outcome in seats underestimated the eventual Conservative tally.

Still, even taking these caveats into account, it is not the best of records. However, opinion polling is a risky business at the best of times. It is constrained by two requirements – cost-effectiveness and speed. Most polling in the UK and nearly two-fifths of that in the US is conducted (relatively cheaply) over the internet with panels of people who have previously been persuaded to sign up as willing participants in online surveys. Otherwise, polls are conducted over the telephone, but such polls typically have very low response rates. In both cases, interviewing is conducted over just a couple of days. As a result, those who respond to polls are likely to consist disproportionately of the willing and the accessible – whose views and behaviour may not necessarily be typical of their fellow citizens.

Of course, polling companies are alive to these risks. They know, for example, that young people are more difficult to contact. They may well make particular efforts to get hold of them. But if, as is usually the case, they do not entirely succeed they will weight their data to ensure that, after weighting, their samples do reflect the age profile of the target population. Much the same principle can be applied to other demographic characteristics for which there is reliable information about their distribution in the target population.
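The demographic weighting described above can be sketched in a few lines of code. This is a minimal illustration of simple cell weighting, not any polling company's actual procedure; the age groups, population shares and sample composition below are invented for the example.

```python
# A minimal sketch of demographic (cell) weighting. Each respondent falls
# into one age group; each group's share of the target population is assumed
# known (e.g. from census data). All figures here are illustrative.

from collections import Counter

def cell_weights(sample_groups, population_shares):
    """Return one weight per respondent so that the weighted sample matches
    the population's group shares: weight = population share / sample share."""
    n = len(sample_groups)
    counts = Counter(sample_groups)
    return [population_shares[g] / (counts[g] / n) for g in sample_groups]

# Hypothetical sample of 10 respondents in which young people are
# under-represented: 20% of the sample vs 30% of the population.
sample = ["18-34"] * 2 + ["35-54"] * 4 + ["55+"] * 4
shares = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

weights = cell_weights(sample, shares)
# Each 18-34 respondent now counts for 1.5 people (0.30 / 0.20), so the
# weighted sample reflects the population's age profile.
```

The same calculation extends to any characteristic whose population distribution is reliably known – which is precisely why, as the following paragraphs note, the choice of which characteristics to weight by matters so much.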

However, this does not necessarily resolve all the problems. Demographic weighting only works if it is applied to those characteristics that are correlated with vote choice – and these can change over time. One of the features of the 2016 EU referendum in Britain is that vote choice was more strongly associated with educational background than it is in general elections. The same was true of the 2016 US election. Polls with too many graduates (as is often the position) were at risk of overestimating support for Remain and for Clinton. One reason for the error in the individual state polls in the US at least, was that insufficient attention was paid to the need to weight by education.

Even then, demographic weighting is only adequate so long as the respondents in any particular demographic group are typical of their peers. However, this is not necessarily the case either. One of the lessons many of the UK polling companies took from the inquiry into what went wrong in 2015 was that the young people they interviewed who said they would vote and back Labour (as young people were disproportionately inclined to do) did indeed turn out and vote Labour – but they were atypical of their peers in their propensity to vote.

This is a familiar problem for pollsters in the US, where low turnouts – and thus the potential for differential turnout – have long been a feature of the country's elections. One way of trying to deal with this is to weight the data to the demographic profile of those who usually go to the polls rather than that of the population as a whole. In the UK in 2017 many pollsters engaged anew in this and a variety of other strategies to better reflect the demographic differences in likelihood of voting that pertained in 2015.

Trouble is, patterns of turnout can change too. In the 2016 US election turnout amongst (mostly Clinton-supporting) African Americans fell, while it rose in (Trump-supporting) rural counties in key battleground states. Meanwhile, initial evidence on turnout in the 2017 UK election suggests the age gap was narrower than in 2015 (while, at the same time, the age gap in vote preference widened). Polls that weighted their data to reflect the pattern of turnout two years previously were at risk of overestimating the Conservatives' strength. Indeed, examination of the detailed tables published by each company suggests that much of the weighting of the polls in the 2017 election made the polls less rather than more accurate. Accurate polling depends not just on ascertaining how people will vote, but also on whether they will vote at all – and recent experience suggests that getting the latter right can prove remarkably difficult.