“Hands up who’s never been polled? Right, there, I’ve polled you.” – Bill Hicks
We’re used to opinion polls forming the basis for political debate, but the very idea of polling has come under increasing scrutiny in recent years.
In the 2015 General Election, most of the pollsters predicted a hung parliament, with the only real difference being whether the projected result would marginally favour Labour or marginally favour the Conservatives. As it transpired, David Cameron was returned to power with a hugely increased majority and an almost three-figure advantage in seats.
We may yet see a similar result in the race for the Republican Party nomination in the USA. Polling data for last night’s New Hampshire primary may have been accurate in calling the winners in both Republican and Democratic races, but not in the margins of victory. Donald Trump and Bernie Sanders won their respective primaries as expected, but their 20+ point leads were anything but.
In last week’s Iowa caucuses, polling had suggested that Donald Trump was the overwhelming favourite to win, with forecasters at Fox News suggesting he had as much as an eleven-point lead over his nearest rivals. Of the 26 polls taken in January, only six predicted anything other than a Trump victory.
As it turned out, Ted Cruz was the clear winner in Iowa, taking 28% of the vote, while Trump (24%) only narrowly edged out Marco Rubio (23%) for second place. Almost everyone, from YouGov (who predicted Trump would take 39% of the vote) to FiveThirtyEight’s Nate Silver (a slender 1% lead on 26%), was off in their estimations. Pollsters were also off on the Democratic side, where Clinton defeated Sanders by just 0.2%; a much more comfortable 8% margin had been predicted as the polls opened.
As a result, there have been calls to either revamp the way we look at forecasting, or perhaps do away with it altogether. In the wake of the 2015 General Election polls, Lord Ashdown suggested that they could do more harm than good, perhaps even to the point of influencing voter turnout, while Lord Foulkes has proposed regulation of future polling, and a moratorium in the run-up to elections.
Even Ben Page of Ipsos Mori has suggested that “there are some really interesting questions about whether we should stop doing political polling altogether, or we should say that we’re not going to do it unless the media sponsors are willing to invest the money seriously.”
The aforementioned Nate Silver has himself suggested that “there may be more difficult times ahead for the polling industry”.
That may be true, but we should have a lot more faith in the numbers.
It’s not soothsaying
Projections based on polling data are not a new challenge. The 1948 US presidential election is arguably the highest-profile example, when pre-election polling suggested a victory for Thomas Dewey against Harry Truman. The polls overwhelmingly pointed to the Republican candidate, with four out of five US newspapers running morning headlines with Dewey as president. Truman, written off overnight, was eventually declared the winner by more than 4% and two million votes.
One of the problems is that polling has become confused with forecasting. A poll is not meant to be a forecast of how people *will* vote, it is a snapshot of how they say they will at a certain point. From this, we can attempt to forecast how an election may pan out, but the poll itself is simply a weighted (and complicated) cross section of opinion at a given time.
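That “weighted cross section” can be sketched in a few lines of Python. The age groups, population shares and responses below are invented purely for illustration – real weighting schemes use many more variables (region, past vote, social grade and so on) – but the mechanics are the same: respondents from under-represented groups count for more, and vice versa.

```python
# A minimal sketch of turning raw responses into a weighted snapshot.
# All figures are hypothetical, for illustration only.

# Raw responses: (age_group, stated_voting_intention)
responses = [
    ("18-34", "Labour"), ("18-34", "Labour"), ("18-34", "Conservative"),
    ("35-54", "Conservative"), ("35-54", "Labour"),
    ("55+", "Conservative"), ("55+", "Conservative"), ("55+", "Labour"),
]

# Share of the electorate in each age group (assumed figures).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Share of the *sample* in each group -- rarely matches the population.
sample_share = {g: sum(1 for a, _ in responses if a == g) / len(responses)
                for g in population_share}

# Weight each respondent so the sample mirrors the electorate.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

totals = {}
for group, party in responses:
    totals[party] = totals.get(party, 0.0) + weights[group]

total_weight = sum(totals.values())
headline = {party: round(100 * w / total_weight, 1)
            for party, w in totals.items()}
print(headline)  # {'Labour': 49.2, 'Conservative': 50.8}
```

Note how the weighting flips the headline: Labour leads in the raw sample (4 responses to 4, but concentrated among the over-sampled young), while the weighted figures put the Conservatives narrowly ahead.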
Gauging opinion has always been difficult, but never more so than now. To achieve reliable data, there needs to be a fair cross section of society involved in the polls, and society has never been more diverse or opinionated.
Young people are more likely to give away their voting intentions but less likely to engage with pollsters; all of us are less likely to answer our mobile phones to pollsters; and the whole population is far more likely to exaggerate its intention to vote. Internet polling should make things easier, but it tends to reach a younger demographic. One of the reasons exit polls are so much more accurate (and they really are) is that pollsters know for certain that those who take part actually voted.
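Even a perfectly random sample carries irreducible sampling error. A back-of-the-envelope sketch of the textbook 95% margin of error for a proportion – z·√(p(1−p)/n) – shows why a poll of a thousand people can easily miss a narrow result; with weighting and non-random samples, the real-world error is larger still. The sample size and vote share below are illustrative only.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a proportion p
    estimated from a simple random sample of size n (textbook formula)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# A typical poll of ~1,000 respondents with a candidate on 28%:
print(round(margin_of_error(0.28, 1000), 1))  # prints 2.8
```

So a candidate polling 28% could plausibly be anywhere from roughly 25% to 31% – more than enough slack to turn a predicted winner into a runner-up in a tight three-way race.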
Of course, there is also a shifting political spectrum to consider. Traditionally, poll results were largely divided between those who intended to vote Conservative, Labour or Liberal Democrat. Gone are the days of three-party politics.
The investigation into the accuracy of polling at the 2015 General Election concluded that Conservative voters were simply harder to track down. It sounds simplistic, but it rings true: veteran Labour canvassers will tell you they never encountered a single Thatcher voter, and yet she won three general elections.
As Nate Silver recently explained, “the foundation of opinion research has historically been the ability to draw a random sample of the population. That’s become much harder to do.”
Polling is often incredibly accurate
All is not lost for pollsters; in fact, they are refining their processes and getting better. While few predicted the margin between the two biggest parties at the General Election, the polls were very accurate in predicting the successes and failures of the SNP, UKIP, the Greens and the Liberal Democrats.
Equally, polls across Scotland and Wales were largely reflected in the final results. Partly this is down to size – Scotland has fewer voters and therefore fewer variables – but it is also because the data was more current. The independence referendum in Scotland allowed pollsters to refine their techniques and their data models. Tools such as our own Scotland Votes site were much more accurate because they had developed an up-to-date understanding of the electorate.
So what did the polling in the General Election in Scotland look like? Remarkably accurate. Here are the opinion polls between the independence referendum and the General Election in 2015.
In an era of declining affiliation to established parties and growing numbers of “undecideds”, this is no mean feat. The negatives tend to be highlighted far more readily than the positives. Polling as a science is facing new challenges and, as a result, is broadening its scope and refining its approaches. The most recent, tried-and-tested examples tend to provide the most accurate results.
The controversy over polling might lie with the pundits’ expectations rather than the pollsters’ forecasts. To paraphrase one famous pollster, the polls are not wrong when it comes to measuring national sentiment, but they might fall short in determining an election. They’re only as good as the data they can gather, and we’re gathering very good data indeed.