After the error in predicting the 2015 General Election, polling is back in full swing (no pun intended). For the EU referendum, NatCen’s very popular What UK thinks website shows that 104 polls on the EU question were run between 3 September and 19 April 2016. The disparity between polls conducted online and those conducted by phone suggests there are still significant issues for the industry to address. It all raises the obvious question: what lessons, if any, have been learnt from the Election, and how far have any changes been implemented in the flurry of polls we are seeing today?
The obvious first port of call is to return to the findings and recommendations of the inquiry into the 2015 British general election opinion polls. The inquiry, led by Patrick Sturgis of the National Centre for Research Methods, published its final report in March. The report is an impressive piece of methodological analysis, examining a number of hypotheses in turn. It includes re-analysis of the original microdata, a review of the results from ‘re-contact’ surveys carried out by the pollsters after the General Election, and even the construction of alternative weighting schemes. The wide variety of complex, and sometimes opaque, methods used to weight the samples and adjust the results is particularly striking. Despite the time and effort that goes into statistical adjustments, the inquiry argues that the single most important reason for the error was unrepresentative achieved samples.
Over the long term, the inquiry calls for fewer polls of a higher quality, which would allow polling companies to improve the representativeness of their samples through, for example, longer fieldwork periods and more re-contact attempts. But the sheer number of polls we have seen on the EU referendum shows we are not there yet. Another recommendation was the creation of a new random survey carried out at key points in the election cycle to establish a benchmark for polls to compare themselves against. The report puts forward ideas for a pre-election British Election Survey, and the possible use of a new online panel survey. Here at NatCen we have spent the last few months exploring the feasibility of a random online panel. We will be conducting more surveys in the future and, in the spirit of the inquiry, we will be as transparent as possible about the methodological challenges. Of course, we won’t see it merely as a tool for benchmarking polls, but rather as a high-quality polling resource in its own right.
Not all of the inquiry’s proposals were about improving the accuracy of polls; some were instead about making it easier to assess their quality through greater transparency. The report argues for publication of confidence intervals, as well as advance notification when carrying out an election poll and making the microdata available on request. The British Polling Council has published its response to the inquiry, including a commitment to exploring a common approach for calculating confidence intervals.
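To give a sense of what publishing a confidence interval involves, here is a minimal sketch of the standard normal-approximation (Wald) interval for a polled proportion. The figures are hypothetical, and real polls would need to account for design effects from weighting, which widen the interval beyond this simple calculation:

```python
import math

def poll_confidence_interval(p_hat, n, z=1.96):
    """95% normal-approximation (Wald) confidence interval for a
    polled proportion p_hat from a sample of size n.

    Assumes simple random sampling; weighted samples have a larger
    effective margin of error than this formula suggests."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

# Hypothetical example: 52% support in a sample of 1,000 respondents
low, high = poll_confidence_interval(0.52, 1000)
print(f"95% CI: {low:.1%} to {high:.1%}")  # roughly 48.9% to 55.1%
```

An interval this wide, spanning both sides of 50%, is exactly why reporting a headline figure of "52%" without it can mislead: the poll alone cannot distinguish a narrow lead from a dead heat.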
We have already seen fuller publication of the methods used by pollsters, including weighting, and the small print now includes more detail on methodology. The challenge of persuading the media to report confidence intervals in a meaningful way – to ensure people interpret the results as indicative rather than definitive – is perhaps bigger.
What else could be done? There could be an accessible, common location on the internet for the data, which would allow further analysis and comparison, although it would likely be expensive to set up and maintain. In addition, a recent study by the Pew Research Center showed how we can provide quality assurance through comparison studies – this report highlighted the disparity in polling quality between a number of online panels. Although I would stop well short of the proposals set out in Lord Foulkes’ Bill, we may need some oversight from a regulatory body to keep a check on whether polling companies are meeting their obligations (does an organisation such as the Electoral Commission have a role to play?).
The inquiry report reminds us that the polls have a history of inaccuracy, but on most occasions this didn’t matter too much. They got caught out in 2015 because the errors meant the prediction of the overall result was wrong. The latest tests (the London mayoral and Scottish Parliament elections) suggest that predictions were far more credible than last year, with the possible exception of the failure to identify the rise of the Tories in Scotland. The EU referendum will be seen as the next test, where the stakes are perhaps higher.
Despite the positive signs so far, it is too early to know whether the change of approach in the polling industry will prevent a repeat of the 2015 election error. For the sake of all survey research, we need to ensure the reputation of polling is repaired and improved.