The Cathie Marsh Lecture on the Polling Miss

Back in November last year I attended the annual Cathie Marsh Memorial Lecture at the Royal Statistical Society, which was excellent (as it has been when I have attended before). The focus of the lecture was on polling failure and the future of survey research, and it was delivered by Professor Patrick Sturgis, who is chairing the inquiry into the performance of the polls preceding the general election. Given that the polling inquiry is due to release its results this week, I thought this would be an opportune moment to post my record and interpretation of the points made by Prof. Sturgis back in November. To be clear, the following is a mix of his and my thoughts, so if you’re interested in seeing Prof. Sturgis’ own words then you can watch the full lecture here.

The lecture began, rightly, with some kind words remembering Cathie Marsh, before engaging in a little definition. To wit, it is possible to differentiate between polls and surveys on the grounds of snootiness, quality, and purpose. Taking the latter two, more defensible, grounds, it has been argued that surveys are higher quality than polls (based, as they usually are, on random (or at least more random) samples) and that their purpose is broadly investigatory (i.e. academic) rather than political or democratic. Crucially, the point was made that this distinction is now less rigid than it was in the past. Still, even if the distinction between the two is less rigid, the fact that surveys and polls are arguably distinct on the basis of quality didn’t seem to bode well for the latter. Like any good academic, though, Professor Sturgis was quick to introduce a note of complexity.

It’s not as simple, he argued, as saying ‘the pollsters got it wrong’. Indeed, they did a good job in predicting the UKIP vote, the SNP surge, and the Liberal Democrat collapse, so it was just, alas, on the ‘main event’ that they went skew-whiff. Whilst the latter point may seem the most salient, Prof. Sturgis went on to remind the audience that without polls there may be a growth in even less accurate speculation about the outcomes of elections. There is certainly a healthy dash of truth in his statement that we couldn’t do better on that front by relying on Twitter, Facebook, and equivalent sources.[1] This, of course, does not mean that we should settle for polling as it is (and, in my experience, the polling companies have far from rested on their laurels since May), especially in light of the historical trend that was outlined wherein polls have increasingly underestimated the Conservative share of general election votes whilst at the same time overestimating the Labour share. This may mean, as remarked, that we are now using something that measures pounds to measure ounces (if you’ll forgive the imperial units).

With the magnitude of the problem established (it’s not great but still better than it could be), Prof. Sturgis turned to possible explanations for the polling miss, all of which have been circulating since the day after the general election:

  1. Late swing. In other words, a load of people might have changed their minds just before they voted (and largely moved to the Conservatives) thus rendering the polls, which were conducted at the latest the day before, wide of the mark.[2]
  2. Sampling and weighting. As Prof. Sturgis pithily put it, polls are ‘modelling exercises based on recruited samples’. So, maybe the polling companies have recruited the wrong people to the panels of respondents that they survey, or perhaps they had out-of-date or incorrect assumptions underpinning the weights that they apply to their samples to correct for unrepresentative recruitment (see the sketch of this weighting mechanism after the list).
  3. Turnout misreporting. Perhaps a load of people who said they were sure they’d vote and that they would do so for Labour ended up not being able to make it to the polling station. At the same time, perhaps more of the people who said they’d vote Conservative managed to actually do so in practice.
  4. Don’t knows or refusals. If the people who said they didn’t know who they’d vote for, or who refused to say, broke to the Conservatives more than Labour then it could explain the disparity between the polls and the election result.
  5. Question wording. If the questions that are asked do not prompt a similar decision-making process to the one that people go through before they actually cast their vote then they may give a different answer.[3]
  6. Voter registration and postal voting. It may be that issues with registering to vote disproportionately affected voters for one party (i.e. Labour), or that those who held postal votes were not accurately taken into account. As Prof. Sturgis pointed out, this is unlikely to be the case since there were relatively small numbers in both groups.

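To make the second of these explanations (sampling and weighting) a bit more concrete, here is a minimal sketch of how a poll might be weighted back to known population proportions on a single demographic variable. Everything in it is invented for illustration (the age groups, population shares, and stated vote intentions are not figures from the lecture or from any poll), and real pollsters weight on several variables at once, but the mechanism is the same: respondents from over-represented groups are down-weighted and those from under-represented groups are up-weighted.

```python
# A minimal, hypothetical sketch of post-stratification weighting on a single
# variable (age group). All numbers below are invented for illustration only.
from collections import Counter

sample = [
    # (respondent_id, age_group, stated_vote_intention)
    (1, "18-34", "Labour"),
    (2, "18-34", "Labour"),
    (3, "35-54", "Conservative"),
    (4, "35-54", "Labour"),
    (5, "55+", "Conservative"),
]

# Assumed population shares for each age group (hypothetical targets).
population_share = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}

# Observed shares in the (unrepresentative) sample.
counts = Counter(group for _, group, _ in sample)
n = len(sample)
sample_share = {group: count / n for group, count in counts.items()}

# Each respondent's weight is the ratio of population share to sample share,
# so over-represented groups are down-weighted and vice versa.
weights = {rid: population_share[group] / sample_share[group]
           for rid, group, _ in sample}

# Weighted vote-intention estimate.
vote_totals = Counter()
for rid, group, vote in sample:
    vote_totals[vote] += weights[rid]

total_weight = sum(vote_totals.values())
for party, share in sorted(vote_totals.items()):
    print(f"{party}: {share / total_weight:.1%}")
```

In this toy example the raw sample puts Labour on 60 per cent, but the weighted estimate puts the Conservatives ahead on 55 per cent, which is precisely the sense in which polls are ‘modelling exercises’: if the recruited panel or the assumptions behind the weighting targets are off, the correction will be off too.
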
We’ll come back to which of the above explanations is most convincing but, before doing so, it was noted that the polling companies’ results were surprisingly similar given their methodological differences. This may have suggested unintentional (the key word here) herding by the companies, whereby they looked at each other’s results and adjusted their methods to replicate those of their competitors (for fear of being too far from the pack). This is obviously important (and related to the point about polling as a ‘modelling exercise’ above) but it’s an issue that needs to be considered separately from the original cause(s) of the disparity from the election result.

Since you’re reading this I guess you’re aware of at least some of the implications of all the above, but we were helpfully reminded of them nonetheless. Namely, such a high-profile polling miss is likely to reduce public interest in polls and surveys because trust in them (along with trust in the media and politicians) has been dented. This could have the knock-on effect of further reducing response rates, making it even harder for pollsters and survey researchers to gain accurate results in future. This is something that the polling companies appear acutely aware of; I wouldn’t go so far as to call this an existential threat to them, but it has obviously had serious reputational repercussions and could continue to make their business harder for some time.

Despite the above, Prof. Sturgis went to some effort to moderate concerns, suggesting that the polling miss will actually have a relatively minimal impact. This was an unexpected argument but he outlined a number of reasons for thinking that it might be right. First, it’s rather difficult to estimate election results, in part because respondents are best at answering questions about their recent behaviour rather than about what they will do in the future. Thus, it shouldn’t be too much of a surprise that the polls get it wrong at times, which links to the previous point about measuring ounces with an instrument for pounds. Second, returning to the opening distinction between polls and surveys, it is likely that the damage will impact more on the former than the latter because surveys with samples that were recruited through more random means (such as the exit poll, which could also ask about recent behaviour rather than future behaviour, and the British Election Study) did a better job of approximating the outcome. It is important that Prof. Sturgis referenced this distinction again at this point in the lecture, as will be seen below. Third, a number of different research designs (phone and online, varying levels of randomness) failed to predict the result so no particular company is implicated, meaning the consequences will be spread between them. Fourth, the rise of opt-in panels (which are low cost, have a rapid turnaround, allow for ever-increasing functionality, and can accommodate client involvement in survey design) seems inexorable, so the polling miss is unlikely to stop it.

The last of the preceding points (which links to his restatement of the distinction between polls and surveys) is key, because Prof. Sturgis went on to note the increasingly difficult time faced by those who conduct random sample surveys. Response rates are falling (even more so for random digit dialling phone surveys than for face-to-face surveys), so it takes more time and effort to obtain the same number of responses, meaning that costs also rise. Thus, in certain key senses random sample survey research is increasingly suffering by comparison with opt-in panels. This is a paradox in the sense that it is also random sample surveys, as noted above, that did a better job of predicting the outcome of the general election. And thus we return to which of the possible explanations for the polling miss seems most likely to account for it. The focus of much of the latter part of the lecture on the difference (in quality) between random sample (survey) research and opt-in panel (polling) research suggests that sampling and weighting are likely to be the main culprits (though other explanations may well have a part to play), and this is a position that is supported by work done by both the British Election Study team and the British Social Attitudes survey team (both of which use random samples). It is also supported by Prof. Sturgis’ comment that there is not a great deal of value in those who adopt a random sample approach chasing non-response. This is an unnecessary additional cost (for an already expensive method of data gathering) and random sampling is already better than non-probability, opt-in panel based sampling. Thus, Prof. Sturgis concluded, reports of the death of random sample surveys are exaggerated.

So, what do we, or at least I, take from this? Well, if sampling and weighting were the main problem with the general election polls, which seems perfectly plausible, then the repeated distinction between surveys (based on random samples) and polls (based on samples drawn from opt-in panels) becomes particularly salient. This is especially so for those working with survey research in (quantitatively orientated social science) academia, because survey methodology is a whole sub-field in its own right, and because it reflects an ongoing debate about whether opt-in panel samples (usually online) are good enough to base robust academic research conclusions on.[4] The polling miss, and Prof. Sturgis’ lecture, seems to suggest that the latest point in that ongoing debate favours the sceptic’s point of view. In other words, it may now be harder for those who conduct research based on opt-in panel samples (such as myself) to convince academics to trust our results.

And what about beyond academia? I was recently asked why all this fuss about polling really matters. My answer was that some in the media may feel that they were led up the garden path by polling companies and were therefore implicated in ‘misleading’ the public, who may now be less trusting of both polling companies and the media. Crucially, there is also the argument that the media focus on the ‘horse-race’ supplied by the polls took attention away from the policy positions and political issues that should have been reported on more, which may have influenced the outcome of the election (which would be pretty important if it could be proved to be true).[5] This is especially problematic because the race that took so much attention turned out to have a much clearer winner than had been anticipated. So, the polling miss is important because it has implications for public trust in polls, and in the media that report them, which means that it has implications for how, and whether, the media report them in future. This means that it may also have implications for future election campaigns and perhaps even results. As I have said, the polling companies (and media) seem to be taking these implications very seriously, as demonstrated by their full cooperation with the inquiry. The release of that inquiry will make the precise nature of the aforementioned implications clearer, so I’ll certainly be paying attention to it.

[1] If anyone who’s critical of polls or survey research ever tries to make a point about what people think based on what they’ve seen on social media then I implore you to call out this contradiction.

[2] Notably absent from this list is the idea of ‘shy Tories’, or people who don’t want to admit to polling companies that they vote Conservative. This was a big part of the explanation for the polling miss at the 1992 general election but seems much less likely to be part of the problem this time round.

[3] There’s absolutely tons of research on the impact of question wording (down to minute levels of detail), and this informed the approach of those who conducted the exit poll, which asked respondents to replicate the voting process with a replica ballot paper and ballot box, rather than just answering a survey question. This may have contributed to the high level of accuracy that the exit poll achieved.

[4] If you’re interested in looking into this debate you can start with the following two articles that represent the two sides:

Neil Malhotra and Jon A. Krosnick, ‘The Effect of Survey Mode and Sampling on Inferences about Political Attitudes and Behavior: Comparing the 2000 and 2004 ANES to Internet Surveys with Nonprobability Samples’, Political Analysis, Vol. 15, No. 3 (Summer, 2007), pp. 286-323 [presenting evidence that non-probability samples drawn from internet panels may produce less accurate results].

David Sanders, Harold D. Clarke, Marianne C. Stewart, and Paul Whiteley, ‘Does Mode Matter for Modeling Political Choice? Evidence From the 2005 British Election Study’, Political Analysis, Vol. 15, No. 3 (Summer, 2007), pp. 257-285 [presenting evidence that non-probability samples drawn from internet panels may produce results that are not (statistically) significantly different from random face-to-face samples in terms of the relationships between variables].

[5] I’ll go out on a limb and state that I don’t think this will ever be proven; it’s remarkably difficult to prove the impact of particular factors on election outcomes, and it would take quite a lot of (quite expensive) academic research to provide robust evidence (if that’s even possible now that the event has passed), with no guarantee of a clear conclusion.
