Brexit Referendum: Positive Principle, Destructive Discourse

The grey man of politics, Sir John Major, was on the Today programme last week railing against the big bag of soundbites that has been opened by the Leave campaign in the run-up to the forthcoming referendum on Britain’s membership of the European Union.[1] He then proceeded to inform the supporters of a British exit from the E.U. that if they’re so concerned with undiluted sovereignty then they can find it in North Korea. Soundbite much?![2] Putting aside the argument that sovereignty and isolationism are not the same thing (and the distinct likelihood that North Korea is influenced by a large, powerful, and economically significant neighbour anyway), let’s just add this to the long list of ridiculous rhetoric that has been spouted by both sides in the debate. Previous entries on that list include Michael Gove’s claim that voting to remain in the E.U. would mean being ‘hostages locked in the back of the car driven headlong towards deeper E.U. integration’,[3] George Osborne’s clear-as-mud claim that every family in the U.K. will be £4,300 a year worse off if the country leaves the E.U.,[4] and Nigel Farage’s incomprehensible waggling of a U.K. passport whilst arguing that a major reason for leaving the E.U. is to reduce sex attacks by foreigners.[5] Blimey, it’s even enough to get bureaucrats to emerge from their smoke-filled backrooms and start commenting.

So why do the politicians insist on talking like this? Well, it’s all part of ‘project fear’, which is something that both sides like to claim the other lot are engaged in. And both sides are right. The referendum debate has, to a large extent, become a game of one-upmanship in which the campaigns compete to promote the most lasting fear in the electorate. And, from their perspective, it probably makes sense to do so. Both sides have political advisers, campaign managers, and strategists who are aware that the fight probably isn’t over the roughly 40% of voters on each side who’ve already made up their minds.[6] Rather, it’s over the 20% in the middle, of which approximately three quarters say they don’t yet know which way they’ll vote. That 15% or so of the electorate probably contains plenty of people who we might call cautious, or even conservative (emphasis on the small ‘c’). They don’t want to decide which way to vote until they feel comfortable that they’ve got enough information, or at least a good reason for their decision. This means that if one side can successfully populate the public narrative with more reasons why things will be worse if the other side wins then they may well be home and dry. This, I think, is what happened in the Scottish independence referendum; the ‘no’ side made a more convincing (and louder) case that the economic risk of breaking up the U.K. was too great. Thus, those who waited until late in the campaign to decide their vote were more likely to oppose Scottish independence.

And how did the ‘no’ side manage to make their case more convincing? Well, in part, they got more ‘respectable’ voices to make it. Lots of economists, business leaders, and consultants released reports about, and estimates of, the potential costs to the Scottish economy of breaking away from the U.K., a trend that’s being replicated in the current referendum campaign. This will, I think, benefit the Remain campaign. Not only do they have more famous and more establishment politicians on their side, but they also seem to have more economists and business leaders too. And the thing about those cautious, or conservative, voters I mentioned is that they are the people who are most likely to be swayed by economists and business leaders, or ‘respectable’ voices. So, as we approach the referendum on the 23rd of June, I think we’ll see more of those undecided voters coming out in favour of remaining. They might take it right down to the wire, but I suspect they’ll swing it for staying in. This is the first, and most important, advantage that the Remain campaign has. On top of that, they also benefit from having government resources[7] and, I suspect, more money on their side, and they seem united in comparison to the competing Leave campaigns.

So, is it all doom and gloom for the Leave campaign? No, I don’t think so. To my mind, they’ve got four things in their favour, which are, from least to most important: protest votes, committed supporters, media narratives, and demographics. The first of those is the counterpoint to the observation that the Remain campaign’s establishment status will sway cautious voters. On the flip side, it might inspire a backlash, though I suspect that the overall effect of looking ‘respectable’ will benefit Remain. Second up, the committed supporters extend from campaigners to voters; people who are opposed to E.U. membership seem to be more passionate than those who support it. This makes them more likely to turn up to vote and, potentially, to convert others to the cause. Still, in the same way that being ‘establishment’ may alienate some from the Remain campaign, being too passionate may alienate others (and, perhaps, particularly those cautious voters I keep going on about) from the Leave campaign. Third, on the media narratives front, many years of anti-E.U. articles in a lot of the major daily newspapers are part of what led to a referendum in the first place. Still, the press is often self-interested when it comes to public opinion and, if it looks like Leave isn’t a sure bet, then they might start hedging. So, if the first three points don’t definitely favour Leave then it comes down to demographics, which is the big plus for them. It’s well known that opposition to E.U. membership is stronger amongst older people and men, and that those people are more likely to vote (or at least claim that they will). However, assuming that the polling companies have addressed the problems that underpinned last year’s general election polling miss, their results suggest that the two sides are pretty much neck and neck. This is even after weighting to account for demographics.

If even the benefit of having older (male) supporters doesn’t bear fruit for Leave then it comes down to those cautious undecided voters, who are the main targets of the ongoing rhetoric of fear on both sides. It’s sad that it’s come to that because, I think, having referendums on significant issues (especially constitutional matters) is a positive principle. It’d be great to figure out a way to engage in a less destructive discourse around such votes, but I still think that it’s good to have a discourse that will feed into a popular decision. It’s not the only way to make such a decision, but it’s one, and it’s appropriate for some occasions. So, the two campaigns will keep banging their rhetorical drums. Remain’s drum is a bit bigger and more impressive but Leave’s drum is being beaten more frantically. We’ll have to wait and see what the outcome of the contest will be but, on balance, I think that the advantages of the Remain campaign will outweigh those of the Leave campaign. Indeed, if the trends of the last five years are anything to go by, the U.K. will still be part of the E.U. on the 24th of June, and for some years to come.

[1] I originally wrote this post for a foreign-language blog but, alas, they couldn’t get a translator so I’m sticking it up here now instead.

[2] And my repeating it here undermines any claim I might make to disapprove of soundbite politics.

[3] Putting aside the second part of the sentence, it’s the choice of language in the first half of the sentence that one might consider to be a bit over the top.

[4] As I understand it, the claim was predicated on a predicted decrease in the future growth of the U.K. economy if the country leaves the E.U. and, I must say, I’m baffled by why anyone would take a long term economic projection with anything less than a big pinch of salt. There were lots of assumptions involved in making that claim, and we don’t know if they’ll hold in reality.

[5] Incomprehensible because I’m not sure that the juxtaposition of waving a U.K. passport around whilst talking about sexual assault gave the intended impression. Also, and much more importantly, evidence suggests that most sexual crimes are committed by people known to the victims.

[6] Though shouting their respective messages probably won’t undermine the support they’ve built up so far.

[7] Hence the Government’s booklet in support of the U.K. remaining in the E.U., which did seem somewhat unfair to me.


The Cathie Marsh Lecture on the Polling Miss

Back in November last year I attended the annual Cathie Marsh Memorial Lecture at the Royal Statistical Society, which was excellent (as it has been when I have attended before). The focus of the lecture was on polling failure and the future of survey research, and it was delivered by Professor Patrick Sturgis, who is chairing the inquiry into the performance of the polls preceding the general election. Given that the polling inquiry is due to release its results this week, I thought this would be an opportune moment to post my record and interpretation of the points made by Prof. Sturgis back in November. To be clear, the following is a mix of his and my thoughts, so if you’re interested in seeing Prof. Sturgis’ own words then you can watch the full lecture here.

The lecture began, rightly, with some kind words remembering Cathie Marsh, before engaging in a little definition. To wit, it is possible to differentiate between polls and surveys on the grounds of snootiness, quality, and purpose. Taking the latter two, more defensible, grounds, it has been argued that surveys are higher quality than polls (based, as they usually are, on random (or at least more random) samples) and that their purpose is broadly investigatory (i.e. academic) rather than political or democratic. Crucially, the point was made that this distinction is now less rigid than it was in the past. Still, even if the distinction between the two is less rigid, the fact that surveys and polls are arguably distinct on the basis of quality didn’t seem to bode well for the latter. Like any good academic, though, Professor Sturgis was quick to introduce a note of complexity.

It’s not as simple, he argued, as saying ‘the pollsters got it wrong’. Indeed, they did a good job in predicting the UKIP vote, the SNP surge, and the Liberal Democrat collapse, so it was just, alas, on the ‘main event’ that they went skew-whiff. Whilst the latter point may seem the most salient, Prof. Sturgis went on to remind the audience that without polls there may be a growth in even less accurate speculation about the outcomes of elections. There is certainly a healthy dash of truth in his statement that we couldn’t do better on that front by relying on Twitter, Facebook, and equivalent sources.[1] This, of course, does not mean that we should settle for polling as it is (and, in my experience, the polling companies have far from rested on their laurels since May), especially in light of the historical trend that was outlined, wherein polls have increasingly underestimated the Conservative share of general election votes whilst at the same time overestimating the Labour share. This may mean, as remarked, that we are now using something that measures pounds to measure ounces (if you’ll forgive the imperial units).

With the magnitude of the problem established (it’s not great but still better than it could be), Prof. Sturgis turned to possible explanations for the polling miss, all of which have been circulating since the day after the general election:

  1. Late swing. In other words, a load of people might have changed their minds just before they voted (and largely moved to the Conservatives) thus rendering the polls, which were conducted at the latest the day before, wide of the mark.[2]
  2. Sampling and weighting. As Prof. Sturgis pithily put it, polls are ‘modelling exercises based on recruited samples’. So, maybe the polling companies have recruited the wrong people to the panels of respondents that they survey, or perhaps they had out-of-date or incorrect assumptions underpinning the weights that they apply to their samples to correct for unrepresentative recruitment.
  3. Turnout misreporting. Perhaps a load of people who said they were sure they’d vote and that they would do so for Labour ended up not being able to make it to the polling station. At the same time, perhaps more of the people who said they’d vote Conservative managed to actually do so in practice.
  4. Don’t knows or refusals. If the people who said they didn’t know who they’d vote for, or who refused to say, broke to the Conservatives more than Labour then it could explain the disparity between the polls and the election result.
  5. Question wording. If the questions that are asked do not prompt a similar decision-making process to the one that people go through before they actually cast their vote then they may give a different answer.[3]
  6. Voter registration and postal voting. It may be that issues with registering to vote disproportionately affected voters for one party (i.e. Labour), or that those who held postal votes were not accurately taken into account. As Prof. Sturgis pointed out, this is unlikely to be the case since there were relatively small numbers in both groups.
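To make the sampling-and-weighting explanation (number 2 above) a little more concrete, here’s a toy sketch in Python of how post-stratification weighting works, and of how out-of-date weighting targets can shift a headline vote share. Every figure, group, and party count below is made up purely for illustration; real pollsters weight on many more variables than age.

```python
# Toy illustration of post-stratification weighting.
# All figures below are hypothetical and exist only to show the mechanics.

def weighted_shares(sample, population_share):
    """Reweight each demographic group to its assumed population share,
    then return each party's weighted share of stated vote intention."""
    n = sum(sum(votes.values()) for votes in sample.values())
    totals = {}
    for group, votes in sample.items():
        group_n = sum(votes.values())
        # Weight = (target count for the group) / (actual count in sample).
        weight = (population_share[group] * n) / group_n
        for party, count in votes.items():
            totals[party] = totals.get(party, 0.0) + count * weight
    grand_total = sum(totals.values())
    return {party: t / grand_total for party, t in totals.items()}

# Hypothetical vote-intention counts by age group in a sample of 400.
sample = {
    "18-34": {"Labour": 120, "Conservative": 60},
    "55+":   {"Labour": 70,  "Conservative": 150},
}

# With one set of assumed population shares for the two groups...
correct = weighted_shares(sample, {"18-34": 0.30, "55+": 0.70})
# ...and with stale targets that over-represent younger voters.
stale = weighted_shares(sample, {"18-34": 0.40, "55+": 0.60})

# The stale targets inflate Labour's headline share relative to the first set.
print(correct["Labour"], stale["Labour"])
```

The point, echoing the ‘modelling exercises based on recruited samples’ remark, is that the published figure depends not just on who happened to answer but on the assumptions baked into the weights, so if either the recruited panel or the weighting targets are off, the headline number will be too.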

We’ll come back to which of the above explanations is most convincing but, before doing so, it was noted that the polling companies’ results were surprisingly similar given their methodological differences. This may have suggested unintentional (the key word here) herding by the companies, whereby they looked at each other’s results and adjusted their methods to replicate those of their competitors (based on fear of being too far from the pack). This is obviously important (and related to the point about polling as a ‘modelling exercise’ above) but it’s an issue that needs to be considered separately from the original cause(s) of the disparity from the election result.

Since you’re reading this I guess you’re aware of at least some of the implications of all the above, but we were helpfully reminded of them anyway. Namely, such a high-profile polling miss is likely to reduce public interest in polls and surveys on the basis that trust in them (along with trust in the media and politicians) has been dented. This could have the knock-on effect of further reducing response rates, making it even harder for pollsters and survey researchers to gain accurate results in future. This is something that the polling companies appear acutely aware of; I wouldn’t go so far as to call this an existential threat to them but it’s obviously had serious reputational repercussions and could continue to make their business harder for some time.

Despite the above, Prof. Sturgis went to some effort to moderate concerns, suggesting that the polling miss will actually have a relatively minimal impact. This was an unexpected argument but he outlined a number of reasons for thinking that it might be right. First, it’s rather difficult to estimate election results, in part because respondents are best at answering questions about their recent behaviour rather than about what they will do in the future. Thus, it shouldn’t be too much of a surprise that the polls get it wrong at times, which links to the previous point about measuring ounces with an instrument for pounds. Second, returning to the opening distinction between polls and surveys, it is likely that the damage will impact more on the former than the latter because surveys with samples that were recruited through more random means (such as the exit poll (which could also ask about recent behaviour rather than future behaviour), and the British Election Study) did a better job of approximating the outcome. It is important that Prof. Sturgis referenced this distinction again at this point in the lecture, as will be seen below. Third, a number of different research designs (phone and online, varying levels of randomness) failed to predict the result so no particular company is implicated, meaning the consequences will be spread between them. Fourth, the rise of opt-in panels (which are low cost, have a rapid turnaround, allow for ever-increasing functionality, and can accommodate client involvement in survey design) seems inexorable, so the polling miss is unlikely to stop it.

The final of the preceding points (which links to his preceding restatement of the distinction between polls and surveys) is key, because Prof. Sturgis went on to note the increasingly difficult time that those who conduct random sample surveys have. Response rates are falling (even more so with random digit dialling phone surveys than face-to-face) so it takes more time and effort to achieve the same number of responses, meaning that costs also rise. Thus, in certain key senses random sample survey research is increasingly suffering by comparison to opt-in panels. This is a paradox in the sense that it is also random sample surveys, as noted above, that did a better job of predicting the outcome of the general election. And thus, we return to which of the possible explanations for the polling miss seems most likely to account for it. The focus of much of the latter part of the lecture on the difference (in quality) between random sample (survey) research and opt-in panel (polling) research suggests that sampling and weighting are likely to be the main culprits (though other explanations may well have a part to play), and this is a position that is supported by work that has been done by both the British Election Study team and the British Social Attitudes survey team (both of which have random samples). It is also supported by Prof. Sturgis’ comment that there is not a great deal of value in those who adopt a random sample approach chasing non-response. This is an unnecessary additional cost (for an already expensive method of data gathering) and random sampling is already better than non-probability opt-in panel based sampling. Thus, Prof. Sturgis concluded, reports of the death of random sample surveys are exaggerated.

So, what do we, or at least I, take from this? Well, if sampling and weighting were the main problem with the general election polls, which seems perfectly plausible, then the repeated distinction between surveys (based on random samples) and polls (based on samples drawn from opt-in panels) becomes particularly salient. This is especially so for those working with survey research in (quantitatively orientated social science) academia, because survey methodology is a whole sub-field of academia on its own, and because it reflects an ongoing debate about whether opt-in panel samples (usually online) are good enough to base robust academic research conclusions on.[4] The polling miss, and Prof. Sturgis’ lecture, seems to suggest that the latest point in that ongoing debate favours the sceptic’s point of view. In other words, it may now be harder for those who conduct research based on opt-in panel samples (such as myself) to convince academics to trust our results.

And what about beyond academia? I was recently asked why all this fuss about polling really matters. My answer was that some in the media may feel that they were led up the garden path by polling companies and were therefore implicated in ‘misleading’ the public, who may now be less trusting of both polling companies and the media. Crucially, there is also the argument that the media focus on the ‘horse-race’ that was supplied by the polls took attention away from the policy positions and political issues that should have been reported on more, which may have influenced the outcome of the election (which would be pretty important if it could be proved to be true).[5] This is especially problematic because the race that took so much attention turned out to have a much clearer winner than had been anticipated. So, the polling miss is important because it has implications for public trust of polls, and the media that report them, which means that it has implications for how, and whether, the media report them in future. This means that it may also have implications for future election campaigns and perhaps even results. As I have said, the polling companies (and media) seem to be taking these implications very seriously, as demonstrated by their full cooperation with the inquiry. The release of that inquiry will make the precise nature of the aforementioned implications clearer, so I’ll certainly be paying attention to it.




[1] If anyone who’s critical of polls or survey research ever tries to make a point about what people think based on what they’ve seen on social media then I implore you to call out this contradiction.

[2] Notably absent from this list is the idea of ‘shy Tories’, or people who don’t want to admit to polling companies that they vote Conservative. This was a big part of the explanation for the polling miss at the 1992 general election but seems much less likely to be part of the problem this time round.

[3] There’s absolutely tons of research on the impact of question wording (down to minute levels of detail), and this informed the approach of those who conducted the exit poll, which asked respondents to replicate the voting process with a replica ballot paper and ballot box, rather than just answering a survey question. This may have contributed to the high level of accuracy that the exit poll achieved.

[4] If you’re interested in looking into this debate you can start with the following two articles that represent the two sides:

Neil Malhotra and Jon A. Krosnick, ‘The Effect of Survey Mode and Sampling on Inferences about Political Attitudes and Behavior: Comparing the 2000 and 2004 ANES to Internet Surveys with Nonprobability Samples’, Political Analysis, Vol. 15, No. 3 (Summer, 2007), pp. 286-323 [presenting evidence that non-probability samples drawn from internet panels may produce less accurate results].

David Sanders, Harold D. Clarke, Marianne C. Stewart, and Paul Whiteley, ‘Does Mode Matter for Modeling Political Choice? Evidence From the 2005 British Election Study’, Political Analysis, Vol. 15, No. 3 (Summer, 2007), pp. 257-285 [presenting evidence that non-probability samples drawn from internet panels may produce results that are not (statistically) significantly different from random face-to-face samples in terms of the relationships between variables].

[5] I’ll go out on a limb and state that I don’t think this will ever be proven; it’s remarkably difficult to prove the impact of particular factors in election outcomes, and this would take quite a lot of (quite expensive) academic research to provide robust evidence (if that’s even possible now that the event has passed), and with no guarantee of a clear conclusion.