brogdale
Coming to terms with late onset Anarchism
With the usual caveat about Anthony working for YG... here's his snap judgement, written the day after. It's obviously brief and a little rough, but it gets to the nub of the methodological challenges facing the pollsters:-
I’ve only had a couple of hours’ sleep, so this is a very short comment on lessons from the polls at the election. The two best-performing traditional polls seem to be those from Survation and Surveymonkey. Survation had a one-point Con lead in their final GB poll; Surveymonkey had a four-point lead in their final UK poll. The actual lead is 2 or 3 points depending on whether you look at UK or GB figures. Congratulations to both of them. While it wasn’t a traditional poll, YouGov’s MRP model also came very close – its final GB figures showed a four-point lead (and some of the individual seat estimates that looked frankly outlandish, like Canterbury leaning Labour and Kensington being a toss-up, actually turned out to be correct).
Looking across the board, the other companies all overstated the Tory lead to one degree or another. Their estimates of the Tory vote share were broadly accurate; rather, it was that almost everyone understated Labour support. I have a lot of sympathy with Peter Kellner’s article in the Standard earlier – that to some degree it was a case of pollsters “trying too hard”. Companies have all been trying to correct the problems of 2015, and in many cases those changes seem to have gone too far.
A big gulf between pollsters that many commented on during the campaign was their attitude to turnout. The pollsters who were furthest out on the lead – ComRes, ICM and BMG – all used methods that pumped up the Tory lead through demographic-based turnout models, rather than basing turnout on how likely respondents said they were to vote. This was in many ways a way of addressing a problem with the 2015 polls, whose samples contained too many of the sort of young people who vote: the fix was to weight down turnout among the young (and among working-class respondents, renters, or the less well educated – different pollsters used different criteria). This wasn’t necessarily the wrong solution, but it was a risky one, because it depends on modelling turnout correctly. If turnout among young people actually did rise, then pollsters who were replicating 2015 patterns of turnout would miss it. That may be what happened.
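To make the mechanism concrete, here is a minimal sketch of what a demographic-based turnout model does. All numbers, age bands, and the sample itself are invented for illustration – they are not any pollster's actual figures. Each respondent's vote intention is weighted by an assumed turnout probability for their demographic group, rather than by their own stated likelihood to vote:

```python
# Invented 2015-style turnout probabilities by age band (illustrative only).
TURNOUT_BY_AGE = {"18-24": 0.43, "25-49": 0.62, "50-64": 0.74, "65+": 0.81}

def weighted_shares(respondents):
    """Turnout-weighted vote shares; respondents are (age_band, party) pairs."""
    totals = {}
    for age_band, party in respondents:
        # Weight each respondent by their group's assumed turnout probability,
        # not by their own stated likelihood to vote.
        w = TURNOUT_BY_AGE[age_band]
        totals[party] = totals.get(party, 0.0) + w
    total = sum(totals.values())
    return {party: round(100 * v / total, 1) for party, v in totals.items()}

# Toy sample: young respondents lean Labour, older respondents lean Conservative.
sample = [("18-24", "Lab")] * 30 + [("25-49", "Lab")] * 20 + \
         [("25-49", "Con")] * 15 + [("65+", "Con")] * 35

print(weighted_shares(sample))  # raw sample is 50-50; weighting produces a Con lead
```

In this toy case a raw 50–50 split becomes a clear Con lead once 2015-style turnout rates are imposed – exactly the adjustment that backfires if young turnout actually rises.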
That said, one shouldn’t jump to conclusions too quickly. It may be a question of how demographic turnout models were applied: by weighting the whole sample to match 2015 recalled vote and then separately weighting different demographic groups up or down based on likelihood to vote, there’s a risk of “double-counting”. Most importantly, the YouGov MRP model and the Surveymonkey survey both based their turnout models on demographics too, and both got the election right, so clearly it’s an approach that has the potential to work if done correctly.
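The "double-counting" risk can be sketched in a couple of lines (again, every number here is invented): if the sample is first weighted towards 2015 recalled-vote turnout patterns, and a demographic turnout model is then applied on top, a young respondent is down-weighted twice, and the two adjustments multiply:

```python
# Hypothetical weights, invented for illustration.
SAMPLE_WEIGHT_2015 = {"18-24": 0.7, "65+": 1.2}   # stage 1: match 2015 recalled vote
TURNOUT_MODEL      = {"18-24": 0.43, "65+": 0.81} # stage 2: demographic turnout prob.

def effective_weight(age_band):
    # The two adjustments compound multiplicatively.
    return SAMPLE_WEIGHT_2015[age_band] * TURNOUT_MODEL[age_band]

young, old = effective_weight("18-24"), effective_weight("65+")
print(round(young / old, 2))  # a young respondent counts ~0.31 of an older one
```

The combined ratio is steeper than either the recalled-vote weighting or the turnout model implies on its own, which is one plausible mechanism for the Labour understatement.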
Personally, I’m pleased the YouGov model worked and disappointed that the more traditional YouGov poll showed too big a lead… but that at least gives us something to learn from (and for most of the campaign the two showed a similar lead, so rolling back some decisions and learning from the model seems a good starting point).