
The 2024 UK General Election - news, speculation and updates

I'm so glad we're back to proper grown-up politics.



"You're Islamophobic and mad"
"You're anti-Semitic and all from North London"

If the entire fucking building fell into the river at the next PMQs nothing of any value would be lost. How dare these witless freaks claim the power to run our lives, and how abject are we to let them?

The libraries of the houses of Parliament would be worth saving
 
I'm baffled by some of the data though. How can this area be 75% ABC1, but 42% Deprived?




I *think* it's because their deprivation is classified like this:

Fraction of households which are classed as deprived on one or more of the following four indicators: employment (unemployed or long-term sick); education (no good GCSE); health and disability (bad health or long-term problem); and housing (overcrowded, shared or no central heating).

So you can be ABC1 but have bad health or a long-term health problem, which makes you deprived. If I'm reading it right. However, I am highly sceptical of the whole site. I found it messy. And it conveniently asks you to pay ('shop online!') if you want more detailed info on your area.
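The rule quoted above is easy to mis-read, so here's a minimal sketch of it as described: a household counts as deprived if it fails on one or more of the four indicators. The field names are invented for illustration, not the actual census schema.

```python
# Hedged sketch of the deprivation rule quoted above: a household is
# "deprived" if one or more of four indicators applies. Field names are
# illustrative, not the real census schema.

def is_deprived(household: dict) -> bool:
    indicators = [
        household["unemployed_or_long_term_sick"],     # employment
        household["no_good_gcse"],                     # education
        household["bad_health_or_long_term_problem"],  # health and disability
        household["overcrowded_shared_or_no_heating"], # housing
    ]
    return any(indicators)

# An ABC1 household can still count as deprived on the health indicator
# alone, which is how an area can be 75% ABC1 and 42% deprived at once.
abc1_household = {
    "unemployed_or_long_term_sick": False,
    "no_good_gcse": False,
    "bad_health_or_long_term_problem": True,
    "overcrowded_shared_or_no_heating": False,
}
print(is_deprived(abc1_household))  # → True
```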

People are paying too much attention to Electoral Calculus. But it must be science, right, coz it's got the word calculus in it.
 

An election expert recently

[image attachment]
 

They did well predicting the overall result of the GE 2019, which would suggest they were about as accurate as possible at seat level.

Electoral Calculus made the most accurate pre-poll prediction of the result of the December 2019 General Election. Our final prediction correctly predicted a Conservative victory with a substantial majority. We predicted the Conservatives would win 351 seats, which was closer to the actual result of 365 seats than any other final pre-poll prediction (source Wikipedia).

 
The SNP are predicted to hang on comfortably where I am.

I really don't see the party dropping to 19 seats, nor the mass enthusiasm for Keir Starmer or Labour. So I suspect "electoral calculus" is more than a little skew-whiff.

The SNP will end up high 20s I think, if the narrow contests fall their way, low 30s. Labour just aren't cutting through.
The "chance" of a party winning a particular constituency can never be proved or disproved.
 
There seems to be a confusion of “predicted percentage” with “chance of winning”.

It seems to me that, even if all the constituencies were the same as last time, there could be no test of the accuracy of a probability claim with respect to one constituency.

To say that there is a 40% probability of an outcome of an event means that, if the event were to occur 100 times, then that outcome would arise about 40 times. Each general election in a constituency is a unique event. It cannot occur again.

Flipping a coin is not a unique event. The conditions that give rise to the event can be identical. Flipping a coin is an event that can be reproduced indefinitely.

Each general election in a constituency, even if the boundaries are unchanged, is unique. We cannot examine 100 general elections in a constituency, and observe that 40 of them are won by the Liberal Democrats, because the events are not identical.

We can never prove that the Liberal Democrats have a 40% chance in a particular constituency, because we can never observe a particular general election in that constituency more than once. The conditions that give rise to the event that is a general election result in that constituency can never be repeated.
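The frequency definition being invoked here can be made concrete with a repeatable event. A minimal sketch: estimate a probability as the observed long-run frequency, which only works because the trials can be repeated under identical conditions (the 0.4 is an assumed true chance, chosen to echo the 40% example).

```python
# Sketch of the frequency definition of probability: over many repeatable,
# identical trials, the observed frequency approaches the underlying chance.
# This works for coin-like events; a general election offers only one trial.
import random

random.seed(1)

def relative_frequency(p: float, trials: int) -> float:
    """Fraction of independent trials (each with true chance p) that succeed."""
    hits = sum(random.random() < p for _ in range(trials))
    return hits / trials

# With enough repetitions the frequency settles near the true chance of 0.4:
print(relative_frequency(0.4, 100_000))  # close to 0.4
```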
This is a classical view of probability that is well out of date. Modern theory views probability as a statement about knowledge, not about what the universe would actually do if it were rerun 1000 times. If the universe were rerun 1000 times, you'd still get the same result every time, because conditions would be identical. When we say that the Liberals have a 40% chance of winning, it's a statement about the state of knowledge we currently have and our observations about what that state of knowledge tends to be associated with. This model says that demographics that look like this, combined with the current state of responses to specific questions, tend to be associated with particular types of voting, and that model allows us to make a prediction that can be couched in terms of what we call "probability".
 
It is a meaningless figure, because we can not prove if it is true or false. What does a 40% chance of winning mean?
 
We don’t attempt to “prove” it is true or false. We use large historic data sets to regress the logistic transformation of the response variable against a function of the linear combination of input variables. We can use inferential statistics to derive information about the certainty of our position from the deviations in the dataset between response and input variables.

If you don’t like this approach then I suggest you take it up with, well, science.
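The regression described above can be sketched end-to-end on synthetic data. This is a toy with two made-up covariates and a plain stochastic-gradient fit, not Electoral Calculus's actual model or dataset.

```python
# Toy logistic regression: regress the logistic transformation of a binary
# response (e.g. "votes for party X") against a linear combination of
# inputs. Synthetic data and a basic SGD fit -- a sketch, not the real
# methodology.
import math
import random

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=200):
    """Fit P(y=1) = sigmoid(w0 + w1*x1 + w2*x2) by stochastic gradient descent."""
    w = [0.0] * (len(xs[0]) + 1)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            err = sigmoid(z) - y  # gradient of the log-loss w.r.t. z
            w[0] -= lr * err
            for i, xi in enumerate(x):
                w[i + 1] -= lr * err * xi
    return w

random.seed(0)
# Synthetic "voters" with two covariates; the true effects are +1.5 and -1.0.
xs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(500)]
ys = [1 if random.random() < sigmoid(1.5 * x1 - 1.0 * x2) else 0 for x1, x2 in xs]

w = fit_logistic(xs, ys)
print(w)  # fitted weights recover the signs of the true effects
```

The scatter in those fitted weights across plausible datasets is the "parameter uncertainty" that later feeds the chance-of-winning figures.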
 
I'm curious how those "chance of winning" figures are calculated.

They've predicted vote share of

Con 29.7%
Lib 30.0%
Lab 24.7%

If those figures were correct, the LibDems would win, but of course there's a margin of error.

Depending what that margin of error is, it might be too close to call between LibDem and Con, but there would need to be a pretty big error for Lab to win, so my hunch is that they're probably understating the chance of winning for LibDem and Con, and overstating it for Lab.

But without a bit more information on how they've arrived at the predicted vote figures and how they've translated those into chance of winning figures, I'm not sure how meaningful it all is.
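One hedged way to see how predicted shares plus a margin of error could become a "chance of winning" is to simulate many plausible outcomes around the prediction. The 2-point standard error below is an assumption purely for illustration, not the site's actual methodology.

```python
# Monte Carlo sketch: perturb each party's predicted share by an assumed
# error and count how often each party comes out on top. The sigma value
# is an illustrative assumption, not Electoral Calculus's methodology.
import random

random.seed(42)

pred = {"Con": 29.7, "Lib": 30.0, "Lab": 24.7}  # predicted shares (%)
sigma = 2.0                                     # assumed std error, points

n = 20_000
wins = {party: 0 for party in pred}
for _ in range(n):
    draw = {party: random.gauss(share, sigma) for party, share in pred.items()}
    wins[max(draw, key=draw.get)] += 1

for party, count in wins.items():
    print(party, count / n)
# Con and Lib come out near-even; Lab's chance is tiny, matching the hunch
# that a very large error would be needed for Lab to win.
```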
The probabilities reflect the parameter uncertainty that is inherent to the model fit, as well as the process uncertainty for what actually happens. A slightly different parameter fit might produce a prediction that is closer still to a Labour victory, and then the reality on the ground could tip it over the edge. The modelling approach assesses the overall probability of all that as being 20%.

This modelling approach is definitely by far and away the best way to predict the election in theory. The biggest problem with it is that you need tens of thousands of data points and time to process the data, by which time your predictions may already be out of date. It’s a rapidly evolving feast and the probabilities next month may look very different to their position right now. But that doesn’t mean that the model is inaccurate for a given data set.
 
We don’t attempt to “prove” it is true or false. We use large historic data sets to regress the logistic transformation of the response variable against a function of the linear combination of input variables. We can use inferential statistics to derive information about the certainty of our position from the deviations in the dataset between response and input variables.

If you don’t like this approach then I suggest you take it up with, well, science.
Well, if you cannot prove it, then it ain't science.
 
I’m in a new constituency which will go from Quite Labour to Very Very Labour, so won’t be voting for them. Will decide between whichever People’s Front Of Judea or Judean People’s Front is standing on the day
 
I don’t think you understand how science works.
Of course he does. First you take your proof to the Grand Arbiter in his lair under the Champs-Élysées, and if your work passes muster you are allowed entry into the tomb of the Sphinx, where a themed riddle will be posed. If you correctly deduce the answer your wisdom is officially added to the Hall of Facts, where it sits forever bathed in the glorious light of the Golden Knowledge Tree, and can be repeated on Urban without issue.
 
Newton Abbot has a history of going Lib Dem when the Tories are struggling, I think, though I'm not sure how much the Lib Dem wipe-out has changed that. Strikes me it could be the sort of seat where Labour and Lib Dem votes could cancel each other out unless there's some kind of arrangement.

I find the Tory MP to be fairly anonymous.
The council are Lib Dem I think but not sure if that helps their chances at a Westminster election or not.
 
I don’t think you understand how science works.
What is the chance of a scone falling on my head while I am walking across the South Downs? It is not possible to assign a probability to a unique event. To say the Liberal Democrats have 40% chance of winning a constituency is a meaningless assertion. It is not true. You cannot prove that it is true. We should not believe in entities that cannot be proven to exist.
 
You’re still persisting with a classical definition of probability that is about 70 years out of date. I addressed this already. Probability is not about whether an event happens, it’s an epistemic statement about data.
 
The Telegraph and FT are running a "sources close" story about Hunt considering scrapping non-dom status in next week's budget.
 
They did well predicting the overall result of the GE 2019, which would suggest they were about as accurate as possible at seat level.

They're ok. But they're not as good as me and Knotted

May I remind you of this thread for predictions on the 2019 Election.


The results of which were here.

Winner based on actual results remains unchanged from the forecast:

Knotted, declared official winner...

Name / Score
Knotted 966
planetgeli 964
Dogsauce 936
kabbes 935
Hollis 917
steeplejack 893
Flavour 891
Leafster 891
danny la rouge 890
killer b 887
littlebabyjesus 885
belboid 878
Chilango 868
Proper Tidy 868
Poot 863
Marty21 858
Weepiper 857
editor 853
LiamO 831

Electoral calculus would have come 3rd, with a score of 955.

I'll trust my own instincts. And we must repeat that thread this time around.
 
Statistically speaking, I would expect the best model to be beaten in a single event by some random outliers. Because that’s how outliers work. Over time, however, I would be amazed if any individual could consistently beat it.
 

I'm consistently good at it. I don't bet much. At all. But I've won a fair bit at elections. And been right about others when I haven't bet (I still grimace at missing out by not having a bet against Kinnock when everyone thought he would win - 'alright!').
 
I’m sure you are good at it. You’re a very knowledgeable chap. I’m betting that the model is consistently good at it too. And in this one test, you outscored the model by 0.9% (with the model using an older dataset than you, of course). Can you repeatedly outscore it by 0.9%? If so, the model authors need to include whatever factor you’re using that is currently out of scope of their model. And then the model will be better than you.
 
Labour has supposedly an 80% chance of winning Hertford and Stortford?
That is a load of bollocks.

Hertford and Stortford: Overview

Prediction: LAB

Implied MP at 2019: Julie Marson (CON)
County/Area: Hertfordshire (Anglia)
Electorate: 74,993
Implied Turnout 2019: 73.5%
Predicted Turnout: 69.3%

Party          2019 Votes   2019 Share   Pred Share
CON            30,695       55.7%        30.3%
LAB            13,205       24.0%        43.1%
LIB            7,815        14.2%        7.3%
Green          2,533        4.6%         11.4%
OTH            867          1.6%         0.2%
Reform         0            0.0%         7.7%
CON Majority   17,490       31.7%        12.8% (Pred Maj)

See overview of other seats in Anglia.

Chance of winning:
CON     20%
LAB     80%
LIB     0%
Green   0%
OTH     0%
Reform  0%
 
The probabilities reflect the parameter uncertainty that is inherent to the model fit, as well as the process uncertainty for what actually happens. A slightly different parameter fit might produce a prediction that is closer still to a Labour victory, and then the reality on the ground could tip it over the edge. The modelling approach assesses the overall probability of all that as being 20%.

This modelling approach is definitely by far and away the best way to predict the election in theory. The biggest problem with it is that you need tens of thousands of data points and time to process the data, by which time your predictions may already be out of date. It’s a rapidly evolving feast and the probabilities next month may look very different to their position right now. But that doesn’t mean that the model is inaccurate for a given data set.

To someone like me, with a good understanding of the traditional way these things have been measured and communicated (essentially giving a poll result plus a margin of error), the combination of the data table you've posted and the words you've used doesn't really make it clear what they and you are on about (I'm not suggesting this is your fault, just making an observation).

If the "chance of winning" is to be genuinely meaningful to me, I need to have some idea of how that figure was arrived at.

Is it a poll, is it a projection from previous polls or is it something else and if so, what? And how have they gone from predicted vote share to "chance of winning"? The people who've come up with this figure haven't bothered to include that (or at least the stuff you've posted doesn't include it), so it's not meaningful to me.

In the absence of that explanation, I personally, and I suspect many others in a similar position to me, would find it much more helpful to have the information presented in the traditional way, from which I can deduce my own idea of the chance of a particular party winning.
 
Is it a poll
No.

is it a projection from previous polls
Not really

or is it something else and if so, what?

Yes. It uses a methodology that is used a lot across biological and social sciences. You hypothesise a big, complex mathematical description of a response variable (in this case, this is related to the probability that an individual votes in a particular way) in terms of a whole bunch of input variables (like income, belief in remain, location, attitude to economics etc). These variables can interact too — maybe an economic right winger who is also nationalistic is different to one who is merely one of those things alone. Then you determine the values associated with that model using massive data sets.

And how have they gone from predicted vote share to "chance of winning"?
The model is actually predicting the probability of an individual vote. The predicted vote share drops out of the aggregate of those predictions, rather than the other way around.
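That direction of calculation can be sketched with invented numbers: each individual gets a modelled probability for each party, and the seat's predicted share is just the average of those.

```python
# Sketch: predicted vote share as the aggregate of per-individual modelled
# probabilities. Three invented "voters", three parties -- purely
# illustrative, not real model output.
individual_probs = [
    {"Con": 0.6, "Lab": 0.3, "Lib": 0.1},
    {"Con": 0.2, "Lab": 0.7, "Lib": 0.1},
    {"Con": 0.4, "Lab": 0.4, "Lib": 0.2},
]

def predicted_share(probs: list, party: str) -> float:
    """Seat-level predicted share = mean of the individual probabilities."""
    return sum(p[party] for p in probs) / len(probs)

for party in ("Con", "Lab", "Lib"):
    print(party, round(predicted_share(individual_probs, party), 3))
# → Con 0.4, Lab 0.467, Lib 0.133: the shares drop out of the aggregate.
```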

The people who've come up with this figure haven't bothered to include that (or at least the stuff you've posted doesn't include it), so it's not meaningful to me.
I get that. Unfortunately, this is one of those occasions where if you have to ask, you’re already in trouble when it comes to understanding the answer. The maths of it is not straightforward. The technique is absolutely standard in scientific quantitative analysis though.

In the absence of that explanation, I personally, and I suspect many others in a similar position to me, would find it much more helpful to have the information presented in the traditional way, from which I can deduce my own idea of the chance of a particular party winning.
They’ve given their prediction of vote share, though. That’s as traditional as it gets.
 
Thanks for your explanation which has made it clearer, though perhaps not as clear as I might like.

One specific question - why don't they give an indication of the possible error as was traditional with polling data?
 
The probabilities encapsulate the error. The probabilities always have to add to 1, remember. So the more uncertain the prediction, the closer the probabilities will get to random chance.

It might help to think of the results as the opposite of a usual poll. It tells you the error, and the predicted result is kind of the afterthought!
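That relationship between uncertainty and the published probabilities can be sketched for a two-party race, assuming a normal error on the predicted lead (an assumption made purely for illustration):

```python
# Two-party sketch: win probability from a predicted lead and an assumed
# normal error on that lead. As the error grows, the probabilities drift
# towards 50/50 -- and each pair always sums to 1.
import math

def win_prob(lead: float, sigma: float) -> float:
    """P(the leading party wins), via the normal CDF built from math.erf."""
    return 0.5 * (1.0 + math.erf(lead / (sigma * math.sqrt(2.0))))

lead = 5.0  # party A predicted 5 points ahead
for sigma in (2.0, 5.0, 15.0):
    p = win_prob(lead, sigma)
    print(sigma, round(p, 3), round(1 - p, 3))
# The same 5-point lead means near-certainty under a small assumed error,
# but drifts towards an even chance as the assumed error grows.
```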
 