
Omicron news

I'd also be very interested to see a proper direct comparison between modelled and actual outcomes for previous waves.
 
This is the problem. Their output is presented as covering a range of scenarios, and the impression given is that this is the range of likely scenarios, whereas it's actually a subset of pessimistic scenarios.
Does that mean that you think I won't be able to find any modelling scenarios that showed better outcomes than actually turned out to be the case?

I'll probably start looking at this with the most recent (before Omicron) modelling first - i.e. first I will look at the modelling of the Delta wave post-'freedom day' and compare it to what actually happened. I know that some modelling they did before the summer relaxation of measures showed worse outcomes than happened, in great part because their scenarios assumed a quicker return to normal behaviour than actually occurred at first. But I think they did a further set of modelling after that reality had become clearer, I will check.
 
I'd also be very interested to see a proper direct comparison between modelled and actual outcomes for previous waves.
Well you know I won't be able to take a proper academic stab at that, just cruder attempts at such stuff. But I'd be surprised if there have been no academic papers on that theme, though whether I have time to find any of them is another question.
 
I'd also be very interested to see a proper direct comparison between modelled and actual outcomes for previous waves.

If you fancy something to read, this paper compares a couple of models used on the Spanish covid data.

 
In part-answer, some models actually didn't perform too badly - for a given set of conditions the relevant scenario offered largely played out with a not unreasonable level of concordance.

Related, though not directly addressing the question, you might find this interesting.
'Two perspectives on the use of modelling during the pandemic', (SPI-M).
 
Well you know I won't be able to take a proper academic stab at that, just cruder attempts at such stuff. But I'd be surprised if there have been no academic papers on that theme, though whether I have time to find any of them is another question.
It's one of the things I keep seeing elsewhere, said by those who think the whole covid thing has just been a fuss about nothing, with everyone drummed into paranoia by the government/scientists/communists/big pharma or whatever. It's stated as fact that all the previous alarming projections presented to the public turned out to bear no resemblance to what actually happened. But I've not had the time or skills or energy to go back through it and try to get a handle on how true that is - i.e. is it complete nonsense, or slightly true, or kind of true but only if you ignore X or Y.
 
Well I certainly don't intend to spend any time looking at it from the fucking stupid angle of those people. There is no huge need to study the detail when it comes to that, since the number of deaths seen in the waves, number of hospitalisations etc was quite sufficient to see why the initial waves were a big deal that governments could not ignore. We saw how awful things were getting before lockdown and massive behavioural changes kicked in. And many of the real data totals got rather large despite the fact that rather strict measures were eventually implemented each time. But the extremists who try to suggest all the modelling was just propaganda also tend to be in complete denial about what's shown by actual data that measures what happened, giving me little reason to attempt to argue sincerely with them about the detail of modelling.

Plus when it comes to the detail, it's not like any of the modelling exercises said 'this is what is going to happen'; they are mostly all exercises in what sorts of curves, peaks and totals you get when you change various modelling input parameters. Done so that policy makers get an idea of what sort of effects they might expect if they implement various strengths of measures, and what to expect with variants with different transmission and immune escape levels, different pace of return to normal behaviours by the population, different amounts of waning immunity over time etc. Or stuff designed to see what sort of reasonable worst case scenarios need to be planned for in advance, e.g. in advance of winter. As such I've always struggled to know which bits of modelling documents to quote, and have ended up posting numerous graphs and explanations as a result, as well as going on about confidence intervals and paying attention to ranges rather than single numbers.
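To make the 'change the input parameters and see what curves come out' idea concrete, here is a deliberately toy sketch. It is a minimal discrete-time SIR model with invented numbers - nothing like the actual models the SPI-M groups use - just to show how a single exercise sweeps one assumption (transmissibility here) and reports the resulting peaks and totals rather than making one prediction:

```python
# Toy discrete-time SIR model: invented parameters, for illustration only.
# The point is the sweep at the bottom, not the model itself.

def sir_run(beta, gamma=0.2, days=300, n=1_000_000, i0=100):
    """Run a crude SIR simulation; return (peak infections, total ever infected)."""
    s, i, r = n - i0, i0, 0
    peak, total_infected = i0, i0
    for _ in range(days):
        new_inf = beta * s * i / n   # new infections this step
        new_rec = gamma * i          # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        total_infected += new_inf
        peak = max(peak, i)
    return peak, total_infected

# Sweep a range of transmissibility assumptions, as the modellers do,
# rather than committing to a single 'prediction'.
for beta in (0.25, 0.35, 0.45):
    peak, total = sir_run(beta)
    print(f"beta={beta}: peak={peak:.0f}, total={total:.0f}")
```

Even in a toy like this you can see how different input assumptions give very different peaks and totals, which is why a document full of scenarios resists being summarised by a single number.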

Anyway since the latest modelling we have seen this time around is from the London School of Hygiene and Tropical Medicine, I decided the first summer 2021 modelling I would look at was from them ( https://assets.publishing.service.g...rior_to_delayed_step_4.2__7_July_2021__1_.pdf ). Here is my summary of my opinion about it now that we have the benefit of data hindsight regarding most of that period:

Some of the scenarios they presented did a reasonably good job of coming out with total estimated numbers of infections, hospitalisations and deaths for the July-December 2021 period that are in the same realm as the real totals for the period have turned out to be for England. Some other scenarios/demonstrations of what happens when you change one or two parameters were wider of the mark when it came to totals, but that's normal enough and demonstrates that they covered a fair range of different possibilities in both directions, not just the most extremely bad ones, although there were certainly a lot of those included.

In terms of the peak levels their modelling came out with, as opposed to totals, they tended to come out with peak levels that were notably higher than what actually happened. And they didn't really get the curve shape of what was seen from July to December right either, although their modelling that included the effects of waning vaccine effectiveness did manage to better hint at the later curve shape and later persistence of the wave. But it still featured a larger initial peak and a smaller resurgence relative to that peak than was actually seen. However those same scenarios where they included assumptions about waning effects of vaccines were really far wide of the mark when it came to the various totals for the whole period. But that's not too surprising given that they said in the document that their method of accounting for waning probably wasn't very good and would need later refinement.

The scenarios that featured totals that ended up close to the real totals seen managed this despite getting the peak sizes and shape of wave wrong because two wrongs ended up making a right - in reality the first summer peak wasn't as high as their modelling tended to show, but after those peaks the wave then persisted at higher levels than their model showed.

Trying to put that into words fairly turned out to be way more tedious than the process of re-reading that modelling document, comparing it to actual data and forming my conclusions in my mind. And it starts to remind me of all the other sorts of blah blah blah I inevitably end up coming out with when I try to describe modelling exercises in any detail. Partly because sometimes the tables of numbers and the graphs are a better way to put it than all these words, but also because they cover so many scenarios and 'what ifs when different parameters are changed' that I'm not reviewing one thing, I'm not comparing an attempt at a single prediction with what actually happened, so I can't come out with a single judgement and neat description of how well they managed.

I'm certainly happy to point out that I think modelling is more challenging now than it once was. At the start of the pandemic the assumptions about population susceptibility were really straightforward. These days there are so many more uncertain input parameters, such as all manner of aspects of the effects of vaccines, and properties of variants. And they certainly don't have any magic way to make all the right guesses and assumptions about those; they just want to pick a useful range of possibilities and then model the implications of those different possibilities.
 
I don't like the uncertainty, as soon as we have some facts about Omicron I will be happier.

There are a larger number of uncertainties this time. A lot of what will be learnt will be based on studying the real data that emerges as more people get infected, and some of these pictures will take some time to emerge sufficiently. South Africa can offer some clues ahead of the UK situation being able to do so, but there is also some wariness about whether everything observed in South Africa will hold fully true in the UK context. This is for a number of reasons including different demographics and different sizes and features of past waves in the two countries.

Modelling is only as good as the assumptions fed into it, so even though I am a fan of the utility of these models, I have to take them with an even larger pinch of salt this time. The people doing them know this all too well, and expect to have to update their stuff as more data emerges and better estimates of various things can be fed into the models. And I don't think certain possibilities are probed at all in the modelling we've seen this time so far, e.g. seeing what happens when different levels of clinical severity (and hospitalisation ratios) for Omicron are fed into those models. But perhaps some other groups' modelling that has been looked at by authorities but not seen by us yet does look at that side of things. Even if they have, it won't tell us anything about what the actual clinical picture of this variant is, only what sort of thing happens to the numbers if they play around with these values/guesses.
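As an aside, the kind of 'what if severity differs' probing described above can be as simple as this invented sketch. The infection total and the hospitalisation ratios are made up for illustration - they are not real Omicron estimates, and the exercise says nothing about the variant's actual severity, only what the hospital numbers look like under each guess:

```python
# Invented figures: sweep a range of assumed hospitalisation ratios against
# a fixed scenario infection total, to see how admissions scale with the guess.
projected_infections = 2_000_000          # made-up scenario total, not a forecast

for hosp_ratio in (0.005, 0.01, 0.02):    # assumed range, not real estimates
    admissions = int(projected_infections * hosp_ratio)
    print(f"assumed ratio {hosp_ratio:.3f}: ~{admissions} admissions")
```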

The current modelling only offers me vague clues about how strong the further restrictions will actually need to be. It offers clues about what sort of scale of challenges they have to consider to be possible at this stage of great uncertainty about many things. Real data that arrives once the wave is larger will provide a far more substantial guide, but unless it tells a happy story it will arrive too late for authorities to act on it with the right strength of measures at the right time.
 
Nadhim Zahawi, interviewed on Sky News' Trevor Phillips on Sunday show - a third of cases in London are now Omicron, and we now know that there are Omicron cases in hospital too.
 
Some of the scenarios they presented did a reasonably good job of coming out with total estimated numbers of infections, hospitalisations and deaths for the July-December 2021 period that are in the same realm as the real totals for the period have turned out to be for England. Some other scenarios/demonstrations of what happens when you change one or two parameters were wider of the mark when it came to totals, but that's normal enough and demonstrates that they covered a fair range of different possibilities in both directions, not just the most extremely bad ones, although there were certainly a lot of those included.

In terms of the peak levels their modelling came out with, as opposed to totals, they tended to come out with peak levels that were notably higher than what actually happened. And they didn't really get the curve shape of what was seen from July to December right either, although their modelling that included the effects of waning vaccine effectiveness did manage to better hint at the later curve shape and later persistence of the wave. But it still featured a larger initial peak and a smaller resurgence relative to that peak than was actually seen. However those same scenarios where they included assumptions about waning effects of vaccines were really far wide of the mark when it came to the various totals for the whole period. But that's not too surprising given that they said in the document that their method of accounting for waning probably wasn't very good and would need later refinement.

The scenarios that featured totals that ended up close to the real totals seen managed this despite getting the peak sizes and shape of wave wrong because two wrongs ended up making a right - in reality the first summer peak wasn't as high as their modelling tended to show, but after those peaks the wave then persisted at higher levels than their model showed.
That seems a fair summary. I have taken some of the graphs in the document you linked to, these ones on page 16:

[attached screenshot: the page 16 scenario graphs from the modelling document]

And roughly overlaid the "reality" graphs from the gov.uk dashboard with the scales more or less adjusted to be the same and it looks like this:

[attached screenshot: the same graphs with the real gov.uk dashboard data roughly overlaid]

What's interesting to me is that there is a second 'hump' around November that their "scenario 2" line predicts, and it's also visible in the real data, and they have got the timing pretty much right, even if the magnitude and overall direction of travel is different.
(maybe this should be in the nerdy details thread instead)
 
(maybe this should be in the nerdy details thread instead)
Yeah I think that if I feel the need to look at other models from the past I will try to stick the detail in that thread instead, as I'm not sure there will be much that is highly relevant to what's happening with Omicron now. I'll just say that the graphs you posted were ones I think they chose in order to demonstrate the large degree of uncertainty - how far apart two different modelling scenarios for that period could be. Some of the later ones, especially ones with waning included, show the later bump in a slightly more obvious way. Also note that models sometimes get the timing of some phenomenon right because they've likely got school holiday timing baked into them, and in this case they might have been reasonable in their assumptions about the timing of waning if not the magnitude. I don't know if they got the strength of seasonal factors right though, and I wouldn't be surprised if they failed to anticipate the strength of effects seen earlier via 'the pingdemic' (large numbers self-isolating acting as a sort of equivalent to a mini lockdown during a crucial period).

And just to repeat a point one last time: part of the utility of such modelling for authorities is not as a prediction of what will happen, but as a guide to what sort of effects you'd see if you change some parameters, whether they be parameters relating to restrictions and behaviour, or examples of what happens if the properties of a variant, and vaccines' protection against it, turn out to have changed in particular ways. If I were making big decisions I'd want to see such examples, even though they are not a guide to what those variant properties actually are in reality.
 
I'd also be very interested to see a proper direct comparison between modelled and actual outcomes for previous waves.

Not really possible as the outcome of those waves was affected by measures taken in response to the modelling.
 
Not really possible as the outcome of those waves was affected by measures taken in response to the modelling.

A mathematical model could be recalculated retrospectively based on those effects though, couldn't it? If the formula includes a term for measure X, you could plug in the value that measure actually turned out to have, even if that wasn't the value used at the time.
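A toy illustration of that retrospective idea, using a crude branching-process stand-in for a real model and entirely invented parameter values: the same formula is re-run with the contact reduction the measures actually achieved, rather than the value assumed when the modelling was done:

```python
# Hypothetical sketch, not any real group's model. All numbers are invented.

def toy_wave(r0, contact_reduction, generations=10, i0=1000):
    """Total infections over a fixed horizon from a crude branching process."""
    r_eff = r0 * (1 - contact_reduction)   # measure X enters the formula here
    total, current = i0, i0
    for _ in range(generations):
        current *= r_eff
        total += current
    return total

# Same model, two values of the measure's effect:
assumed_at_the_time = toy_wave(r0=1.5, contact_reduction=0.2)   # scenario input
recalculated = toy_wave(r0=1.5, contact_reduction=0.35)         # observed effect
print(f"assumed: {assumed_at_the_time:.0f}, recalculated: {recalculated:.0f}")
```

The comparison between the recalculated run and reality is then at least partly insulated from the 'measures responded to the modelling' circularity, though only for the parameters you can actually observe after the fact.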
 
The media aren't so hot at reporting all the key points from modelling documents. For example with the latest Omicron modelling we saw, in addition to not bothering to detail the confidence intervals, they also neglected to mention that the modellers didn't find that speeding up the booster campaign made much difference to the pandemic numbers for this wave. The percentage of people who take up the booster offer does make a notable difference to their results though. And it's always possible one of their assumptions in this regard is faulty; for example they've assumed that the most at-risk people have been prioritised for boosters and will already have received them, but I'm not sure that's actually reflected in the real world booster data - there are some alarming gaps.
 
So yeah, like I pointed to earlier, South Africa have some data delay issues. And even if they didn't have particular data issues right now, we can see from past patterns of daily figures that they have data-related dips at fairly frequent intervals, which should be taken into account when considering both an individual day's figures and those figures' short-term effect on 7-day averages.
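For anyone unfamiliar with why the 7-day average matters here, a quick sketch with invented daily figures: weekly reporting dips distort single-day numbers, the rolling average smooths them out, but a dip still drags the average for the whole following week:

```python
# Invented daily case figures with weekend-style reporting dips.
daily = [900, 950, 1000, 1050, 1100, 400, 300,    # dip at the end of week 1
         1200, 1250, 1300, 1350, 1400, 500, 400]  # dip at the end of week 2

def seven_day_average(series):
    """Trailing 7-day mean, defined from the 7th data point onwards."""
    return [sum(series[i - 6:i + 1]) / 7 for i in range(6, len(series))]

avgs = seven_day_average(daily)
print([round(a) for a in avgs])
```

Note how each average sits well inside the swing of the raw daily figures, which is the point: a single day's dip tells you little, but it still suppresses the 7-day average until it drops out of the window.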
 