I don't want an argument with you about that. Again, I was responding to a very specific idea, one that gave credit to the vaccines for the fall in numbers. I very much hope that the vaccines have been a big factor, but you cannot just ignore the fact that Portugal's wave has been almost identical to the UK's this year with a very, very different vaccine roll-out.
I'll take that as a no, then, and move away from that area. It's a shame, though, because I cannot take very seriously any conclusions you form by comparing waves in different countries, given that you have a long track record of preferring to attribute declines to factors not involving human behaviour, the number of contacts people are having with each other, etc.
As far as studies looking into the effects of vaccines go, they do tend to make my brain hurt when it comes to some of the detail. I expect that in many cases their methodologies are not bad, although not without limitations, which the authors tend to acknowledge. Those limitations may include the study period often coinciding with a marked decrease in viral prevalence overall, which limits how fully the vaccines can really be tested via 'the proof of the pudding being in the eating'. In other words, in order for me to form solid and long-lasting conclusions about the protection offered, I require the vaccines to prove that they will prevent sizeable waves from occurring over a sustained period of time. And even then, unknowns about variants and how long immunity lasts before waning mean that I will struggle to find really solid ground to rest on for prolonged periods.
In the meantime, that does not mean I utterly reject reports that show very high percentages, the likes of which probably informed platinumsage's stance on the modelling which I took issue with. I'm just a bit cautious about taking the very highest numbers shown as gospel and feeding them into modelling exercises. I tend to hold the view that central modelling scenarios should use more conservative numbers, although I would of course be happy to see further exercises that explore a broader range of different numbers so we can explore a wider landscape of possibilities.
Here is an example of a Pfizer study from Israel that seems quite recent and came up with very high numbers:
Two doses of the Pfizer-BioNTech covid-19 vaccine provide more than 95% protection against infection, hospital admission, and death, including among older people, a peer reviewed study from Israel has found. A single dose of the Pfizer vaccine was associated with 58% protection against...
www.bmj.com
Seven days after a second dose, the Pfizer vaccine provided everyone aged over 16 with 95.3% protection (95% confidence interval 94.9 to 95.7) against infection, 97.2% (96.8 to 97.5) protection against hospital admission overall, 97.5% (97.1 to 97.8) protection against severe and critical hospital admission, and 96.7% protection (96.0 to 97.3) against death. By 14 days after the second dose, protections for everyone over 16 increased to 96.5% (96.3 to 96.8) against infection, 98.0% (97.7 to 98.3) against hospital admission overall, 98.4% (98.1 to 98.6) against severe and critical hospital admission, and 98.1% (97.6 to 98.5) against death.
People over 85 had 94.1% (91.9 to 95.7) protection against infection, 96.9% (95.5 to 97.9) against hospital admission, and 97% (94.9 to 98.3) against death seven days after their second dose. Those aged 16-44 had 96.1% (95.7 to 96.5) protection against infection, 98.1% (97.3 to 98.7) against hospital admission, and 100% against death.
When it comes to variants of concern other than the Kent variant, I am even more cautious due to a lack of data, or a lack of enough time and/or enough variant infections, with which to draw solid conclusions. In terms of this particular India variant my expectations are almost completely blank, and it sounds like authorities here will have to find out the hard way in the coming weeks. If it takes a very long time to deduce things via real-world hospital data then I'll take that as potentially being a very good sign, as opposed to bad news on that front, which might be expected to arrive far more quickly via the sorts of means I quoted documents about earlier. Another reason why it might take far longer than ideal to find out would be if the sorts of data the modelling groups have said they need cannot be provided reasonably quickly. Until then we may just go round in circles a lot, and it would be entirely wrong of me to speak with any certainty.
An example of what I am on about, in terms of the sort of data/analysis the SAGE modelling group say they need:
There is currently insufficient evidence to indicate that any of the variants recently detected in India cause more severe disease or render the vaccines currently deployed any less effective. It is also too early to comment on the impact of B.1.617.2 on hospital admissions or deaths; reported COVID-19 hospitalisations in Bolton are concerning. Only accumulating more data on B.1.617.2 will provide this much needed clarity. If there were a time series of the total number of hospitalised B.1.617.2 cases according to their vaccination status, SPI-M-O would be much better placed to assess the threat that the variant poses.
(from
https://assets.publishing.service.g.../986709/S1237_SPI-M-O_Consensus_Statement.pdf )
We are certainly in a period where people need to take care not to interpret "no evidence to suggest" type statements as offering proof that something is not the case. And as I am alert to this, I am also interested in whether I should read anything into the different use of language at the start of that paragraph: "insufficient evidence", as opposed to the more frequently seen "no evidence".