The number of kids dying from influenza in the 2023-2024 season has set a new record for a regular flu season, after one new death was reported last week, according to the Centers for Disease Control and Prevention (CDC).
There were 200 pediatric flu-related deaths in the 2023-2024 season, compared to the previous high of 199 during the 2019-2020 season.
About 80% of the kids that died from flu this season were not fully vaccinated against influenza, CDC data shows. Nearly half of the children had at least one pre-existing medical condition.
Sigh, of course that’s happening.
Sounds like a statistical anomaly:
"About 80% of the kids that died from flu this season were not fully vaccinated against influenza, CDC data shows. Nearly half of the children had at least one pre-existing medical condition.
Children up to 8 years old receiving their first flu shot should receive two doses if they previously have not, the CDC notes."
So 200 kids died, 40 were fully vaccinated and died anyway, 160 “were not fully vaccinated”.
But “fully vaccinated” means getting 2 shots. So of the 160, how many were in the process of getting both, but got sick and died before the 2nd shot?
No way to know from this article.
The earliest age the CDC recommends a flu vaccine is six months. The two-dose recommendation applies only to a child's first-ever flu vaccination, whether that's a baby getting their first shot or a five-year-old getting one for the very first time in their life. If a child has had even one flu shot at any point before, a single dose this year means they are fully vaccinated.
Kids 8 and under who hadn't yet had their second dose are such a weird subgroup to focus on.
Even if you were right and this were relevant, it would not be a statistical anomaly; it would be a methodology failure.
Statistical anomaly in the sense that they're talking about 160 deaths out of a population of ~46.6 million kids.
https://www.childstats.gov/americaschildren/demo.asp
22.4 million aged 0 to 5
24.2 million aged 6 to 11. So 160 / 46,600,000 ≈ 0.0000034335, which is essentially a rounding error.
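For anyone who wants to check the arithmetic, here's the same calculation in a couple of lines of Python (the population figures are just the childstats.gov numbers above):

```python
# Deaths per capita among US kids, using the figures cited above.
deaths = 160                           # not-fully-vaccinated pediatric flu deaths
population = 22_400_000 + 24_200_000   # ages 0-5 plus ages 6-11

rate = deaths / population
print(f"{rate:.10f}")  # ~0.0000034335, i.e. about 3.4 deaths per million kids
```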
Sorry, I think you need to brush up on statistics. The relevant measurement here is the variance (variation? variability? whatever the term is officially called) of the statistic, not its raw size. Using the variance and the historical average of the deaths-per-capita statistic, you can calculate the likelihood of the current deaths per capita taking this value if nothing had changed. If that likelihood is sufficiently low (5% or less in most scientific fields), the result is declared significant: it differs from what we would expect under the status quo, and we can say so with high (>95%) confidence. To learn more about this "predict the normal bounds, then go 'whoa, that's weird' when the result falls outside them" method, look up "null hypothesis", or better yet, "statistical significance".
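If you want to see what that test looks like in code, here's a minimal sketch; the past-season counts are invented placeholders, and only the 200 figure comes from the article:

```python
# Sketch of the significance check described above: compare this
# season's count to the mean and spread of past seasons.
# The past-season counts here are hypothetical, for illustration only.
from statistics import mean, stdev

past_seasons = [188, 144, 199, 110, 176]  # made-up historical counts
current = 200

mu, sigma = mean(past_seasons), stdev(past_seasons)
z = (current - mu) / sigma
print(f"z = {z:.2f}")  # |z| above ~2 is roughly the 5% "that's weird" threshold
```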
To give a practical example: the number of deaths from car accidents is fairly low per capita, but since we have a very large amount of data available, the estimate has low variance and we can calculate the rate very accurately. If you look up a graph of car deaths per capita over time, each year's rate is only something like 0.001%, but the year-to-year variation is small, because with so much data the little bits of randomness all even out. We can then compare, for example, car deaths per capita on streets with crosswalks vs. without, and even though both rates are a tiny fraction of a percent, because both are measured so precisely we can make confident assessments of the difference.
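A rough sketch of that comparison, with completely made-up counts, shows how two tiny rates can still differ significantly when the samples are huge:

```python
# Two-proportion z-test on hypothetical crosswalk data: both rates are
# tiny fractions of a percent, but the populations are large enough
# that the difference is still statistically detectable.
from math import sqrt

deaths_a, pop_a = 520, 40_000_000   # streets with crosswalks (made up)
deaths_b, pop_b = 610, 40_000_000   # streets without crosswalks (made up)

p_a, p_b = deaths_a / pop_a, deaths_b / pop_b
p_pool = (deaths_a + deaths_b) / (pop_a + pop_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / pop_a + 1 / pop_b))
z = (p_a - p_b) / se
print(f"z = {z:.2f}")  # ~ -2.7: significant at the 5% level despite the tiny rates
```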