2018-10-30 • Frank Wen
We recently published a study attempting to measure vaccine-driven selection in seasonal influenza. This project was partly motivated by the question: how can we measure the flu vaccine’s indirect effects? In other words, how can we measure a reduction in prevalence due to herd immunity? This is hard to answer because prevalence itself is difficult to measure. Instead of measuring prevalence, we examine selection: indirect effects (as well as direct effects) should, in theory, manifest as changes in the relative prevalence of flu strains.
Our conceptual model was simple: if a vaccine protects more against some flu strains than others, then the less affected strains should be relatively more common in more vaccinated populations. Vaccine effectiveness studies suggest that the trivalent inactivated vaccine is most effective against H1N1, followed by B and then H3N2. This ordering suggests that more vaccinated populations should, on average, have more H3N2 relative to the other subtypes than less vaccinated populations. In other words, since the vaccine protects least against H3N2, more vaccination should increase the abundance of H3N2 relative to other subtypes, even if vaccination decreases the prevalence of H3N2 overall.
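The conceptual model can be sketched numerically. In the toy calculation below, each subtype’s incidence is scaled down by direct protection only (coverage × VE), then normalized to relative frequencies. The VE values and equal baseline incidences are illustrative assumptions, not estimates from the study; the point is only the qualitative effect, that the least-protected subtype gains in relative frequency even as its absolute incidence falls.

```python
# Toy sketch of the conceptual model: differential vaccine effectiveness
# shifts the *relative* frequency of subtypes, even as absolute incidence falls.
# VE values and baselines below are illustrative assumptions only.

def subtype_frequencies(coverage, baseline, ve):
    """Scale each subtype's baseline incidence by direct protection
    (coverage * VE), then normalize to relative frequencies."""
    incidence = {s: baseline[s] * (1 - coverage * ve[s]) for s in baseline}
    total = sum(incidence.values())
    return {s: incidence[s] / total for s in incidence}

baseline = {"H1N1": 1.0, "B": 1.0, "H3N2": 1.0}  # equal baselines (assumption)
ve = {"H1N1": 0.6, "B": 0.5, "H3N2": 0.3}        # VE ordering: H1N1 > B > H3N2

low = subtype_frequencies(0.1, baseline, ve)   # lightly vaccinated population
high = subtype_frequencies(0.5, baseline, ve)  # heavily vaccinated population

# H3N2 is relatively more common where coverage is higher...
assert high["H3N2"] > low["H3N2"]
# ...even though its absolute incidence is lower there:
assert (1 - 0.5 * ve["H3N2"]) < (1 - 0.1 * ve["H3N2"])
```

This deliberately ignores herd immunity, cross-immunity, and transmission dynamics; it only illustrates why differential VE alone predicts a frequency shift.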
Using surveillance data from the WHO and CDC, we found significant support for vaccine-driven selection of H3N2 relative to H1N1, consistent with our expectations (see figure). Relative to H1N1, H3N2 was less frequent in Europe (where vaccination rates are lower), compared to the United States (where vaccination rates are higher). These results suggest that (1) vaccine-driven selection between H3N2 and H1N1 does occur, and (2) the flu vaccine does have measurable effects.
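The kind of comparison involved can be illustrated with an odds ratio: the odds of an H3N2 case versus an H1N1 case in one region, divided by the same odds in another. The counts below are hypothetical, purely for illustration; they are not the study’s data, and this is not a claim about the study’s actual statistical methods.

```python
# Hedged illustration: odds ratio of H3N2 vs H1N1 between two regions,
# with a Wald 95% CI on the log scale. All counts are hypothetical.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR of (a/b) relative to (c/d), with an approximate 95% CI."""
    or_ = (a / b) / (c / d)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, (lo, hi)

# hypothetical counts: (H3N2, H1N1) in a high- vs a low-coverage region
or_, (lo, hi) = odds_ratio_ci(1200, 800, 900, 1000)
# OR > 1 would indicate H3N2 is relatively more common in the
# high-coverage region, consistent with vaccine-driven selection.
```

An OR whose confidence interval excludes 1 is the sort of signal the comparison looks for; real analyses would also have to adjust for season and surveillance differences.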
Not all of our results were consistent with vaccine-driven selection, however. For example, we found support for selection for B relative to H3N2, opposite to expectations based on measured vaccine effectiveness (VE) (see figure). This discrepancy could simply be due to inaccurate VE estimates. Other epidemiological factors that we don’t account for (e.g., intrinsic differences in transmission rates between subtypes and competition between subtypes through cross-immunity) likely also play a role. Perhaps most importantly, differences in surveillance protocols likely introduce bias in ways that we cannot correct for, because we simply don’t know some details of surveillance.
Flu surveillance is complex. There currently exists no apparatus for determining the true incidence of flu. Instead, flu cases are tallied by testing patients who present with flu-like symptoms. National health agencies define what those flu-like symptoms are, and how often to test patients for a flu infection. However, these definitions and procedures vary between countries. For example, in Germany, testing for flu occurs in patients with more severe respiratory disease than in the United States. Thus, in Germany, strains causing more severe disease (e.g., H3N2) might be overrepresented relative to the United States. However, it’s not clear how to correct for this potential bias. Other sources of bias are more difficult to pinpoint. Are certain age groups represented more in some countries’ surveillance, and would this affect strain frequencies? Does different health-seeking behavior across countries introduce additional bias?
I’d like to repeat this work someday using data that we have high confidence in. Two things stand out. First, as stated before, improvements to surveillance (or at least better documentation of it) would help us reduce potential methodological bias. Second, reliable measurements of vaccine effectiveness would help us better calibrate expectations. The test-negative design is widely used to measure VE but is susceptible to bias. These improvements would help us better understand the broader impact of intervention strategies.