A while back I mentioned that we were in new territory with COVID-19: we’ve had similar or bigger pandemics before, but we’ve never had one where we adopted widespread countermeasures and then studied how well they worked. This was an offhand comment and I got curious today if it was really true. Spoiler alert: It is.
I went looking for empirical studies done before 2020 since I was interested only in pandemics prior to COVID-19. It didn’t take long to come across “Effectiveness of workplace social distancing measures in reducing influenza transmission: a systematic review,” which was published in 2018. It’s a review of previous studies of social distancing and other countermeasures in “socially dense community settings, such as schools or workplaces,” and it’s far more comprehensive than anything I could do myself. Here’s what they found:
- A grand total of 15 studies.
- Of which, only three were epidemiological (the others were modeling studies).
- Of which, the overall risk of bias was rated “critical” in one, meaning the “study is too problematic to provide any useful evidence on the effects of intervention.”
- That leaves two studies. The risk of bias was rated “serious” in both, but at least they weren’t completely useless.
- Of those, one was a study of ordinary seasonal influenza in 2007-08 and looked only at employees who could work at home vs. those who couldn’t.
The sole remaining study was conducted on a group of 1,015 military personnel in Singapore during the 2009 H1N1 pandemic. It compared a control group (no intervention) to a “normal” group (individuals were provided general health education on respiratory and hand hygiene and were advised to seek medical care if ill) and an “essential” group (which received enhanced surveillance with isolation, segregation, and personal protective equipment). The study found that 44 percent of the control group contracted the flu compared to 17 percent of the normal group and 11 percent of the essential group. It also found that the pandemic peaked earlier in normal units compared to the control group.
So that’s it: a single study that compared groups to each other but didn’t try to establish the efficacy of individual interventions. Normal hygiene advice apparently had a big effect, reducing incidence from 44 percent to 17 percent, but only the “essential” group practiced any kind of social distancing, and it reduced incidence of the flu only from 17 percent to 11 percent. This may sound worthwhile even though it’s small, but don’t forget that this study had “serious” problems with possible bias.
Effectively, this means we’re flying blind. We have loads of modeling estimates of various interventions:
- Stay-at-home orders
- School closings
- Social distancing
- Mask wearing
- Bans on large gatherings
- Closure of restaurants
- Closure of non-essential businesses
- Mass testing and contact tracing
For practical purposes, however, we have no reliable empirical data at all on any of these measures. It makes sense that some or all of them have an effect—and it’s probably safe to say that all of them put together have an effect—but we have no idea which particular ones have a large effect vs. which ones have a small effect. And given the vast range of assumptions used in various models, it’s not clear to me that we can trust models to tell us anything useful at the level of specific interventions.
It’s astonishing how much we’re learning about COVID-19 on literally a daily basis. In some ways it works like any other pandemic, but in other ways it truly appears to be unique. Given this, and given the complete lack of good empirical studies of past pandemics, we should be very, very cautious about insisting on any particular intervention as either critical or dispensable. We just don’t know.