The recent news that a decades-old steroid drug, dexamethasone, showed a one-third reduction in mortality for patients with severe Covid-19 infections has generated immense hope in the medical community. The finding came from the Recovery trial, a large randomized controlled trial in Britain.
The reason doctors are so excited is that randomized controlled trials are the gold standard in medicine. Using randomization (by, say, flipping a coin to assign patients to a new treatment or not) is the best way to determine whether treatments work.
Unfortunately, randomized trials take time — which is a problem when doctors need answers now. So doctors and public health officials have been turning to available real-world data on patient outcomes and trying to make sense of them.
But it can be hard for doctors to find the answers they need in this observational data because without randomization, hidden biases can lead to misleading results. For example, if patients who receive a treatment tend to be sicker, we may incorrectly conclude that a treatment harms patients. Concerns over these biases have led several observational studies about Covid-19 treatments to be heavily criticized.
Do we need to wait for randomized trials to act? Not necessarily. The key lies in a set of tools economists have been using for decades: natural experiments.
With natural experiments, you can get around many of the hidden biases that plague observational studies by looking for circumstances that happen by chance — events that cause patients to be essentially randomized to one treatment or another. One study used a national shortage of the drug norepinephrine as a natural experiment to assess the drug’s effect on the mortality of patients critically ill with sepsis, comparing mortality rates of otherwise similar sepsis patients in the months before, during and after the shortage.
Although natural experiments are commonly used by economists, they’ve been infrequently used in medicine, where randomized trials have appropriately dominated.
“Large-scale randomized evaluations have been less common in economics, necessitating that economists identify often creative but sometimes narrow natural experiments to estimate the causal effects of treatments,” said Amitabh Chandra, an economist at the Harvard Business School and the Kennedy School of Government.
Ashish Jha, recently appointed the dean of the Brown University School of Public Health, said that while “natural experiments have causal interpretations, typical associational studies in medicine do not, which may make some medical researchers less comfortable interpreting the results.”
But randomized trials may not always be available. Or they may be too cumbersome to perform, or not ethical. Most doctors can relate to recent comments by the Food and Drug Administration commissioner Stephen Hahn in last week’s congressional pandemic hearing. “In a rapidly moving situation like we have now with Covid-19,” he said, decisions are made “based on the data that’s available to us at the time.”
So, until the trials arrive, here are some ideas for how natural experiments could help:
Timing of treatment
Critically ill Covid-19 patients hospitalized in the days before the Recovery trial results were announced would be expected to have worse outcomes than otherwise similar patients hospitalized in the days afterward, assuming doctors suddenly started using more dexamethasone (which they almost certainly have). Consistent with the trial results, we would expect to see an effect only in Covid-19 patients who were critically ill.
Based on the trial’s results, it’s also reasonable to think that once patients are critically ill, earlier treatment with dexamethasone might lead to better outcomes (the steroid has not been shown to be effective with patients who are not on respiratory support). This hypothesis could be tested by evaluating whether mortality rates were lower for patients hospitalized with severe Covid-19 infection in the one to two days before the Recovery announcement compared with otherwise similar patients hospitalized in the week prior — who would, by chance, be getting the drug later in their disease course.
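The before-and-after comparison described above boils down to a simple two-group contrast in mortality. Here is a minimal sketch in Python; the cohort sizes, mortality rates, and random seed are all invented for illustration, not drawn from any trial or registry:

```python
import random

random.seed(0)

def simulate_cohort(n, mortality_rate):
    """Return the number of deaths in a hypothetical cohort of n patients."""
    return sum(random.random() < mortality_rate for _ in range(n))

# Invented numbers: otherwise similar critically ill patients admitted
# just before vs. just after a treatment-changing announcement.
n_before, n_after = 500, 500
deaths_before = simulate_cohort(n_before, 0.30)  # assumed pre-announcement rate
deaths_after = simulate_cohort(n_after, 0.20)    # assumed post-announcement rate

rate_before = deaths_before / n_before
rate_after = deaths_after / n_after
print(f"mortality before announcement: {rate_before:.3f}")
print(f"mortality after announcement:  {rate_after:.3f}")
print(f"estimated effect (difference): {rate_after - rate_before:.3f}")
```

In a real analysis the two cohorts would have to be shown to be comparable on observed characteristics, and the difference would carry a confidence interval; the point of the natural experiment is that the timing of admission, not the patient's condition, determines exposure.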
Staggered changes in protocols
Changes over time in practice patterns within hospitals could also be used to better estimate the effectiveness of Covid-19 treatments. Hospitals have varied considerably in their treatment protocols for Covid-19, and these protocols have changed in the months since the pandemic began. The staggered change in treatment patterns across hospitals could allow researchers to estimate the effectiveness of Covid-19 treatments by using each hospital, at a different period in time, as its own control.
For example, if dexamethasone is indeed effective, we would expect that hospitals that quickly incorporated the drug into treatment protocols after the Recovery trial announcement would experience earlier reductions in Covid-19 deaths than hospitals adopting it later. The key to this natural experiment would be to verify that the average characteristics of coronavirus patients within a hospital would be unchanged in the short interval before protocols were changed versus afterward.
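Using each hospital as its own control in this way is what economists call a difference-in-differences design: subtract each hospital's own pre-period mortality from its post-period mortality, then compare those changes between early and late adopters. A toy sketch, with entirely invented mortality rates:

```python
# Hypothetical difference-in-differences sketch: one hospital adopts the
# new protocol early, another late. All rates below are made up.
mortality = {
    # (hospital, period): mortality rate among severe Covid-19 patients
    ("early_adopter", "pre"): 0.32,
    ("early_adopter", "post"): 0.22,
    ("late_adopter", "pre"): 0.30,
    ("late_adopter", "post"): 0.28,
}

# Each hospital serves as its own control: take its within-hospital change...
change_early = mortality[("early_adopter", "post")] - mortality[("early_adopter", "pre")]
change_late = mortality[("late_adopter", "post")] - mortality[("late_adopter", "pre")]

# ...then difference the changes across hospitals. Trends shared by both
# hospitals (e.g., improving supportive care) cancel out.
did_estimate = change_early - change_late
print(f"difference-in-differences estimate: {did_estimate:.2f}")
```

The design rests on a parallel-trends assumption: absent the protocol change, both hospitals' mortality would have moved together, which is why verifying that patient mix stayed stable around the adoption date matters.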
Just above or below a threshold
The sometimes-arbitrary thresholds that hospitals use to decide which patients receive specific treatments could be used to better estimate the effectiveness of Covid-19 treatments. Treatment decisions often rely on clinical cutoffs, such that patients immediately above or below a threshold have very different likelihoods of treatment despite being otherwise similar.
For example, if a hospital decides that every person needing more than six liters per minute of oxygen will go on a ventilator, patients using six liters will go on a ventilator, but those using five liters will not — even though they are much the same.
The receipt of treatment is therefore effectively random for those patients near the cutoff, something economists call a regression discontinuity. To the extent that hospital protocols specify thresholds above or below which a scarce treatment can be given, knowledge of these thresholds could be used to demonstrate whether differences in treatment around the threshold correspond to differences in clinical outcomes.
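A regression discontinuity analysis of the oxygen example might look like the following sketch: restrict to patients within a narrow band around the six-liter cutoff and compare outcomes just above versus just below it. The patient records, bandwidth, and threshold here are all hypothetical:

```python
# Hypothetical regression-discontinuity sketch around an oxygen cutoff.
# Assumption: patients needing more than 6 L/min are ventilated, others
# are not, and patients near the cutoff are otherwise comparable.
THRESHOLD = 6.0   # liters per minute of oxygen
BANDWIDTH = 1.0   # only compare patients within 1 L/min of the cutoff

patients = [
    # (oxygen_need_L_per_min, died) - invented records
    (5.2, 0), (5.6, 1), (5.8, 0), (5.9, 0), (5.5, 0),
    (6.1, 0), (6.4, 1), (6.8, 1), (6.2, 0), (6.9, 1),
]

near = [(o2, died) for o2, died in patients if abs(o2 - THRESHOLD) <= BANDWIDTH]
below = [died for o2, died in near if o2 <= THRESHOLD]  # not ventilated
above = [died for o2, died in near if o2 > THRESHOLD]   # ventilated

rate_below = sum(below) / len(below)
rate_above = sum(above) / len(above)
print(f"mortality just below cutoff: {rate_below:.2f}")
print(f"mortality just above cutoff: {rate_above:.2f}")
print(f"discontinuity estimate: {rate_above - rate_below:.2f}")
```

The choice of bandwidth is the key judgment call: a narrower band makes the two groups more comparable but leaves fewer patients, and a real analysis would also fit a trend in the running variable (oxygen need) on each side of the cutoff rather than simple averages.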
These are just a few ideas to demonstrate the types of questions we could be asking to avoid the common biases encountered when using observational data. But it’s important to remember that natural experiments are happening by accident all the time, and many have already happened — it’s just a matter of looking for them. Randomized trials will remain the gold standard in health care, but just because these methods may be new to some health researchers does not mean we shouldn’t be using them when trials either take too long or cannot be done at all.
Anupam B. Jena, M.D., Ph.D., is an economist, a physician, and the Ruth L. Newhouse Associate Professor at Harvard Medical School. Follow him on Twitter at @AnupamBJena. Christopher M. Worsham, M.D., is a pulmonologist and critical care physician at Harvard Medical School. Follow him on Twitter at @ChrisWorsham.