A number of recent news articles have brought renewed attention to financial conflicts of interest in medical science. Physicians and medical administrators had financial ties to companies that they failed to declare to medical journals, even when writing on topics in which they clearly had monetary interests.
Most agree such lapses damage the medical and scientific community. But our focus on financial conflicts of interest should not lead us to ignore other conflicts that may be equally or even more important. Such biases need not be as explicit as fraud to do damage.
“I believe a more worrisome source of research bias derives from the researchers seeking to fund and publish their work, and advance their academic careers,” said Dr. Jeffrey Flier, a former dean of Harvard Medical School who has written on this topic a number of times.
How might grant funding and career advancement — even the potential for fame — be biasing researchers? How might the desire to protect reputations affect the willingness to accept new information that reverses prior findings?
In 2018, Cornell University removed Brian Wansink, who had gained a measure of fame as a food researcher, from all teaching and research positions, saying he had committed academic misconduct. (Photo: Mike Groll/Associated Press)
I’m a full professor at Indiana University School of Medicine. Perhaps the main reason I’ve been promoted to that rank is that I’ve been productive in obtaining large federal grants. Successfully completing each project, then getting that research published in high-profile journals, is what allows me to continue to get more funding.
A National Institutes of Health regulation defines a “significant financial interest” as any amount over $5,000. It’s not hard to imagine that being given thousands of dollars could influence your thinking about research or medicine. But let’s put things in perspective. Many scientists have been awarded millions of dollars in grant funding, which is incredibly valuable not only to them but also to their employers. Journals and grant funders like to see eye-catching work. It would be naïve to think that this doesn’t also subtly influence thinking and actions. In my own work, I do my best to remain conscious of these subtle forces and how they may operate, but it’s a continuing battle.
Getting positive results and successfully completing projects can sometimes feel like the only ways to achieve success in a research career. Just as those drivers can lead people to publish positive results, they can also nudge them not to publish null ones.
As a pediatrician, I’ve been acutely aware of concerns that relationships between formula companies and the American Academy of Pediatrics might be influencing policies on feeding infants. But biases can occur even without direct financial contributions.
If an organization has spent decades recommending low-fat diets, it can be hard for that group to acknowledge the potential benefits of a low-carb diet (and vice versa). If a group has been pushing for very low-sodium diets for years, it can be hard for it to acknowledge that this might have been a waste of time, or even worse, bad advice.
There are things we can do to help mitigate the effects of biases. We can ask researchers to declare their methods publicly before conducting research so that they can’t later change outcomes or analyses in ways that might influence the results. Think of this as a type of disclosure.
A 2015 study published in PLOS ONE tracked how many null results appeared in trials funded by the National Heart, Lung and Blood Institute before and after researchers were required to register their protocols on a public website. This rule was introduced in 2000, in part because of a general sense that researchers were subtly altering their work after it had begun in order to achieve positive results. In the 30 years before 2000, 57 percent of published trials showed a “significant benefit.” Afterward, only 8 percent did.
Moves toward open science, and toward changing an academic environment that currently incentivizes secrecy and the hoarding of data, are perhaps our best chance to improve research reproducibility. Recent studies have found that an alarmingly high share of experiments, when rerun, have not produced results in line with the original research.
We could also require disclosure of other potential conflicts just as we do with ties to companies. In early 2018, the journal Nature began requiring authors to disclose all competing interests, both financial and nonfinancial. The nonfinancial interests could include memberships in organizations; unpaid activities with companies; work with educational companies; or testimony as expert witnesses.
When results are clear and methods are robust, we probably don’t need to worry too much about subtle biases affecting researchers. When results are only marginally significant, however, and interpretations among experts differ, the biases of those who discuss them probably do matter.
Unfortunately, many results fall into this group. A new drug may be only minimally better than another, so anyone’s associations with the companies that produce them matter when people are making decisions about their use or writing guidelines. The overall effect of individual nutrient changes is small, but it might have built careers, so it’s easy for groups to be too dismissive of new findings that would require them to change their tune.
If someone has built a body of research investigating which drugs are better for treating certain infections, that person may discount research that argues we shouldn’t use antibiotics at all.
These conflicts aren’t all the same. Academic researchers are arguably running in many directions with the hope of generally heading toward the truth. Companies are, for the most part, interested in making money. That’s not a moral judgment; it’s economics. Because of this, it might make sense to put in some hard rules regarding companies. For example, people with financial connections to companies that make medications for high blood pressure should not write guidelines on the treatment of that condition, or be on boards or in positions where they are in charge of policies that could be influenced by those ties.
Even with those rules in place, however, we may need additional guardrails for scientists and physicians to make sure all research is as unbiased as possible. Such moves would protect us not only from financial conflicts of interest but from other types as well.