Big Data in health care is growing, and it is coming from an increasing number of sources, including electronic health records, patient monitors, physical activity trackers, and smartphone applications. How accurate are these data? What biases do they contain? How do errors and biases ultimately affect patients?
Health data models have the potential to exacerbate existing health and health care disparities and to create new ones. A new Hastings Center project will develop an ethics framework to help scientists anticipate and prevent these problems. Diane M. Korngiebel is leading the project, which is funded by the National Human Genome Research Institute of the National Institutes of Health and the NIH Office of the Director.
“For ethical and transparent health data models, we have to think about what data are used, why, and how, and who’s involved in decision-making throughout,” says Korngiebel. “We already know that data skew toward white people of European descent and that nonwhites bear a disproportionate disease burden of chronic conditions like Type 2 diabetes. What are we doing during the model development process to ensure that health models recommend interventions equitably with the goal of addressing health and health care disparities?
“There is currently no framework that biomedical data scientists can use to ensure they document key decisions and consider the potential ethical and societal consequences of those decisions, including impacts on nonwhites and other underrepresented groups, during model development rather than after the fact.”
###
More information about the project can be found here: https:/
To interview Diane Korngiebel, please contact:
Susan Gilbert
Director of Communications
The Hastings Center
gilberts@thehastingscenter.org
845-424-4040 x244