“Precision” Health Tools and… Increased Health Disparities?
Working from the perspective of public health, we have frequently expressed concern about the potential of precision health technology to exacerbate health disparities. Many of these discussions have focused on genomics-based approaches, such as the use of polygenic risk scores (PRS) for a wide array of diseases and health outcomes. Because of the underrepresentation of minority populations in basic research, there remains a lack of evidence on the clinical validity and utility of PRS and other genomic approaches in non-white populations. As a result, these scientific investments may benefit people of European ancestry more than people of racial and ethnic minority populations. Ongoing efforts, such as the National Institutes of Health’s All of Us Research Program and the National Heart, Lung, and Blood Institute’s Trans-Omics for Precision Medicine (TOPMed) program, are working to change this by enrolling far greater numbers of minority participants. However, a study recently published in the journal Science makes clear that some non-genomic precision health applications are also a concern and can have the unintended effect of worsening health disparities.
Unintentional but Systematic Racial Bias
The Science article showed that an algorithm-based risk determination tool, in wide use in U.S. health care, assigned a large proportion of black patients the same level of risk as white patients who were not as sick. As a result, the tool did not accurately indicate that many black patients actually needed greater present and future care. The study was conducted in a large hospital and focused on primary care patients enrolled from 2013 to 2015; it included 43,539 patients who identified as white and 6,079 who identified as black. Within the sample, 71.2% were enrolled in commercial insurance and 28.8% in Medicare; the average age was 50.9 years, and 63% were female. The study authors found that the algorithm produced a biased result: only 17.7% of black patients were flagged as needing additional help, when 46.5% should have been identified for extra care. Because of the hidden and inherent bias in the tool’s algorithm, and probably in several others like it, potentially millions of black patients have received less care than white patients at the same actual level of health.
How does this happen?
There is no reason to think that this algorithm was developed or implemented with any intent to discriminate or harm. In fact, the methods used explicitly excluded race from consideration. The tool was based, in part, on a relatively simple but flawed premise: people who had spent more money on health care would be sicker and would need more attention from the health system to avoid worse health outcomes in the future. The algorithm failed to consider that, because of limited access to health care and other reasons unrelated to need, black people may spend substantially less on health care than whites. In reality, health care costs are not a good proxy for present and future health care needs when comparing across racial and ethnic groups and socioeconomic circumstances. It can be strongly argued that an independent evaluation, conducted before the tool was implemented, might have prevented the bias in this and many other algorithm-based tools like it. However, this would require that the evaluators be given access to the data, formulas, and assumptions on which the tool was based. This generally does not happen, because these elements are usually protected by predictive tool developers as intellectual property or trade secrets. In hindsight, it is not difficult to dissect what went wrong in this case, but how can we prevent premature implementation like this from happening again?
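To make the mechanism concrete, here is a minimal, purely illustrative simulation. It is not the commercial tool from the Science study, whose data and formulas are proprietary; the groups, spending gap, and variable names are all hypothetical. The sketch trains a simple model to predict cost and shows that, when one group spends less at the same level of illness, its members must be sicker to receive the same risk score.

```python
# Purely illustrative simulation of proxy-label bias; this is NOT the
# commercial algorithm described in the Science study.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 50_000
group = rng.integers(0, 2, n)      # two hypothetical groups, 0 and 1
illness = rng.poisson(3.0, n)      # true health need (e.g., chronic conditions)

# Assumption: group 1 generates ~30% less cost at the same illness level,
# for reasons unrelated to need (e.g., barriers to accessing care).
spend_factor = np.where(group == 1, 0.7, 1.0)
cost = illness * 1_000 * spend_factor + rng.normal(0, 300, n)

# The model never sees race or group; it predicts future cost from prior
# utilization, which already reflects the group spending gap.
prior_utilization = 0.8 * cost + rng.normal(0, 300, n)
X = prior_utilization.reshape(-1, 1)
risk_score = LinearRegression().fit(X, cost).predict(X)

# Among patients with the same (high) risk score, group 1 is sicker:
# they had to be sicker to generate the same predicted spending.
high_risk = risk_score > np.percentile(risk_score, 90)
for g in (0, 1):
    mean_need = illness[high_risk & (group == g)].mean()
    print(f"group {g}: mean chronic conditions in top risk decile = {mean_need:.2f}")
```

In this toy example both groups have the same distribution of true need, yet the lower-spending group is both under-represented among patients flagged as high risk and sicker, on average, at any given score, mirroring the pattern reported in the Science study.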
The way to a solution
As precision health applications that use algorithms, big data, and artificial intelligence (AI) see greater use in health care practice, so grows the potential for substantial health benefits, but also for missed opportunities and even harms. In particular, AI-based systems are not as rigorously tested as other medical devices, and this is resulting in serious consequences. By entering the term “disparities” into the Public Health Genomics and Precision Health Knowledge Base’s “Search PHGKB” function, one can review numerous articles that identify and address these growing concerns about health equity. New precision health technology can only reach its intended goals if launched in a manner that fully embraces implementation science, including robust pre- and post-implementation evaluation. We believe three important steps are needed:
- Include independent assessment, evaluation, and health outcomes research as integral elements of precision health implementation
Clinical trials might provide a model to consider before implementing new precision health applications. For example, clinical trials are not implemented without first conducting independent reviews and certifications regarding the potential risks to human health as well as ethical, legal, and social implications. As the trial proceeds, interim health outcomes are deliberately assessed to determine whether harms or benefits are occurring. Such safeguards should be rigorous but need not be onerous. Unbiased pre- and post-implementation assessment and evaluation of new precision health applications are essential to determine the analytic validity, clinical validity, and clinical utility of their predictive functions and to identify unintended effects (a minimal sketch of one such check appears after this list).
- Enable access to proprietary data for algorithms in a manner that maintains integrity of intellectual property
It is understood that the private sector cannot develop precision health tools without the ability to protect the intellectual property and methods used to create them. At the same time, independent researchers cannot evaluate new applications, such as those that use algorithms, without having access to this critical information before and after implementation. As has been demonstrated in other fields, however, legal and data use agreements can be applied to enable access while ensuring appropriate data security and intellectual property protections. We are only aware of the bias found in the study described here because the unusual and progressive step was taken to allow independent researchers access to the algorithm and the health system data it used. The study authors are now working with the tool developer to correct the bias.
- Establish a private/public partnership to develop best practices for the effective and equitable implementation of precision health applications
In a recent blog, the authors of the Science article recommended steps to help confront these problems, including a new initiative to address racial bias in health care algorithms. We believe that a concerted effort is needed to address bias and increased health disparities not just in algorithms but across all precision health applications. We believe the best way to do this is to establish a multidisciplinary, private/public partnership, including health care providers and payers, academia, public health, and patient advocacy groups, that can meet and agree on best practices for implementation of precision health tools. These applications include those involving genomics, large and diverse data sources or “big data,” machine learning and artificial intelligence, and algorithm-based risk prediction tools. Issues addressed should include all aspects of potential bias, as well as privacy and other patient-centered matters. As an independent and objective body, public health could provide leadership in convening this diverse group. Such a collaboration could be extraordinarily effective, since all players, including industry partners, would share common goals and a wide range of benefits. We believe this group could develop best practices in a motivated and expeditious manner. This work could ultimately decrease costs and speed development of precision health applications by reducing uncertainty in the pathway to implementation and by decreasing liability concerns for application developers that follow best practices.
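As referenced in the first step above, one simple post-implementation check is to compare an independent measure of health need across demographic groups within each decile of a tool’s risk score, a check similar in spirit to the one the Science authors used to expose the bias. The sketch below is illustrative only; the function and variable names are hypothetical, the data are synthetic, and a real audit would use held-out clinical records under an appropriate data use agreement.

```python
# Minimal sketch of a group-wise audit of a risk score (illustrative only).
# score: predicted risk from the tool under evaluation
# need:  independent measure of health need (e.g., count of chronic conditions)
# group: demographic group, used only for the audit, never for scoring
import numpy as np
import pandas as pd

def audit_by_decile(score, need, group):
    """Return mean observed need per risk-score decile, broken out by group.

    Large, consistent gaps between groups within the same decile suggest the
    score understates need for one group.
    """
    df = pd.DataFrame({"score": score, "need": need, "group": group})
    df["decile"] = pd.qcut(df["score"], 10, labels=False)
    return df.pivot_table(index="decile", columns="group",
                          values="need", aggfunc="mean")

# Synthetic example data; a real audit would use actual patient records.
rng = np.random.default_rng(1)
n = 20_000
grp = rng.choice(["A", "B"], n)
need = rng.poisson(3.0, n)
score = need * np.where(grp == "B", 0.7, 1.0) + rng.normal(0, 0.5, n)
print(audit_by_decile(score, need, grp))
```

If one group’s mean need is consistently higher than the other’s within the same risk decile, then at any fixed intervention threshold the tool would systematically under-serve that group, which is exactly the kind of unintended effect that pre- and post-implementation evaluation should surface.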
Trustworthiness is Critical
Ultimately, the key to the success of precision health technology is scientific integrity based on evidence. It is critical that doctors can depend on their tools as valid and predictive within an acceptable level of confidence. Health care payers need to ensure that resources are being used effectively and without discrimination. Most importantly, every patient should have reason to trust that their health care system functions in an unbiased manner.