Public Health Impact of Digital Health: Reinventing the Wheel

Posted by Muin J. Khoury and W. David Dotson, Office of Genomics and Precision Public Health, Centers for Disease Control and Prevention, Atlanta, Georgia

“Digital health has potential to improve health management, but the current state of technology development and deployment requires a “buyer beware” cautionary note.” (Perakslis and Ginsburg, JAMA, 2020)

In a recent JAMA viewpoint, Perakslis and Ginsburg summarize the current state of digital health and discuss approaches to evaluating the benefits, risks, and value of these technologies.

Digital health includes numerous measurement technologies, such as personal wearable devices, internal devices, and sensors, that could be used to identify health status and help with disease diagnosis and management. Perhaps most importantly, digital health technologies provide new approaches to capturing continuous data (e.g., pulse, blood pressure) on individuals and populations, enhancing our current irregular and episodic approaches to health care measurement.

At the outset, the authors acknowledge that the evidence base in support of digital health is still in its infancy. No clinical guidelines currently incorporate digital health tools, few digital health technologies have been shown to be equivalent to accepted measurement tools, and only a few clinical trials are ongoing to assess the clinical utility of digital health.

Evaluating Digital Health

From our perspective, the evaluation of digital health technologies is not that different from the evaluation of new genomic tests. To the extent that a digital technology measures a disease or health state in a person or a community, it can be viewed as a "test" and thus subject to evaluation of its analytic validity, clinical validity, and clinical utility with observational studies and randomized trials. Of course, digital health technologies involve more than test results; they also involve other aspects, such as the movement of existing health-related data across health systems, patients, and populations. Those attributes are not discussed here.
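To make this concrete, here is a minimal sketch of what evaluating a digital health "test" can look like in practice: a hypothetical wearable blood-pressure sensor is compared against a reference cuff at a 140 mmHg systolic threshold, and its sensitivity and specificity are computed from the paired readings. The device, the threshold, and all readings are illustrative assumptions, not data from the JAMA viewpoint or from any validated product.

```python
# Minimal sketch (hypothetical data): treating a wearable blood-pressure sensor as a
# "test" for elevated systolic blood pressure and checking how well it agrees with a
# reference cuff at a 140 mmHg threshold.

def two_by_two(device_readings, reference_readings, threshold=140.0):
    """Tally true/false positives and negatives of the device against the reference."""
    tp = fp = fn = tn = 0
    for device, reference in zip(device_readings, reference_readings):
        device_pos = device >= threshold
        reference_pos = reference >= threshold
        if device_pos and reference_pos:
            tp += 1
        elif device_pos and not reference_pos:
            fp += 1
        elif not device_pos and reference_pos:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn

def sensitivity(tp, fn):
    # How often the test is positive when the condition is actually present
    return tp / (tp + fn)

def specificity(tn, fp):
    # How often the test is negative when the condition is actually absent
    return tn / (tn + fp)

# Hypothetical paired systolic readings (device, reference cuff), in mmHg
device_mmhg = [152, 138, 145, 128, 160, 135, 142, 118]
reference_mmhg = [149, 141, 143, 125, 158, 133, 137, 120]

tp, fp, fn, tn = two_by_two(device_mmhg, reference_mmhg)
print(f"sensitivity = {sensitivity(tp, fn):.2f}, specificity = {specificity(tn, fp):.2f}")
```

The same two-by-two logic carries over to clinical validity, where the comparison is made against the true health state (e.g., clinically confirmed hypertension) rather than against another measurement device.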

Just like genomic tests, electronic medical records, mobile health apps, and wearable devices and sensors contribute new health-related data. These data require standardization, evaluation for validity and health outcomes, and integration across health information technology systems. Just like genomic tests, digital health tools also come with ethical challenges. One important consequence of the massive amount of information that can be gleaned is the competing need for security and privacy, especially in the contexts of clinical monitoring and public health surveillance. In addition, new digital health technologies powered by machine learning and artificial intelligence can exacerbate existing racial and ethnic disparities because of measurement biases, as well as differential access to the technologies.

[Figure: the ACCE wheel, showing analytic validity, clinical validity, and clinical utility]

For more than a decade, our office sponsored the EGAPP initiative (Evaluation of Genomic Applications in Practice and Prevention), an evidence-based approach to assessing the benefits and harms of new genetic tests in treatment and prevention. EGAPP's methodology was founded on the ACCE model, which we have used to evaluate the analytic validity, clinical validity, clinical utility, and ethical, legal, and social implications of genomic tests for each intended use. The ACCE "wheel" (see figure) consists of a series of 44 questions developed to address the rapid pace of evidence generation in genomic medicine. The ACCE wheel (with some modification of its 44 proposed questions) may serve as an appropriate template for evaluating digital health technologies. The table below provides a first attempt to adapt the list of questions that were devised for genomic tests. Additional work will be needed to refine the evaluation approach to digital health data.
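One way to put the adapted question list to work is to treat it as a structured checklist keyed to the ACCE elements, so that a review of a specific digital health tool can track which questions have been addressed for a given intended use. The sketch below is only an illustration under that assumption; the element groupings and question numbers follow the table below, but the coverage-tracking helper is not part of the ACCE or EGAPP methodology.

```python
# Illustrative only: the adapted ACCE question list represented as a checklist,
# grouped by element (question numbers refer to the table below). The coverage
# helper is an assumption for illustration, not part of ACCE or EGAPP.

ACCE_ELEMENTS = {
    "Disorder/Setting": range(1, 8),     # questions 1-7
    "Analytic Validity": range(8, 18),   # questions 8-17
    "Clinical Validity": range(18, 26),  # questions 18-25
    "Clinical Utility": range(26, 42),   # questions 26-41
    "ELSI": range(42, 45),               # questions 42-44
}

def coverage(answered_questions):
    """Report, per ACCE element, how many of its questions a review has addressed."""
    answered = set(answered_questions)
    return {
        element: f"{len(answered & set(questions))}/{len(questions)}"
        for element, questions in ACCE_ELEMENTS.items()
    }

# Example: a hypothetical review that has so far addressed only analytic validity.
print(coverage([8, 9, 10, 12, 16]))
# {'Disorder/Setting': '0/7', 'Analytic Validity': '5/10', 'Clinical Validity': '0/8',
#  'Clinical Utility': '0/16', 'ELSI': '0/3'}
```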

Understanding the full public health impact and value of digital health will take time and rigorous studies. As digital health technologies become standardized and interoperable, and as evidence of utility develops, health care and public health will become more efficient and effective, ushering in a new era of precision medicine. At the same time, potential risks and harms must be addressed. Until then, widespread adoption of digital health will remain a challenge. Just like genomic tests, there are no shortcuts on the long road to evidence-based digital health.

44 Targeted Questions Aimed at a Comprehensive Review of Digital Health “Tests” (Adapted from the ACCE Framework for the Evaluation of Genomic Tests)
Disorder/Setting
1. What is the specific health measure (e.g., blood pressure) or clinical disorder to be studied?
2. What are the findings defining this health measure/disorder?
3. What is the clinical setting in which the test is to be performed?
4. What test(s) are associated with this health measure/disorder?
5. Are preliminary screening questions employed?
6. Is it a stand-alone test or is it one of a series of tests?
7. If it is part of a series of screening tests, are all tests performed in all instances (parallel) or are only some tests performed on the basis of other results (series)?

Analytic Validity
8. Is the test qualitative or quantitative?
9. (Sensitivity) How often is the test positive when the target analyte or health-related measure is present (or present in amounts above a pre-determined threshold)?
10. (Specificity) How often is the test negative when the target analyte or health-related measure is not present (or present in amounts below a pre-determined threshold)?
11. Is an internal QC program defined and externally monitored?
12. Have repeated measurements been made?
13. What is the precision within and between individuals running the test?
14. If appropriate, how is confirmatory testing performed to resolve false positive results in a timely manner?
15. What range of personal attributes has been tested?
16. How often does the test fail to give a usable result?
17. How similar are results obtained with multiple devices using the same, or different, technology?

Clinical Validity
18. (Sensitivity) How often is the test positive when the health measure/disorder is present?
19. (Specificity) How often is the test negative when the health measure/disorder is not present?
20. Are there methods to resolve false positive results in a timely manner?
21. (Prevalence) What is the prevalence of the health measure/disorder in this setting?
22. Has the test been adequately validated on all populations to which it may be offered?
23. What are the positive and negative predictive values? (A worked sketch follows this table.)
24. What are the relationships between phenotypes and health states measured by the digital test?
25. What are the genetic, environmental, or other modifiers?

Clinical Utility
26. (Intervention) What is the natural history of the health measure/disorder?
27. (Intervention) What is the impact of a positive (or negative) test on patient care and health promotion?
28. (Intervention) If applicable, are diagnostic tests available?
29. (Intervention) Is there an effective remedy, acceptable action, or other measurable benefit?
30. (Intervention) Is there general access to that remedy or action?
31. Is the test being offered to a socially vulnerable population?
32. (Quality Assurance) What quality assurance measures are in place?
33. (Pilot Trials) What are the results of pilot trials?
34. (Health Risks) What health risks can be identified for follow-up testing and/or intervention?
35. (Economic) What are the financial costs associated with the digital health test?
36. (Economic) What are the economic benefits associated with actions resulting from the digital health test?
37. (Facilities) What facilities/personnel are available or easily put in place?
38. (Education) What educational materials have been developed and validated, and which of these are available?
39. Are there informed consent requirements?
40. (Monitoring) What methods exist for long-term monitoring?
41. What guidelines have been developed for evaluating program performance?

Ethical, Legal, and Social Implications (ELSI)
42. (Impediments) What is known about stigmatization, discrimination, privacy/confidentiality, and personal/family social issues?
43. Are there legal issues regarding consent, ownership of data and/or samples, patents, licensing, proprietary testing, obligation to disclose, or reporting requirements?
44. (Safeguards) What security safeguards have been described, and are these safeguards in place and effective?
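Questions 21-23 are a reminder that the practical meaning of a positive or negative result depends on the setting in which the test is used. The short sketch below uses hypothetical sensitivity, specificity, and prevalence values (not taken from any real device or study) to show how the positive and negative predictive values of the same digital health "test" shift with the prevalence of the condition.

```python
# Minimal sketch (hypothetical numbers) for questions 21-23: positive and negative
# predictive values of a digital health "test" as a function of prevalence, for a
# fixed sensitivity and specificity.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) from sensitivity, specificity, and prevalence (Bayes' rule)."""
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

# The same assumed device (sensitivity 0.90, specificity 0.95) looks very different
# in a low-prevalence screening setting than in a high-prevalence clinical setting.
for prevalence in (0.01, 0.10, 0.40):
    ppv, npv = predictive_values(0.90, 0.95, prevalence)
    print(f"prevalence = {prevalence:.2f}  PPV = {ppv:.2f}  NPV = {npv:.2f}")
```

At 1% prevalence the positive predictive value in this example is only about 0.15 even though sensitivity and specificity are high, which is one reason the intended setting must be specified before clinical utility can be judged.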

 
