This post was originally published November 3, 2011, but is still very much applicable in 2013!
The popular proverbial saying "you cannot have your cake and eat it too" implies that one cannot consume something and preserve it at the same time; in other words, we cannot have it both ways. Well, for once, maybe we can have our cake (our whole genome sequence, or WGS) and eat it too. I believe having our WGS and consuming it in small, bite-sized pieces over a lifetime may be the only way to integrate it into medicine and public health.
Rapid advances in genomic sequencing technologies are making reliable and affordable whole genome sequencing a realistic prospect within the next few years. We all carry about 6 billion base pairs of DNA in each of our cells, with 5-10 million inherited variants that differ among us. This genetic variation, along with environmental influences, provides a blueprint for health throughout the life span and is related to virtually every disease of public health significance. There is definite interest among the public and scientists in the personal utility of this information. In a recent survey by Nature, attitudes towards genome sequencing were explored among a sample dominated by scientists and professionals from medicine and public health. Although only 18.2% of respondents had had their genome sequenced or analyzed, two thirds of those who had not reported they would take the opportunity should it arise. Curiosity was the single factor respondents cited most often.
Can this information be useful today in improving medical care and preventing disease?
The answer is not yet. However, there are now early success stories in using whole genome analysis for identifying causes of mysterious illnesses that have a strong familial or genetic component. In the near future, WGS will improve the diagnosis of a small subset of patients with disorders resulting from disruption of a single gene or chromosomal region. However, WGS is unlikely to provide valuable clinical diagnosis for people with more common diseases, resulting from complex genetic and environmental interactions, because our ability to interpret the combined effects of common genetic variants remains limited.
Scientific policy groups such as the PHG Foundation are currently assessing what to do with such a large amount of information, and how we can separate what is actionable to improve health from what is not (information that could be harmful or simply unknown). For sure, there are still technical challenges to overcome: which laboratory platforms and informatics tools to use, and how to ensure the accuracy and performance of the test and of the laboratories performing and interpreting it. There is also the question of price. While the cost of a WGS has plummeted in the past few years, it is still not widely affordable. Nevertheless, technological concerns and price issues seem likely to be tackled successfully in the next few years.
What remains to be seen, however, is whether and how to use a WGS for improving health and preventing disease for the vast majority of people. In other words, what is the clinical validity and the clinical utility of the WGS? The problem is that WGS is not one test. It is billions of tests that could be used for various purposes throughout life and in different contexts, including prenatal diagnosis and carrier testing for rare genetic diseases. It could also be used in newborn screening as a risk assessment tool for future common diseases. It could be used in diagnosis for people with symptoms suggestive of various diseases, in risk prediction for people with a family history of disease, in modifying drug treatments (pharmacogenomics), and for a variety of other purposes. For most of these applications, we currently have no clue what to do with WGS, and we are unlikely to find out for a very long time.
So can we have our genome and eat it too? In 2010, we suggested that the deployment of genomic technologies could proceed according to a three-tier classification schema based on the methods of evidence-based medicine. Subsequently, Berg, Evans, and I referred to the three tiers as "bins" in the context of WGS and classified them further into subgroups. Tier 1 genes and their variants from WGS include those with sufficient evidence for clinical validity and clinical utility to provide meaningful and actionable information to consumers and providers. Tier 2 genes/variants from WGS are those with established evidence of validity but insufficient evidence of utility to support a recommendation for use. For tier 2 genes/variants, outcomes research is needed in clinical trials and/or comparative effectiveness studies. Public health and clinical practitioners could, however, provide individuals with results for informational purposes with appropriate caveats. Tier 3 genes/variants are those with either sufficient evidence for a lack of utility, presence of clear risk of harm, or insufficient evidence for both validity and utility. Public health and clinical providers could discourage using tier 3 variants in practice until further studies establish validity and utility and adequately address any potential harms. Evidence-based triaging of genomic applications into the three categories, or "bins," would be an iterative process, constantly evolving based on newly acquired evidence.
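To make the triage logic concrete, here is a minimal sketch of the three-tier binning rule as a decision procedure. The evidence flags and their encoding are illustrative assumptions of mine, not the actual EGAPP or Berg/Evans criteria, which weigh far richer evidence than three booleans:

```python
# Hypothetical sketch of the three-tier ("bin") triage described above.
# Evidence is encoded crudely: True = sufficient evidence the claim holds,
# False = sufficient evidence it does not, None = insufficient evidence.
# Real evidence review (e.g., by EGAPP) is much more nuanced than this.

def classify_variant(clinical_validity, clinical_utility, known_harm=False):
    """Assign a gene/variant to an evidence tier (bin 1, 2, or 3)."""
    if known_harm or clinical_utility is False:
        # Clear risk of harm, or sufficient evidence of a lack of utility.
        return 3
    if clinical_validity and clinical_utility:
        # Validity and utility both established: actionable information.
        return 1
    if clinical_validity and clinical_utility is None:
        # Valid but utility unproven: informational use with caveats.
        return 2
    # Insufficient evidence for both validity and utility.
    return 3

# Illustrative cases: BRCA-like variants (validity and utility established)
# land in tier 1; variants with established validity but unproven utility,
# such as many pharmacogenomic markers, land in tier 2.
print(classify_variant(True, True))   # tier 1
print(classify_variant(True, None))   # tier 2
print(classify_variant(None, None))   # tier 3
```

Because the evidence base shifts, the tier assignment of any given variant is a snapshot, which is why the post describes binning as an iterative process: the same function applied to updated evidence flags yields a new bin.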
Clearly, we are still in the early days of WGS applications. The vast majority of genetic variants are currently in tier 3, and therefore we have no business using them in practice until we figure out what they mean for health. A few genetic variants are in tier 1, where we know that the information, if provided to people with appropriate counseling, can benefit their health and that of their relatives. Examples include BRCA genetic variants associated with increased risk of breast and ovarian cancer, and Lynch syndrome mutations associated with increased risk of colorectal cancer. A small but growing number of genetic variants are currently in tier 2, including many of those associated with drug response (pharmacogenomics).
As the landscape changes over the next few years, we need a group to function as an "honest broker" in binning WGS information into small pieces that can appropriately inform patients and consumers, while updating this classification over time as more data accrue. The EGAPP working group, an independent, multidisciplinary panel sponsored by CDC since 2005, has developed and applied methods to evaluate the evidence for validity and utility of genomic applications in practice and disease prevention. They are now beginning to turn their attention to developing an evidence classification for the WGS. Undoubtedly, this is the beginning of a long journey towards a slow but steady deployment of WGS into clinical practice, towards fulfilling the idea of having our genome and eating it too.