AI-Driven Precision Medicine and the Fight Against Scientific Racism

 

Written By Eliana Liporace

Introduction 

In the rapidly advancing landscape of modern healthcare, artificial intelligence (AI) has emerged as a powerful tool in the pursuit of precision medicine, promising to revolutionize how we approach individualized patient care. Yet, beneath this promise is a sobering reality that demands our unwavering attention. As AI systems become indispensable tools in medical decision-making, their heavy reliance on historical medical data has the potential to perpetuate and amplify biases deeply entrenched in our scientific and medical narratives. The specter of scientific racism threatens to cast a long shadow over the future of healthcare equity. The perilous paradox at hand is evident: as AI-driven precision medicine races forward, the unintended consequences of racial bias and misinformation within these systems pose a formidable threat to the health and well-being of our diverse patient population. As we stand on the precipice of the digital medicine era, ignoring this issue would be nothing short of complicity in perpetuating the injustices of the past within the ever-evolving realm of AI-driven healthcare.

Figure 1. Seven opportunities for precision medicine by 2030.

Precision medicine will collectively shape the future of healthcare, promising a more personalized and effective medical touch if properly monitored (Denny & Collins, 2021).

Historical Context of Scientific and Medical Injustice

The history of medical research in the United States is a complex tapestry of achievements and injustices. While we celebrate groundbreaking discoveries and medical progress, an undeniable dark undercurrent flows through the annals of medical research. Henrietta Lacks, a name etched in the history of medical research, exemplifies these deep-seated ethical issues. In 1951, she visited The Johns Hopkins Hospital complaining of vaginal bleeding. Upon examination, gynecologist Dr. Howard Jones discovered a large, malignant tumor on her cervix and took two samples from it without her knowledge or consent. At the time, The Johns Hopkins Hospital was one of only a few hospitals that treated poor African Americans. HeLa cells, named for “Henrietta Lacks,” would soon be recognized as the first immortal human cell line ever discovered (Khan, 2011). Like many other cancer cells, immortal cells keep an active version of telomerase during cell division, which repeatedly rebuilds telomeres. This prevents the incremental shortening of telomeres that occurs with aging and eventually leads to cell death. Unlike regular human cells, which undergo senescence (cellular aging) after a limited number of divisions, HeLa cells can continue to grow and divide indefinitely, making them an invaluable resource for scientific research. The Lacks family only became aware of HeLa cells’ use in 1975, years after Lacks’s death, and spent the subsequent decades grappling with issues of consent and privacy. Though Dr. George Otto Gey propagated and freely donated these cells for the “benefit of science,” neither Lacks nor her family ever permitted their harvest or use. Henrietta Lacks’s story highlights the ongoing debate about balancing scientific progress with the ethical treatment of individuals and their families in the pursuit of medical knowledge (Brogan, 2021). 
The legal battle and settlement in recent years illustrate the persistent ethical dilemmas in healthcare research and the commercialization of biological materials. 

Another example of the chilling structural racism reinforced in the name of scientific research is the infamous Tuskegee Experiment. In 1932, the study began with the aim of documenting the natural history of syphilis, enlisting 600 Black men to participate, 399 of whom had syphilis and 201 who did not (Alsan & Wanamaker, 2018). Shockingly, informed consent was not obtained from the participants. Instead, researchers informed the men that they were being treated for “bad blood,” a vague local term that covered various ailments, including syphilis. The men received free medical exams, meals, and burial insurance in exchange for their participation. This lack of informed consent and the deceptive manner in which the study was presented were blatant ethical violations. By the mid-1940s, penicillin had become the standard treatment for syphilis, and its effectiveness was well-established. However, the study participants were intentionally denied treatment, even when a cure was readily available. This deliberate withholding of treatment was a severe breach of medical ethics and a violation of the principle of “do no harm.” It wasn’t until 1972, when a news article exposed the study to the public, that the Ad Hoc Advisory Panel convened to review the research, eventually announcing the study’s termination a month later (Brogan, 2021).

Trust is a fundamental concept that revolves around the belief in an entity’s ability to assist individuals in achieving their desired outcomes, especially in challenging or uncertain situations. Studies show that African Americans, Latin Americans, and Asian Americans have reported higher perceived discrimination by healthcare providers and poorer health outcomes compared to white Americans (Lee & Rich, 2021). This perception of discrimination has deterred racial minorities from seeking healthcare. African American and Black communities have been extensively researched in terms of medical mistrust, owing to the historical and contemporary issues of medical racism toward these communities, as previously discussed. Factors contributing to medical distrust among Black patients include concerns about the quality of care, a shortage of minority healthcare providers, and a lack of cultural competence among healthcare professionals. This deep-seated mistrust has tangible effects on healthcare decisions, such as leading Black Americans to rely more on emergency care than primary care compared to white Americans. This legacy of mistrust and the negative medical experiences within Black communities pose significant challenges for healthcare AI. People’s perceptions of innovation in medical technologies, including AI, can differ significantly by race. Prior research has indicated that Black Americans were more likely than white Americans to exhibit hesitation toward medical innovation, particularly concerning the introduction of new prescription drugs and medical implants (Brogan, 2021). This hesitancy is multifaceted, often rooted in historical mistrust and experiences of medical exploitation among Black communities, necessitating a nuanced approach to bridge these gaps in healthcare acceptance. Understanding these dynamics becomes pivotal in the context of the burgeoning digital health landscape. 
As technological advancements rapidly reshape healthcare, equitable access and acceptance of these innovations become paramount. The need to address historical hesitancy and cultivate trust within marginalized communities remains a critical focal point in ensuring the inclusivity and effectiveness of digital health initiatives.

The Rise of Digital Health and Equitable Care

Historically, medicine relied on generalized treatments aimed at addressing the needs of the majority. It was a process rooted in empirical experience, where treatments and interventions were passed down through generations based on anecdotal evidence. The transition from this experience-driven, somewhat ‘trial-and-error’ model to evidence-based medicine marked a crucial turning point in the evolution and advancement of healthcare. 

Physicians began substantiating their decisions with rigorous scientific research, documenting the efficacy of treatments, and exploring potential side effects. The dawn of the twenty-first century has brought about disruptive science: science that breaks with the status quo and paves new directions for scientific inquiry, including affordable genome sequencing, sophisticated biotechnology, and in-home health sensors. The result is an unprecedented influx of patient data. 

However, the proliferation of digital health tools, notably smartphones and health trackers, has made it overwhelming for physicians to handle data analysis and stay abreast of the rapidly evolving medical landscape (Mesko, 2017). To this end, supercomputers have become essential tools in advancing precision medicine. They enable the processing of vast datasets, complex simulations, and genetic analyses at unprecedented scales. Deep learning algorithms, a subset of AI, have demonstrated remarkable capabilities in making accurate diagnoses, sometimes rivaling or surpassing human physicians. These algorithms have excelled in cardiology, dermatology, and oncology, offering potential breakthroughs in disease detection and treatment. 

It is essential, however, to underscore the synergy between AI and human expertise. The successful application of deep learning algorithms should complement, not replace, the knowledge and judgment of healthcare professionals. An example of this symbiotic relationship between AI and physicians comes from the International Symposium on Biomedical Imaging, where computational systems were designed to detect metastatic breast cancer in whole-slide images of sentinel lymph node biopsies. The deep learning algorithm achieved an identification success rate of 92.5%, while a human pathologist independently reviewing the same images achieved 96.6%. When the predictions of the deep learning system were combined with the pathologist’s diagnoses, however, the success rate increased to an impressive 99.5%. This collaboration resulted in an approximate 85% reduction in the human error rate, underlining the potential for AI to enhance the accuracy and reliability of medical diagnoses (Mesko, 2017). 
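The roughly 85% figure follows directly from the reported accuracies. A minimal arithmetic sketch (the accuracy numbers are those reported in Mesko, 2017; everything else is just error-rate algebra):

```python
# Accuracies reported for the ISBI breast-cancer detection task.
ai_acc = 0.925           # deep learning algorithm alone
pathologist_acc = 0.966  # human pathologist alone
combined_acc = 0.995     # AI predictions combined with the pathologist

human_err = 1 - pathologist_acc    # ~3.4% error
combined_err = 1 - combined_acc    # ~0.5% error

# Reduction in the *human* error rate achieved by the combination.
reduction = (human_err - combined_err) / human_err
print(f"Human error falls from {human_err:.1%} to {combined_err:.1%}, "
      f"a {reduction:.0%} reduction.")
```

In other words, the combined system is not just slightly better than the pathologist alone; it eliminates roughly six of every seven errors the pathologist would otherwise make.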

In contrast to this potential, in terms of equitable care, AI lacks knowledge of biased research and statistics. The historical legacy of racial bias and mistreatment in medical research can persist stealthily, concealed beneath layers of intricate computer code, understood by only a select few. The danger arises when algorithms influence healthcare professionals, potentially leading them to unconsciously propagate bias while making critical treatment decisions (Fountain, 2022). Notably, the absence of diversity in research participant populations presents a pressing challenge in designing representative algorithms. A striking fact underscores this issue: less than 3% of individuals included in published genome-wide association studies have African, Hispanic, or Latin American ancestries, while a staggering 86% of clinical trial participants are white. Such underrepresentation poses serious risks, exacerbating health disparities and limiting the richness of biological discoveries applicable to all populations (Denny & Collins, 2021). This inherent risk makes it imperative that healthcare leaders remain vigilant and actively guide the development of algorithms and databases to minimize systemic racism. 

AI and Health Example: Big Data Analytics (BDA)

The emergence of data-intensive biomedical methodologies and predictive analytics has been another pivotal consequence of the “digital era.” Processes such as Big Data Analytics (BDA) hold tremendous potential for transforming the healthcare industry (Galetsi et al., 2019). With its ability to analyze vast volumes of data, BDA enables healthcare organizations to gain deeper insights into their operations, patient care, and market trends (Johnson et al., 2020). This technology is data-intensive and requires dynamic big data platforms with innovative tools, making it particularly well-suited for healthcare’s data-rich environment. BDA techniques, such as optimization, forecasting, simulation, machine learning, and data visualization, are pivotal in providing meaningful recommendations and insights to healthcare managers and policy-makers. Apache Hadoop, a distributed data processing framework, is widely used in healthcare to store, refine, and analyze large datasets, enhancing decision-making capabilities. BDA’s impact can be seen in various aspects, including the development of more effective drugs and medical devices, fraud detection in billing, improving the speed of healthcare services, disease prevention, public health surveillance, and the timely provision of medical services during emergencies (Erikainen & Chan, 2019). Though the sociological perspective acknowledges the value of data collected from various sources in unveiling insights that might remain hidden through traditional means, it is essential to address concerns related to data ownership and privacy protection in health data analytics. 
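To make the Hadoop reference concrete, the core idea behind frameworks like Apache Hadoop is the MapReduce pattern: map each record to key-value pairs, group by key, then reduce each group to an aggregate. The following is a minimal single-machine sketch of that pattern; the patient records and the per-diagnosis cost totals are invented illustrations, not a real dataset or the Hadoop API itself:

```python
from collections import defaultdict

# Hypothetical patient billing records (invented for illustration).
records = [
    {"patient_id": 1, "diagnosis": "diabetes", "cost": 1200.0},
    {"patient_id": 2, "diagnosis": "asthma",   "cost": 300.0},
    {"patient_id": 3, "diagnosis": "diabetes", "cost": 900.0},
]

# Map: emit a (key, value) pair from each record.
mapped = [(r["diagnosis"], r["cost"]) for r in records]

# Shuffle: group values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: aggregate each group (here, total cost per diagnosis).
totals = {key: sum(values) for key, values in groups.items()}
print(totals)  # {'diabetes': 2100.0, 'asthma': 300.0}
```

At healthcare scale, the same map, shuffle, and reduce steps run in parallel across a cluster, which is what lets hospitals analyze datasets far too large for a single machine.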

Figure 2. AI Revolutionizing Biomarker Discovery and Drug Design - A Multifaceted Approach.

This figure illustrates how AI is reshaping biomarker discovery and drug design. It showcases the diverse applications of AI, from data-driven disease subtyping and functional genomics to AI-guided drug combinations and accelerated vaccine design. Deep generative models are used to explore the design space of proteins and small molecules, leading to the creation of novel drugs that were previously challenging to attain through traditional methods. This figure highlights the transformative impact of AI in advancing healthcare research and innovation (Boniolo et al., 2021).

Despite the growing body of literature on BDA in healthcare, a comprehensive understanding of its organizational and social impact is still lacking (Denny & Collins, 2021). The assumption that AI-based decision-making is inherently neutral and capable of eliminating human biases is common, but AI hinges crucially on the quality of the underlying data. In many cases, data reflect deeply entrenched societal-level biases. Sampling biases and errors, as well as missing key variables, can lead to flawed datasets, while over-representation of certain demographic groups may result in biased inferences. Predictive models present one area of concern: they are more commonly built for diseases that affect the majority of the population, leaving conditions that primarily affect minority groups underrepresented. Additionally, some diseases may not have well-defined or easily scalable interventions, and as a result, they are less likely to be targeted (Gervasi et al., 2022). Algorithms can perpetuate bias without the developers’ intention, and proving disparate impact can be arduous. This challenge is compounded by the difficulty of modifying historical data to counteract biases. The use of these models therefore requires a comprehensive reexamination of the meanings of “discrimination” and “fairness” (Fountain, 2022). It is crucial to recognize that the benefits of AI’s efficiency must be balanced with the critical need to ensure that models are developed and used with fairness, equity, and accuracy. 


Bias in AI and Algorithms

As we transition into the digital age, we’re confronted with the sobering reality that racial bias has already seeped into commercial healthcare software through predictive algorithms. These algorithms assign risk scores to patients based on their health needs and ultimately inform medical decisions made by physicians. A 2019 study published in Science focused on the critical issue of health disparities in algorithmic risk scores used to determine patient eligibility for care coordination programs. In calculating comorbidity scores, or indicators of a patient’s overall health status, the researchers found that even at the same level of algorithm-predicted risk, Black patients exhibited 26.3% more chronic illnesses than their white counterparts (Obermeyer et al., 2019). These findings are detrimental for Black patients being considered for care coordination programs, which rely heavily on algorithmic risk scores. The study simulated a hypothetical scenario in which there was no gap in health outcomes between Black and white patients at a given risk threshold. It found that for all patients with a higher risk of certain health conditions, eliminating the difference in risk assessment would lead to a significant increase in the proportion of Black patients identified and considered for the program. For example, at the 97th percentile risk score, where patients are at a very high risk, the fraction of Black patients chosen would rise from 17.7% to 46.5%. These disparities stem from a historical underinvestment in Black patients, leading to a skewed allocation of resources. Rectifying this bias could mean that three times as many Black patients receive the additional resources they need. Bias in AI and algorithms hinders access to equitable healthcare, resulting in racial and ethnic minorities receiving subpar healthcare even when controlling for access-to-care barriers (FitzGerald & Hurst, 2017). 
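The mechanism behind this disparity was a proxy-label problem: the algorithm predicted future healthcare *cost* as a stand-in for health *need*, and because structural barriers mean less money has historically been spent on equally sick Black patients, their predicted risk came out lower. The toy simulation below illustrates the mechanism only; the per-condition cost and the spending ratio are invented numbers, not figures from Obermeyer et al. (2019):

```python
def cost_based_risk(chronic_conditions, spending_ratio):
    """Risk score proportional to expected annual cost: number of
    chronic conditions times a per-condition cost, scaled by a
    group-level spending ratio (all values hypothetical)."""
    per_condition_cost = 1000.0
    return chronic_conditions * per_condition_cost * spending_ratio / 100.0

conditions = 5  # two equally sick patients, same comorbidity burden

# Hypothetical spending ratios: the second group generates ~30% less
# recorded spending at the same level of illness due to access barriers.
score_a = cost_based_risk(conditions, spending_ratio=1.0)
score_b = cost_based_risk(conditions, spending_ratio=0.7)

print(score_a, score_b)  # 50.0 35.0 -- the equally sick patient scores lower
```

Because program eligibility is set by a score threshold, the patient with identical health needs but lower recorded spending falls below the cutoff, which is exactly how a cost proxy converts historical underinvestment into present-day under-referral.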
To ensure that racially biased examples such as these do not continue, the collaboration of clinicians, data scientists, ethicists, and epidemiologists from diverse backgrounds is essential. More diversity in data and human collaboration is not just a wish but a necessity in the medical world to ensure that every patient receives the care, respect, and resources they deserve.

Conclusion: The Digital Age and Biased Predictive Algorithms

It is evident that AI has its limitations in healthcare, especially when dealing with novel cases that lack historical data for learning. It cannot replace the tacit knowledge and expertise of healthcare professionals that cannot be easily codified (Mesko, 2017). Several key preparations are necessary to preserve the human touch in medicine while leveraging the benefits of precision medicine. First, creating ethical standards for AI use that prioritize human safety while acknowledging the inherent complexities of ethical decisions in healthcare is imperative. The gradual development and precise monitoring of AI can help mitigate potential downsides and ensure the implementation of fail-safe systems to prevent AI catastrophes. Independent bioethical research groups and institutions can play a role in overseeing this process. Medical professionals should acquire a basic understanding of how AI works in a medical setting to integrate AI into their practice effectively.

On the other hand, patients need to familiarize themselves with AI and recognize its benefits in healthcare. By understanding the intricacies of AI, patients can become aware of the potential biases and challenges associated with its implementation, especially in precision medicine, where personalized treatments are determined based on individual characteristics, including race. When patients are informed about AI, they are better positioned to critically assess the ethical implications of its use in healthcare, including issues related to racial bias. Awareness of the benefits and pitfalls of AI enables patients to advocate for fair and unbiased practices, demanding transparency and accountability in the development and deployment of AI algorithms. Moreover, an informed patient population can actively participate in discussions to mitigate racial discrimination in AI. By recognizing the benefits of AI in precision medicine, such as improved diagnostics and tailored treatment plans, patients can advocate for the responsible integration of these technologies while pushing for the elimination of biases that might disproportionately affect certain racial or ethnic groups.

Additionally, healthcare institutions should take steps to measure the effectiveness of AI systems while pushing for the availability of affordable AI solutions. Regulatory bodies, like the Food and Drug Administration (FDA), can play a role in approving AI solutions for medical purposes. Government-based safeguards may also be essential in reducing systemic racism in computational biomedical research. Initiatives like the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity and the Bridge to Artificial Intelligence program aim to increase diversity in researchers, data, and the content of AI and machine learning algorithms, thereby reducing health disparities (Brogan, 2021). Government oversight is equally essential for privately funded algorithms used in patient care. The FDA has a crucial role in regulating AI and machine learning algorithms classified as Software as a Medical Device (SaMD). All SaMD approved by the FDA are registered by private companies, and these algorithms must be held to the same ethical standards as publicly funded research (Brogan, 2021).

To address these systemic issues, coordinated solutions are imperative. Healthcare workers, students, policy experts, and advocates must actively combat racism in healthcare by acknowledging and calling out racism where it occurs. It is essential to focus on the impact of biases rather than intentions, as implicit bias operates subconsciously. Standardizing care procedures can help ensure equitable healthcare for all patients, regardless of race or ethnicity. Addressing racism in healthcare is an urgent moral imperative that transcends the boundaries of medicine and extends into social justice and equity. It is a call to action that requires a united front against systemic racism to create a healthcare system that truly prioritizes the well-being of all its patients, regardless of their racial or ethnic backgrounds.

Alsan, M., & Wanamaker, M. (2018). Tuskegee and the health of Black men. The Quarterly Journal of Economics, 133(1), 407–455. https://doi.org/10.1093/qje/qjx029

Blasiak, A., Khong, J., & Kee, T. (2020). CURATE.AI: Optimizing Personalized Medicine with Artificial Intelligence. SLAS TECHNOLOGY: Translating Life Sciences Innovation, 25(2), 95-105. doi:10.1177/2472630319890316

Brogan, J. (2021). The Next Era of Biomedical Research: Prioritizing Health Equity in The Age of Digital Medicine. Voices in Bioethics, 7. https://doi.org/10.52214/vib.v7i.8854

DeAngelis, T. (2019). How does implicit bias by physicians affect patients' health care? Monitor on Psychology, 50(3). https://www.apa.org/monitor/2019/03/ce-corner

Denny, J. C., & Collins, F. S. (2021). Precision medicine in 2030—seven ways to transform healthcare. Cell, 184(6), 1415-1419. https://www.cell.com/cell/fulltext/S0092-8674(21)00058-1?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0092867421000581%3Fshowall%3Dtrue

Erikainen, S., & Chan, S. (2019). Contested futures: Envisioning “Personalized,” “Stratified,” and “Precision” medicine. New Genetics and Society, 38(3), 308-330. DOI: 10.1080/14636778.2019.1637720

FitzGerald, C., & Hurst, S. (2017). Implicit bias in healthcare professionals: A systematic review. BMC Medical Ethics, 18(1), 19. https://doi.org/10.1186/s12910-017-0179-8

Fountain, J. E. (2022). The moon, the ghetto and artificial intelligence: Reducing systemic racism in computational algorithms. Government Information Quarterly, 39(2), 101645. https://doi.org/10.1016/j.giq.2021.101645

Galetsi, P., Katsaliaki, K., & Kumar, S. (2019). Values, challenges and future directions of big data analytics in healthcare: A systematic review. Social Science & Medicine, 241, 112533. https://doi.org/10.1016/j.socscimed.2019.112533

Gervasi, S. S., Chen, I. Y., Smith-McLallen, A., Sontag, D., Obermeyer, Z., Vennera, M., & Chawla, R. (2022). Analysis: The Potential For Bias In Machine Learning And Opportunities For Health Insurers To Address It. Health Affairs, 41(2), Racism & Health. https://doi.org/10.1377/hlthaff.2021.01287

Johnson, K. B., Wei, W.-Q., Weeraratne, D., Frisse, M. E., Misulis, K., Rhee, K., Zhao, J., & Snowdon, J. L. (2020). Precision Medicine, AI, and the Future of Personalized Health Care. Clinical and Translational Science. https://doi.org/10.1111/cts.12884

Khan, F. A. (2011). The Immortal Life of Henrietta Lacks. The Journal of IMA, 43(2), 93–94. https://doi.org/10.5915/43-2-8609

Lee, M. K., & Rich, K. (2021). Who Is Included in Human Perceptions of AI?: Trust and Perceived Fairness around Healthcare AI and Cultural Mistrust. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21) (pp. 1-14). Association for Computing Machinery. https://dl.acm.org/doi/abs/10.1145/3411764.3445570

Mesko, B. (2017). The role of artificial intelligence in precision medicine. Expert Review of Precision Medicine and Drug Development, 2(5), 239-241. DOI: 10.1080/23808993.2017.1380516

Murphy, K., Di Ruggiero, E., Upshur, R. et al. (2021). Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics, 22, 14. https://doi.org/10.1186/s12910-021-00577-8

Xu, L., Sanders, L., Li, K., & Chow, J. (2021). Chatbot for Health Care and Oncology Applications Using Artificial Intelligence and Machine Learning: Systematic Review. JMIR Cancer, 7(4), e27850. https://doi.org/10.2196/27850

Zhai, K., Yousef, M. S., Mohammed, S., Al-Dewik, N. I., & Qoronfleh, M. W. (2023). Optimizing Clinical Workflow Using Precision Medicine and Advanced Data Analytics. Processes, 11(3), 939. MDPI AG. Retrieved from http://dx.doi.org/10.3390/pr11030939

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
