Racist Robots: Racial Bias in Healthcare AI and Patient Distrust

Written By Meena Seshadri

Healthcare artificial intelligence describes the application of machine learning algorithms to medical diagnosis, treatment, and decision-making in the healthcare system. The use of artificial intelligence in healthcare continues to become more prevalent, with some predicting that AI will be able to take over most healthcare professionals’ tasks. Though some argue that the automation of healthcare tasks can improve our healthcare system, many fear that implementing medical artificial intelligence will harm marginalized groups. As healthcare artificial intelligence becomes more sophisticated, its systemic bias against marginalized races persists. In its current state, the implementation of artificial intelligence in healthcare is dangerous due to the inherent algorithmic bias against groups of color, and such inequity in the quality of treatment reinforces a cycle of mistrust between patients of color and the medical system.

Artificial intelligence systems utilize data representative of cognitive functions to execute tasks reminiscent of, or in some cases more advanced than, what humans can do (Abràmoff et al., 2023). Artificial intelligence and machine learning systems have been applied in healthcare settings to assist medical professionals in diagnosing, managing, and treating conditions (Davenport & Kalakota, 2019). By utilizing medical datasets as a reference, healthcare artificial intelligence systems can serve patients, particularly historically underserved populations, more accurately and efficiently. In addition to increasing the medical system’s productivity, healthcare artificial intelligence has the potential to decrease racial bias in medical diagnosis and treatment.

Racial bias is any implicit or explicit prejudice based on an individual’s race. In healthcare, racial bias can affect treatment access, treatment quality, and patient experience (Dehon et al., 2017). Systemic racism in medical care has been present since the inception of the modern healthcare system, as seen in the segregated healthcare options available and accessible to white versus non-white patients and the stark difference in quality between those systems (Yearby et al., 2022). Such inequality of care persists in the modern medical field, as patients of color continue to face disproportionate struggles with healthcare coverage, financing, and quality, as well as barriers to proper and accurate treatment and care (Yearby et al., 2022). Past negative experiences with the healthcare system, whether individual or communal, make many patients of color hesitant to rely on healthcare providers and can lead to an overall distrust of the medical system as a whole.

Previously, racial bias in the medical field arose primarily from interactions between healthcare professionals and patients. However, with the rise of healthcare artificial intelligence, racial bias in medicine has transcended person-to-person exchanges and become embedded in algorithmic medical systems. Healthcare artificial intelligence systems often utilize datasets that reflect past medical inequities, meaning current algorithms are modeled on data that encodes the poor experiences of patients of color (Levi & Gorenstein, 2023). Biased datasets that do not adequately represent populations of color lead to patients of color being underdiagnosed, misdiagnosed, and improperly treated. The integration of racially biased artificial intelligence into the healthcare system adds to the existing bias in the medical field by creating a new avenue for racist patient diagnosis and care beyond that present in physician-patient interactions.

Racial bias in medically applied machine learning algorithms is particularly dangerous, as any partiality can result in adverse medical outcomes for individual patients and, eventually, large populations. Additionally, the artificial intelligence implemented in healthcare settings is often under-regulated and largely unmonitored, meaning the discrimination caused by such biased systems can persist for a considerable time before it is recognized and addressed (Grant, 2022). Thus, it is essential to take pre-emptive approaches to discussing and targeting these concerns. Racial bias in healthcare artificial intelligence has already caused measurable damage to the health of communities of color. One study of a widely used risk-prediction algorithm showed that Black patients had to be much sicker than their white counterparts to be assigned the same level of health risk, because the algorithm used healthcare spending as a proxy for health needs, and less money is spent on Black patients with the same level of need (Obermeyer et al., 2019). A similar pattern of misdiagnosis and underdiagnosis has been documented in artificial intelligence algorithms applied to chest radiographs and heart disease prediction (Seyyed-Kalantari et al., 2021; Igoe, 2021). Even without patients’ identifiable racial information, deep learning models can still accurately predict a patient’s self-reported race from medical images alone (Gichoya et al., 2022). Therefore, even when patient race is not disclosed, racial bias can still affect the medical decisions made, meaning the systemic partiality cannot simply be fixed by making the collection of health data race-blind. Discrimination in quality of care can lead to the deterioration of the health of communities of color, which further exacerbates societal racial stratification.
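To make the mechanism behind the Obermeyer et al. (2019) finding concrete, the short simulation below sketches how a risk score built on healthcare cost can disadvantage a group that generates less spending at the same level of illness. The group labels, spending gap, and all numbers are hypothetical choices for illustration only; this is not the study’s actual model or data.

```python
# Illustrative sketch of the proxy-label problem described by Obermeyer et al.
# (2019): if an algorithm scores "risk" by predicted healthcare cost, a group
# that generates less spending at the same illness burden must be sicker to
# receive the same score. All numbers here are hypothetical.
import random

random.seed(0)

def simulate_patient(group):
    """Return (true illness burden, observed cost) for one simulated patient."""
    burden = random.gauss(5.0, 2.0)               # true health need (arbitrary units)
    spending_rate = 1.0 if group == "A" else 0.6  # assumed access gap for group B
    cost = max(0.0, burden * spending_rate + random.gauss(0.0, 0.5))
    return burden, cost

patients = [("A", *simulate_patient("A")) for _ in range(5000)] + \
           [("B", *simulate_patient("B")) for _ in range(5000)]

# "Risk score" = cost (the proxy). Flag the top 25% of scores for extra care.
threshold = sorted(cost for _, _, cost in patients)[int(0.75 * len(patients))]
flagged = [(group, burden) for group, burden, cost in patients if cost >= threshold]

for group in ("A", "B"):
    burdens = [b for g, b in flagged if g == group]
    print(f"group {group}: flagged {len(burdens) / (len(patients) / 2):.1%}, "
          f"mean burden of flagged patients = {sum(burdens) / len(burdens):.2f}")
# Group B is flagged far less often, and its flagged patients are much sicker,
# even though both groups were drawn from the same illness-burden distribution.
```

In this toy setting, the lower-spending group is flagged for extra care less often, and its flagged members are markedly sicker than those flagged in the other group, which is the same qualitative pattern the study describes.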

The utilization of healthcare artificial intelligence systems that are not optimized for patients of color can contribute to increased morbidity and mortality. Relying on racist machine learning algorithms can leave patients of color without needed healthcare services, as such systems make it harder for patients of color to obtain insurance coverage and reduce the medical resources invested in their care (Chase, 2020; Volpe et al., 2021). In addition to reducing healthcare accessibility for patients of color, medical artificial intelligence that misdiagnoses and underdiagnoses underrepresented minorities deepens patients' distrust of the healthcare field. This widespread distrust of the healthcare field within communities of color can exacerbate healthcare adversities in those populations, as patients may not seek medical attention from healthcare professionals when needed for fear of being wrongly diagnosed.

While racial bias has been entrenched in healthcare artificial intelligence since its inception, with intentional changes such technology can become impartial and even mitigate existing structural racism in the medical field. Modeling artificial intelligence systems on already biased datasets aggravates existing partiality in the medical field, leading to more negative patient experiences and outcomes. However, this does not mean that artificial intelligence in healthcare is beyond saving. If a machine learning system is developed using racially balanced data and its methods remain honest and transparent, it can offer fair patient diagnosis and treatment that is not influenced by patient-physician interactions and physician bias (Thomasian et al., 2021). Additionally, more rigorous screening and regulation of healthcare artificial intelligence must occur before the systems’ implementation, so that any potential bias is caught and addressed before an algorithm is rolled out to medical systems; a simple version of such a screen is sketched below. With proper development and policy surrounding healthcare artificial intelligence, algorithms can indeed provide unbiased diagnosis and treatment, regardless of patient race. If algorithms are developed and implemented that provide proper diagnoses and treatment without ignoring or overlooking patients of color, more trust may be built between such communities and the healthcare field. With that skepticism lessened, patients of color may feel more comfortable trusting the healthcare system with their medical concerns, as their symptoms and experiences will no longer be overlooked by biased physicians or skewed healthcare artificial intelligence.
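As one concrete picture of what such pre-deployment screening could look like, the sketch below compares underdiagnosis (false-negative) rates across patient groups on a held-out validation set and withholds rollout when the gap exceeds a chosen tolerance. The field names, group labels, and the five-percentage-point tolerance are assumptions made for illustration, not a prescribed standard.

```python
# Minimal sketch of a pre-deployment fairness screen: compare underdiagnosis
# (false-negative) rates across groups on held-out validation data and hold
# the rollout if the gap is too large. Field names and tolerance are assumed.
from collections import defaultdict

def false_negative_rates(records):
    """records: dicts with 'group', 'has_condition', and 'flagged' keys."""
    missed = defaultdict(int)  # truly ill patients the model failed to flag
    ill = defaultdict(int)     # truly ill patients, counted per group
    for r in records:
        if r["has_condition"]:
            ill[r["group"]] += 1
            if not r["flagged"]:
                missed[r["group"]] += 1
    return {g: missed[g] / ill[g] for g in ill}

def passes_screen(records, tolerance=0.05):
    """Return (ok_to_deploy, per-group false-negative rates)."""
    rates = false_negative_rates(records)
    gap = max(rates.values()) - min(rates.values())
    return gap <= tolerance, rates

# Toy validation sample: the model misses 10% of ill patients in group A
# but 30% in group B.
validation = (
    [{"group": "A", "has_condition": True, "flagged": True}] * 90
    + [{"group": "A", "has_condition": True, "flagged": False}] * 10
    + [{"group": "B", "has_condition": True, "flagged": True}] * 70
    + [{"group": "B", "has_condition": True, "flagged": False}] * 30
)

ok, rates = passes_screen(validation)
print(rates)                                  # {'A': 0.1, 'B': 0.3}
print("deploy" if ok else "hold for review")  # hold for review
```

A real screening program would of course examine more than one metric and be paired with the continued monitoring discussed below, but even a check this simple makes group-level underdiagnosis visible before an algorithm reaches patients.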

Current healthcare artificial intelligence cannot be trusted with the health of patients of color. The racial bias present in such systems is quite literally built into the code, meaning the health concerns of patients of color are algorithmically overlooked. The failure to appropriately account for patients of color in healthcare algorithms can damage an individual’s health as well as the well-being and longevity of entire communities of color. To address these concerns, we must radically reconsider how healthcare artificial intelligence systems are developed and regulated. Impartial data should be used to develop such machine learning software, and the systems should be screened for fairness before they are implemented into the healthcare system and for as long as they remain in use. Developed and monitored in this way, such systems have the potential not only to diagnose and treat patients as a physician would, but perhaps even better, since physician bias would no longer affect those medical decisions. While we cannot give up on healthcare artificial intelligence as a whole, we must change how such systems are developed and implemented to protect the health and lives of patients of color. Only after these changes are made can the benefits of such medical algorithms be realized for the entirety of our society.

Abràmoff, M. D., Tarver, M. E., Loyo-Berrios, N., et al. (2023). Considerations for addressing bias in artificial intelligence for health equity. npj Digital Medicine, 6, Article 170. https://doi.org/10.1038/s41746-023-00913-9

Chase, A. C. (2020). Ethics of AI: Perpetuating racial inequalities in healthcare delivery and patient outcomes. Voices in Bioethics, 6. https://doi.org/10.7916/vib.v6i.5890

Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), 94–98. https://doi.org/10.7861/futurehosp.6-2-94

Dehon, E., Weiss, N., Jones, J., Faulconer, W., Hinton, E., & Sterling, S. (2017). A systematic review of the impact of physician implicit racial bias on clinical decision making. Academic Emergency Medicine, 24(8), 895–904. https://doi.org/10.1111/acem.13214

Gichoya, J. W., Banerjee, I., Bhimireddy, A. R., Burns, J. L., Celi, L. A., Chen, L.-C., Correa, R., Dullerud, N., Ghassemi, M., Huang, S.-C., Kuo, P.-C., Lungren, M. P., Palmer, L. J., Price, B. J., Purkayastha, S., Pyrros, A. T., Oakden-Rayner, L., Okechukwu, C., Seyyed-Kalantari, L., … Zhang, H. (2022). AI recognition of patient race in medical imaging: A modelling study. The Lancet Digital Health, 4(6). https://doi.org/10.1016/s2589-7500(22)00063-2

Grant, C. (2023, February 24). Algorithms are making decisions about health care, which may only worsen medical racism. American Civil Liberties Union. https://www.aclu.org/news/privacy-technology/algorithms-in-health-care-may-worsen-medical-racism

Igoe, K. J. (2023, October 3). Algorithmic bias in health care exacerbates social inequities - how to prevent it. Executive and Continuing Professional Education, Harvard T.H. Chan School of Public Health. https://www.hsph.harvard.edu/ecpe/how-to-prevent-algorithmic-bias-in-health-care/

Levi, R., & Gorenstein, D. (2023, June 6). AI in medicine needs to be carefully deployed to counter bias – and not entrench it. NPR. https://www.npr.org/sections/health-shots/2023/06/06/1180314219/artificial-intelligence-racial-bias-health-care

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342

Seyyed-Kalantari, L., Zhang, H., McDermott, M. B. A., Chen, I. Y., & Ghassemi, M. (2021). Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in underserved patient populations. Nature Medicine, 27. https://doi.org/10.1038/s41591-021-01595-0

Thomasian, N. M., Eickhoff, C., & Adashi, E. Y. (2021). Advancing health equity with artificial intelligence. Journal of Public Health Policy, 42(4), 602–611. https://doi.org/10.1057/s41271-021-00319-5

Volpe, V. V., Hoggard, L. S., Willis, H. A., & Tynes, B. M. (2021). Anti-Black structural racism goes online: A conceptual model for racial health disparities research. Ethnicity & Disease, 31(Suppl), 311–318. https://doi.org/10.18865/ed.31.s1.311

Yearby, R., Clark, B., & Figueroa, J. F. (2022). Structural racism in historical and modern US health care policy. Health Affairs, 41(2), 187–194. https://doi.org/10.1377/hlthaff.2021.01466
