How AI could widen health disparities without stronger policies


In the realm of health care, artificial intelligence (AI) stands as a beacon of transformative potential. With its ability to process vast amounts of data, AI promises to streamline diagnosis, enhance treatment precision, and revolutionize patient care. Yet that promise is shadowed by ethical concerns that AI will perpetuate existing biases and systemic discrimination.

AI development is ultimately shaped by the datasets a model is trained on, making it vulnerable to whatever biases those data contain. Take, for example, race-based estimates of glomerular filtration rate (GFR), a measure of kidney function: equations incorporating a race coefficient inaccurately suggested higher kidney function in Black Americans. This adjustment, based on unproven assumptions about differences in muscle mass, overlooked social factors and other comorbidities and ignored the diversity within and across racial groups. The result was underdiagnosis and delayed treatment of kidney disease in Black patients, and even delays in Black patients receiving the kidney transplants they desperately needed. Misuse of data in cases such as this produces poorer outcomes in the very populations these innovations were ostensibly designed to serve, further deepening mistrust of the institution of medicine among people of color.
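To see how a single coefficient can tip a diagnosis, consider a minimal sketch of the published 2009 CKD-EPI creatinine equation, which multiplied the estimated GFR by 1.159 when a patient was recorded as Black. The patient values below are hypothetical, chosen only to illustrate how the coefficient can push an estimate across the commonly used threshold of 60 mL/min/1.73 m².

```python
def ckd_epi_2009_egfr(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) from the 2009 CKD-EPI creatinine equation,
    which applied a 1.159 multiplier when a patient was identified as Black."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    return (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age
            * (1.018 if female else 1.0)
            * (1.159 if black else 1.0))

# Hypothetical patient: serum creatinine 1.4 mg/dL, 55-year-old man.
print(round(ckd_epi_2009_egfr(1.4, 55, female=False, black=False)))  # ~56: below the 60 threshold for stage 3 CKD
print(round(ckd_epi_2009_egfr(1.4, 55, female=False, black=True)))   # ~65: the coefficient lifts the estimate above 60
```

The 2021 revision of the equation removed the race coefficient precisely because of harms like these.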

We are already seeing this with AI used in skin cancer detection. A study conducted at the University of Oxford found that, in a repository of 2,436 images of patient skin used to develop such algorithms, only 0.4 percent showed brown skin and only 0.04 percent showed dark brown or black skin. Even without artificial intelligence, people of color are at higher risk of having skin cancer underdiagnosed because of the misconception that their skin color makes them less susceptible. AI developed on such skewed datasets puts patients with darker skin at even greater risk of a missed skin cancer diagnosis.
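This kind of imbalance is straightforward to check for. The sketch below is a minimal, hypothetical audit, not the methodology of the Oxford review: it counts skin-tone labels in a training set and flags any group whose share falls below a chosen floor, using counts that mirror the proportions reported above.

```python
from collections import Counter

def representation_report(labels: list[str], minimum_share: float = 0.05) -> dict[str, float]:
    """Report each group's share of the dataset and warn when a group falls
    below the minimum share. The label names and 5% floor are illustrative."""
    counts = Counter(labels)
    total = len(labels)
    shares = {group: count / total for group, count in counts.items()}
    for group, share in shares.items():
        if share < minimum_share:
            print(f"WARNING: '{group}' makes up only {share:.2%} of the dataset")
    return shares

# Counts mirroring the repository described above: 2,436 images,
# roughly 10 of brown skin (0.4%) and 1 of dark brown or black skin (0.04%).
labels = ["light"] * 2425 + ["brown"] * 10 + ["dark"] * 1
representation_report(labels)
```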

Failure to comprehensively address these biases in our data may hinder progress toward a more just and equitable health care system. AI trained on such data could further institutionalize these prejudices under the guise of objective, evidence-based decision-making. This raises the question: what policies or guidelines are in place to prevent it?

At the federal level, recent efforts by the Biden administration include an executive order “to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence” and a “Blueprint for an AI Bill of Rights.” Both documents acknowledge that AI can be a source of harm in health care. Yet their vague language and suggested “safety program” fail to describe what that would look like or what, explicitly, the administration’s most pressing concerns are. The AI Bill of Rights states, “Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way.” Essentially, this asks for a voluntary commitment from AI developers, one that is non-binding and ineffectual.

While companies at the forefront of AI development, such as Google and Microsoft, are ideally positioned to lead positive change, they are more fixated on surpassing each other’s technological achievements than on grappling with the ethical stakes of deploying these technologies, overlooking their profound societal impacts. A stronger policy would mandate that AI tools used in health care be built on data representative of the populations they are intended to serve, based on federal estimates and census data on both demography and geography.

Mitigating bias in AI is about more than improving efficacy; it requires a comprehensive approach that reduces the risk of these tools inheriting the biases of the data behind them. Establishing clear, thorough policies and guidelines that require inclusive datasets is a crucial step. As we integrate AI into health care, we face tremendous potential for beneficence as well as serious ethical challenges. Proceeding with a conscientious commitment to equity is essential if AI is to mitigate, rather than exacerbate, health care disparities.

Fadi Masoud and Sami Alahmadi are medical students.





