How the DEA’s use of predictive algorithms is worsening crises in urban communities and raising suicide rates among African Americans


The true definition of criminal behavior has always been problematic. Are we criminals because we break the law or because we have been convicted? Famous and revered people throughout history have clearly broken the law but are almost never defined as criminals. All of the Founding Fathers of the United States, by their own admission, committed treason against the Crown, a capital offense. In contrast, figures like Stalin and Hitler had every legal right under their own laws to order the execution of millions of citizens, yet they are still routinely described as criminals.

The power of this label should not be underestimated, as it often correlates with the amount of sympathy afforded a citizen who is wrongfully targeted or mistreated. Two American states, South Carolina and Missouri, executed men who most knowledgeable reviewers, including prosecutors, agreed were provably innocent. Yet every appellate and supreme court, state and federal, deemed them unworthy of the opportunity to prove that innocence. Is it a coincidence that both of these men were Black? It turns out that the odds your conviction will be overturned in a federal court if you are non-white are about 7.56 percent, roughly half the rate for white appellants in America (about 15 percent). And the effect is felt long before anyone reaches an appeals court.

There is no doubt that in many nations, including the United States, minority citizens are often the focus of police actions. Citizens of color in the United States, by almost every measure applied, are suspected more, policed more, stopped more, searched more, arrested more, convicted more, incarcerated more, and reincarcerated more than the majority population or those with more power. Ultimately, it always comes down to that last measure: power. Those paid to search and arrest citizens can run into problems if they target a citizen with the resources to fight back. Because political and economic power in the United States so often tracks skin color, who the federal government deems worthy of effective medical care is now determined by this seemingly trivial aspect of humanity. And while we imperfect human beings are inconsistently biased, an unvetted AI will be biased with perfect consistency. This is now reflected in which physicians are targeted by the DEA.

The Drug Enforcement Administration and the Organized Crime Drug Enforcement Task Force partner with federal, state, local, and tribal law enforcement and public health agencies to facilitate information sharing through investigative de-confliction tools, including the DEA Analysis and Response Tracking System (DARTS) and the De-confliction and Information Coordination Effort (DICE), as well as other information coordination systems. These efforts are coordinated among the DEA’s Special Operations Division, the OCDETF Fusion Center, and the El Paso Intelligence Center (EPIC), with the goal of sharing de-identified, real-time data between public health and public safety, when feasible, to maximize harm reduction in communities. Task forces also use other artificial intelligence systems, including the National Benefit Integrity Medicare Drug Integrity Contractor (NBI MEDIC) Qlarant Artificial Intelligence System, the CMS Predictive Learning Analytics Tracking Outcomes Tool (PLATO), and the National Health Care Anti-Fraud Association’s (NHCAA) Online Special Investigation Resource and Intelligence System (SIRIS).

In the ongoing battle against the opioid epidemic, the DEA has increasingly turned to the power of artificial intelligence (AI) and predictive algorithms to identify and charge physicians accused of overprescribing opioids. While this high-tech approach may seem like a necessary step toward regulating the medical profession, its use in poor urban communities has resulted in devastating, unintended consequences. African American communities, in particular, are facing rising suicide rates, exacerbated by the closure of health care clinics and the criminalization of doctors serving these areas. Behind the statistics lies a deeper story of systemic inequality and the dangers of technology when it amplifies, rather than mitigates, existing social issues.

At the forefront of understanding these complex dynamics is Dr. S. Craig Watkins, the Ernest A. Sharpe Centennial Professor and Executive Director of the IC² Institute at the University of Texas at Austin. A scholar and expert on the social and behavioral impacts of technology, Dr. Watkins leads research teams investigating how AI, data systems, and machine learning influence everything from health care access to mental health outcomes, particularly in marginalized communities. His work sheds light on how predictive algorithms, rather than solving problems, can often entrench systemic inequalities.

The DEA’s use of AI-based predictive algorithms to monitor and prosecute doctors has disproportionately affected physicians serving low-income and minority populations. These algorithms scan prescription patterns, patient data, and geographic factors to identify doctors who are statistically more likely to be overprescribing opioids. While the goal is to stop bad actors from contributing to the opioid crisis, the reality is that many legitimate doctors, particularly those serving African American and low-income communities, are being targeted. In urban communities where access to health care is already limited, the closure of clinics due to DEA crackdowns is a significant blow. When doctors are charged or forced to close their practices, patients—many of whom are managing chronic pain, addiction recovery, or mental health issues—are left without medical care. This not only deprives vulnerable individuals of necessary treatment but also forces them to seek illicit alternatives or suffer in silence.
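To make that mechanism concrete, here is a minimal sketch of how a prescriber risk score of this kind could work. The DEA has not published its models, so every feature, weight, and threshold below is hypothetical, chosen only to illustrate how a population-level cutoff behaves.

```python
# Illustrative sketch only: the DEA has not published its scoring model, so the
# features, weights, and threshold below are hypothetical and chosen solely to
# show how a population-level prescriber risk score behaves.

from dataclasses import dataclass

@dataclass
class PrescriberProfile:
    monthly_opioid_scripts: int   # opioid prescriptions written per month
    avg_daily_mme: float          # average morphine milligram equivalents per patient
    pct_cash_payments: float      # fraction of prescriptions paid in cash (0 to 1)
    pct_benzo_combos: float       # fraction co-prescribed with benzodiazepines (0 to 1)

def risk_score(p: PrescriberProfile) -> float:
    """Weighted sum of prescribing features; a higher score means 'flag for review'."""
    return (0.02 * p.monthly_opioid_scripts
            + 0.01 * p.avg_daily_mme
            + 2.0 * p.pct_cash_payments
            + 3.0 * p.pct_benzo_combos)

FLAG_THRESHOLD = 5.0  # hypothetical review cutoff

# A low-volume suburban practice versus a clinic serving many chronic pain and
# addiction-recovery patients; both may be prescribing entirely appropriately.
profiles = {
    "suburban practice": PrescriberProfile(40, 35.0, 0.05, 0.05),
    "high-need urban clinic": PrescriberProfile(180, 110.0, 0.35, 0.25),
}

for name, profile in profiles.items():
    score = risk_score(profile)
    print(f"{name}: score {score:.2f}, flagged: {score > FLAG_THRESHOLD}")
```

The point of the toy example is that a physician whose patient mix legitimately requires more prescriptions and higher doses sits in the statistical tail of almost any such score, which is exactly where a purely numerical flag stops being able to distinguish criminal prescribing from appropriate care for a high-need population.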

Predictive algorithms are designed to identify patterns, but those patterns are often based on biased data. In the case of the DEA, the data used to train these algorithms reflects decades of policing that has disproportionately focused on poor and minority communities. As a result, doctors in these areas are more likely to be flagged as high-risk, even if their prescribing practices are appropriate for the needs of their patients. The result is an AI-driven crackdown that disproportionately targets physicians serving marginalized populations, further limiting access to health care in already underserved areas.
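The feedback loop described above is easy to reproduce in a short simulation. In the sketch below (synthetic data, not DEA data), the historical "investigated" labels used for training are driven mostly by where enforcement looked rather than by prescribing behavior; a model fit to those labels then assigns different risk to two physicians with identical prescribing, purely because of where they practice.

```python
# Toy simulation of enforcement-biased training data; all numbers are synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# One behavioral feature (prescribing volume, identically distributed everywhere)
# and one proxy feature (1 = heavily policed area, 0 = lightly policed area).
volume = rng.normal(100, 15, n)
heavily_policed = rng.integers(0, 2, n)

# Historical "investigated" labels: driven mostly by where enforcement looked,
# only weakly by actual prescribing volume.
logit = 0.01 * (volume - 100) + 2.0 * heavily_policed - 2.5
investigated = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([volume, heavily_policed])
model = LogisticRegression().fit(X, investigated)

# Two physicians with identical prescribing, different neighborhoods.
same_volume = 100.0
risk_lightly = model.predict_proba([[same_volume, 0]])[0, 1]
risk_heavily = model.predict_proba([[same_volume, 1]])[0, 1]
print(f"identical prescribing, lightly policed area: {risk_lightly:.2f}")
print(f"identical prescribing, heavily policed area: {risk_heavily:.2f}")
```

The model is behaving exactly as designed: it has learned that past investigations clustered in certain neighborhoods, so it directs future scrutiny back to those same neighborhoods, regardless of what the physicians there actually do.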

Dr. Craig Watkins has explored the alignment problem in AI—how algorithms that are technically efficient may still be misaligned with broader social values such as equity and justice. The DEA’s reliance on AI to target doctors in urban communities exemplifies this problem. Rather than promoting public safety, these algorithms are amplifying disparities in health care access and contributing to a broader public health crisis. One of the most alarming consequences of these closures is the rise in suicide rates among African Americans in urban areas. Dr. Watkins, in collaboration with the University of Texas and Cornell’s School of Medicine, has been investigating the factors behind the rising suicide rates in the U.S., particularly among young African Americans. His research explores the complex interactions between demographics, social environments, and health care access.

For many African Americans living in urban centers, mental health care is already difficult to access. The criminalization of doctors in these communities only worsens the situation. Chronic pain sufferers, those with addiction issues, and individuals with mental health challenges are left without the medical guidance they need. This void can lead to increased feelings of hopelessness, isolation, and despair, contributing to higher rates of suicide. Without adequate health care, people are often forced to turn to illicit substances or go without pain management entirely, further deepening their struggles. The shutdown of these clinics also exacerbates social stigma around mental health and substance use in African American communities, where seeking help can already be fraught with cultural and institutional barriers.

Dr. Watkins’ work delves deep into the unintended consequences of AI in health care, focusing on the intersection of technology, race, and mental health. Funded by the National Institutes of Health, his research explores how demographic, social, and environmental factors contribute to rising suicide rates among African Americans. His team is developing algorithmic models to better understand these interactions, with a particular focus on how AI and machine learning can be used ethically to improve mental health outcomes. The role of AI in health care, particularly in addressing or worsening systemic inequalities, is a central theme in Dr. Watkins’ research. His efforts aim to challenge the unchecked use of AI in high-stakes environments like health care, where decisions have life-or-death consequences. By combining the computational with the social and ethical aspects of AI, Dr. Watkins advocates for more humane, thoughtful implementations of technology that address the needs of marginalized communities rather than contributing to their marginalization.

The tragic consequences of the DEA’s reliance on predictive algorithms to target doctors in poor, urban communities highlight a larger issue: AI must be carefully aligned with social values like fairness and equity. As Dr. Craig Watkins’ research demonstrates, the misuse of AI can perpetuate systemic health care inequalities, leaving vulnerable populations even more exposed to harm. The solution is not to abandon AI but to reform how it is used. As the health care sector increasingly turns to technology, it must prioritize transparency, fairness, and inclusion in algorithmic and artificial intelligence design. AI can be a tool for justice—identifying gaps in care, improving access, and providing support to those most in need—but only if it is developed with a deep understanding of the social contexts in which it operates.

The DEA’s use of predictive algorithms should serve as a cautionary tale of how technology, when misaligned with human values, can cause more harm than good. By incorporating insights from experts like Dr. Watkins and his research on the intersection of AI, health care, and systemic inequality, we can begin to create a future where technology serves to heal, rather than harm, our most vulnerable communities. Dr. Watkins’ ongoing work continues to shine a light on the ways in which technology can either help or hinder progress in the fight for racial justice and mental health equity. His focus on using AI to understand the causes of rising suicide rates, combined with his advocacy for ethical AI practices, offers a path forward for a more just and compassionate health care system. In a world where the line between machine and human decision-making is increasingly blurred, scholars like Dr. Watkins remind us of the importance of keeping human values at the center of technological progress.

In closing, during the 2022 trial of a physician serving a predominantly minority population, the prosecutor helpfully reminded the jury that “[the young Black man] had a prior criminal history … He had been in prison.” Because, of course, criminal records are the new gold standard for determining who does or doesn’t deserve medical care. The defense attorney, clearly not in on the logic, responded, “I’ve never heard the government argue that having a criminal record would disqualify someone from receiving medical care. But I guess that’s where we are now.” A glance at DOJ press release data reveals that a whopping one-third (33 percent) of all physician prosecutions just so happen to occur in areas where 90 percent of the population is minority. Meanwhile, in areas where the minority population drops to 25 percent, physician prosecutions magically drop to a mere 1.5 percent. By that measure, a doctor is more than 20 times as likely (33 ÷ 1.5 ≈ 22) to be prosecuted for prescribing controlled medications if they serve a minority community. Coincidence? We doubt it.

Neil Anand is an anesthesiologist. L. Joseph Parker is a research physician.





