How AI systems are entrenching global racism

Experts advocate for mandatory third-party bias audits, transparency laws forcing companies to disclose training data, and global standards anchored in human rights frameworks such as the UN’s International Convention on the Elimination of All Forms of Racial Discrimination.

Equally vital is involving marginalized communities in AI development to ensure technologies serve rather than harm them.

Artificial intelligence (AI) systems, increasingly embedded in sectors like healthcare, law enforcement, and education, are not the neutral tools they are often portrayed to be. Instead, they risk deepening racial inequalities on a global scale by replicating and amplifying historical biases. This occurs through several interconnected mechanisms, each compounding the marginalization of already vulnerable communities.

As governments and corporations race to harness artificial intelligence, a groundbreaking United Nations report warns that poorly regulated AI systems are amplifying racial discrimination worldwide—from predictive policing algorithms that target marginalized neighborhoods to healthcare tools that downgrade care for Black patients.

The findings, presented at the Human Rights Council’s 56th session, urge nations to confront AI’s role in perpetuating systemic racism or risk cementing digital-era inequality.

“Generative AI is changing the world and has the potential to drive increasingly seismic societal shifts,” said Ashwini K.P., the UN Special Rapporteur on contemporary forms of racism, during a charged dialogue at the Palais des Nations. “I am deeply concerned about the rapid spread of AI across fields like law enforcement and education, not because it lacks benefits, but because its veneer of neutrality hides deeply embedded biases.”

The myth of neutral algorithms

Central to Ashwini’s report is a rebuttal of the “dangerous myth” that technology is objective. Predictive policing tools, widely used in the U.S., Europe, and Latin America, exemplify this fallacy. These systems analyze historical crime data to forecast where future offenses will occur and who might commit them. But as Ashwini notes, this data often reflects decades of over-policing in Black, Indigenous, and Roma communities.

“When officers in overpoliced neighborhoods record new offenses, a feedback loop is created,” Ashwini said. “Algorithms generate increasingly biased predictions, justifying even more policing. Bias from the past leads to bias in the future.”

In Chicago, a 2023 study found predictive policing software disproportionately flagged majority-Black neighborhoods, despite crime rates being similar across racial lines. Similar patterns emerged in London, where gang databases targeted young Black men at rates 40 times higher than their white peers.

One critical issue lies in the data used to train AI. These systems learn from vast datasets that often reflect historical inequities: the crime statistics fed to predictive policing tools, for instance, are skewed by decades of over-policing in Black, Indigenous, and minority neighborhoods. In the U.S., Black individuals are disproportionately arrested for drug offenses despite similar usage rates across racial groups.

When AI interprets this biased data as objective, it directs more police patrols to those areas, creating a self-fulfilling cycle in which increased policing leads to more arrests, which the system then uses to justify further surveillance. Similarly, widely used healthcare algorithms such as the Epic Deterioration Index, which predicts patients’ risk of clinical deterioration to help prioritize care, have been shown to underestimate the needs of Black patients because their historical training data tie health outcomes to race-based metrics rather than clinical factors.
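To make that feedback loop concrete, the toy simulation below (a minimal sketch with entirely hypothetical numbers, not any vendor’s actual system) allocates patrols in proportion to past recorded incidents. Both areas have the same underlying offense rate, but the area with a longer history of recorded incidents keeps drawing more patrols and therefore keeps generating more records.

```python
# Minimal sketch of the feedback loop described above. All numbers are
# hypothetical and for illustration only; this is not any vendor's algorithm.
import random

random.seed(0)

TRUE_OFFENSE_RATE = 0.05            # identical underlying rate in both areas
recorded = {"A": 120, "B": 60}      # area A starts with more recorded incidents
TOTAL_PATROLS = 100                 # patrols allocated each round
STOPS_PER_PATROL = 10               # each patrol makes this many stops

for year in range(1, 6):
    total = sum(recorded.values())
    # "Predictive" allocation: patrols follow past recorded incidents.
    patrols = {area: round(TOTAL_PATROLS * count / total)
               for area, count in recorded.items()}
    # More patrols mean more offenses are observed and recorded, even though
    # the underlying offense rate is identical in both areas.
    for area, n in patrols.items():
        observed = sum(random.random() < TRUE_OFFENSE_RATE
                       for _ in range(n * STOPS_PER_PATROL))
        recorded[area] += observed
    print(f"year {year}: patrols={patrols} recorded={recorded}")
```

Because recorded incidents, rather than underlying offending, drive the allocation, the initial disparity never washes out.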

Racial proxies and digital redlining

Even when AI avoids explicit racial categories, it frequently relies on proxies correlated with race, such as ZIP codes, income levels, or education. Mortgage approval algorithms in the U.S. and South Africa, for instance, deny loans to applicants from predominantly non-white neighborhoods by citing “risk factors” like lower property values—a legacy of racist redlining policies.

A 2024 Stanford study revealed that U.S. mortgage approval algorithms denied loans to Black applicants at twice the rate of white applicants with identical financial profiles, using neighborhood crime statistics as a covert race marker.
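The proxy mechanism can be illustrated with a deliberately simple sketch. In the synthetic example below (all data invented for illustration), the scoring rule never sees race, yet because ZIP code correlates with race and carries a redlining-era penalty, approval rates split sharply along racial lines.

```python
# Toy illustration of a racial proxy: the scoring rule never sees race, but
# because ZIP code correlates with race, outcomes still differ by race.
# All data are synthetic and purely illustrative.
import random

random.seed(1)

def make_applicant():
    # Hypothetical segregated geography: ZIP 10001 is 80% white,
    # ZIP 20002 is 80% Black, with identical income distributions.
    zip_code = random.choice(["10001", "20002"])
    weights = [0.8, 0.2] if zip_code == "10001" else [0.2, 0.8]
    race = random.choices(["white", "Black"], weights=weights)[0]
    income = random.gauss(60_000, 15_000)   # same distribution in both ZIPs
    return {"zip": zip_code, "race": race, "income": income}

def approve(applicant):
    # "Race-blind" rule: penalize the ZIP with lower historical property values.
    score = applicant["income"] - (20_000 if applicant["zip"] == "20002" else 0)
    return score > 50_000

applicants = [make_applicant() for _ in range(10_000)]
for race in ("white", "Black"):
    group = [a for a in applicants if a["race"] == race]
    rate = sum(approve(a) for a in group) / len(group)
    print(f"{race}: approval rate {rate:.0%}")
```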

Employment tools are no exception: Amazon’s scrapped hiring AI downgraded résumés from graduates of historically Black colleges and universities, favoring traits linked to white male candidates. These systems effectively repackage systemic racism as neutral data points, evading accountability while perpetuating exclusion.

The stakes in medicine are a matter of life and death. “Race-based correction factors in medical AI are not just flawed—they’re deadly,” said Dr. Rhea Boyd, a pediatrician and health equity advocate.

Education tools are not immune. AI-driven “success algorithms” used by universities and employers in Brazil, India, and the U.S. frequently score racial minorities as less likely to excel academically. “These systems aren’t predicting potential; they’re replicating exclusion,” Ashwini said.

The problem is further compounded by feedback loops in which biased AI outputs reinforce existing inequalities. In education, predictive algorithms flag Black and Latino students as “high risk” for academic failure based on socioeconomic data, diverting resources away from them and worsening outcomes.

In child welfare, U.S. tools like the Allegheny Family Screening Tool disproportionately target Black families for investigations, citing factors like poverty as indicators of neglect. These systems trap marginalized groups in cycles of disadvantage, denying them opportunities for advancement.

While the EU’s AI Act and Brazil’s AI ethics frameworks mark progress, Ashwini’s report criticizes most nations for “glacial” regulatory efforts. Only 12 countries have enacted binding AI laws addressing racial bias, per UN data.

Volker Türk, UN High Commissioner for Human Rights, echoed these concerns: “In high-risk areas like law enforcement, the only option is to pause AI use until safeguards exist.” Türk’s statement references incidents like the 2022 wrongful arrest of a Black Detroit man due to faulty facial recognition, a technology whose error rates run as much as 34% higher for darker-skinned individuals, per MIT research.

Surveillance and global power imbalances

Facial recognition technology exacerbates these issues with alarming precision. Studies reveal that these systems misidentify people of color at far higher rates than white individuals, errors that can lead directly to wrongful arrests like the 2022 Detroit case cited by Türk.
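The kind of audit that surfaces such disparities is straightforward to describe: report error rates broken down by demographic group rather than as a single aggregate figure. The sketch below uses invented placeholder results solely to show the bookkeeping; it is not real benchmark data.

```python
# Disaggregated error-rate reporting: the placeholder records below pair a
# demographic group with whether the system's match was correct.
from collections import defaultdict

results = [
    ("lighter-skinned", True), ("lighter-skinned", True),
    ("lighter-skinned", True), ("lighter-skinned", False),
    ("darker-skinned", True), ("darker-skinned", False),
    ("darker-skinned", False), ("darker-skinned", False),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# Per-group error rates expose gaps that a single overall figure hides.
for group, n in totals.items():
    print(f"{group}: error rate {errors[group] / n:.0%} ({errors[group]}/{n})")
```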

Even after Google introduced the Monk Skin Tone (MST) Scale to counter skin-tone bias online, platforms remain awash with skin-color discrimination, according to research by Sony AI. The Sony report points to additional layers of bias tied to apparent skin color within computer vision datasets and models, noting that AI datasheets and model cards still leave considerable room for discrimination against under-represented groups, and it found that existing skin color scales affect the way AI classifies people and emotions.

Globally, AI tools enable mass surveillance of marginalized groups, as seen in China’s targeting of Uyghurs through biometrics and India’s monitoring of Muslim neighborhoods under counterterrorism pretexts. The consequences range from eroded privacy to wrongful imprisonment, disproportionately burdening communities already subject to systemic discrimination.

Global power imbalances in AI development worsen these dynamics. Most AI technologies are designed in Western tech hubs, sidelining perspectives from the Global South. Language models like ChatGPT underperform in African languages, reinforcing linguistic marginalization, while agricultural algorithms in Kenya prioritize crops grown by white-owned agribusinesses over indigenous farming practices. This digital colonialism perpetuates hierarchies rooted in historical exploitation, privileging Western norms and economic interests.

Regulatory failures compound these risks. Few countries have enacted laws requiring racial bias audits for AI systems, and corporate self-regulation often lacks teeth. Meta and Google, for instance, have faced repeated lawsuits over racially skewed ad-targeting systems, yet internal ethics boards remain powerless to enforce meaningful change. Without binding oversight, AI developers face little pressure to address the discriminatory impacts of their technologies.
