Researchers suggest that algorithms should “consider race more explicitly.”
Khari Johnson, a writer at Wired, wrote that “technology can be used to exclude, control, or oppress people and reinforce historic systems of inequality that predate AI.” He discussed a paper published in Big Data & Society to emphasize the role technology plays in racial inequality.
To compensate for these “inequities,” the researchers and sociologists suggest that AI models should incorporate critical race theory and intersectionality.
“[T]he authors [in the paper] describe algorithmic reparation as combining intersectionality and reparative practices ‘with the goal of recognizing and rectifying structural inequality,’” Johnson wrote.
The paper suggested that “reparative algorithms” provide a solution to racial inequities in technology.
“Reparative algorithms prioritize protecting groups that have historically experienced discrimination and directing resources to marginalized communities that often lack the resources to fight powerful interests.”
The paper continued: “Algorithms are animated by data, data comes from people, people make up society, and society is unequal. Algorithms thus arc towards existing patterns of power and privilege, marginalization, and disadvantage.”
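The paper does not prescribe a specific implementation of that prioritization. One standard technique from the fairness literature that matches the description is sample reweighting: giving training examples from a historically disadvantaged group extra weight so errors on that group cost more during training. The sketch below is an illustration under that assumption, not the authors’ method; the group labels and weight values are hypothetical.

```python
# A minimal reweighting sketch (not the paper's method): upweight training
# examples from a disadvantaged group so a model's loss penalizes errors on
# that group more heavily. Group names and weights are hypothetical.

# Hypothetical training examples: (features, label, group)
examples = [
    ([0.2, 1.0], 1, "advantaged"),
    ([0.9, 0.1], 0, "disadvantaged"),
    ([0.4, 0.6], 1, "disadvantaged"),
]

# Upweight the disadvantaged group; 2.0 is an arbitrary illustrative choice.
GROUP_WEIGHTS = {"advantaged": 1.0, "disadvantaged": 2.0}

def sample_weights(data):
    """Per-example weights a training loop could pass to its loss function."""
    return [GROUP_WEIGHTS[group] for _, _, group in data]

weights = sample_weights(examples)
print(weights)  # [1.0, 2.0, 2.0]
```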
One area Johnson cited where AI can discriminate is mortgage applications. White House Office of Science and Technology Policy adviser Rashida Richardson is publishing a paper on AI’s effect on racial segregation.
“Racial segregation has played a central evolutionary role in the reproduction and amplification of racial stratification in data-driven technologies and applications. Racial segregation also constrains conceptualization of algorithmic bias problems and relevant interventions,” Richardson wrote. “When the impact of racial segregation is ignored, issues of racial inequality appear as naturally occurring phenomena, rather than byproducts of specific policies, practices, social norms, and behaviors.”
Researchers conclude that audits and “algorithmic impact assessments” are a step toward regulating algorithms that are “discriminatory.”
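To make the idea of an algorithmic audit concrete, the sketch below shows one common check such an audit might run: comparing a model’s approval rates across demographic groups and flagging a disparate impact ratio below 0.8 (the “four-fifths rule” heuristic from US hiring guidance). The data, group labels, and threshold here are illustrative assumptions, not taken from the paper or Johnson’s article.

```python
# Illustrative sketch of one check an algorithmic audit might run:
# comparing approval rates across groups (disparate impact ratio).
# Records, group labels, and the 0.8 threshold are hypothetical.
from collections import defaultdict

# Hypothetical audit records: (applicant_group, model_approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Return the fraction of approvals per group."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in records:
        total[group] += 1
        approved[group] += ok
    return {g: approved[g] / total[g] for g in total}

rates = approval_rates(decisions)
# Disparate impact ratio: lowest group approval rate over the highest.
ratio = min(rates.values()) / max(rates.values())
print(rates)  # per-group approval rates, here roughly 0.67 vs. 0.33
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule heuristic
    print("flag: approval rates differ substantially across groups")
```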