Does AI Have Racist Tendencies?

Should We Fear AI?

AI systems are just a reflection of the humans who create them, and no amount of machine learning is going to change that. When examining AI, we can’t simply ignore the biases of the people who design these systems. These programmers craft every line of code, after all. Some people will argue that because AI isn’t human, it shouldn’t be judged by human standards. We’re not so sure about that.

After all, we’ve only just begun to scratch the surface of what AI can accomplish, and a lot more work will likely be needed before the technology fully transcends its human-made limitations.

Last month, a video posted on Facebook featuring a group of Black men ended with an automated prompt asking viewers whether they would like to “keep seeing videos about Primates.” Facebook later apologized, saying the prompt was the result of an “error” and calling it “unacceptable.”

A growing number of algorithms have been shown to produce biased outcomes for people of color, and the companies responsible describe them in increasingly familiar terms: “problematic,” “unfair,” a “glitch,” or an “oversight.”

Campaigners are now urging these businesses to recognize that the AI systems they have developed, systems that exert a growing influence on our lives, might be racist.

“This animalization of racialized people has been going on since at least 2015, from a great many companies, including Google, Apple, and now Facebook,” says Nicolas Kayser-Bril, a journalist at the data advocacy organization AlgorithmWatch.

The notorious 2015 incident, in which Google Photos labeled two Black people as “gorillas,” provoked a furious public reaction; Kayser-Bril, however, is sharply critical of how little the company did in response.

“Google simply removed the labels that showed up in the news story,” he says. “It’s fair to say that there is no evidence that these companies are working towards solving the racism of their tools.”

The bias that algorithms reveal goes beyond the mislabeling of digital images. Tay, a chatbot Microsoft released in 2016, began using racist language within hours of its debut. The year before, an ill-conceived AI beauty contest repeatedly judged white women more attractive than women of other races.

Facial recognition software is significantly more accurate for white faces than for Black ones, which puts people of color at risk of being wrongly arrested when police use these systems.

AI has also been shown to introduce bias and prejudice into online gaming and even government policy; yet the apologies that follow tend to place the blame on the AI itself, like parents explaining away the behavior of a naughty child.

One might assume that AI is neutral, and that this is a good thing.

However, as campaigners point out, AI has only one teacher: humans. In principle, a neutral AI could help strip out the biases that distort human decision-making; in practice, it appears to be infused with all the discrimination inherent in humanity.

In a stunning moment from the documentary, researcher Joy Buolamwini, one of the women of color featured, tries an automated facial recognition system that reports back “no face detected.” When she puts on a plain white mask, her face is detected immediately. The reason is that the algorithm making the decision was trained largely on white faces.

Despite all the efforts being made worldwide to build a more open and inclusive society, AI has only the past to learn from. “If you feed a system data from the past, it’s going to replicate and amplify whatever bias is present,” says Kayser-Bril. “AI, by construction, is never going to be progressive.”
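Kayser-Bril’s point can be illustrated concretely. The sketch below is not from the article; it is a hypothetical toy model in which hiring decisions from a biased past serve as training data. The protected attribute is never given to the model, yet a correlated proxy feature (a made-up “zip_code”) lets it reproduce the old discrimination.

```python
# A minimal, hypothetical sketch of bias replication: a model trained on
# biased historical hiring decisions. The group label is never an input,
# but a correlated proxy feature lets the model learn the bias anyway.
# All names and numbers here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)              # two equally talented groups
score = rng.uniform(0, 1, n)               # true qualification, same for both
zip_code = group + rng.normal(0, 0.3, n)   # proxy correlated with group

# Historical decisions: group 1 was held to a stricter bar (the bias).
hired = score > np.where(group == 0, 0.5, 0.7)

# Train on score and zip code only; group itself is excluded.
X = np.column_stack([score, zip_code])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted hire rate {pred[group == g].mean():.2f}")
# Typical output: roughly 0.50 for group 0 and 0.30 for group 1 -- the
# model has learned the past discrimination through the proxy feature.
```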

Data can also create self-fulfilling feedback loops. US police forces that use predictive software conduct greater surveillance of Black neighborhoods because those are the areas the existing records emphasize, and that extra surveillance generates still more records. Credit agencies and prospective employers that rely on biased systems can make ill-informed choices, and the people affected may never be aware of the computer’s role.
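To see how such a loop sustains itself, here is a second hypothetical sketch: two neighborhoods with identical underlying crime rates, where patrols are allocated in proportion to past records. The initial skew in the data never washes out, because each year’s records confirm the previous year’s prediction.

```python
# A minimal, hypothetical sketch of a predictive-policing feedback loop.
# Both neighborhoods have the same true crime rate, but "A" starts with
# more recorded incidents, so it receives more patrols, which in turn
# record more incidents. All numbers are invented for illustration.
import random

random.seed(0)
true_rate = 0.1                    # identical underlying crime rate
recorded = {"A": 60, "B": 40}      # historical records: A is over-policed

for year in range(10):
    total = sum(recorded.values())
    for hood in recorded:
        # Allocate 1,000 patrols in proportion to existing records.
        patrols = round(1000 * recorded[hood] / total)
        # Each patrol observes a crime with the same probability everywhere.
        recorded[hood] += sum(random.random() < true_rate for _ in range(patrols))

share_a = recorded["A"] / sum(recorded.values())
print(f"Share of recorded crime in A after 10 years: {share_a:.2f}")
# The initial 60/40 skew persists even though the neighborhoods are
# identical: the system's data keeps confirming its own prediction.
```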

This opacity, says Kayser-Bril, is both alarming and unsurprising. “We have no idea of how widespread the problem is because there is no way to audit these systems systematically,” he says. “It’s unclear, but I’d say it’s not a major issue for private firms. Their role isn’t to be transparent, but to make a difference.”

Some companies do appear to be making positive moves. In 2020, Facebook said it would “build products to advance racial justice … this includes our work to amplify black voices”.

Every apology from Silicon Valley comes with a pledge to address the issue. However, a UN report released in the first week of the year made clear where the responsibility lies.

“Developers mainly design AI tools in the West,” it stated. “These developers are overwhelmingly white men, who also account for the vast majority of authors on AI topics.” The report went on to call for greater diversity in the field of data science.

People who work in the field may be resistant to allegations of discrimination against minorities. Still, as Ruha Benjamin points out in her book Race After Technology, it is possible to perpetuate a racist system without intending to harm anyone.

“No malice needed, no N-word required, just lack of concern for how the past shapes the present,” she writes.

But with the AI systems of the last few years so meticulously constructed and trained from the ground up, what hope is there of repairing the harm?

“The benchmarks that these systems use have only very recently started to take into account systemic bias,” Kayser-Bril says. “To remove systemic racism would necessitate huge work on the part of many institutions in society, including regulators and governments.”

The scale of that struggle was well captured by Canadian researcher Deborah Raji, writing for the MIT Technology Review.

