
Unbiased: How can AI be designed to be unbiased towards Black women?

By Iyanuoluwa Adenle | Jun 11, 2022

Artificial Intelligence (AI) has become increasingly popular in recent years. Governmental bodies, large organizations, and small online businesses are all using AI to make smart business decisions. AI is a technology that enables computers to learn on their own with the help of advanced algorithms, rather than being explicitly programmed for every task.

People often assume that AI is neutral, except it isn't. The problem with AI today is the algorithm. By design, algorithms are built using past data sets in order to learn from them; if that past data over-represents or favours white males, the results will be biased too. This is known as 'algorithmic bias' or 'machine learning bias'. Algorithms are often trained on an inherently limited set of data, which means that gender and race gaps in the data sets skew the algorithms. Machines learn from the data they are fed, and if that data is already biased, the AI can turn out racist, sexist, or prejudiced.
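
To make the mechanism concrete, here is a minimal sketch using purely synthetic data (the groups, numbers, and rules are all invented for illustration, not taken from any real system) of how a model trained on data dominated by one group can end up nearly useless for an underrepresented one:

```python
# Minimal sketch of machine-learning bias: a classifier trained on data
# dominated by one group learns that group's patterns and performs far
# worse on the underrepresented group. Purely synthetic, illustrative data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, rule):
    """Draw n samples whose true label follows a group-specific rule."""
    X = rng.normal(size=(n, 2))
    y = (X @ rule > 0).astype(int)
    return X, y

# Group A supplies 95% of the training data; group B only 5%.
Xa, ya = make_group(950, np.array([1.0, 0.0]))  # group A: label depends on feature 0
Xb, yb = make_group(50, np.array([0.0, 1.0]))   # group B: label depends on feature 1

model = LogisticRegression()
model.fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equal-sized samples from each group.
for name, rule in [("group A", [1.0, 0.0]), ("group B", [0.0, 1.0])]:
    X_test, y_test = make_group(1000, np.array(rule))
    print(name, "accuracy:", round(model.score(X_test, y_test), 2))
# Group A scores near-perfectly; group B scores far lower, close to
# chance, because the model optimised for the majority it was trained on.
```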

When racial and gender bias is fed into these systems, it shapes the algorithms: because the people building them aren't diverse enough in their thinking and experience, the algorithms end up biased as well. It is important that the problem is fixed by having diverse teams build these kinds of systems, so that these types of biases can be caught at an early stage, before they get into production code.

The bias caused by the lack of diversity in AI raises more than a question of fairness and equality; it also raises a question of quality.

Diversity in AI has been an ongoing concern among scientists since 2015, when Google's Photos app labelled photos of dark-skinned people, including African-Americans, as "gorillas" or "chimpanzees." The African-American engineer who identified the problem explained that it was caused by faulty algorithms trained on racist data sets from websites like Reddit and Twitter, where users often submit photos tagged with offensive words like "monkey" or "ape" and other racial slurs. Despite the problem being identified in 2015, nothing was done to solve it.

Joy Buolamwini, a computer scientist and founder of the Algorithmic Justice League, wrote in "Artificial Intelligence Has a Problem With Gender and Racial Bias," published in Time, about how harmful the bias of artificial intelligence can be to people of colour, especially women of colour. In 2015, she discovered that a particular facial analysis software couldn't detect her dark-skinned face until she put on a white mask.

She wrote, 

“These systems are often trained on images of predominantly light-skinned men. And so, I decided to share my experience of the coded gaze, the bias in artificial intelligence that can lead to discriminatory or exclusionary practices.”

When women and people of colour are underrepresented in technology, the results shape the data, which in turn influences the AI. The consequences of this can be seen everywhere: from voice assistants that don't understand non-binary genders, to facial recognition software that cannot recognize people with darker skin tones because it was trained on mostly white faces.

The lack of diversity in AI is a direct result of the disproportionate representation of white men within the field, which ultimately shapes how artificial intelligence is developed. Since AI is trained by humans, who inevitably carry biases, it can produce harmful consequences for people who fall outside that dominant group. For example, if an algorithm is trained on biased data sets, it will learn those same biases. If such algorithms are then used to analyze data sets that contain information about ethnicity or gender identity, they can preserve existing stereotypes instead of challenging them.

Just because a tool has been tested for bias against one group (which assumes the engineers doing the testing actually understand how bias manifests and operates) doesn't mean it has been tested for bias against another group.

Tech is already making important decisions about people's lives: which political advertisements they see, how their application to their dream job is screened, how police officers are deployed in their neighborhood, and even their home's predicted risk of fire.

This is also true when an algorithm considers several identity factors at the same time: a tool may be deemed fairly accurate for white women, for instance, but that doesn't necessarily mean it works for Black women.
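
To see how such a gap can hide behind a single headline metric, here is a minimal sketch of a disaggregated audit; the records and numbers below are invented purely for demonstration:

```python
# Minimal sketch of a disaggregated (intersectional) audit: overall
# accuracy looks healthy while one subgroup fares much worse. The
# records below are invented purely for demonstration.
import pandas as pd

# One row per audited prediction: subgroup labels plus whether the
# model got that case right (1) or wrong (0).
audit = pd.DataFrame({
    "race":    ["white"] * 4 + ["black"] * 4,
    "gender":  ["woman", "woman", "man", "man"] * 2,
    "correct": [1, 1, 1, 1, 0, 1, 1, 1],
})

# The headline number looks fine (0.875)...
print("overall accuracy:", audit["correct"].mean())

# ...and single-axis checks understate the problem (0.75 each)...
print(audit.groupby("race")["correct"].mean())
print(audit.groupby("gender")["correct"].mean())

# ...but the race-AND-gender breakdown shows accuracy for Black women
# is only 0.5, half the score of every other subgroup.
print(audit.groupby(["race", "gender"])["correct"].mean())
```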

When people's lives, livelihoods, and dignity are on the line, AI must be developed with care. Racial diversity will lead to better AI for everyone. Biased AI will only reinforce existing social inequalities, deepening them rather than reducing them.

The solution is to have more diverse teams working on building AI. We need more women and people of colour in tech so that algorithms can be created that don't reinforce gender stereotypes or racial biases. To build unbiased AI, we have to create an inclusive society, which will in turn decide how our lives change in AI-dominated environments. AI researchers need to include all communities in their design process from the beginning.
