info@ethicallyspeaking.net
According to a recent study, Artificial Intelligence is racist, at least secretly. Research indicates that Artificial Intelligence, simply known as AI, generates covertly racist decisions about people based on their dialect. The research article entitled, “AI generates covertly racist decisions about people based on their dialect”, was published August 2024 in the journal, Nature. According to the article, dialect prejudice also has the potential for harmful consequences. For instance, language models are more likely to suggest that speakers of African American English (AAE) be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death. In addition, these language models embody covert racism in the form of dialect prejudice among other negative outcomes. Language models are a type of AI, such as Chat GPT, that processes and generates text.
When I heard about this study, it reminded me of a similar study that discovered many qualified candidates for employment positions were disqualified because of their ethnically identifiable “black” first names. Similarly, our own human prejudicial views have seeped into how AI views people who speak black English. From my perspective, a person’s dialect, whether it be ethnically, regionally, or culturally diverse, should not serve as a determining factor as to how one is perceived. These are peripheral factors that are insignificant to a person’s humanity, character, or even intelligence.
But of course, the research outcomes of this study come as no surprise. In most cases, those who program language models are less likely to represent the larger society. Instead, computer programmers tend to represent a small, elite group of people who are less diverse in their thinking and reasoning. If we are to develop AI technology that is less racially biased, people who represent a more globally diverse society in thought and experience must be encouraged to enter the computer programming profession. Their background and experience will positively affect how AI is programmed. Most importantly, they will program computers that are less biased and less likely to perpetuate negative stereotypes.
There are other initiatives that can help make AI more just in its perception of speakers who use African American dialect. Because language models were trained on huge amounts of text, these texts tend to be infused with the very same biases we hold as a society. In many ways, these models learn social stereotypes and biases. Therefore, if these negative attributes can be learned, they also can be unlearned. We, as humans, can change the way we understand and conceptualize “African American speech or dialect”. Once these false perceptions are re-conceptualized, language models will become more objective. Fairness must be our ultimate priority. It must supersede hidden racism to a point where AI becomes a tool for human good rather than a weapon for racial bias.
Ethically Speaking,
Obiora N. Anekwe