
Biases and Stereotypes in Generative AI

The tech race for Generative AI officially began in late 2022, when OpenAI launched ChatGPT, a conversational chatbot built on a large language model (LLM).


According to an article published in The Verge, ChatGPT might be the fastest-growing consumer internet app of all time, reaching an estimated 100 million monthly users in just two months. However, recent articles, particularly one published in Rest of World, suggest that "generative AI systems have tendencies towards bias, stereotypes, and reductionism" when it comes to portraying diverse identities.


Bias occurs in most algorithms and AI systems; these technologies are also prone to "hallucinations," meaning they generate false information.


A recent analysis by Bloomberg of more than 5,000 AI-generated images revealed that images associated with higher-paying jobs featured people with lighter skin tones, and that results for more professional roles were male-dominated.


Rest of World, a tech-focused media outlet covering technology’s impact on societies in Latin America, Africa, and Asia, analyzed 3,000 images generated by Midjourney, an AI tool that creates images from text prompts. Some of the results they obtained include:



Credits: Rest of World


"Essentially what this is doing is flattening descriptions into particular stereotypes, which could be viewed in a negative light," said Amba Kak, Executive Director of the AI Now Institute, in the article.


The Rest of World article also explains that even if stereotypes are not "inherently negative, they are still stereotypes: They reflect a particular value judgment and a winnowing of diversity."


Bias and stereotypes are not only related to the negative depiction of certain countries or ethnicities but also, as mentioned earlier, to gender:


"Across almost all countries, there was a clear gender bias in Midjourney’s results, with the majority of images returned for the 'person' prompt depicting men."

In conclusion, AI experts and researchers agree that bias in these kinds of large language models and image generators is "a tough problem to fix" because, after all, "the uniformity in their output is largely down to the fundamental way in which these tools work: the AI systems look for patterns in the data on which they’re trained, often discarding outliers in favor of producing a result that stays closer to dominant trends."


These tools are designed and trained to mimic what has been done before, not to ensure and promote diversity: "Any technical solutions to solve bias would likely have to start with the training data, including how these images are initially captioned."
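
As a loose illustration of that last point, here is a minimal, hypothetical Python sketch of how an audit of training-data captions might begin: it simply counts gendered terms across a handful of captions. The sample captions and keyword lists are invented for illustration only and do not reflect the scale or methodology a real audit of an image-captioning dataset would require.

```python
from collections import Counter

# Hypothetical sample of image captions; a real audit would load these
# from the training dataset actually used by the image generator.
captions = [
    "a doctor standing in a hospital, he is smiling",
    "a nurse checking a chart, she looks tired",
    "a male engineer at a workstation",
    "portrait of a businessman in a suit",
    "a woman cooking dinner in a kitchen",
]

# Very rough, illustrative keyword lists; real work would need careful
# linguistic and cultural review rather than a handful of English terms.
MALE_TERMS = {"he", "him", "his", "man", "male", "businessman"}
FEMALE_TERMS = {"she", "her", "hers", "woman", "female", "businesswoman"}

def gendered_term_counts(texts):
    """Count gendered terms across a list of captions."""
    counts = Counter()
    for text in texts:
        for token in text.lower().replace(",", " ").split():
            if token in MALE_TERMS:
                counts["male"] += 1
            elif token in FEMALE_TERMS:
                counts["female"] += 1
    return counts

print(gendered_term_counts(captions))
# With the sample above this prints Counter({'male': 3, 'female': 2}),
# hinting at the kind of skew that can later surface in generated images.
```

Even a toy count like this makes imbalances in captions visible quickly, which is precisely the kind of signal researchers say any fix would have to start from.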


Read the full Rest of World article and research here.

