Over the past week, Google's conversational AI, Gemini, has come under scrutiny for bias in how it generates images of people. Users reported that when asked to illustrate people in historical contexts or fictional scenarios, Gemini was disproportionately likely to depict figures with darker skin tones, raising legitimate concerns about bias in the system.
In a statement, Google acknowledged that Gemini fell short when it came to representing people of different ethnicities and backgrounds in a balanced and inclusive way. The company has temporarily disabled the chatbot's ability to generate images of people.
Google said that inclusion and diversity are core principles for the company, and that it does not want its technologies to propagate, or be susceptible to, real-world bias in any way.
We're already working to address recent issues with Gemini's image generation feature. While we do this, we're going to pause the image generation of people and will re-release an improved version soon. https://t.co/SLxYPGoqOZ
— Google Communications (@Google_Comms) February 22, 2024
When users now ask Gemini to create an image of a person, they receive a message explaining that the feature is temporarily unavailable and will return once the bias issues have been addressed.
Experts have said (via AP News) that AI image generators struggle with diversity because of biases in their training data, which often underrepresents people from certain backgrounds. Google appears determined to address these limitations before re-enabling the feature, so that Gemini can represent all types of people in an objective and balanced way.
The move is seen as a proactive step toward making AI technologies fairer and more inclusive. In a related effort, the US government took steps in February to promote the safe development of generative AI, with Apple, OpenAI, and Microsoft joining the first-ever consortium to address protocols around AI.