Google issued an apology on Friday, acknowledging flaws in the image generation feature of its Gemini AI chatbot. The company admitted that the tool had produced images that left out white people, a misstep that sparked outrage and accusations of racial bias. Prabhakar Raghavan, Google's senior vice president, clarified that the intention was never to exclude any particular group or to create historically inaccurate images.
In a candid blog post, Raghavan took responsibility for the oversight, explaining that the tuning intended to make Gemini show a diverse range of people failed to account for cases that clearly should not show a range. He also noted an unintended consequence: over time the model became overly cautious and refused certain prompts altogether. Together, these issues produced embarrassing and inaccurate images, such as Black Nazi soldiers and Black Vikings, alongside an absence of images featuring white individuals.
To address these concerns, Google temporarily disabled Gemini's image generation feature and pledged to bring back an improved version after thorough testing. Raghavan cautioned, however, that users should not expect the feature to return immediately, as it will require extensive refinement before reactivation.
While Raghavan acknowledged that future errors are possible, he assured users that Google remains committed to correcting them promptly. He also reminded users that Gemini is designed as a creativity tool with real limitations, and that it may not be reliable when generating content about current events or other sensitive topics.