Google Responds to ‘Woke’ Criticism, Announces Fixes for AI
Google is urgently addressing issues with its AI-powered image generation tool, Gemini, after users reported it was over-correcting against the risk of being racist. The tool, which creates pictures in response to written prompts, was criticized for producing images that inaccurately represented genders and ethnicities in historical contexts. For instance, a request for images of America’s founding fathers yielded pictures of women and people of color. Google acknowledged that Gemini’s AI image generation was “missing the mark” and announced immediate efforts to improve the tool’s depictions. The company later suspended Gemini’s ability to generate images of people while it works on a fix.
This incident highlights the ongoing challenge AI systems face in handling diversity and avoiding biased outputs. Similar issues have arisen before: Google drew criticism when its Photos app labeled a photo of a Black couple as “gorillas,” and rival AI firm OpenAI was accused of perpetuating harmful stereotypes with its Dall-E image generator, which often returned results dominated by pictures of white men when asked for images of people in specific roles.
Google’s Gemini was released last week and quickly drew criticism for what some perceived as over-sensitivity to bias concerns. Users complained that the tool struggled to acknowledge the existence of white people, prompting Google to act swiftly to address the inaccuracies. The company emphasized that it takes representation and bias seriously and wants results that reflect its global user base. In response to user feedback, Google says it will refine the tool to accommodate historical nuance and produce more accurate and diverse images.