Google's Gemini chatbot (formerly known as Bard) can generate AI illustrations from a user's text description. You can ask it, for example, to draw images of happy couples or of people in period dress strolling through modern streets. As the BBC reports, however, some users are criticizing Google because Gemini has been depicting specific white figures and historically white groups of people as racially diverse. Google has now acknowledged in a statement that Gemini "is offering inaccuracies in some historical image generation depictions" and said that it will promptly address the issue.
The Daily Dot reports that the accusations began when a former Google employee tweeted images of women of color with the caption, "It's embarrassingly hard to get Google Gemini to acknowledge that white people exist." He got those results by prompting Gemini to generate images of American, British, and Australian women. Other users, mostly well-known right-wing figures, chimed in with their own results, posting AI-generated images that depict the popes of the Catholic Church and the founding fathers of the United States as people of color.
In our own tests, asking Gemini to draw the founding fathers mostly returned white men, alongside one woman or person of color. When we asked it to create pictures of the pope throughout history, it produced images of Native Americans and Black women holding the office. Asking Gemini to create photographs of American women returned images of white, East Asian, Native American, and South Asian women. We were unable to get Gemini to generate images of Nazis, though The Verge reports that the chatbot also portrayed Nazis as people of color. "I am unable to fulfill your request due to the harmful symbolism and impact associated with the Nazi Party," the chatbot told us.
Gemini's behavior may be an overcorrection, given that AI-trained chatbots and robots have a history of exhibiting racist and sexist behavior. In one 2022 experiment, for instance, a robot asked to identify which of the faces it scanned belonged to a criminal repeatedly chose a Black man. In a statement posted on X, Gemini Product Lead Jack Krawczyk said that Google "takes representation and bias seriously" and that the company designed its "image generation capabilities to reflect [its] global user base." He said that for open-ended requests, such as images of people walking their dogs, Gemini will continue to generate racially diverse depictions. He added that "[h]istorical contexts have more nuance to them and [his team] will further tune to accommodate that."
Google's full statement, posted on X, reads: "We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately."