Google (NASDAQ: GOOGL) has suspended its image generation feature on Gemini, the company’s flagship chatbot, after its attempt to promote diversity led to “embarrassing” and “offensive” images.

Google launched the new feature three weeks ago to compete with market leaders like Midjourney, Stable Diffusion, and OpenAI’s DALL-E 3. However, last week, the feature grabbed headlines over some controversial output, such as including Native Americans and Asians when prompted to generate “a portrait of the Founding Fathers of America.”

In response, Google took down the feature, with Senior Vice President Prabhakar Raghavan acknowledging that “it’s clear that this feature missed the mark.”

According to Raghavan, Google wanted Gemini to work for everyone around the world. As such, it built diversity into the model’s development so that when a user asks for an image of a person walking a dog, for instance, it doesn’t only produce images of white people.

However, two things went wrong with Gemini. First, Google failed to account for cases where images clearly shouldn’t show a range, such as a prompt for a white teacher or a depiction of a historically accurate event.

Second, Google was so concerned about falling into the traps other AI models have faced, such as producing racially insensitive or sexually violent output, that it made Gemini too cautious. Several users reported that Gemini refused to produce images that other AI models would generate.

The challenge could be that Google has been adding diversity terms “under the hood” to make Gemini more inclusive, speculates Margaret Mitchell, the chief ethics scientist at Hugging Face, an AI startup valued at $4.5 billion last August.

Mitchell, who previously served as the head of Ethical AI at Google, says that when a user types a prompt like “portrait of a chief,” Google’s LLM adds the term “indigenous” to produce better results.
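
As a minimal sketch of the hidden prompt augmentation Mitchell describes, assuming a simple keyword-to-modifier mapping; the term list and function below are hypothetical illustrations, not Google’s actual implementation:

```python
# Hypothetical sketch of "under the hood" prompt augmentation.
# The keyword-to-term mapping is an illustrative assumption,
# not Google's actual implementation.
DIVERSITY_TERMS = {
    "chief": "indigenous",
    "teacher": "diverse",
}

def augment_prompt(user_prompt: str) -> str:
    """Silently append diversity terms before the prompt reaches the image model."""
    augmented = user_prompt
    for keyword, term in DIVERSITY_TERMS.items():
        if keyword in user_prompt.lower():
            augmented = f"{augmented}, {term}"
    return augmented

print(augment_prompt("portrait of a chief"))  # -> "portrait of a chief, indigenous"
```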

Mitchell also speculated that it could be an issue with prioritization in which Google pushes the more diverse results higher up.
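
A reranking step of that kind might look like the following sketch, where the diversity score is a hypothetical placeholder rather than a real Google component:

```python
# Hypothetical sketch of diversity-aware reranking: generate several
# candidates, then surface the more diverse ones first. The scoring
# field is an illustrative assumption.
from typing import List

def diversity_score(candidate: dict) -> float:
    """Placeholder score: higher for candidates tagged as more diverse."""
    return float(candidate.get("diversity_tags", 0))

def rerank(candidates: List[dict]) -> List[dict]:
    """Order generated candidates so higher-scoring ones appear first."""
    return sorted(candidates, key=diversity_score, reverse=True)

candidates = [
    {"id": "a", "diversity_tags": 0},
    {"id": "b", "diversity_tags": 2},
    {"id": "c", "diversity_tags": 1},
]
print([c["id"] for c in rerank(candidates)])  # -> ['b', 'c', 'a']
```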

“Rather than focusing on these post-hoc solutions, we should be focusing on the data. We don’t have to have racist systems if we curate data well from the start,” she told the Washington Post.

Google pledged to continue fine-tuning the image generator.

“I can’t promise that Gemini won’t occasionally generate embarrassing, inaccurate, or offensive results — but I can promise that we will continue to take action whenever we identify an issue,” Raghavan said.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI and blockchain
