Amid growing concerns about the misuse of generative artificial intelligence (AI) tools, Google (NASDAQ: GOOGL) has grabbed the bull by the horns, unveiling SynthID, an invisible watermarking tool for AI-generated images.

Google CEO Sundar Pichai confirmed the latest addition to the company’s expansive suite of AI tools, noting that the move brings the firm closer to its goal of ensuring safe AI use. He said the watermarks, developed by Google DeepMind, will remain undetectable to humans, with no compromise on image quality.

“Today, we are pleased to be the first cloud provider to enable digital watermarking and verification for images submitted on our platform,” said Pichai. “Using technology powered by Google DeepMind, images generated by Vertex AI can be watermarked in a way that’s invisible to the human eye without damaging the image quality.”

While invisible to the eye, the watermark can be readily identified by an AI detection tool, Pichai said, making it nearly impossible for users to strip the labeling from their AI-generated images. Traditional watermarks, by contrast, can often be removed by cropping, resizing, or editing with common photo tools.
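
To make the idea concrete, here is a minimal, hypothetical sketch of how a pixel-level invisible watermark can work in principle. This is not SynthID’s method, which Google has not published; it hides a key-derived bit pattern in the least-significant bits of an image, a naive scheme that (unlike SynthID) would not survive cropping or resizing.

```python
# Toy illustration only -- SynthID's actual technique is unpublished and
# far more robust. Here a key-derived pattern overwrites each pixel's
# lowest bit: a +/-1 change that is invisible to the eye but easy for a
# detector holding the key to recover.
import numpy as np

def embed_watermark(image: np.ndarray, key: int) -> np.ndarray:
    """Overwrite each pixel's least-significant bit with a keyed pattern."""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=image.shape, dtype=image.dtype)
    return (image & np.uint8(0xFE)) | pattern

def detect_watermark(image: np.ndarray, key: int) -> float:
    """Fraction of low bits matching the keyed pattern (~0.5 if absent)."""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=image.shape, dtype=image.dtype)
    return float(np.mean((image & np.uint8(1)) == pattern))

# A score near 1.0 flags the watermark; ~0.5 is chance level.
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(detect_watermark(embed_watermark(img, key=42), key=42))  # ~1.0
print(detect_watermark(img, key=42))                           # ~0.5
```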

Google’s new AI tool has been in the works since 2022, but critics have poked holes in the announcement, pointing to its lack of technical detail. The company countered that the details were withheld deliberately to make the system harder for bad actors to circumvent.

“The more you reveal about the way it works, the easier it’ll be for hackers and nefarious entities to get around it,” said Google DeepMind CEO Demis Hassabis.

Although the tool is still in its early stages, Hassabis believes SynthID can eventually be deployed across other media, including video and text. It will be open to a select group of users before a full-scale rollout, though Hassabis cautioned that despite its promise, it is not “a silver bullet to the deepfake problem.”

Generative AI innovation has fueled a rise in deepfakes on social media, triggering grave concerns among regulators around the globe. Ahead of the 2024 election season, deepfakes have taken center stage, with the U.S. Federal Election Commission (FEC) launching a public consultation on proposed rules to govern the use of AI-generated images and videos.

How to label AI-generated content

Google is not the only firm racing to develop labeling standards for AI-generated content, with OpenAI and Meta (NASDAQ: META) throwing their hats in the ring. All three industry-leading firms have pledged to introduce safeguards for the use of their AI tools, including cryptographic metadata to tag AI-generated content.
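
As a rough illustration of what “cryptographic metadata” can mean, the hypothetical sketch below hashes the generated bytes and signs the hash, so any later edit breaks verification. Real provenance schemes (such as C2PA-style manifests) use public-key signatures so anyone can verify; an HMAC with a made-up key keeps this sketch self-contained.

```python
# Hypothetical sketch of cryptographic content tagging; not any vendor's
# actual scheme. The provider binds a signature to a hash of the bytes,
# so tampering with either the content or the tag is detectable.
import hashlib
import hmac

SIGNING_KEY = b"example-provider-key"  # made-up key for illustration

def tag_content(content: bytes) -> dict:
    """Return provenance metadata: a content hash plus a keyed signature."""
    digest = hashlib.sha256(content).hexdigest()
    sig = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "generator": "example-model", "sig": sig}

def verify_tag(content: bytes, tag: dict) -> bool:
    """Check that the bytes match the hash and the signature is genuine."""
    digest = hashlib.sha256(content).hexdigest()
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == tag["sha256"] and hmac.compare_digest(expected, tag["sig"])

image_bytes = b"...generated image bytes..."
tag = tag_content(image_bytes)
print(verify_tag(image_bytes, tag))          # True: untouched
print(verify_tag(image_bytes + b"x", tag))   # False: edited after tagging
```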

The incoming European Union AI Act includes a provision for firms to “clearly label” AI-generated content, with the United Nations warning that deepfakes could trigger violence in certain regions of the world. Concerned civil society groups have warned that if left unchecked, deepfakes may adversely affect the election process, stocks, Web3, and security.

For artificial intelligence to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Does AI know what it’s doing?
