OpenAI has developed a tool that can detect content generated with ChatGPT with 99.9% accuracy, but the company has no plans to release it to the public.

The California company has long hinted that it was researching technology to detect artificial intelligence (AI)-generated content but has led its clients to believe the technology was years away. However, according to insiders who spoke to the Wall Street Journal (WSJ), the tool has been available for months, but the company worries it could make its products less appealing.

Detecting AI-generated content has become a significant challenge as adoption has soared. Legislators have drafted laws requiring AI developers to include watermarks and other distinctive features in such content, but none has taken hold.

The challenge is especially acute in some fields, such as education, where a recent study found that 60% of middle- and high-school students use AI to help with schoolwork.

According to OpenAI insiders, this challenge was solved over a year ago, but the company doesn’t plan on releasing the tool to the public.

“It’s just a matter of pressing a button,” said one of the sources.

OpenAI says the delay is necessary to protect users, as the tool presents “important risks.”

“We believe the deliberate approach we’ve taken is necessary given the complexities involved and its likely impact on the broader ecosystem beyond OpenAI,” a company spokesperson told WSJ.

The firm also claimed that if the technology were made available to everyone, bad actors could decipher the technique and develop workarounds.

However, sources say that the real motive is user retention. A company survey last year found that 70% of ChatGPT users were not in favor of the new tool, with one in three saying they would quit the chatbot and turn to its rivals.

Since then, senior executives have held the tool back, claiming it wasn’t ready for a public launch. In a meeting two months ago, the top brass stated that the tool, which relies on watermarking ChatGPT’s outputs, was too controversial and that the company must explore other options.

OpenAI’s rivals, led by Google (NASDAQ: GOOGL), have not fared any better. The search engine giant, whose Gemini LLM is one of the industry leaders, has developed a similar tool, dubbed SynthID, but has yet to launch it publicly.
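Neither company has published the details of its watermark, but openly documented text-watermarking research gives a sense of the general idea: bias the model toward a pseudorandomly chosen subset of words while it writes, then check later whether a given text over-uses that subset. The short Python sketch below illustrates only that statistical principle; the vocabulary, bias parameter, and z-test detector are assumptions for illustration, not OpenAI’s or Google’s implementation.

```python
# Toy sketch of statistical text watermarking (the "green list" idea from
# public research such as Kirchenbauer et al., 2023). This is NOT OpenAI's
# or Google's actual method; the vocabulary, bias, and z-test are
# illustrative assumptions only.

import hashlib
import math
import random

VOCAB = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
         "golf", "hotel", "india", "juliet", "kilo", "lima"]


def green_list(prev_word: str, fraction: float = 0.5) -> set:
    """Derive a pseudorandom 'green' subset of the vocabulary from the
    previous word, so a detector can recompute it without the model."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))


def generate_watermarked(length: int = 200, bias: float = 0.9) -> list:
    """Toy 'model': prefers green-listed words with probability `bias`
    (a real system would nudge token logits instead of word choice)."""
    rng = random.Random(42)
    words = ["alpha"]
    for _ in range(length):
        greens = green_list(words[-1])
        pool = list(greens) if rng.random() < bias else [w for w in VOCAB if w not in greens]
        words.append(rng.choice(pool))
    return words


def detect(words: list, fraction: float = 0.5) -> float:
    """Z-score: how far the observed green-word count deviates from what
    unwatermarked (uniformly random) text would produce."""
    hits = sum(1 for prev, cur in zip(words, words[1:]) if cur in green_list(prev, fraction))
    n = len(words) - 1
    return (hits - n * fraction) / math.sqrt(n * fraction * (1 - fraction))


if __name__ == "__main__":
    rng = random.Random(7)
    plain = [rng.choice(VOCAB) for _ in range(200)]
    print("watermarked z-score:", round(detect(generate_watermarked()), 2))  # large positive
    print("plain-text z-score: ", round(detect(plain), 2))                   # near zero
```

Run as-is, the watermarked text produces a z-score far above chance while ordinary text sits near zero, which is roughly how such a detector could flag machine-generated text without needing access to the original prompt.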

In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Transformative AI applications are coming
