In the heated race to build the most powerful artificial intelligence (AI) models, industry leaders Google (NASDAQ: GOOGL) and OpenAI have abandoned their commitment to safety, insiders from the two companies alleged this week.
In an open letter warning about the dangers of unchecked AI development, 13 former and current employees at OpenAI and Google’s DeepMind revealed that the two firms are solely focused on launching new products without regard for their long-term risks.
The 13 included Daniel Kokotajlo, a former OpenAI governance researcher who left the firm in April after “losing confidence that OpenAI will behave responsibly.” Four other signatories have also recently quit the San Francisco firm over safety concerns.
Six of the signatories withheld their identities because they still work at OpenAI; DeepMind’s Neel Nanda was the only currently employed researcher to disclose his identity.
The letter points out that AI poses serious risks, including exacerbating existing inequalities, manipulation, and misinformation. The signatories also believe that, given the rapid pace of development, AI autonomy, a scenario in which AI becomes self-aware and turns against humanity, is a distinct possibility.
Kokotajlo, who organized the group, believes a doomsday scenario is no longer science fiction. After two years at OpenAI, he puts the chance that AI will rise against humanity at 70%.
Reckless AI development
AI firms continue to dismiss the need to prioritize safety. The whistleblowers say these firms hold substantial information about the risks their AI models pose, but current regulations don’t oblige them to share it.
“When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterwards,’” says William Saunders, one of the signatories who left the company in February.
These companies have also insulated themselves against whistleblowers through restrictive contracts that force ex-employees into silence.
OpenAI was recently blasted for the non-disparagement agreements it required of departing employees, under which it threatened to withhold their vested equity if they said anything negative about the company. CEO and co-founder Sam Altman claimed to be unaware of the provision but promised to have it scrapped.
Kokotajlo, for instance, stood to lose $1.7 million in vested equity in the $80 billion startup for speaking up against it.
It’s not just junior employees who are concerned about OpenAI’s reckless AI deployment. Jan Leike, the head of the company’s superalignment team, whose mandate was AI safety, left last month with the same message: “Safety culture and processes have taken a backseat to shiny products.”
The company has since launched a new committee to address safety concerns.
In response to the open letter, OpenAI spokeswoman Lindsey Held said the company is proud of its “track record providing the most capable and safest A.I. systems” and believes in its “scientific approach to addressing risk.”
She added that the company is always willing to engage with its employees to improve its safety standards.
However, a former safety researcher at OpenAI dismissed the company’s statement, saying it is notorious for firing employees who question its safety programs.
Leopold Aschenbrenner revealed in an interview on Tuesday that he was fired after he wrote a letter to the OpenAI board over concerns about its lax safety measures. He was a member of the superalignment team led by Leike.
For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, keeping data safe while guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: Blockchain & AI unlock possibilities