Recently, an open letter signed by 13 former and current employees of OpenAI and Google DeepMind has garnered widespread attention. The letter expresses concern over the potential risks of advanced artificial intelligence (AI) and the current lack of regulation over AI technology companies.
In addition, the letter warns that AI may exacerbate existing inequalities, enable manipulation and the spread of misleading information, and that autonomous AI systems could become uncontrollable, ultimately threatening human survival.
Among those who endorsed the letter are Geoffrey Hinton, known as the "godfather of artificial intelligence," Yoshua Bengio, who received the Turing Award for his pioneering AI research, and Stuart Russell, a scholar in the field of AI safety.
The letter states that AI technology has the potential to bring unprecedented benefits to humanity, but that these same technologies pose serious challenges. Governments around the world, other AI experts, and the AI companies themselves are already aware of these risks. However, AI companies have strong financial incentives to avoid effective oversight, and "we believe that specially designed corporate governance models are insufficient to change this situation."
The letter notes that AI companies hold a vast amount of non-public information, including the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different types of harm. However, they currently have only weak obligations to share this information with governments and none to share it with civil society.
Current and former employees of these companies are among the few people who can hold them accountable to the public, yet non-disclosure agreements prevent them from voicing such concerns.
The letter calls on leading AI companies to commit to a set of principles, including a pledge not to enter into or enforce any agreement that prohibits criticism of the company over risk-related concerns, and not to retaliate against employees who raise such criticism by withholding their vested economic benefits.
The letter also calls for a verifiably anonymous process through which current and former employees can raise risk-related concerns.
Daniel Kokotajlo, a former OpenAI employee, is one of the signatories. He posted on social media, "Some of us who recently resigned from OpenAI have come together to demand a broader commitment to transparency from the lab." Daniel resigned from OpenAI in April of this year, citing, among other reasons, a loss of confidence that the company would behave responsibly in building artificial general intelligence (AGI).
Daniel mentioned that AI systems are not ordinary software; they are artificial neural networks that learn from vast amounts of data. The scientific literature on explainability, alignment, and control is growing rapidly, but these fields are still in their infancy. While the systems being built by labs like OpenAI can bring tremendous benefits, if not handled carefully, they may cause instability in the short term and catastrophic consequences in the long term.

Daniel stated that when he left OpenAI, he was asked to sign a document that included a non-disparagement clause prohibiting him from making any critical remarks about the company. Daniel refused to sign and lost his vested equity.
When Daniel joined OpenAI, he had hoped that as AI capabilities grew stronger, the organization would allocate more funding to safety research internally, but OpenAI never made this shift. "After people realized this, they began to resign. I am neither the first nor the last to do so," said Daniel.
At the same time, Leopold Aschenbrenner, a former member of OpenAI's Superalignment team, disclosed in a public interview the real reason for his dismissal: he had shared an OpenAI safety memo with several board members, which angered OpenAI's management. Leopold stated on social media that achieving AGI by 2027 is extremely likely and that stricter regulation and more transparent mechanisms are needed to ensure the safe development of artificial intelligence.
The open letter is only one of several controversies OpenAI has faced recently.
Shortly after the release of OpenAI's GPT-4o model, Ilya Sutskever, OpenAI's former Chief Scientist, officially announced his departure. Soon after, Jan Leike, co-lead of OpenAI's Superalignment team, also announced his resignation on Twitter. He stated that he had long disagreed with OpenAI's leadership about the company's core priorities, and that the Superalignment team had been "sailing against the wind" for the past few months, facing numerous internal obstacles in its efforts to improve model safety: "(OpenAI's) safety culture and safety processes have been overshadowed by shiny products."