OpenAI researchers warned of a breakthrough that could threaten humanity before Altman’s dismissal

The news agency Reuters says researchers sent a letter to the board warning of the potential dangers of a project called Q*, but other media outlets have downplayed its importance

Sam Altman, head of OpenAI, arriving at a session on artificial intelligence at the Senate in Washington, in September 2023. JULIA NIKHINSON (REUTERS)
Miguel Jiménez

Several researchers at OpenAI sent a letter to the board of directors shortly before the dismissal of Sam Altman as CEO, according to the news agency Reuters and the specialized outlet The Information, citing anonymous sources familiar with the matter. The researchers allegedly warned of a breakthrough in artificial intelligence that, in their view, could threaten humanity. It is unclear what role the letter played in Altman’s firing: sources cited by Reuters described it as decisive, while The Verge reported that the letter never reached the board and had nothing to do with Altman’s dismissal.

On the eve of his sudden departure, Altman spoke at an executives’ conference held in parallel with the recent Asia-Pacific Economic Cooperation (APEC) summit in San Francisco. “Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime,” he said. At the time, no special importance was attached to the comment, and it remains unknown what specific developments he was referring to.

According to The Information, some OpenAI employees believe that Altman’s words were alluding to an innovation made by the company’s researchers earlier this year that would allow them to develop much more powerful artificial intelligence models, according to a person familiar with the matter. The technical breakthrough, led by OpenAI chief scientist Ilya Sutskever, raised concerns among some employees that the company did not have adequate safeguards in place to commercialize such advanced AI models.

According to sources cited by Reuters, the letter was one of the factors in a long list of grievances that led to Altman’s dismissal. The agency has not been able to obtain a copy of the letter or speak to the researchers who wrote it. According to one of the sources, board member Mira Murati, initially chosen as Altman’s interim replacement, mentioned the project, called Q*, to employees on Wednesday and said a letter had been sent to the board before Altman’s firing. The CEO has since been reinstated due to pressure from employees and investors.

An OpenAI spokesperson told Reuters that Murati had informed employees of what the media was going to publish, but did not comment on the accuracy of the information. According to Reuters, the company acknowledged in an internal message to staffers a project called Q*, which some believe could be a breakthrough in the search for AGI (artificial general intelligence), a hypothetical form of AI that would surpass human intelligence. Other sources familiar with the Q* project, however, did not consider it a groundbreaking advance.

Math problems

Researchers consider mathematics to be the frontier of generative AI development. Today, generative AI is good at writing and at translating between languages by statistically predicting the next word, as ChatGPT does; but its answers to the same question can vary greatly. Gaining the ability to solve mathematical problems, where there is only one correct answer, would imply a greater reasoning capacity, closer to human intelligence. This could be applied, for example, to new scientific research, AI researchers believe.

According to Reuters, in their letter to the board, the researchers underlined the prowess of AI and its potential dangers. Computer scientists have long debated the dangers posed by superintelligent machines if, for example, they were to decide that destroying humanity was in their interest. The OpenAI board included two people who were very sensitive to these risks, and both have now resigned as part of the agreement for Altman’s return.

According to the letter that employees sent to the board demanding its resignation, these independent directors had told them that allowing the destruction of the company could be consistent with its mission in favor of humanity. Indeed, the board hired a new interim chief, Emmett Shear, who describes himself as a doomer in favor of slowing the development of artificial intelligence, although he later collaborated in Altman’s return.
