Money imposes its law on OpenAI
Employees and investors had powerful financial incentives to force Sam Altman’s return to the company that developed ChatGPT
The unusual crisis at OpenAI has enough ingredients for multiple movies. In a thriller, Sam Altman would be the hero: the visionary leader unfairly fired who returns to the company to the acclaim of his employees. In a sci-fi scenario, he would be the villain: the executive who handed artificial intelligence over to commercial interests and doomed humanity to extinction. Altman’s dismissal and reinstatement have their roots in the tension between doomsayers and pragmatists over artificial intelligence, but they also reflect the struggle between OpenAI’s conception as a non-profit and its rise as Silicon Valley’s most valuable start-up. In that latter battle, money seems to have imposed its law, while the multiple plot twists expose the firm’s governance problems.
Walter Isaacson, Elon Musk’s biographer, says the decision to found OpenAI arose at a private dinner between the Tesla founder and Sam Altman in Palo Alto, California, in the heart of Silicon Valley. At the time, Google was leading the artificial intelligence race, but Musk and Altman thought it was doing so without moral qualms about safety and the potential risks to humanity. The initial idea was to create a non-profit artificial intelligence laboratory that would design open source software and attempt to counter Google’s growing dominance in the field. “We wanted to have something like a Linux version of AI that was not controlled by any one person or corporation,” Musk told Isaacson.
Musk and Altman hired a Google research engineer, Ilya Sutskever, as chief scientist, with a salary of $1.9 million plus a bonus. That hire caused the breakdown of relations between Musk and Larry Page, one of Google’s co-founders. Musk’s relationship with the other Google co-founder, Sergey Brin, had broken down separately over an alleged affair between Brin’s wife and Musk.
OpenAI was founded as a non-profit organization in late 2015 with the proclaimed goal of “building safe and beneficial artificial general intelligence for humanity.” It was launched with the goal of raising $1 billion in donations; after several years, the amount collected was $130.5 million, which served to finance operations and early exploratory work. Artificial intelligence that matches or surpasses human intelligence is known as artificial general intelligence (AGI).
Musk broke with OpenAI in 2018 after trying to integrate it with Tesla’s artificial intelligence projects, which Altman refused to do. Altman then looked for ways to access more resources. “It became increasingly clear that donations alone could not offset the cost of the computing power and talent needed to drive basic research, jeopardizing our mission,” the company explains. So it came up with a new structure to preserve the main mission while raising the funds needed to pursue it.
The non-profit structure was maintained, with its board as the governing body of the entire group, but a new subsidiary was created with the capacity to issue shares, hire new employees and raise capital. That new company has capped profits, is required to pursue the non-profit entity’s mission, and is monitored by the latter to ensure that it works “to research, develop and deploy superintelligence, in a way that balances commercialization with security and sustainability, rather than focusing on mere profit maximization,” according to OpenAI.
The company has made clear, and continues to stress, that investing in OpenAI is a “high risk” bet. Investors can lose all their money without achieving any return, its website warns, before going even further: “It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI world.”
The company established that the main beneficiary should be humanity, not OpenAI’s investors. Even so, that structure was enough to start raising multimillion-dollar sums. Venture capital investors made contributions in 2018, and shortly thereafter the company reached a strategic agreement with Microsoft. The company co-founded by Bill Gates first injected $1 billion as part of an agreement that made it OpenAI’s technology and computing partner; it then added another $2 billion, and finally agreed to invest an additional $10 billion without demanding representation on the board.
Microsoft has access to intellectual property and commercialization licenses for certain developments, but the board has the power to determine when artificial general intelligence has been achieved, and Microsoft’s licenses would not extend to that technology.
Despite all the precautions, the business line raised suspicions. The siblings Daniela and Dario Amodei, who are linked to the effective altruism movement (which emphasizes the risks of artificial intelligence), left OpenAI over disagreements with the Microsoft deals and the direction the company was taking, and founded another artificial intelligence company, the San Francisco-based Anthropic, along with other former OpenAI employees. One of Anthropic’s main initial investors was Alameda Research, the trading firm of Sam Bankman-Fried, who was convicted of various crimes over the collapse of the FTX cryptocurrency exchange. Anthropic has since turned to Google and Amazon to finance itself.
Musk himself has never fully accepted OpenAI’s change of direction. According to Isaacson, his biographer, Musk challenged Altman earlier this year to justify the change legally, with OpenAI’s founding documents in hand. Altman tried to show him that everything was legitimate, but Musk was not convinced. “OpenAI was created as an open source company (that’s why I named it OpenAI), a non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum profit company effectively controlled by Microsoft,” he said, according to Isaacson.
Musk’s wound, in fact, has not healed. This week, in the wake of the artificial intelligence firm’s crisis, the entrepreneur shared on X a letter allegedly written by disgruntled former OpenAI employees. “Throughout our time at OpenAI, we witnessed a disturbing pattern of deceit and manipulation by Sam Altman and Greg Brockman, driven by their insatiable pursuit of achieving artificial general intelligence (AGI). Their methods, however, have raised serious doubts about their true intentions and the extent to which they genuinely prioritize the benefit of all humanity,” they said in that letter, which has since been taken down.
“Many of us, initially hopeful about OpenAI’s mission, chose to give Sam and Greg the benefit of the doubt. However, as their actions became increasingly concerning, those who dared to voice their concerns were silenced or pushed out. This systematic silencing of dissent created an environment of fear and intimidation, effectively stifling any meaningful discussion about the ethical implications of OpenAI’s work,” they added, asking the board of directors not to give in to Altman’s plans: “We implore you, the Board of Directors, to remain steadfast in your commitment to OpenAI’s original mission and not succumb to the pressures of profit-driven interests. The future of artificial intelligence and the well-being of humanity depend on your unwavering commitment to ethical leadership and transparency.”
OpenAI’s board of directors was dysfunctional. Several of its members had left due to differences with the company, conflicts of interest or personal projects. The remaining members were unable to agree on replacements, and the board had been reduced to six individuals. Three of them were OpenAI employees and founders: Greg Brockman, president; Sam Altman, CEO; and Ilya Sutskever, chief scientist. The other three were independents: Adam D’Angelo, founder of Quora; Tasha McCauley, engineer and entrepreneur; and Helen Toner, from Georgetown University. The last two are linked to the so-called effective altruism movement, which advocates putting a stop to the development of artificial intelligence, seeing it as a Pandora’s box and a possible existential threat to humanity.
The effective altruism movement is running into more and more opposition. One of its staunchest critics, Marc Andreessen, another long-time Silicon Valley investor, recalls that “the fear that technology of our own creation will rise up and destroy us is deeply rooted in our culture.” He believes that with artificial intelligence the myth of Prometheus, Frankenstein or Terminator is being repeated. “My view is that the idea that AI will decide to literally kill humanity is a profound category error. [...] The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave,” he wrote in a lengthy manifesto where he also claimed that the doomsayers are part of a long Western tradition that generates apocalyptic cults.
Steven Pinker, a cognitive scientist at Harvard University, agrees: “I was a fan of Effective Altruism (almost taught a course on it at Harvard) together with other rational efforts (evidence-based medicine, data-driven policing, randomista econ). But it became cultish. Happy to donate to save the most lives in Africa, but not to pay techies to fret about AI turning us into paperclips,” he tweeted on Friday.
Former OpenAI board member Toner published an academic article in October that criticized some of OpenAI’s decisions and praised those of its rival Anthropic, which Altman took as an affront. The article said that Anthropic had reinforced the credibility of its commitments to AI safety by delaying the launch of its model and absorbing the potential loss of future revenue.
At the same time, other moves by the CEO did not please the board. He had just hosted, alongside Microsoft CEO Satya Nadella, a developer conference in the style of companies like Apple. He was also in the process of attracting investors at a company valuation of close to $86 billion, while seeking financing for new projects. On the eve of his dismissal, he had mentioned a recent advance that “pushed the veil of ignorance back and the frontier of discovery forward.”
Distrust had taken hold of the governing body on which the entire group depended. Then, unexpectedly, Sutskever aligned himself with the three independents, and together they decided to fire Altman and remove Brockman from the board by video conference at noon on Friday. The company accused Altman of not having been candid with the board, and gave no further explanation. The 38-year-old executive learned of his dismissal in a Las Vegas hotel, where he had gone to watch that weekend’s Formula 1 Grand Prix.
Most companies of OpenAI’s size and importance have boards of between eight and 15 directors, most of them independent and all with more experience on boards of this scale than OpenAI’s independent directors had, noted Marissa Mayer, a long-time Silicon Valley executive, when the crisis broke out. “I don’t think they have solid legal advice or good governance structures,” she said.
Altman’s dismissal caused an earthquake. Brockman decided not only to leave the board but to resign from the company altogether. Investors and employees began to ramp up the pressure, all the more so when no specific reasons for the dismissal were provided. In a first meeting with the board, they reproached its members for putting the company’s future in danger. The surprise was monumental when the board responded that allowing the destruction of the company would be consistent with the mission its members felt had been entrusted to them: to protect humanity.
That was too much for investors and employees. A few independent board members seemed willing to bring down the most promising firm in artificial intelligence over poorly substantiated threats to humanity. For the employees, their jobs were in danger, and they took a stand: on social media, they declared that “OpenAI is nothing without its people.” Altman went to OpenAI headquarters on Sunday to negotiate his reinstatement, carrying a guest badge. He tweeted a selfie: “First and last time I ever wear one of these.” The independents refused to give in and brought in an interim CEO with equally apocalyptic leanings, Emmett Shear.
For investors, it was also a nightmare. They were aware that the group’s structure was peculiar, but they could not have imagined that the board itself would act like a kamikaze. Microsoft, the investor with the most at stake, moved quickly and announced it was hiring Altman, guaranteeing that it would not lose momentum in the artificial intelligence race. For the rest, the risk was losing the bulk of their investment.
With Microsoft’s offer (and those of other companies) on the table, employees threatened to leave if Altman was not reinstated. Some 95% of employees signed a letter calling for the board’s resignation, including Mira Murati, the chief technology officer initially appointed as Altman’s interim replacement, and Sutskever, the chief scientist, who expressed regret for having participated in the coup in a message published on X.
The financial incentives to stay with OpenAI instead of going to Microsoft were very strong. Employees hold stakes in the company, whose valuation before the crisis had skyrocketed to nearly $90 billion on the back of the dazzling success of ChatGPT, which marks its first anniversary on November 30. In fact, a sale of employee shares to investors was underway that would have made several OpenAI workers millionaires.
Despite the hiring announcement, Microsoft’s chief was still willing for Altman to return to lead OpenAI. Negotiations continued for two days. Shear himself, the second interim replacement, supported Altman’s return as the way to “maximize security while continuing to do the right thing for all parties involved.” In the end, the solution included the departure from the board of the two other independent directors and Sutskever. A new board was formed with Bret Taylor, former chairman of Twitter’s board and former co-CEO of Salesforce, as chairman, plus two more members: Adam D’Angelo, the one remaining independent, and former U.S. Treasury Secretary Larry Summers. Altman was once again the company’s chief executive, but not a member of the board. For the staff, it was cause for celebration. “I dis-resign,” Brockman said.
“we are so back,” Greg Brockman (@gdb) posted on X on November 22, 2023.
OpenAI employees had Thanksgiving week off. When they return this week, things will no longer be as they were 10 days earlier; in reality, no one yet knows what they will be like. An independent investigation has been announced into Altman’s decisions and actions and the circumstances that led to his dismissal. At the same time, the group’s governance system is in question and could be reformed in the coming months.