
Why are the people who pushed for artificial intelligence now signing so many doomsday manifestos?

In recent weeks, the creators of this technology and the sector’s big investors, along with thousands of academics, have been warning about the extraordinary dangers of AI

Sam Altman, the co-founder and CEO of OpenAI, during a lecture in Paris on May 26, 2023, as part of his European tour. JOEL SAGET (AFP)
Jordi Pérez Colomé

As if the world didn’t already have a wide variety of apocalyptic visions, we now have a new one: the rise of artificial intelligence.

In just two months, thousands of experts have called for AI to be paused, monitored and regulated. First, back in March, more than 30,000 people – led by Elon Musk, co-founder of Tesla, and Steve Wozniak, co-founder of Apple – called for research into AI to be halted for six months. Then, Geoffrey Hinton – one of the fathers of this technology – left Google to be able to freely warn society about the risks. Shortly afterwards, Sam Altman – the CEO of OpenAI, the company that created ChatGPT – testified before Congress to say that "everything can go very wrong." He's now touring the world to warn about the epic dangers of the technology that his company makes.

As if this wasn't enough, another manifesto – this one only 22 words long – was published online on Tuesday, May 30: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Signed by 350 people, its first two signatories are Hinton and Yoshua Bengio… two winners of the 2018 Turing Award (the "Nobel Prize" of computing), precisely for being "fathers" of the AI revolution. Three other illustrious signatories are the leaders of today's top AI companies: OpenAI's Altman, Demis Hassabis of DeepMind (owned by Google) and Anthropic's Dario Amodei.

All of this sudden concern raises several questions.

1. What needs to be regulated?

Why all this sudden bombast against something that is still so unknown? There are several answers – some more sincere, others more cynical. First, it’s important to note that one of the three fathers of AI didn’t sign: Yann LeCun. Along with Hinton and Bengio, the Frenchman also won the Turing Award in 2018. He has spent several days on Twitter, explaining why he’s not approaching AI with a sense of existential fear. His hypothesis is that it’s difficult to regulate something that we still don’t understand:

“Superhuman AI is nowhere near the top of the list of existential risks. In large part, because it doesn’t exist yet,” he wrote on his Twitter account. “Until we have a basic design for even dog-level AI (let alone human-level), discussing how to make it safe is premature.”

LeCun is by no means the only person who holds this view. Altman himself believes that, for there to be a real leap in AI capability, advances must occur that we currently have no idea how to achieve: "A system that cannot contribute to the sum of scientific knowledge [or] discover new fundamental science cannot be a superintelligence. To get it right, we're going to have to expand the GPT model in some pretty big ways that we're still short on ideas for. I don't know what those ideas are. We're trying to find them," he said in a recent interview.

2. Why so many manifestos?

If there was a manifesto in March, why do we need another one? Well, the difference with this week's short manifesto is that it's signed by industry leaders – Altman (OpenAI), Hassabis (Google), Amodei (Anthropic) and Hinton (formerly of Google) – who didn't sign the first one, which called for a moratorium on the development of AI programs. Evidently, these companies don't want to pause their research. The rest of the signatories are a small subset of those who already supported the original March manifesto, which has gathered more than 31,000 signatures, mainly from academics. On May 19, the promoters – from the Future of Life Institute – sent an email to signatories, asking them to "join Hinton, Bengio and Amodei," because it's "essential to normalize and legitimize debate on the most serious risks of AI."

3. Is all of this a smokescreen?

A few days before the short manifesto was released, Altman and two other OpenAI leaders published an article titled Governance of Superintelligence. There, they asked everyone to stop worrying about current AI models and to instead focus on legislating about future danger:

“Today’s systems will create tremendous value in the world and, while they do have risks, the level of those risks feel commensurate with other internet technologies… by contrast, the systems we are concerned about will have power beyond any technology yet created.”

In London, during his European tour, Altman said that his company would consider withdrawing ChatGPT from the continent if the European Union goes ahead with what he considers "over-regulation" – rules that will be debated in a plenary session of the European Parliament. One legislative option would force companies behind models such as ChatGPT to disclose the copyrighted data in the corpora used to train them: a contentious requirement. Days later, however, the OpenAI CEO tweeted that his company was no longer considering leaving Europe. Now, according to reports, it even seems to be looking for a European headquarters.

Thus, one possible explanation behind the manifestos is that big tech companies – such as OpenAI – prefer that politicians spend their time discussing future doomsday scenarios, rather than focusing on laws that may complicate the expansion of big AI firms right here in the present.

By making these threats, Altman is ensuring that he will be consulted when the time to propose legislation comes around. If lawmakers see these AI models as incomprehensible, they will need help from “experts” within companies to regulate them, says Jake Browning, a researcher on the philosophy of AI at New York University, who hasn’t signed the manifesto. Funnily enough, this past week, European Commissioner for Competition Margrethe Vestager made public her meetings with Altman and Amodei… just after announcing her imminent proposal for a “voluntary code of conduct.” A day later, she met with the president of the European Commission, Ursula von der Leyen.

Brussels is more focused on the here and now: “The EU is a threat [to these companies] because it ignores the AI hype and just looks at the consequences, treating these new models as services,” Browning explains. He also poses some more pressing questions: “Is the information provided by GPT reliable? [Does OpenAI] comply with existing privacy regulations, such as the right to be forgotten? Do [these companies] respect copyrights? Will Google’s AI search lead to monopolies? Do companies market these products honestly?”

“Across all metrics, these language models go awry: they’re prone to hallucinating, sharing private data and not respecting copyright laws. [These AI systems] are designed to bolster the power of big tech and increase their ad revenue… they’re marketed deceptively, without proper warning about their limitations.”

4. Does the old story about the end of the world really apply in this context?

The end of the world is a debate that generates great interest in Silicon Valley. Altman himself has said that we need to think more about the extinction of humans. But critics of these exaggerated manifestos believe that benefiting humanity is not among the priorities of their famous signatories. Timnit Gebru – an expert in computer ethics at Stanford University – warned about the biases and dangers of these AI models long ago… and her then-employer, Google, fired her.

Today, Gebru continues to see nothing but hypocrisy in this debate about the unpredictable superintelligence of the future: "This is a movement that has been decades in the making, with the same billionaire funders of OpenAI, DeepMind and now Anthropic. The ideological father of all of this, Nick Bostrom (author of the best-selling book Superintelligence), speaks of so-called 'dysgenic pressures' – meaning that the people who are considered to be 'stupid' reproduce too much, which poses an existential risk to humanity – and has said that 'blacks are more stupid than whites' and slurred us. Do [the AI CEOs and investors] really want us to believe that they are the ones who care about humanity? Where were they when we raised awareness about the real damage and faced backlash? It's almost like a coordinated cyberattack to distract us from holding the real organizations doing the harm accountable."

Émile Torres – from Leibniz University Hannover – has spent years studying these theories of future annihilation: "There's a lot of focus on AI… but not so much on AI companies. This attention gives companies a kind of free rein: it shifts the gaze away from what they're doing and onto this kind of mysterious, extraterrestrial mind that will supposedly emerge via some kind of law of technological progress."

"This utopian vision was invented by a bunch of super-privileged rich white guys and now they're trying to impose it on the rest of the world," Torres notes. "For them – who can be called transhumanists – survival would require a brain more privileged than the human one. That is, [they want] a digital one."

5. But what if the hype is all true?

Along with all these possible reasons, we mustn’t ignore the most explicit reason to support these manifestos: truly believing that artificial intelligence poses an existential danger. We must consider the possibility that the signatories – although a good handful have obvious economic interests – sincerely fear an end of the world caused by mismanaged AI.

There are thousands of researchers from dozens of disciplines – many with no ties to industry – who believe that research should be paused and AI's existential risks closely examined. EL PAÍS asked 10 academics who signed, from fields as disparate as physics, computing, law, economics and psychology, for their reasoning. Their responses can be categorized into four groups:

a) The speed of AI’s development. For anyone outside of the AI industry, the speed of innovation is inconceivable. It’s perhaps the basic fear. “Not so long ago, the danger of machines posing an extinction risk seemed fanciful,” says Andrew Briggs, a professor emeritus in Nanomaterials at Oxford University. “The pace of progress in AI is now accelerating so fast – as shown by engines like ChatGPT – that it has become urgent to seek security measures before it’s too late.”

Speed can bring unforeseen problems, says Juan Pavón, a professor of Software Engineering and AI at the Complutense University of Madrid: "The development of large AI models is progressing faster than our understanding of them. Since we're dealing with complex systems – with a multitude of interactions between the elements that compose them – unwanted and unforeseen effects can occur."

b) Ignorance about how AI works. Lack of knowledge is another factor that worries the signatories. "AI programs such as GPT-4 and its probable successors are extremely complex systems… we don't really understand them," says Alessandro Saffiotti, professor of Computer Science at Örebro University, Sweden. "Even so, we could end up delegating critical decisions for the planet and our society to these technologies: power plants, financial transactions, or even military systems. If companies don't pause the rollout of these systems until we understand them better, we need to prepare for potentially disastrous scenarios."

"We don't know what we don't know… that is, serious problems can arise in unforeseen ways," says Henning Grosse Ruse-Khan, a Cambridge University law professor. "The potential of AI is so significant that we have no realistic way of predicting, or even guessing, its consequences."

c) Doubt forces you to be prudent. The 22-word sentence from the latest manifesto is easy to understand from the perspective of risk analysis. The text seems to say: if you had a gun whose 100-round magazine held three live bullets, would you pull the trigger? Even if we don't know whether that gun exists, the natural response is to try to get those three bullets out of the magazine. "It's important to talk about [the risk] because of the great uncertainty that exists," says Edoardo Gallo, professor of Economics at the University of Cambridge. "We have very little idea of the probabilities. I'm pretty sure the risk of human extinction from AI in the next 100 years is very low… but I'm also pretty sure it's not zero."

In the end, the debate boils down to a bet. Browning – who did not sign the manifesto – is comfortable dismissing the risk altogether. "If you believe that language is the core of intelligence, you may be inclined to think that a talking machine is one step away from superhuman intelligence," he explains. But he doesn't think that this is the case. "Philosophically, I don't think superintelligence makes sense as a concept. Technically, I don't think anything that happens under the 'AI' label – no matter how broad – poses an existential threat."

Bojan Arbutina – a professor of Astrophysics at the University of Belgrade – is one of the voices who doesn’t mind being overly cautious: “The threat [of AI] may be exaggerated, but if it’s not, we won’t have time to reconsider it. Seriously, we cannot understand all the risks. Superintelligence could, for example, perceive us humans as we see insects or even bacteria.”

d) There are many other problems in the world, but that's no reason to dismiss the potential threat of AI. Helena Matute – a professor of Psychology at the University of Deusto in Bilbao, Spain – emphasizes that "existential risk must not be mixed up with the discussion about consciousness and intelligence… it has nothing to do with this." For Matute, the number of challenges facing humanity should not be an excuse for ignoring AI: "Limiting the discussion to only the risks that some people already consider to be obvious is avoiding the problem. Global agreements must soon be reached to minimize the risks of AI – all the risks. I don't understand why some people believe they have a kind of license to say: 'This can be regulated, but this cannot.'"

These experts also point to immediate, present-day regulation – something that those with economic interests in the development of AI may frown upon. "My goal, in highlighting existential threats from AI, is the exact opposite of trying to rule out short-term harm," says Michael Osborne, a professor of AI at the University of Oxford. "Instead, I want to emphasize that we aren't doing enough to govern AI, a technology that, today, is strictly controlled by a small number of opaque and powerful technology companies."
