
Yann LeCun, chief AI scientist at Meta: ‘Human-level artificial intelligence is going to take a long time’

Nick Clegg, the company’s president of Global Affairs, argues in a conversation with EL PAÍS and four other media outlets in Davos that regulation should not be rushed

Yann LeCun (left, pictured at this year's Davos), chief AI scientist at Meta, and Nick Clegg (in September in New York), president of Global Affairs at Meta. AP / Getty
Andrea Rizzi (special correspondent)

The extraordinary potential and enormous risks of the generative artificial intelligence (AI) revolution have been a major focus of discussions at the World Economic Forum’s annual meeting in Davos. Nick Clegg and Yann LeCun, president of Global Affairs and chief AI scientist at Meta, respectively, shared their views on the matter in a meeting with journalists from five international media outlets, including EL PAÍS.

Meta — Facebook’s parent company — is one of the companies at the center of the revolution, both because of its notable capabilities in this specific field and because of the enormous power granted by its control of massive social media platforms. In recent years, Meta has faced serious criticism and accusations over how it manages its social networks, with critics warning of their detrimental impact on democracy.

In the conversation, LeCun emphasizes that “contrary to what you might hear from some people, we do not have a design for an intelligent system that would reach human intelligence.” The expert believes that “asking for regulations because of fear of superhuman intelligence is like asking for regulation of transatlantic flights at near the speed of sound in 1925.” He explains: “Human-level AI is not just around the corner. This is going to take a long time. And it’s going to require new scientific breakthroughs that we don’t know of yet.” That is why LeCun thinks it is premature to legislate AI on the assumption that the technology could slip out of human control. The European Union passed the world’s first AI legislation in December, and other countries, such as the United States and the United Kingdom, are also working on specific laws to control this technology.

Clegg, for his part, is calling on lawmakers around the world to regulate products, but not research and development. “The only reason why you might think it would be useful to regulate research and development is if you believe in this fantasy of AI systems taking over the world, or being intrinsically dangerous,” says Clegg, who is a former British deputy prime minister and the ex-leader of the Liberal Democrats in the U.K.

Both Clegg and LeCun are pleased that, after the commotion following the arrival of ChatGPT, public debate has moved away from apocalyptic hypotheses and focused more on concrete issues and current challenges such as disinformation, copyright, and access to AI technology.

The state of technology

“The systems are intelligent in the relatively narrow domain where they’ve been trained. They are fluent with language and that fools us into thinking that they are intelligent, but they are not that intelligent,” explains LeCun. “It’s not as if we’re going to be able to scale them up and train them with more data, with bigger computers, and reach human intelligence. This is not going to happen. What’s going to happen is that we’re going to have to discover new technology, new architectures of those systems,” the scientist clarifies.

LeCun explains that there is a need to develop new forms of AI systems “that would allow those systems to, first of all, understand the physical world, which they can’t do at the moment. Remember, which they can’t do at the moment. Reason and plan, which they can’t do at the moment either.”

“So once we figure out how to build machines so they can understand the world — remember, plan and reason — then we’ll have a path towards human-level intelligence,” continues LeCun, who was born in France. In more than one debate and speech at Davos, experts discussed the paradox of Europe having very significant human capital in this sector, but no leading companies on a global scale.

“This is not around the corner,” insists LeCun. The scientist believes that this path “will take years, if not decades. It’s going to require new scientific breakthroughs that we don’t know of yet. You might wonder why the people who are not scientists believe this, since they are not the ones who are in the trenches trying to make it work.”

The expert explains that there are currently systems that can pass the bar exam, but cannot clear the table or take out the trash. “It’s not because we can’t build a robot. It’s because we can’t make them smart enough. So obviously, we’re missing something big before we can reach the type of intelligence we observe, not just in humans, but also in animals. I’d be happy if, by the end of my career [he is 63 years old], we have systems that are as smart as a cat or something similar.”

The state of regulation

The debate on how to regulate this technology in its current state and with the possibilities of development close at hand has been one of the key issues at the annual Davos forum. The legislation being introduced in the EU, which in many ways is pioneering, was one of the main focuses of discussion.

With respect to regulation, Clegg — who was a Member of the European Parliament — avoids making a definitive statement on the matter, but does have some sharp rebukes for the EU. “It is still a work in progress. It was a very classic EU thing. There is fanfare, it is said that something has been agreed upon, but in reality, they haven’t actually finished it yet. We will look at it closely when it is complete and published, I think the devil will really be in the details.”

“For example, when it comes to data transparency in these models, everyone agrees,” Clegg continues. “But what level of transparency? Is it the data sets? Is it the individual bits of data? Or, for example, in copyright. We have existing copyright legislation in the EU. Is it just going to limit that? Or is a new specific layer going to be added? When these models are trained, a huge amount of data is devoured. You wouldn’t be able to tag every single bit of data for IP reasons. It just wouldn’t be practical. So I think the devil really will be in the detail. We’ll look at it.”

He continues his criticism: “Personally, as a passionate European, I sometimes get a little bit frustrated that in Brussels they seem to pride themselves on being the first to legislate rather than on whether the legislation is any good or not. Remember the EU AI Act was initially proposed by the European Commission three and a half years ago before the whole generative AI thing [like ChatGPT] erupted. And then they tried to retrofit, through a series of amendments, provisions to try and capture the latest evolution in technology. That’s quite a clumsy way of making legislation, retrofitting for something as important as generative AI.”

The debate between establishing protections and ensuring development is the source of strong tensions, both within politics and between politics and the private sector. Along the fine line that legislators must draw, incalculable value is at stake: productivity, jobs, and capabilities that will define the balance of geopolitical power.

Clegg touches that nerve. “I know that France and Germany, Italy in particular, have been, I think sensibly, asking MEPs and the European Commission to be really careful not to put something into legislation which will really hinder European competitiveness. Because of the 10 biggest companies in the world, not a single one is European.” On the other hand, a group of experts called on the EU, in an open letter published by EL PAÍS, for even stronger legislation “to protect the rights of European citizens and European innovation.”

Optimism and caution

Amid this intense power struggle, a technology is advancing. Although it is still far from fully reaching human- or superhuman-level intelligence, it has already entered our lives with extraordinary force.

“The effect of AI will be to amplify human collective intelligence. There’s a future where all our interactions with the digital world will be mediated by an AI system,” says LeCun. “What that means is that at some point those AI systems will become smarter than us in certain areas. They already are. And smarter than us perhaps in every area at some point. What that means is that we’re going to have assistants with us at all times that are smarter than us. Should we feel threatened by this? Or should we feel empowered? And I think we should be empowered.”

Throughout the interview, LeCun conveys his cautious optimism. “If you think about the effect this could have on society long-term, it could have a similar effect as the invention of the printing press. So basically creating a new renaissance where you may be smarter; it’s intrinsically good. Now, of course, there are risks. And you have to deploy the technology in responsible ways that maximize benefits and mitigate the risks or minimize them.”
