Geoffrey Hinton: ‘We need to find a way to control artificial intelligence before it’s too late’
The British-Canadian scientist spoke with EL PAÍS about why he resigned from Google and the fears he has about AI
On May 1, Geoffrey Hinton announced that he had resigned from his position as a vice president and engineering fellow at Google. According to an interview he gave to The New York Times, he now wants to dedicate himself to warning the world about the dark side of artificial intelligence (AI).
Born in Wimbledon 75 years ago, this British-Canadian researcher is known as the “godfather of AI.” His work was decisive in developing some of the techniques that have made ChatGPT, machine translation and the vision systems of autonomous vehicles possible. Yet Hinton believes that the technology he helped create could lead to the end of civilization in a matter of years.
This scientist has always been obsessed with how the brain works and with trying to replicate it in computers. In 1972, he began working on the concept of the “neural network”: a mathematical system that learns skills by analyzing data. His approach didn’t make many waves at the time, but today neural networks spearhead AI research.
Hinton’s big moment came in 2012, when he showed the true potential of his line of research with a neural network that could analyze thousands of photographs and teach itself to distinguish objects such as flowers, cars and dogs. He also trained a system to predict the next letters of an unfinished sentence, the foundation of today’s large language models, including the one behind ChatGPT.
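To make that idea concrete, here is a minimal, hypothetical sketch of next-character prediction, the task described above: a tiny Python script counts which character tends to follow each character in a toy text and uses those counts to extend an unfinished sentence. It is an illustration only; today’s language models replace the counting table with a huge neural network trained on vastly more text, but the underlying task is the same.

```python
from collections import Counter, defaultdict

# Toy illustration of next-character prediction: count which character
# tends to follow each character in a small training text, then use
# those counts to guess how an unfinished sentence continues.
training_text = "the cat sat on the mat. the dog sat on the rug."

next_char_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_char_counts[current][following] += 1

def predict_next(prefix):
    """Return the character most often seen after the last character of `prefix`."""
    candidates = next_char_counts.get(prefix[-1])
    if not candidates:
        return ""  # nothing learned for this character
    return candidates.most_common(1)[0][0]

prefix = "the ca"
print(prefix + predict_next(prefix))  # prints "the cat"
```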
His work earned him the Turing Award, considered the Nobel Prize of computing, in 2018, which he received alongside Yann LeCun, a former student of his, and Yoshua Bengio. Hinton spoke with EL PAÍS via videoconference from his home in London, where he has lived since leaving Google.
Question. What are the dangers of AI that humanity is facing?
Answer. There are many different dangers… I think a particularly bad one is the creation of so much fake news, which makes it impossible to know what’s true. That’s causing greater divisions in society. Another danger is that AI will reduce the number of people needed to do certain jobs, increasing the disparity in wealth between the rich and the poor. This always makes society more violent.
I recently realized that the kind of digital intelligence we’re developing might be a better form of intelligence than what biological brains have. I always used to think that deep learning was trying to mimic the brain, but that it wasn’t as good as the brain and we could make it better by making it more like the brain. [But now] I think [AI systems] may be doing some things more efficiently than the brain!
Q. Why do you think that’s the case?
A. With a digital system, you can have many, many copies of the exact same model of the world. These copies can work on different hardware. Thus, different copies can analyze different data. And all these copies can instantly know what the others have learned. They do this by sharing… [but] we cannot do that with the brain. Our minds have learned to [function] individually. If I gave you a detailed map of the neural connections in my brain, it wouldn’t do you any good. But in digital systems, the model is identical. They all use the same set of connections. Thus, when one learns anything, it can communicate it to others. And that’s why ChatGPT can know thousands of times more than anyone else: because it can see thousands of times more data than anyone else. That’s what scares me.
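To illustrate the mechanism described here, the following is a minimal sketch, under simplified assumptions, of how two copies of the same digital model can pool what they learn: each copy computes an update from its own data, and the copies average their updates so that both instantly benefit from both experiences. This is a toy version of data-parallel training, not any particular company’s system; all the numbers and names are invented for illustration.

```python
import numpy as np

# Two identical copies of a tiny linear model y ≈ w * x learn the same
# underlying rule (here, w_true = 3.0) from *different* data, then merge
# what they learned by averaging their gradients. Because the copies stay
# synchronized, a single variable `w` represents the weights they share.
rng = np.random.default_rng(0)
w_true = 3.0

def make_data(n):
    x = rng.uniform(-1, 1, n)
    return x, w_true * x + rng.normal(0, 0.01, n)

def gradient(w, x, y):
    # Gradient of the mean squared error (w*x - y)^2 with respect to w.
    return 2 * np.mean((w * x - y) * x)

w = 0.0                          # both copies start from the same weights
shard_a, shard_b = make_data(100), make_data(100)

for _ in range(200):
    g_a = gradient(w, *shard_a)   # copy A learns from its own data
    g_b = gradient(w, *shard_b)   # copy B learns from different data
    w -= 0.1 * (g_a + g_b) / 2    # shared update: each copy gets both lessons

print(round(w, 3))  # close to 3.0, learned jointly from both data shards
```

Brains have no equivalent of this step: there is no way to copy one person’s neural connections into another person’s head, which is the asymmetry Hinton is pointing to.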
Q. You’ve spent decades working in this field. How come you’ve reached this conclusion now?
A. [For a long time], I’ve been trying to figure out how a human brain could implement the same learning procedures that are used in digital intelligences, such as the one behind GPT-4. But from what we know so far about how the human brain works, our learning process is probably less efficient than that of computers.
Q. But can AI really be intelligent if it doesn’t understand the real meaning of words and if it doesn’t have a sense of intuition?
A. Deep learning – if you compare it to symbolic AI (the dominant approach in the discipline until the emergence of neural networks, which tried to get machines to reason by manipulating symbols such as words and numbers) – is a model of intuition. If you take symbolic logic as your reference – if you think that’s how reasoning works – you can’t answer the question I’m about to ask. But if you have a computer model of intuition, like deep learning, the answer becomes obvious.
Here’s an example: you know there are male and female cats, and there are male and female dogs. But suppose I tell you that you have to choose between two ridiculous possibilities: either all cats are male and all dogs are female, or all cats are female and all dogs are male. In our culture, most of us feel it makes more sense for cats to be female, because they are smaller, smarter and surrounded by a certain set of stereotypes, and for dogs to be male, because they are bigger, dumber, louder and so on. I repeat: it makes no sense, but if you’re forced to choose, I think most people would say the same thing. Why? Because in our minds we represent the cat and the dog – the male and the female – with patterns of neural activity based on what we’ve learned. And we associate the representations that most resemble each other. That’s intuitive reasoning, not logic. This is how deep learning works.
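A minimal, hypothetical sketch of this similarity-based “intuition”: each concept is represented as a vector of features, and concepts are paired according to how close their vectors are. The feature values below are invented purely for illustration, standing in for the culturally learned stereotypes Hinton mentions; a real network would learn its representations from data.

```python
import numpy as np

# Invented feature vectors, e.g. [size, loudness, "stereotyped gentleness"],
# standing in for learned patterns of neural activity. The values encode
# cultural stereotypes for illustration only, not facts about cats or dogs.
concepts = {
    "cat":    np.array([0.2, 0.3, 0.8]),
    "dog":    np.array([0.7, 0.8, 0.3]),
    "female": np.array([0.3, 0.3, 0.7]),
    "male":   np.array([0.8, 0.7, 0.2]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the two representations point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "Intuitive" pairing: each animal gets the label whose representation it resembles most.
for animal in ("cat", "dog"):
    best = max(("female", "male"), key=lambda g: cosine(concepts[animal], concepts[g]))
    print(animal, "->", best)   # cat -> female, dog -> male
```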
Q. In the past, you’ve said that you thought AI would surpass human intelligence in 30 to 50 years. How long do you think it will take now?
A. Between five and 20 years.
Q. That’s around the corner.
A. I’m not very confident about my prediction, because I think I made a mistake in my previous prognosis. But it’s clear that everything is moving faster.
Q. Do you think that AI will eventually have its own purpose or objectives?
A. That’s a key question, perhaps the biggest danger surrounding this technology. Our brains are the result of evolution and come with a series of built-in goals: not hurting the body, hence the notion of harm; eating enough, hence the notion of hunger; making as many copies of ourselves as possible, hence sexual desire.
Synthetic intelligence, on the other hand, hasn’t evolved: we’ve built it. Therefore, it doesn’t necessarily come with innate goals. So, the big question is, can we make sure that AI has goals that benefit us? This is the so-called alignment problem. And we have several reasons to be very concerned. The first is that there will always be those who want to create robot soldiers. Don’t you think Putin would develop them if he could? You can do that more efficiently if you give the machine the ability to generate its own set of targets. In that case, if the machine is intelligent, it will soon realize that it achieves its goals better if it becomes more powerful.
Q. What should we do now?
A. You have to draw people’s attention to the existential problem that AI poses. I wish I had a solution, as in the case of the climate emergency, where the answer is clear: we must stop burning carbon, even though there are many interests that prevent it. I don’t know of an equivalent for AI. The best thing I can think of right now is that we should put as much effort into making sure this technology is safe as we put into developing it. And that’s not happening right now. How can that be accomplished in a capitalist system? I don’t know.
Q. Do you think part of the problem lies in the fact that the development of AI has taken place within private enterprises?
A. This has been the case for the past few years. Google developed chatbots internally, like LaMDA, which were very good… [but the company] deliberately decided not to release them to the public, because [management] was concerned about the consequences. And then, while Google was leading [the development of] this technology, Microsoft decided to put an intelligent chatbot into its Bing search engine. [Subsequently], Google had to respond, because these companies operate in a competitive system.
Google has behaved responsibly – I don’t want people to think that I left in order to criticize the company. I left Google so that I could warn about the dangers without having to worry about the impact it might have on the business.
Q. Have you spoken about your concerns with your colleagues? Do they share your worry?
A. We’ve entered completely unknown territory. We’re capable of building machines that are stronger than ourselves, but we’re still in control. But what if we develop machines that are smarter than us? We have no experience dealing with these things.
There are people I respect – like my colleague Yann LeCun – who think that what I say doesn’t make sense. I suspect we really have to think hard about this. And it’s not enough to say that we shouldn’t worry. Many of the smartest people I know are seriously concerned. It’s what has convinced me to step up and use my reputation to make people realize that this is a very serious problem.
Q. You didn’t sign the letter in which more than a thousand AI experts called for a six-month moratorium on research in the field. Why not?
A. I think that approach is completely naive. There’s no way that [such a freeze] will happen. Even if the big companies stopped competing, entire countries wouldn’t. If the US decided to stop developing AI, do you really think China would? The idea of stopping the research draws people’s attention to the problem, but [the proposal] isn’t going to happen. With nuclear weapons, since people realized that we would all lose if there was a nuclear war, it was possible to get treaties. With AI it will be much more complicated, because it’s very difficult to check whether people are working on it.
Q. What do you propose as an alternative?
A. The best I can recommend is that many very smart people try to figure out how to contain the dangers of these things. AI is a fantastic technology – it’s driving great advances in medicine, in the development of new materials, in forecasting earthquakes or floods… [but we] need to put a lot of work into understanding how to contain it and how to avoid its negative consequences. There’s no use waiting for AI to become smarter than we are; we must control it as it develops. For instance, I think all governments should insist that all fake images be flagged.
Q. Are you optimistic about the future that awaits us?
A. I tend to be quite an optimistic person. There’s a chance that we have no way to avoid a bad ending… but it’s also clear that we have the opportunity to prepare for this challenge. We need a lot of creative and intelligent people. If there’s any way to keep AI in check, we need to figure it out before it gets too smart.
Q. Do you trust governments to find a way to regulate this technology?
A. In the United States, the political system is incapable of making a decision as simple as not giving assault rifles to teenagers. That doesn’t [make me very confident] about how they’re going to handle a much more complicated problem such as this one.
Q. Last summer, Google engineer Blake Lemoine became famous around the world by saying that the chatbot he worked on – LaMDA – had gained consciousness. Was that a premonition?
A. I think that what happened contains two different ideas. Firstly: will machines get [so smart] that they’ll take over? And secondly: are they conscious or sentient?
The most important debate is the first one. The second involves personal beliefs, and that doesn’t interest me. In any case, I’m surprised that there are so many people who are quite sure that machines aren’t conscious, yet at the same time can’t define what it means for someone or something to be conscious. That seems like a stupid position to me.