Michael I. Jordan, artificial intelligence pioneer: ‘There’s a little too much hubris in the world of AI’

The mathematician believes that the current explosion of artificial intelligence is powered by the hope for a magic solution to every problem, a solution that may never come

Michael I. Jordan, mathematician, doctor of cognitive sciences, UC Berkeley professor emeritus and researcher at Inria Paris. (c) Gus Figuerola
Jordi Pérez Colomé

Michael I. Jordan, 68, is a mathematician and doctor of cognitive sciences. His work influenced the creation of AI applications like ChatGPT and recommendation systems. Today, he is a UC Berkeley professor emeritus and researcher at Inria Paris, but he’s never been interested in joining the entrepreneurial fray of Silicon Valley.

“Everybody should find what gives them pleasure, not try to overextend themselves. Fundamentally, I’m someone who wants to understand things. I don’t want to just build things,” he says.

His knowledge and disinterested point of view allow him to regard the hype surrounding generative AI with a hearty dose of skepticism. He recently won a BBVA Foundation Frontiers of Knowledge Award for his work. Jordan is less than tickled by jokes about his name, among other reasons because he was a professor at MIT long before the basketball star rose to fame, and the scientist continues to innovate to this day. “At first it was funny, but then I got tired of it,” he says about sharing a name with a sports icon.

Question. Is there too much hype regarding AI?

Answer. The hype will just continue. The people who develop the technology love to talk about it and they love to extrapolate, but I think there’s a little too much hubris, too. Collective efforts are what make humanity go forward. We’re not that smart individually. These companies, to me, are developing tools, and they’re powerful tools. But a tool lying on the ground doesn’t do very much. It has to be in the hands of a human being, probably groups of human beings, and then good things start to happen. If you’re going to solve something, find new medicines, the tool will help you look in the right places and give advice. If you’re going to think about climate issues, the tools will help you make better predictions that allow you to think in new ways. If you’re going to do creative arts, the tool will allow you to make new sounds and make new connections. The tools alone, however, don’t solve the problem.

Q. Were you expecting something like ChatGPT to appear two years ago?

A. It’s better than I would have imagined. Brute-force AI was already being talked about in 1990: AI that just does something, that doesn’t try to understand anything directly or develop any particular structure. It just lets gradient descent find everything and does it on vast amounts of data. All of these were ideas in 1990. There were two extra ingredients: easy access to vast amounts of data taken from the internet, and then the GPUs [graphics processing units] that really powered it. You put all the ingredients together and suddenly it really, really gets good. I think the right way to think about ChatGPT is that it’s a collective presence of all of humanity. It takes little pieces from hundreds of millions of people, especially people who wrote really good things, like on Wikipedia. A great deal of its intelligence comes straight through from people’s contributions.
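To make that recipe concrete (no hand-built structure, just gradient descent fitting a next-word predictor to data), here is a minimal sketch in Python with NumPy. The toy corpus and every name in it are invented for illustration; real systems apply the same idea at vastly larger scale, with transformer networks rather than a single table of logits.

```python
import numpy as np

# Toy corpus; real systems use vast amounts of internet text.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Training pairs: each word is asked to predict the word that follows it.
X = np.array([idx[w] for w in corpus[:-1]])
Y = np.array([idx[w] for w in corpus[1:]])

W = np.zeros((V, V))  # logits for P(next word | current word): the only "structure"
lr = 0.5
for step in range(200):  # plain gradient descent on the cross-entropy loss
    logits = W[X]                               # shape (N, V)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)           # softmax probabilities
    grad = p
    grad[np.arange(len(Y)), Y] -= 1.0           # d(loss)/d(logits) = p - one_hot(Y)
    np.add.at(W, X, -lr * grad / len(Y))        # accumulate updates per current word

# After training, the model has absorbed the regularities of its data:
print(vocab[int(W[idx["sat"]].argmax())])       # prints "on"
```

Everything the model “knows” here came from the text it was trained on, which is Jordan’s point about ChatGPT at scale.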

Q. The Chinese model, DeepSeek, has just burst onto the scene. Are the advantages of DeepSeek as extraordinary as they say?

A. I don’t follow all the details, but they do seem to be significant. It’s not entirely surprising, keeping in mind that the structures that have worked, based on transformers and layered networks, have been designed in a somewhat improvised manner. Often, brute force has been enough to move forward quickly. But that doesn’t mean there aren’t clever tricks or simpler structures that also work.

Q. What could its success mean now for Silicon Valley’s priorities when it comes to AI, considering its enormous investment in software and databases?

A. I think that Silicon Valley should spend more time thinking about the business model of generative AI and the large language models, and not solely depend on brute force to advance.

Q. Dario Amodei, the co-founder and CEO of Anthropic, the company behind the Claude chatbot, said a few days ago in Davos that in two or three years, AI will be “better than humans at almost everything.”

A. I think that’s just wrong. That person is someone who didn’t study computer science, who didn’t study linguistics, who didn’t study social science. I think physics was his background. And physicists tend to have a lot of hubris about figuring out how the universe works. But I think they’re underestimating human genius, especially human collective genius. It’s already true that computers can do math better than any of us. And they can write songs and they can even, probably, write a novel. I don’t believe that in two or three years it’ll write novels like Dostoevsky’s. Those are novels that speak to the human condition, and they speak in a way that resonates with us because they speak to the experience of the person writing them: they had a life, they lived, they died, they suffered, they prevailed. You get some of that by imitating humans, by predicting the next word in lots of old sentences, but it’s not the same.

Q. Google’s DeepMind seems to focus more on tools.

A. I’m fine with that. I distinguish between companies. DeepMind seems to me to be one of the more productive companies in terms of creating really useful tools. I don’t follow all the details, but there’s probably a little hubris of “we’ll solve the world’s problems,” though I think it’s not meant in the same way. I think they’re simply trying to create the best tools they can. There are other, crazier people who think that their AI tower is going to have all the world’s knowledge in it and it’ll know everything, and if we want an answer to any question, we’ll go to it and it’ll give us the answer. That’s just not plausible. Humans in the moment have thoughts that are contextual, that are complex. This big machine on the hill does not know all that stuff. Saying that some entity will be smarter than us is just not very well defined.

Q. You’ve remarked that saying an entity will be more intelligent than we are is very naive.

A. There are many kinds of intelligence. I like to talk about the intelligence of a large-scale market. It is composed of lots of small decisions. You don’t need to know a lot to make each decision, but you put all the decisions together in the right structure, with incentives and with certain kinds of connections, and something emerges. And this market does these amazing things. It stabilizes transactions, it makes things available, it adapts, and all sorts of things. It is by any definition an intelligent entity. That’s not human intelligence; that’s another kind of intelligence. There are probably 10 other kinds of intelligence.

Q. When someone says that we are going to “solve intelligence,” what do they mean?

A. You’re going to solve all these things? You’re going to create a mega-intelligence that encompasses all of them? It just becomes science fiction, and it’s not very helpful to engineering-oriented people and scientists like myself. And I don’t think it helps society to think that that’s what we’re about to get. I think society is best helped by thinking that we’re going to get some very powerful tools and that we’ll find creative ways to use them. I often think about the 25-year-olds who are coming into this world, who think their goal is to develop the autonomous robot that dances on the stage. And no, their goal should be to develop a system of cars that is federated and linked, so that nobody ever dies in a car again.

Q. And the robots that Elon Musk presented that served beer?

A. It’s public relations. It’s games, it’s toys. It’s serious engineering, unquestionably, but I don’t think it’s a particularly good path. Again, it’s trying to imitate humans, mimic humans, and therefore replace humans. I don’t see why that should be the goal of technology. It should be to aid humans and to help us do things we don’t do very well. I’m not going to criticize; they entertain. But there are so many problems that are a better focus for this technology than having robots running into burning buildings or going up to Mars.

Q. Why doesn’t generative AI have more collective and general goals?

A. Generative AI is sexy. It’s fantastic PR. You show that you’re doing this thing, so everyone infers that behind it there is this super technology: let’s invest! A great deal of this is driven by the desire for a $100 million, you know, Series A round or something. But the fact is, if you go into any company that’s solving real-world problems, like how do I get packages from one place to another, how do I ensure people’s safety, how do I do education better and so on, in that company they’re all sitting around the table and working together. There is actually an engineering way of thinking inside most companies solving real problems. And so they might use a generative AI tool in some piece of that, but they’re not going to spend all their time developing tools to get valuations of hundreds of millions of dollars. Most companies are actually using some generative AI, but they have a business model that is something else. In contrast, a lot of these startups developing generative AI and getting the big valuations are not going to make it, because they don’t have a business model.

Q. Is that where we’re at today?

A. I think it is, and I don’t want to say I know all of this, though I spend a lot of time on it and have a lot of experience. But self-driving cars were promised by Elon Musk five times. Every year it was “we’re going to have them out there.” And he has not done that, because he did not understand how hard it is. Now you do have Waymo, which I think is a more successful company. And there are Waymo cars in San Francisco doing it in a simple way. They’re moving relatively slowly and they’re relatively safe, and that’s good. It’s taking time. Those kinds of engineering projects are more like 10-year projects; they’re not two-year projects. And that’s just cars, which are not as complex as, for example, the human body or medicine or the physics of the climate. As soon as you get into any one of those problems, the utter complexity starts to matter.

Q. What would you say to workers who are worried that they’ll be replaced by AI?

A. First of all, I would say that we have to have more labor economists in this discussion. When we talk about regulation, Europe loves to regulate first and think later. And that’s a very bad idea. You need to first understand the phenomena, and then you try to add some regulation to make sure there are good equilibria. This top-down management of technology because people are fearful of it is a very bad idea. Now, that doesn’t mean you shouldn’t think about it. Absolutely, some jobs are going to go away, and some of them maybe need to be protected. Maybe it just needs to be slowed down. If jobs disappear in a year or two, that’s too fast. If it takes 10 years for a certain kind of job to disappear, that’s better; that’s slow enough that people can adjust and start to understand that certain kinds of jobs, like listening to a conversation and summarizing it, are things the AI can do. So, if your career dream was to be the person taking notes in the room, you had better think of a different career.

Q. And what would you tell a young person who is starting at university in the era of AI?

A. Saying that you should not do mathematics because it’s all going to be done by computers is just not true. New problems will keep emerging, and they won’t be the old computer science problems. There’ll be new computer science problems, and they won’t just be about learning how to code in Fortran or C. They’ll be about putting things together in bigger systems. Machines won’t be doing that by themselves. If you are the builder type and understand how to put things together, there are going to be plenty of jobs.
