Terence Tao, mathematician: ‘It’s not good for something as important as AI to be a monopoly held by one or two companies’
The Fields Medal winner is attempting to solve one of the Millennium Problems, with a reward of $1 million, but he also applies his analysis to topical enigmas such as the Venezuelan election and the advance of artificial intelligence
Terence Tao snorts and waves his hands dismissively when he hears that he is the most intelligent human being on the planet, according to a number of online rankings, including a recent one conducted by the BBC. He is, however, indisputably one of the best mathematicians in history. When he was two, his parents saw him teaching a five-year-old boy to count.
“That’s what my parents told me. I don’t remember it myself. They asked me who I had learned it from. I said, from Sesame Street,” says Tao, 49, who was born in the Australian city of Adelaide. When he was 11, he won a bronze medal at the International Mathematical Olympiad. At 12, he took home silver. At 13, gold. At 21, he received his doctorate from Princeton University. At 24, he was already a professor at the University of California, Los Angeles. And at 31, he won the Fields Medal, considered the Nobel Prize of his discipline.
“He’s the Leonardo da Vinci of mathematics,” said his Spanish colleague Eva Miranda, during a talk organized on September 18 by the Center for Mathematical Research in Barcelona. “It is no exaggeration to say that he is the greatest living mathematician. What makes him special is that he is the most versatile,” explains Miranda.
Tao tackles the most difficult problems, such as the Navier-Stokes equations, which have described the motion of liquids and gases since 1845. Given the temperature, viscosity, and initial velocity of a fluid, the equations predict its velocity at any later time. Almost two centuries after their formulation, it is still not known whether the solutions always maintain a certain regularity or whether a blow-up, a sudden change in behavior, can occur. Whoever finds the answer will win $1 million, as this is one of the seven Millennium Problems for which the Clay Mathematics Institute in the United States offers a reward.
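For reference, a standard textbook form of the equations in question (not printed in the original article), for an incompressible fluid with velocity field $u$, pressure $p$, and viscosity $\nu$:

```latex
\partial_t u + (u \cdot \nabla)\, u = -\nabla p + \nu \, \Delta u,
\qquad \nabla \cdot u = 0 .
```

The Millennium Problem asks whether smooth initial data always yield smooth solutions for all time.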
In his spare time, Tao applies mathematical analysis to topical puzzles, such as the recent presidential election in Venezuela. On his blog, the mathematician drew attention to the official results, with their oddly round percentages. Nicolás Maduro is said to have received 5,150,092 votes, exactly 51.2%. His rival, opposition candidate Edmundo González Urrutia, garnered 4,445,978 votes, exactly 44.2%. The remaining votes would tally precisely 4.6%. Such results are practically impossible to obtain by chance, Tao argues.
Question. You have applied mathematical analysis to the elections in Venezuela.
Answer. Bayesian probability is the mathematical way of updating your beliefs about something. You might think that the elections in Venezuela were fair or that they were rigged, or maybe that there is a 50% chance of each. Every time you get new information, you can update your beliefs. If something happens that would be unlikely if the elections were fair, the probability that they were rigged increases. And vice versa. There is a formula to measure this, but every time new information comes in, you have to calculate the probability that the event would occur under each hypothesis. A good example is the winning numbers in the lottery. Every once in a while they follow a pattern, like 10, 20, 30, 40, 50. Those patterns are very unlikely if the lottery is random, but they are also very unlikely if the draw was rigged. A curious pattern like that shows up maybe once in a million draws, but it doesn’t mean anything.
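The updating rule Tao describes is Bayes’ theorem. A minimal sketch in Python; the likelihood numbers are illustrative assumptions, not figures from the interview:

```python
def bayes_update(prior_rigged, p_event_if_rigged, p_event_if_fair):
    """Return the posterior probability of 'rigged' after observing an event."""
    numerator = p_event_if_rigged * prior_rigged
    denominator = numerator + p_event_if_fair * (1 - prior_rigged)
    return numerator / denominator

# The lottery case: a striking pattern that is equally unlikely under
# both hypotheses leaves a 50/50 belief exactly where it was.
print(bayes_update(0.5, 1e-6, 1e-6))  # 0.5

# An event that is plausible if rigged but has odds of roughly one in
# 100 million if fair pushes the belief almost entirely toward "rigged".
print(bayes_update(0.5, 0.5, 1e-8))
```

Note that what matters is the ratio of the two likelihoods, not how small either one is on its own, which is exactly the point of the lottery example.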
Q. And in Venezuela?
A. What happened in the elections in Venezuela is that the number of votes in each constituency was not announced, only the national totals, and each figure was an exact percentage. There were around 10 million votes, and exactly 51.2% of them, with no error at all, went to Maduro. That was very, very unusual. If the reported results were not erroneous or manipulated, then there is only about a one in 100 million chance that such extremely round percentages would have occurred. The president could have told the electoral council: “I want these percentages to be the result.” Manipulation is a plausible explanation, while chance is not. This increases the probability that the elections were rigged.
Q. What is the probability that the results are rigged?
A. There are three hypotheses. One is that the elections were fair and the numbers were reported accurately. The second is that the votes were manipulated. And the third is that the electoral commission simply made a big mistake, and it was incompetence, not malice. The first hypothesis is almost ruled out now, because it is extremely unlikely that those round numbers would have shown up. So it comes down to whether you think it’s more likely that the Venezuelan government is corrupt or that it’s incompetent. Both could explain this particular data. It’s been two months since the elections, and I think that, over time, it’s become more likely that there was manipulation, because they have still not released the individual data for each district. If they were simply incompetent and made a mistake with the percentages, they would have corrected it by now. The fact that they haven’t after so many months makes it more likely that there was some sort of manipulation.
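Tao’s one-in-100-million figure can be reproduced with back-of-the-envelope arithmetic. The sketch below assumes the tallies from the electoral council’s first announcement, including a total of 10,058,774 valid votes; that total is not quoted in the interview:

```python
# Reported totals from the first official announcement (assumed figures,
# not quoted in the interview): 10,058,774 valid votes in all.
TOTAL = 10_058_774
REPORTED = {"Maduro": (5_150_092, 0.512), "Gonzalez": (4_445_978, 0.442)}

# Each reported count is exactly the rounded percentage times the total.
for name, (count, pct) in REPORTED.items():
    assert count == round(pct * TOTAL), name

# Rough estimate of how often a fair count would land, to the vote, on a
# multiple of 0.1% of the total: about 1,000 favourable counts out of
# TOTAL possible values, per candidate. Two independent figures (the
# third is determined by the other two) gives roughly one in 100 million.
p_one_figure = 1000 / TOTAL
p_both = p_one_figure ** 2
print(f"about 1 in {1 / p_both:,.0f}")  # roughly one in 100 million
```

This is only an order-of-magnitude estimate, but it matches the likelihood Tao assigns to the “fair and accurately reported” hypothesis.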
Q. You are a scientific adviser to the U.S. president and co-chair of the White House task force on the risks and opportunities of generative artificial intelligence. In a new book, Nexus, historian Yuval Noah Harari argues that artificial intelligence poses a threat that could even destroy humanity. What do you think?
A. It’s theoretically possible. It’s a very powerful technology. There have been a lot of transformative technologies in the past — the automobile, the airplane, the internet — but what makes AI special is that it affects almost every single thing we do: journalism, mathematics, medicine, or whatever. It could all potentially be done with artificial intelligence. It could also be used to help build weapons — maybe not so much military weapons, but, for example, creating deepfakes to change an election. People worry that AI will become a superintelligence that takes over the planet, like in science fiction, like Skynet [the artificial intelligence in the movie Terminator]. But the current technologies are extremely limited: they’re basically machines that are very good at guessing. You ask it a question, it will guess the answer, sometimes it’s almost correct, sometimes it’s complete rubbish. Once you get to really unusual situations for which we don’t have much data, the AIs are still terrible. So I’m not super concerned. In 10 or 20 years this technology will be much more capable and there could be powerful AIs that could maybe do dangerous things, but by then we will also have a lot of experience dealing with them and how to defend against them.
Q. Yuval Noah Harari claims in his book that a terrorist group could create a new lethal pathogen and release it...
A. It’s much more likely that they’d kill themselves trying to do that. The instructions for doing this are actually already on the internet, if you look hard enough, so it is a concern, but these AIs make a lot of mistakes. One advantage is that the most advanced AIs don’t fit on a phone or a laptop. You need a huge cluster of supercomputers and so forth. There are only a few places in the world where these machines can be built, so it’s not something that you can hide. And terrorists in particular don’t use electronic devices very well, as we’ve just seen.
Q. You’re referring to the detonations of Hezbollah pagers in Lebanon.
A. Exactly, they’re probably very suspicious of AI tools, because they could probably be used against them. It’s probably more dangerous to them than it is to us.
In mathematics, water could blow up, but it’s a lot less exciting than it sounds
Q. Speaking of explosions, according to the Navier-Stokes equations, theoretically, can water spontaneously explode and destroy the world?
A. In mathematics, it could blow up, but it’s a lot less exciting than it sounds. The Navier-Stokes equations govern fluids, or so we think. They’re a simplification of the laws of nature. The actual laws of physics are very complicated, because water has trillions of atoms. It’s impossible to model each one separately, so we make a simplification: we assume that the velocities of the particles don’t fluctuate too much. And then we end up with the Navier-Stokes equations. We use them to model, for example, the weather and the motion of the oceans, but there’s a scenario in which the Navier-Stokes equations stop being a good model for fluids.
Q. What scenario is that?
A. Water has a certain amount of kinetic energy, because each particle moves at a certain velocity. And there is viscosity: because of friction, the kinetic energy slowly decreases over time. Water can’t just get faster and faster everywhere. But what could happen is that all the kinetic energy gets concentrated in a small region. Maybe initially the kinetic energy is spread over the entire fluid, but somehow all of it gets transferred to a small amount of water moving very fast, and then to an even smaller amount moving even faster. This would collapse into what’s called a singularity, and then the water would undergo what’s called a blow up in mathematics, but it’s not really an explosion. It just means that there’s one place where the speed becomes infinite. It’s like cracking a whip. A whip starts off very thick and has a very small tip. When you crack it, a wave starts off slowly, but as the whip gets thinner, the wave moves faster. In the end it’s faster than the speed of sound, and that creates the crack of the whip. Can you crack water like a whip? We call it a blow up, but it’s more like a crack, a sonic boom.
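For smooth solutions, the dissipation Tao describes is the standard energy identity (a textbook fact, not stated in the interview): total kinetic energy can only decrease, at a rate set by the viscosity $\nu$, yet nothing in it forbids the remaining energy from concentrating in an ever-smaller region.

```latex
\frac{d}{dt}\,\frac{1}{2}\int |u(t,x)|^{2}\,dx
  = -\,\nu \int |\nabla u(t,x)|^{2}\,dx \le 0 .
```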
Q. Google DeepMind announced a couple of months ago that its artificial intelligence systems AlphaProof and AlphaGeometry had won a silver medal at the International Mathematical Olympiad.
A. Yes, but with an asterisk: it was not in the official competition.
Q. You won the silver medal at the age of 12. Has artificial intelligence reached the level of 12-year-old Terence Tao?
A. It’s an impressive achievement, something we didn’t expect to happen yet. We thought it would maybe be possible in two or three years, but there are certain asterisks. First of all, it was two separate programs, two separate AIs. There were six problems. One solved three of them, the other solved a fourth, which is enough for a silver medal. Another thing is that these AIs were not given the problems directly: a human had to translate them into a special formal language first. They also had much more time. In the actual Olympiad, students have eight hours to solve all six questions. The AI needed three days to solve one of the problems, despite running on Google’s huge cluster of supercomputers. It’s not like ChatGPT, where you type in a prompt and it gives you an answer in 10 seconds. So it isn’t quite equivalent to a human-level competition, but it’s still extremely impressive. There are certainly other areas where computers have been better than humans for a long time. A child with a calculator will always be much better at arithmetic than I am with paper and pencil. Computers are also better than humans at chess, poker, and many video games, such as StarCraft. So I think that in two or three years, there will be AIs that are better at these math competitions than humans.
Q. Better than you? In just three years?
A. Yeah, but I haven’t competed in these competitions for many, many years. It’s a very different activity from research mathematics. These tournaments are like the 100-meter sprint at the Olympics, whereas research is like a marathon. You need months and months to solve a problem, and you have to consult the previous literature.
One of the reasons why human mathematicians become good at their job is because they make a lot of mistakes and learn what doesn’t work. AIs don’t have this data
Q. So do you think that artificial intelligence can become better than you at an activity as creative as mathematical research?
A. I think they’ll be very useful assistants. They are getting good at solving problems for which there is a lot of previous data about similar problems. The thing is that we mathematicians usually only publish our success stories; we don’t share what we try that doesn’t work. And one of the reasons why human mathematicians become good at their job is that they make a lot of mistakes and learn what doesn’t work. AIs don’t have this data.
Q. So?
A. All modern AI systems are based on huge amounts of data. If you want to teach an AI what a glass of water looks like, it needs millions of example images of a glass of water. If I pour a glass of water and show it to you, you say, “Okay, I get it.” There needs to be a breakthrough in teaching AIs to learn from very small amounts of data. And we don’t know how to do this at all. If we can figure it out, then maybe AI can become as good as humans at really creative tasks.
Q. What do you think about artificial intelligence systems being in the hands of the ultra-rich, like Elon Musk?
A. There are some open source AI models out there, although they are two or three years behind the big commercial models. It’s not good for something as important as AI to be a monopoly controlled by one or two companies, but the basic technology needed to build these AIs is fairly public. In principle, anyone can build an AI. The problem is that it takes a lot of hardware, a lot of data, a lot of training. It costs hundreds of millions of dollars to make one of these really large models, but the cost will come down over time. There will be lots of open AI alternatives in the future. I think there will be some need to regulate certain aspects of AI. The ability of AI to generate deepfakes can be quite damaging; some of them could influence elections.
Q. Some of these businessmen are also a bit eccentric.
A. When these AI models came out, there was some concern that they would be used to generate propaganda, that there would be a conservative ChatGPT, a liberal ChatGPT, a Chinese Communist Party ChatGPT that would only give party-approved answers about Taiwan or whatever. This hasn’t happened. We’re going to need some regulation, but so far it hasn’t been as damaging as we had feared. What will happen soon is that we will lose trust. Before, people would see a video of an event and believe that it had actually happened. There was no way to fake a video of a plane crashing into the World Trade Center. Now, with AI, it is possible. The result will be that even when something is genuine, people won’t believe it. People won’t believe photos and videos anymore. How do we convince someone that something happened if everything can be faked? That is a problem. We have to find new ways to verify facts.