
‘Love is chemistry. Algorithms fail the more abstract and complicated a person is’

Inma Martínez, an expert in artificial intelligence, talks to EL PAÍS about natural language processing, smart cars and whether technology can predict the future

Inma Martínez in Málaga on November 26. García-Santos
Manuel Jabois

Inma Martínez, 57, is an artificial intelligence visionary who has often been the first to see how digital transformation will change our lives. Martínez, who was born in Valencia, is a visiting professor at Imperial College London, and an advisor to the British and Spanish governments on the development and regulation of algorithms. She also advises companies on ethical AI as computers increasingly make their own decisions about our lives.

Question. You have said you spend your life looking at anomalies…

Answer. There is a quote from Frank Zappa: “Without deviation from the norm, progress is not possible.” If no one deviates from the norm, we are all sheep, and there is not going to be change and evolution. The nature of the world, and of life, is to be in constant transformation. And anomalies are the first clues that something is going to happen. You ask: why hasn’t this group done what everybody else has done? There’s usually a reason, or several reasons, and you start to see a trend develop.

Q. Did that happen with the coronavirus?

A. A Toronto company called BlueDot found that. They process words from local newspapers, blogs, forums, chats, social media. It’s a branch of AI called NLP, natural language processing. And they started to see that there was a convergence of words: “corona,” “sick,” “Wuhan,” “market.” The way we analyze words in AI is through vector algorithms where you say: the word “corona” has come up 26,000 times next to the words “market” and “Wuhan.” So they said: “There’s something going on here.” And they saw that indeed, in the Wuhan market area there were people who were starting to have the same symptoms as what we then thought of as SARS. On December 31, 2019, BlueDot wrote a report for the World Health Organization. They said: “We believe that there is an emergency, a type 2 coronavirus outbreak, in Wuhan, and that this has a very clear pandemic look to it.”
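The co-occurrence counting Martínez describes can be sketched in a few lines of Python. This is a minimal illustration of the idea, not BlueDot’s actual system; the sample posts and the window size are invented for the example:

```python
from collections import Counter

def cooccurrences(texts, target, window=5):
    """Count which words appear within `window` tokens of `target`."""
    counts = Counter()
    for text in texts:
        tokens = text.lower().split()
        for i, tok in enumerate(tokens):
            if tok == target:
                lo, hi = max(0, i - window), i + window + 1
                # Count every neighbor in the window except the target itself
                counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

# Toy posts standing in for local news, blogs and forums
posts = [
    "pneumonia cases near the wuhan market linked to a new corona virus",
    "more sick patients reported around the market in wuhan",
    "doctors suspect a corona type virus at the wuhan seafood market",
]

counts = cooccurrences(posts, "corona")
print(counts["wuhan"], counts["market"])
```

At scale, the same signal — “corona” appearing unusually often near “Wuhan,” “market” and “sick” — is what a natural language processing pipeline would flag as an anomaly worth investigating.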

Q. Did they keep going with the research?

A. They started to detect how the virus was spreading to other countries because they entered flight data, which people had been to Wuhan and where they had gone afterwards. Then they started putting in other kinds of data until they could find patient one, or zero. In Germany, for example, they got down to patient zero. Because if you know how to analyze data, and that data is good and reliable, you can perform wonders with it.

Q. Could you give an example of a trend that you have discovered?

A. When I was younger, I worked in an investment bank selling stocks. I was in emerging markets in Latin America and I didn’t understand what was going on in Brazil. The stock market there operated with patterns that made no sense. And my colleagues, who were older than me, would tell me: “People there are unpredictable. Tomorrow they will buy again. They don’t work on macroeconomics.” But I’m a very curious person, and I said: “There must be another reason. That’s not the way things happen.” And I found out, by investigating, that traders at the São Paulo Stock Exchange would come to work euphoric if their soccer team had won the day before and sell everything, and then in the afternoon they would buy again. It was all soccer-related.

Q. What did you do?

A. I constructed a little model on my Bloomberg terminal and started to watch the results of the five São Paulo teams. It was a prediction model, and I started to get it right. One day the boss came over and said, “What’s going on, how do you know this and what are you basing this on?” I answered: “Soccer.”

Q. Is that the future?

A. The future, and we are living in it now, is to know how to use artificial intelligence as a tool, not as an entity that operates alone and is going to work on its own.

The first AI system I set up with my team in Cambridge was focused on personalizing mobile internet services: at a certain point we knew what the person was looking for and we put it in front of them

Q. And does anticipating the future create any moral problems?

A. Everything that AI enhances, automates or optimizes has to be beneficial to us. It has to be within the parameters of safety and security, so that no one takes advantage of it. It should be used to make the world less complicated and to have better, more optimized services that cost less money.

Q. That is the ideal. But there are also harmful uses, like stealing our data and using it for someone else’s advantage.

A. Many companies have been using AI in a way that’s barely legal, and taking advantage of people: of their addictions, of their mental weaknesses. And those uses have to stop.

Q. But having the data is essential.

A. It doesn’t do anything on its own. People working with AI need to understand the context in which people operate. A website accumulates data about you: who you are, your email, your password, what you do on that website, where you go next, how many pages you have viewed, how many seconds you have been reading... But if you don’t put it in context, you will never know why that person does these things. And therein lies the ingenuity of the people who use AI to anticipate human behavior. Humans react to contexts, and we do things because there is a reason for doing them. You go to a website, you are looking for something, you want to buy a gift, and there is the ingenuity of those who start creating hypotheses: why does this person come here every Wednesday? You have to know about technology and anthropology. The first AI system I set up with my team in Cambridge was focused on personalizing mobile internet services: at a certain point we knew what the person was looking for and we put it in front of them.

Q. AI can also replicate stereotypes, prejudices or discrimination.

A. The first image recognition systems were trained only by men. And that had consequences. At the beginning of AI, those training Google’s image recognition algorithm would say “shoe” and put in a man’s shoe. Of course, when a high-heeled shoe came up, the robot would say: “What is this?” That’s why there should always be a team of several people where all the different visions add value.

Q. That filtering of things is what is also used to make the cars of the future.

A. Artificial intelligence has an element of craftsmanship. For example, how are cars being trained to have computer vision? By teaching them to recognize objects and label them: “person,” “cat,” “traffic light,” “van.” When you want to enter a website and you are asked: “Are you a robot?” and you have to say how many bicycles you see, you are training the computer vision algorithm of Google’s level 5 car, the famous Waymo. You are training it for free. Google should pay you a small fee, right?

Q. Your latest work is a book about the future of the auto industry. Autonomous cars.

A. When cars started going fast, every government called for safety measures. They started to introduce power-assisted brakes, seat belts, airbags, materials that can spread collision forces on impact, and so on. What happened? When we started to teach cars to drive by themselves, we realized that if we want to make them 1,000% safe, a system driving with the precision of a machine is better than a human being at the wheel. Because many human beings do not respect speed limits, or are not good drivers, or may be drunk. The safety rate is always worse with a human driving than with an automated system.

Q. But that’s not the whole story, right?

A. The car industry is already developing level 5 automation, but you won’t see it on the roads yet, because computer vision is not the only thing that guarantees cars won’t crash. You need smart roads. You need the internet of things sending signals between the vehicles so they can connect with each other and everything flows in a safe way. We will develop that over the next 10 years. And not only smart cities: roads are going to become smart to guarantee road safety. Japan and some countries in the European Union have started to create regulations to decide who will be held responsible in the case of an accident. In other words, we are not going to put objects the size of cars out into the world without being very clear about what happens if there is an accident.

We’re taking ourselves out of situations where we can make mistakes and letting a system come in that guarantees zero error. That’s the value of AI

Q. The application of AI in medicine is different.

A. In radiology, for example, work is being done to train algorithms to predict how rheumatoid arthritis will spread, or a brain tumor. Everything linked to diagnostic imaging is a great leap for society, because sight is the weakest sense we have: it deteriorates. Human beings not only see with their eyes: the brain interprets what it sees, adds things, completes the image. It is not a very reliable sense. We are going to want precision. We’re heading to a world where that kind of precision is going to allow us to create better, more fine-tuned services for the person being treated.

Q. It’s also used in surgery.

A. Especially in cardiology. And someone might say, “Ah, so surgeons are going to be out of a job?” No. Surgeons used to be incapable of removing a millimeter of heart muscle here and another millimeter there; now they are. A robot cuts exactly what they program it to cut, to half a millimeter. More people used to die, now fewer do. You have to put a system in there so that if a person’s temperature drops one degree, it will notice. It’s better that an intelligent machine does it, instead of a human being who’s tired, who’s forgotten to look, or whatever. We’re taking ourselves out of situations where we can make mistakes and letting a system come in that guarantees zero error. That’s the value of AI.

Q. You worked on a farm project.

A. We tried to find out if the most sociable cows produced the most milk, and it was true! Now I’m working on an international project on how artificial intelligence will allow us to predict crop yields better. We know that arable land is shrinking because of climate change, and we have more than seven billion people to feed. With artificial intelligence you can analyze the composition of the soil, how it overheats, what the moisture levels are, how it collects water and spreads it. You can do a calculation and say: “Look, instead of 50,000 hectares, which costs you X, if you just farmed 39,000, you would make more money: less effort, less cost.”

Q. Can artificial intelligence lend you a hand in romantic relationships?

A. Each person understands love in his or her own way. It is impossible to make one size fit all. A company called eHarmony was the first to use algorithms to match people based on the answers they gave on many topics: what you are looking for in your partner, how educated you are, and what you would do in a given situation. And it created little bots to match them. Are these people going to get along? Yes, because they have a lot in common and we’ve seen that they would react in the same way. But love is chemistry. These algorithms start to fail the more abstract and the more complicated the person is.

Q. Do you have Alexa?

A. I don’t have Alexa, and I turn off the microphone on my cellphone and whatever else I have around me, sometimes for a day and a half, to make it more difficult for them.
