Pilar Manchón, director at Google AI: ‘In every industrial revolution, jobs are transformed, not destroyed. This time it’s happening much faster’

The head of research strategy at the multinational argues that artificial intelligence can be used to create a better world

Pilar Manchón, 53, is senior director of research strategy at Google AI. She radiates optimism and spreads it to those around her. At the helm of one of the most disruptive and transformative tools ever created by humanity, she acknowledges its dysfunctions but prefers to speak of AI as an instrument “capable of creating a better society,” of leading the human species toward a “new Renaissance.” During a stopover in her hometown of Seville, she grants this interview with the goal of inspiring the next generation and convincing them of AI’s potential to build a new world.

Question. A group of experts has just called on the United Nations to establish “red lines” to prevent the harm caused by AI.

Answer. We have always been pioneers in safety: ensuring that artificial intelligence leads to advanced, innovative, and highly ambitious developments, but doing so in a responsible and safe manner. We were the first to launch principles for the development of artificial intelligence, and I believe they define the framework for development, innovation, and collaboration very well. Artificial intelligence is a positive force that will help us build a better world. But, of course, it must be done with caution, with common sense, and by taking the necessary measures to ensure it works well. We have always put safety before speed, even while keeping pace with the market, which is the hardest part. These initiatives [the letter to the U.N.] are very diverse and bring together the voices of many scientists who deserve to be listened to.

Q. Does the humanization of AI endanger personal relationships?

A. I always look at the world of human-computer interaction from the perspective of human cognitive processes. Humanization is a natural process. We all assign human characteristics, whether behavioral or physiological, to animals or things. It happens naturally. Can doing this with artificial intelligence somehow cross a red line? In some cases, it’s very beneficial, as with something as painful as loneliness. But abusing that and losing the notion that you’re interacting with an artificial system creates a risk.

The best policy is to ensure that developments are always transparent and that everyone is aware they’re talking to an artificial intelligence, so that it’s the person who makes their decisions and sets the red lines for certain behaviors. And, of course, educate users, because any tool can be misused, used negligently, or cause harm; that’s why we learn to drive. But anthropomorphization cannot be demonized; rather, it must be understood as a natural process that can help us in some cases but that, if misused or mismanaged, can be harmful.

Q. But it’s easier to use AI than to educate ourselves on how to do it responsibly. Stuart Russell, a professor and computer science researcher at the University of California, Berkeley, compares the current situation to having a very fast car without a steering wheel or brakes.

A. The question is who knows how to drive, who should drive, and what vehicle is being driven. There are many types, and the level of risk varies. Only fully informed people, who must be given information and full control, can assume the risks of any tool. AI varies greatly depending on who wants to use it and what it’s for. The most important thing is that companies like Google are dedicating a large part of our resources to doing things well and educating people about what this tool is for, how it should be used, and how to optimize it, so that people can evolve. If you give a four-year-old a calculator before they begin to understand math, they won’t learn. But if you do it when they have the basics to perform calculations, you accelerate much more complex thought processes. AI increases our capabilities; it’s a human enhancer.

Q. Do AI agents, which can act on the user’s behalf, give machines too much power?

A. Any tool misused or used for the wrong purposes causes harm. The key is the barrier we put in place, the education we provide, and the levels of control. I’ve been researching trust for a long time from a cognitive perspective, what it means to trust a digital agent. We trust the computer, the device, or the car to perform as expected. But the agent speaks to you in natural language and relates to you at a level that, previously, only humans could, and that creates a complex situation. We’ve spent our entire lives training to relate to people, read their expressions, understand their tone of speech, context, and everything else. And now you have artificial intelligence that tells you everything perfectly, at the right time, and with the right tone. It’s made for you, fits like a glove, and provides a service, but it’s not human.

When faced with something like this, our reactions still require adaptation. It’s like when you go to the movies and accept a bunch of things you know aren’t true, but you immerse yourself in the movie voluntarily and get angry or cry or laugh. But then you come out, and you know it’s not true. In Japan, you can hire people to give you hugs or pretend to be relatives or friends. Imagine that fake relative is a digital entity. You know perfectly well it’s not real, that these are services being provided to you. Using them is a personal choice, and it doesn’t have to be bad, unless the service provider has malicious intent and is trying to take advantage of a vulnerable situation. That’s where governance, legislation, and tool audits come in.

Q. Are we approaching an artificial intelligence capable of surpassing human intelligence?

A. We now have subhuman AI, meaning it’s below what we can do. The moment it can do the same things as humans, we’ll move straight into superhuman AI. We imagine a world in which there will be a slow transition between the two, but in technology, that’s not necessarily the case. Sometimes, a major qualitative leap changes the paradigm, the speed, and everything. The question is whether we’re clear about what we want to have, regardless of how we get there.

We have a tool that’s almost like a magic wand, one that helps us create a new and better world: to identify things that could be improved and how, to rebuild them, to grow professionally and personally, to be healthier, to develop creativity, to have access to education. AI is a tool that allows you to build a much better society than the one we have. But to get there, we have to envision and express what that means, not just how we do it. We have enormous potential to achieve this. The tool is there, it’s evolving, and it will give us more power — if we can manage it with common sense — to build this better society.

Q. However, there are more pessimistic views.

A. There’s a saying [attributed to industrial pioneer Henry Ford] that I like: whether you think you can, or you think you can’t, you’re right. Your success or failure largely depends on your confidence or lack of confidence in what’s possible. We have an enormous opportunity to create a world that, although it won’t be perfect, will be better. With artificial intelligence, we can. I believe we can, and we have evidence that it’s happening.

Of course, some things can go wrong, but the way to prevent this is to educate people so they know how to use these tools correctly and can spot errors. We have multidisciplinary expert committees where we put on our bad guy hats to see how a tool could be misused, with the goal of detecting potential harm, the associated risks, and how to prevent them. At Google, we’ve suspended the release of super-interesting tools because those committees, at the time, observed potential dangers that outweighed the benefits. We’re keeping them in-house until the capabilities, society, regulation, or ecosystem are mature enough.

Q. But will it eliminate jobs?

A. For every job destroyed with the creation of the internet, 2.6 were created. They weren’t the same, but in every industrial revolution, jobs have been transformed, not destroyed. What’s different this time is that it’s happening much faster, more horizontally, and in a much deeper way, and humans aren’t exactly good at adapting quickly to drastic changes. But that’s what we have to do.

Q. What if AI becomes accessible to anyone with bad intentions?

A. Anything that can be used to make wonderful things can also be misused. But by that logic, society wouldn’t progress. If people begin to understand the potential of AI, influence those around them, and educate, motivate, and inspire — especially young people — to create amazing things, we have the opportunity to build a better world.

Q. What about biases?

A. Do you want to remove bias from AI, or do you want to add your own? In the end, we’re not removing bias, but rather trying to impose what I think, in accordance with the values of my society, my community, my political party, or my religious faith. It’s complex to do that with large models, in which my values are not the same as yours. First, we have to decide how we do that and how far we want to go, and then generate the tools that can get us there. I advocate for non-negotiable values, which are, for example, human rights. But there are other values on which you and I may have completely opposite positions. The formula is transparency, control, and education.
