
Brad Smith, president of Microsoft: ‘We must have a way to slow down or turn off artificial intelligence’

The executive is in favor of governments and civil society putting pressure on the industry to regulate AI: ‘The more powerful the technology becomes, the stronger the safeguards and controls need to become with it’

Brad Smith, president of Microsoft, photographed on Monday at EL PAÍS headquarters. Claudio Álvarez
Patricia Fernández de Lis

There is no denying that Brad Smith is a lawyer. The president of Microsoft is extremely cautious when talking about the turbulent waters in which artificial intelligence finds itself. According to Smith, it’s the most important technology created since the invention of the printing press. But he admits there are problems regarding its use and control, from the complex issue of copyright protection and cyberattacks by countries such as Russia and North Korea to what he confesses is his biggest concern: the use of deepfakes to alter election results, in a year when practically half the planet will go to the polls.

Last Friday, Smith, 65, presented an accord at the Munich Security Conference, signed by 20 other companies, that seeks to strengthen the technological response to these deceptions and increase the transparency of the companies that control AI. “The difference between the promise and danger of new technology has rarely been more striking,” says Smith.

The president of Microsoft, today the most valuable company in the world, met with Spanish Prime Minister Pedro Sánchez on Monday to sign a collaborative agreement to expand AI infrastructure in Spain. The $2.1 billion project is Microsoft’s biggest investment in its 37-year history in the country. It is a very important alliance, says Smith, who argues: “If AI is not used in the government, in the healthcare field, in the economy, I don’t see how Spain sustains its long-term economic growth.”

Question. I asked Copilot, Microsoft’s generative AI, to come up with a question for you. This is what it suggested: “What is your vision regarding the role of artificial intelligence in today’s society? And how do you believe we can ensure its ethical and responsible development?” I don’t know what you think of that question. Could I lose my job to Copilot?

Answer. No, no. There were three questions there: the two from Copilot and the one from you. So let me start with yours, because it makes an important point. Copilot is a co-pilot. It’s not a pilot. You may have used it to prepare for this interview, but it is a tool. It helps you, it gives you ideas, but ultimately you will need to use your judgment. And I think that’s what we should ask all people to do: use this new technology to exercise better judgment, to be more creative, to develop ideas, to help with writing, but not to delegate or outsource thinking or writing to a machine. That would be a mistake.

Regarding the other two questions, today and tomorrow [on Monday and Tuesday] we’re announcing new initiatives with the Spanish government around cybersecurity and responsible AI. Why does that matter? At the end of last year, Spain ranked fourth in Europe in AI usage relative to GDP. But when you look at AI skills, it’s 14th on a per capita basis. When you look at the creation of AI-based software, it’s 15th. This shows there’s a gap between using AI and building the skills base to create AI. Spain is an incredible place to live. It is a very prosperous country in many ways, but it is not growing quickly. The population is aging. The unemployment rate is 11%. And those challenges need to be addressed with creativity and urgency. Part of the solution is the use of AI. We must create the jobs of the future.

Q. We are fascinated and scared by the AI revolution. You have been working in this industry for many years, more than 30. Is the technology really that revolutionary?

A. I think that AI is the most important invention for the life of the mind since the invention of the printing press. That was almost 600 years ago. Think about the printing press and what it accomplished: it made it possible for people to write, for people to read, and for market economies to flourish. Now look at AI. It’s very similar. It’s a tool that can help people think in new ways. Hopefully, it helped you in some interesting way to think of questions for this conversation. That is an injection of creativity. It’s enormously valuable for finding patterns in large amounts of data, and for giving people insights that advance fields like drug discovery. If we use it well, it can be an accelerator for people and the kinds of work they do.

Q. However, the IMF warns that artificial intelligence will affect 60% of jobs in advanced economies and, for the first time, will impact the most qualified workers.

A. It’s a really important issue for us to address. First, I would put it in context. What percentage of jobs do we think have been impacted by the advent of digital technology, the personal computer or the cell phone over the last 40 years? It’s probably an even higher percentage. And yet, we’ve been adapting to this changing technology for almost the entire working lifetime of everyone who’s working today. Many jobs were impacted and some went away. The real lesson of the last 40 years is that if people can stay at the forefront of knowing how to use technology, they are likely to be successful in their careers. Their careers may take them to places they didn’t necessarily anticipate. We should expect a real impact on the way we work. What it should do more than anything else is create a sense of inspiration, but also a little bit of urgency, to learn how to use this technology and become better at whatever it is people want to do.

Q. You mentioned creation. This is one of the issues most affected by AI. Your company has been sued by The New York Times for copyright infringement.

A. I think this is a natural and inevitable aspect of the invention of a new technology that impacts how people create and distribute what they write. There are two legal questions we are going to face. One is relatively easy and the other is more complex. The easy one is to ask: what do you do if an AI system generates output that copies something that is protected under the law? That’s a legal violation. There’s no doubt about that. We’ve done two things to address this. First, we built an entire architecture around Copilot and other tools [Copilot cites sources in its answers, unlike some other generative AI tools] to avoid that. And second, we’ve said to all of our customers: this is our legal problem, not yours. If they use our system properly, we’re the ones who are liable, not them.

Then you go back to the other question, which is more uncertain. Can you train an AI model by reading all of the works of the world? It’s always been understood that you can read as much as you want and remember as much as you can. I met with a government official in Germany last week who said that, by his estimation, he had read about 5,000 books in his lifetime. But when he gives a speech, he doesn’t have to step back and ask: where did I read this? Do I have to credit where I first got this idea? We all have a right under copyright law to read and learn. Now we’re asking whether we can enable machines to learn in the same way. I think there’s a societal imperative to make that possible. If you want to advance the capability of this new technology, it will require that it be able to learn broadly. And more than that, if you really want to open up this new industry to open source developers and academics, to people beyond big companies, it’s critical that it be open to that kind of learning. At the same time, none of us should want this new technology to put creators, including newspapers like The New York Times, out of business. I think we’ll have to find a way to balance that learning with giving creators the ability to continue earning a good living.

Q. Is it possible for you to reach an agreement, not only with The New York Times, but with other creators and authors?

A. There are three goals that we must keep in mind. Goal number one: ensure that the technology can advance while compensating the creators of today and of the future. Goal number two: make sure this advances in a way that makes content broadly available, on economically affordable terms, to everyone. Goal number three: think about the impact on companies that also control a lot of the content. The New York Times may seem like a large owner of content, but compared to YouTube, it’s tiny. We need to think about the other places where there are these critical repositories of content. And we have to make sure that they’re open on accessible and affordable terms to everybody, and not just to the single company that happens to own them for the purposes of developing its own model.

Q. The EU has become the first place in the world to regulate AI. What do you think of this law?

A. We need a level of regulation that protects people’s safety. I’m sometimes surprised when there are people in the technology sector who say that we should not have that. When we buy a carton of milk in the grocery store, we buy it without worrying about whether it’s safe to drink, because we know that there is a regulatory safety floor for it. If this is, as I think it is, the most advanced technology on the planet, I don’t think it’s unreasonable to ask that it have at least as much safety regulation in place as we have for a carton of milk. Regarding the [European] AI Act, the good news is that it creates that kind of protection: it looks at safety and security standards and imposes a floor for these advanced frontier models. And it’s not that dissimilar from what the U.K. and the U.S. are doing.

Brad Smith, during his conversation with EL PAÍS. Claudio Álvarez

I also think we need to be careful. One needs to have a high level of safety without creating an onerous administrative burden that would drive up costs, especially for startups. The companies that I’ve heard express the most concern about the AI Act are not the biggest ones. Frankly, we have the infrastructure to comply. It’s the startups that worry that they can’t get started. I’m not saying that to be critical of the AI Act, but it’s what I hear, especially in countries like Germany or France. It’s all about the implementation.

Q. You have called for an AI “safety brake” in the U.S. Senate. What does it consist of?

A. This is intended to address what people often describe as an existential threat to humanity, that you could have runaway AI that would seek to control or extinguish humanity. It’s like every Terminator movie and about 50 other science fiction movies. One of the things that is striking to me after 30 years in this industry is often life does imitate art. It’s amazing that you can have 50 movies with the same plot: a machine that can think for itself decides to enslave or extinguish humanity, and humanity fights back and wins by turning the machine off. What does that tell you? That we better have a way to slow down or turn off AI, especially if it’s controlling an automated system like critical infrastructure.

It’s been nine months since I first introduced this concept, and what’s most striking to me is that everywhere I go, the conversation is more or less the same. It starts with this concern: “Oh my gosh, there’s this existential risk.” Then people say: “That’s decades away. We don’t have to focus on that now.” But we have the capacity to solve more than one problem at a time. Let’s solve today’s and tomorrow’s: the best time to solve a problem is before it arrives. We know how to do it: every bus and every train has an emergency brake.

Q. Microsoft and OpenAI just published a report on how China, Russia, North Korea and Iran are using AI for increasingly sophisticated cyberattacks. What can be done to prevent this from happening?

A. First, we need to recognize the problem. In this study, Microsoft and OpenAI [the two companies have a partnership] discovered that these four nation states were using generative AI in cyber operations and cyber influence operations. We’re not going to let nation-state actors that engage in this kind of adversarial and harmful conduct use our applications, because we regard that as something that is likely to do harm to the world. But we also need to use AI to fight back and create stronger cybersecurity protection.

Q. On Friday you announced an accord to fight against deepfakes in elections. What does it consist of?

A. This tech accord is very important, first of all because the issue matters so much. We’re going to see elections between now and the end of the year in more than 65 countries, and all across the European Union. And we are seeing a rapid rise in the use of deepfakes to try to deceive the public about what a candidate, for example, has said. With this technology accord, we focus on three things. The first is to better protect the authenticity of content with credentials and watermarking. The second is to detect deepfakes and remove them when they are intended to deceive the public. And the third is public education, which is an enormous priority for Microsoft. I probably spent more of my time between the end of last year and last Friday on this one issue than on anything else, because we fundamentally believe that it is so important to where the world is going.

This matters a lot because it affects elections but, unfortunately, it is going to impact other problems as well, such as financial fraud and cyberbullying, especially of children and women, two groups that are particularly vulnerable. And we had better do this well, because if we don’t, the world is going to be worse rather than better because of this technology.

Q. But the same industry that signs accords like this makes the technologies that cause these problems available to everyone. OpenAI unveiled its video AI, Sora, on Thursday; its AI-generated videos are almost indistinguishable from reality…

A. The more powerful the technology becomes, the stronger the safeguards and controls need to become with it. I think that all of us are going to have to push ourselves. The industry will probably benefit if it’s pushed by government and civil society as well, because the magnitude of responsibility and impact is potentially so high. After the Christchurch Call [an agreement by companies and governments to remove violent and extremist content from the internet after the massacre in the New Zealand city], we all had to adapt. And I think we’re going to need to adapt again. What I hope is that we remember it took a mass shooting broadcast on the internet to open people’s eyes to what could go wrong. I’m encouraged by Friday’s agreement, but we’re going to have to go even faster. And most importantly, we’re going to have to bring people together. The biggest mistake the tech sector could make is to think that it’s already doing enough, and that it can do what needs to be done if it’s left alone.
