The Spanish presidency of the EU has handed the Secretary of State for Digitalization and Artificial Intelligence, Carme Artigas, the conductor’s baton for the final phase of negotiations on the long-awaited EU regulation on artificial intelligence (the AI Act). The law will set out the technology’s permitted uses, and any exceptions, based on the risks it poses to the public: the higher the risk, the stricter the control measures, up to and including an outright ban. For the moment, artificial intelligence (AI) systems that “manipulate human thought,” those used for social scoring or profiling, and those used for real-time remote biometric identification all fall into that last, prohibited category.
The discussions between representatives of the European Parliament, which favors limiting the potential dark applications of AI as far as possible; the member states, which want to be able to resort to them in emergencies; and the Commission are making these negotiations a slow and delicate process. “I am confident that we will be able to approve the regulation during this presidency [which ends on December 31],” says Artigas.
Two weeks ago, Artigas was named co-chair of an international council of United Nations experts charged with drawing up a mechanism for the governance of AI. Her partner at the head of that group is James Manyika, a vice president of Google, which is a clear sign that the industry is very much present in the discussion. The Secretary of State talked to EL PAÍS before flying to London for the AI Safety Summit, at which 28 countries, including the United States and China, agreed to cooperate to confront the risks of this technology.
“Great day at #AISafetySummit @SciTechgovuk at #BletchleyPark (yes, where A. Turing deciphered #Enigma). Meeting the #BigTech boys @elonmusk and James Manyika #Alphabet @Google. Discussing #AIGovernance and international cooperation.” Carme Artigas (@carmeartigas), November 1, 2023
Question. What is your role on the new U.N. advisory council?
Answer. Our mission is to analyze the opportunities, risks, and impact that AI represents for society and design an international governance mechanism for the technology. It is a wake-up call to the importance of regulating AI, which has to be coordinated internationally, and the global south must be involved. By December we have to have drawn up some initial conclusions, which we will present in New York. I think it is a unique opportunity to influence the process from the vision of technological humanism that we have in Spain and from the European approach, which is based on the protection of rights.
Q. What do you think this AI governance mechanism should be?
A. There are many models, such as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change [IPCC], in which you first reach scientific consensus and then try to get a response from individual states. You have to think that there are many different points of view regarding AI. The vision that China has is not the same as that of the United States, and it is not the same as Europe’s. Right now there are many international forums on this topic. It is time to bring them together and join forces with the industry.
Q. The U.N. has given a lot of importance to the climate crisis, yet the problem is far from solved. Why should it be any different with AI?
A. I’ve just had a conversation with the co-chair about how to ensure that our work does not remain a series of theoretical recommendations. We are going to try to learn from best practices. The IPCC is a good starting point, but it needs improvement. We are looking for a mechanism that is not as complex to maintain as an international agency and that keeps in contact with the [experts], because AI is evolving very quickly.
Q. How can a coordinated response from countries and industry be proposed if their interests often collide?
A. The United States had its model of unlimited support for the development of its industry. China also has one. The European way was the third way. We believe that technological progress cannot take away fundamental rights. At first no one paid attention to us, but the emergence of ChatGPT has made other regions, and the industry itself, begin to realize that if this technology falls into the wrong hands, it could have very negative effects. The U.S. has just announced an executive order. In Europe, we think that we have to go further. We want the technology itself to be fairer and more transparent. The Chinese, contrary to what one might think, are very concerned about setting limits on deepfakes. Some countries did not become aware until 2023. Others among us have been working on these issues since 2020. For this reason alone, now is the time to bring those visions together.
Q. Is regulating AI the way to address this problem?
A. We are convinced that it is. No one else is proposing an approach like ours, based on the impact of this technology on fundamental rights and on the prohibited uses of AI. Rather than putting in place a regulation or a legal or technical standard, we are developing a moral standard. We are telling the world what is and what is not going to be acceptable with AI. For example, I have to know whether something has been generated by AI.
Q. Have you already agreed which cases will be prohibited and which will be high risk?
A. That is perhaps the most important point of the debate we are having. We must decide what requirements we ask of high-risk systems [those that are allowed but tightly restricted] and what transparency requirements we ask of foundation models [on which generative AI, such as ChatGPT, relies]. The European Parliament wanted to extend the standard’s controls to generative AI too, which was not initially planned. We do not regulate technologies, but how they are used. A hammer can drive a nail or kill someone. What we have to ensure is that if the latter happens, the person responsible goes to jail.
Q. As holder of the EU presidency, Spain is leading the negotiations between the Commission, Council, and Parliament. Are they working well together?
A. We are bringing many positions closer together. We have defined very clearly what high-risk systems are, and we have begun working on the debate over prohibited uses. Parliament put forward a list that we see as excessive and that in some cases could go against the national security of the states. We are now defining what extraordinary guarantees we can establish so that a state cannot abuse its power through the use of these technologies. We have to set limits, but at the same time encourage and not stifle innovation. Striking that balance is difficult, but I believe that we will be able to approve the regulation during the Spanish presidency.
Q. What obstacles are there right now, beyond the prohibited uses?
A. There are a series of prohibited uses on which there is consensus, such as the scraping [mass extraction] of images taken by surveillance cameras, or social credit scoring systems. We are seeing whether we can find the right balance on biometric recognition, with exceptions for some cases, such as its use in the investigation of certain serious crimes. Nobody wants these techniques to be abused by the police or governments. If we allow this, we have to offer extra control guarantees. This is now the center of the debate, but we cannot reveal what solution we are looking for. Work is being done at a legal, technical, and political level. The stakes are very high because no regulation like this has ever been attempted anywhere in the world.
Q. Technology changes very quickly, as seen with the emergence of ChatGPT. How will those involved ensure that the regulation does not become obsolete?
A. The key to this regulation will be that it stands the test of time. For that, it must be possible to update it. For example, we are looking at how the list of AI use cases that carry a risk can be updated easily. There will be a European coordination mechanism of national artificial intelligence agencies to ensure that this happens.
Q. Are you still against a moratorium on AI research?
A. The letter [in which hundreds of experts requested a six-month pause in research on this technology in March] was a wake-up call from the scientific community: be careful, we are developing something without knowing its real impact. It placed the irreparable damage in the very long term, while we believe we are already suffering it: fake news, identity theft through deepfakes, the possibility of bias and discrimination. Before talking about the existential problems of mankind, we must address what is happening today. It is impossible to stop innovation. What you need to do is make sure that progress is going in the right direction. Innovation must accelerate so that the industry itself finds a way to solve the problems it has created.