Ibán García del Blanco, European artificial intelligence law negotiator: ‘We made them sit up and take notice’
The Spaniard participated in the final negotiations of the European Union’s pioneering AI Law that will regulate these new technologies
Ibán García del Blanco can already say that he has made history in Europe. The member of the European Parliament was the only Spaniard who participated in the marathon closed-door negotiations in December, which, at almost 37 hours, were the longest meeting of this type in the history of the EU, and which allowed the world’s first comprehensive artificial intelligence law to be agreed. It is a regulation that seeks above all to ensure that foundation models of generative artificial intelligence (AI) that may pose a systemic risk do not violate fundamental rights. It was worth the effort. “It was now or never,” he emphasizes in an interview with EL PAÍS in Brussels, where the final wording of the law is now being overseen. It must be ready and translated into the 24 official languages of the EU in the coming weeks so that the European Parliament and member states can ratify it before the legislative body is dissolved ahead of the European elections in June.
Question. Why such a rush to close the AI law?
Answer. From the perspective of defending our rights, principles, and values, it was important to have the regulations as soon as possible. There was a moral risk in not having them. We need protections in place for situations where the use of this type of technology can leave people especially vulnerable. And we knew that time was running out, and that, hypothetically, it could only be approved during the Belgian presidency [in the first half of 2024]. Furthermore, the Spanish presidency had been preparing this for a long time. It had accumulated an expertise that in practice was impossible to reproduce with any other negotiator. On top of that, there is the international prestige of the EU: if we had made a mistake, we would have made fools of ourselves internationally. In the same way that I believe we made the world sit up and take notice by saying, “This is Europe,” I believe that if we had not produced this regulation, after the expectations we had raised, and with many regions of the world, including the United States, looking to us on how to regulate, we would have been a laughingstock. And we would probably have called into question the EU’s own internal democratic model. So we all knew it was now or never.
Q. There are still those who argue that the sector can regulate itself.
A. We have the accumulated experience of what has happened in the technology sector over the last two decades. Not setting standards and expecting the sector to regulate itself is somewhat naive and produces enormous imbalances. We have a lot of evidence and, in fact, we are trying to claw back some ground with laws such as the Digital Services Act and the Digital Markets Act. In the case of AI, we are faced with a subject that carries an intrinsic risk unlike any we have known until now. We needed additional tools.
Q. The EU prides itself on being a pioneer in legislating AI. Aren't you worried about getting something wrong, given that you are also legislating for the future, for things that don't even exist yet?
A. It is true that it is the only legal text that addresses the subject from an absolute, general, horizontal, and complete perspective, or at least it aspires to do so. And for someone who, like me, comes from the world of law, where legal certainty is the paramount rule, it is a challenge to think about a regulation that by its nature will have to be flexible and adaptable to new realities. It is, in itself, anathema. But we have to adjust our minds to the circumstances we are in. I do not think that AI will be the only subject requiring regulation that can adapt to new realities or mutations. That is why it has been very intelligent to approach regulation not from the perspective of the technology itself, but from the perspective of its use, because that does allow us to establish general rules that endure over time. Furthermore, it is a great competitive advantage if our values also carry weight in the regulations of other countries and, of course, in the second step on which we now have to focus, which is establishing an international, collective framework.
Q. 2024 is a super-electoral year, with almost half the planet going to the polls. And AI is singled out as one of the risk factors for misinformation and the manipulation of public opinion. Has this law not come too late, given that it will only be fully applied at the end of 2026?
A. As a rule, the law comes after the reality it regulates. This is not always the case: sometimes it goes ahead and itself creates social realities, as happened in Spain with the same-sex marriage law, but in general it lags behind. With a technology that moves so fast, it is almost inevitable that we follow behind, trying to close the gaps that open along the way. But what happened with the dotcom phenomenon or the large content-generating platforms is not going to happen to us again. There was practically zero control over them: they paid practically no taxes, earned enormous amounts, and were not subject to even a minimum level of regulatory requirements. That is not going to happen to us again.
“Pretending that they regulate themselves is a bit naive and produces enormous imbalances.”
Q. Have we learned anything then?
A. I would say yes. I would say that the world is also aware that we need standards. Far from creating imbalances because a certain region chooses not to regulate, I believe that the reality of international politics at this time indicates that very soon we are going to have regulations that are very similar to the European ones and, above all, there will be an international framework of minimum requirements that are very similar to the values that we are trying to protect here in Europe.
Q. The New York Times has sued OpenAI and Microsoft over copyright issues, which is one of the keys to European law. Do you feel vindicated?
A. Intellectual property law in the United States and the United Kingdom is less protective; it leaves more to the discretion of the courts. What we wanted was to give those rights to the copyright holders, who are basically the ones who generate wealth and stimulate creativity. The law provides them with the certainty of knowing whether their content has been used without authorization. In that sense, it is pioneering and will probably avoid a lot of litigation in the future, or it will greatly facilitate the work of the courts by identifying exactly which content has been infringed.
Q. Some say that with so much regulation, Europe may lose the AI race to the U.S. or China.
A. If state intervention or regulation were a decisive factor in technological development, then we would not have artificial intelligence in China, and as it happens, right now they are investing in and developing models at between 15 and 20 times the level of the EU. Secondly, we are far behind the United States and China, and some other places, but fundamentally the United States and China, [where] there is no law. That is to say, regulation does not seem to have been the decisive element in whether or not there is technological development. I believe this fundamentally depends on us providing the resources needed to develop our own models, and on us also being able (and this is also a message to the member states) to collaborate and cooperate, because individually we do not have the muscle needed to compete abroad. And in the meantime, we will have a regulatory scheme that protects our own principles and rights, and at the same time shapes the market more in accordance with our own interests.