The EU favors self-regulation in new AI law

France, Italy and Germany believe that overly strict legislation will hinder innovation in artificial intelligence

OpenAI and other tech companies will be affected by new EU regulations on artificial intelligence. Peter Morgan (AP)

Who bears the responsibility for overseeing the risks associated with artificial intelligence (AI), particularly foundation models like the one behind ChatGPT? The debate over a new European Union (EU) law on this revolutionary and highly disruptive technology is leaning towards self-regulation. Documents reviewed by EL PAÍS show that Spain, which holds the rotating presidency of the Council of the EU, supports a self-regulatory approach with limited obligations. Several Member States advocate implementing codes of conduct for companies, along with intermediate layers of supervision. The European Parliament (EP) seeks a stronger framework, while influential countries like France, Italy and Germany argue that strict oversight could hinder innovation in European research and companies. European regulators are also trailing behind the United States, where a recent executive order already requires technology companies to promptly report advances that pose significant risks to national security.

Spain, which will hand over the Council presidency to Belgium at the end of December, has prioritized finalizing the AI law and has proposed a set of codes of conduct for the foundation models (those trained on a broad corpus of unlabeled data that can be adapted to many tasks) that pose the greatest risk. The proposed text calls these “foundation models with systemic risk”: models with high-impact capabilities that may pose systemic risks at the EU level, since their results may not be fully known or understood during development and release. The proposal also seeks to require codes of conduct that include both “internal measures” and active dialogue with the European Commission to “identify potential systemic risks, develop possible mitigating measures and ensure an adequate level of cybersecurity protection.”

The codes of conduct would include transparency obligations for foundation models, as well as energy consumption reporting. The law could also empower the European Commission to adopt secondary legislation on systemic-risk models, specifying technical elements and keeping benchmarks up to date, which leaves the door open to further regulation. Spain’s proposal additionally recommends establishing a supervisory agency for artificial intelligence. This new body would enhance security by providing a centralized monitoring system, and it could satisfy the EP’s demand for a specialized oversight entity.

Representatives of the Council of the EU, the European Parliament and the European Commission will meet on December 6 to debate and finalize the AI law. It is a crucial meeting for agreeing on the general architecture of a law that maintains a “technologically neutral” approach, regulating the end uses of AI through the pyramid of risk categories proposed by the European Parliament.

“The European Union aims to be the first region in the world to establish legislation regarding AI. This legislation will cover its uses, limitations, protection of citizens’ fundamental rights, and participation in governance. At the same time, it will ensure the competitiveness of our companies,” said Carme Artigas, Spain’s Secretary of State for Digitalization and Artificial Intelligence. Artigas believes the EU should go beyond codes of conduct and self-regulation models for high-risk uses like disinformation, discrimination, manipulation, surveillance and deepfakes, supporting innovation and advancement while addressing these challenges. “The European AI regulation is not only a legal and technical standard but also a moral standard,” she said.

However, two crucial points remain unresolved ahead of the December 6 negotiations. The first pertains to biometric surveillance systems; the second concerns the control of highly unpredictable foundation models with “systemic risk.” These two concerns were behind the recent OpenAI drama that saw CEO Sam Altman fired and reinstated within four days: OpenAI researchers had reportedly alerted the board to a groundbreaking AI discovery with potential threats to humanity, a warning said to have contributed to Altman’s dismissal.

A few weeks ago, Germany, France and Italy came out in favor of broad self-regulation for companies developing AI systems, proposing mandatory self-regulation through codes of conduct. In a position paper sent to EU Member States, they advocated self-regulation for general-purpose AI, emphasizing a balanced, pro-innovation approach based on the risks posed by AI, and argued against unnecessary administrative burdens that could hinder Europe’s ability to innovate. The confidential document also revealed that they initially favor waiving sanctions for non-compliance, relying instead on transparent codes of conduct and active dialogue between regulators and AI companies.

However, the approach espoused by Germany, France and Italy has been opposed by other Member States and various experts who want an undiluted law, calling for fewer codes of conduct and more rules. “Self-regulation is not enough,” said Leonardo Cervera Navas, secretary general of the European Data Protection Supervisor (EDPS). Cervera has vocally supported the idea of placing a future European AI oversight entity under the authority of the EDPS. This entity, says Cervera, could act as a bridge between those who favor self-regulation and those who want clearly defined legal obligations. It would enable a significant level of self-regulation, but with oversight from an independent legal authority, safeguarding the interests of companies. Cervera believes the ideal is “a flexible regulatory approach that strikes a balance between agility and strong oversight while avoiding excessive dogmatism.”

This is also the stance of the EP negotiators, who emphasize the need for a comprehensive law to ensure citizens’ security and protect their fundamental rights in the face of potentially intrusive technologies, some of which we may not yet fully comprehend. “The Council needs to drop the idea of relying solely on voluntary commitments made by developers of the most powerful models. We need explicit obligations clearly stated in the text,” said Brando Benifei, an Italian EP legislator and one of the negotiators in the inter-institutional talks on AI regulation. EP legislators say certain indispensable obligations should be legally established, such as data governance, cybersecurity measures and energy efficiency standards. Benifei cautioned against capitulating on these issues just to reach a quick agreement.

Every party in the negotiations seems to agree on the need to prohibit or restrict what the EP calls “intrusive and discriminatory uses of AI.” This includes real-time biometric surveillance systems in public spaces, with only a few exceptions for security purposes. The EP’s position is stricter than the one adopted by the Council, but there is cautious optimism about finding a middle ground. Still, the EP is refusing to bend on bans on predictive policing, biometric surveillance in public places, and emotion recognition systems in workplaces and educational settings. “We must adequately safeguard fundamental rights while imposing necessary restrictions when [these technologies] are used for security and surveillance purposes,” said Benifei.
