
From autonomous weapons and nukes to robots and AI: how do we regulate technologies that have unforeseeable consequences?

Given the advances in AI, experts in digital rights want to start supervising and improving algorithms, to prevent new technologies from violating human rights

Raúl Limón
The world's first AI sculpture, "The Impossible Statue," is displayed at the Tekniska museum in Stockholm on June 8, 2023. JONATHAN NACKSTRAND (AFP)

This isn’t the first time that we’ve faced a technological development that has unforeseeable consequences for humanity’s existence. In Runaround – a story published in 1942 – writer Isaac Asimov proposed three rules to protect people from robots… and these are still being used as reference material today.

The International Atomic Energy Agency was created in 1957 “in response to the deep fears and expectations generated by the discoveries and diverse uses of nuclear technology,” according to the organization itself. International humanitarian law has spent years seeking an effective regulation of lethal autonomous weapons systems (LAWS), which can attack without requiring manual human control. Now, Europe has started drafting the first regulations in the world on artificial intelligence (AI). While the developments in the field of AI are capable of accelerating progress in fundamental fields, such as health or energy, they can also threaten democracy, bolster discrimination, or break privacy laws.

“Sowing unfounded panic doesn’t help – on the contrary. Artificial intelligence will continue to function and we must improve it and prevent [negative consequences],” emphasizes Cecilia Danesi, a lawyer specializing in AI and digital rights. She has written the book The Empire of Algorithms: AI that is Inclusive, Ethical, and in the Service of Humanity.

The first thing to understand is what exactly an algorithm is, since algorithms are the basis of artificial intelligence. Danesi – a researcher at the Institute of European Studies and Human Rights – defines it in her book, a fundamental compendium for understanding the scenario facing humanity, as a "methodical set of steps that can be used to make calculations, solve problems, and reach conclusions." The algorithm isn't the calculation – rather, it's the method. The method that can enable a model to identify a cancerous tumor in images, discover a new molecule with pharmacological uses, make an industrial process more efficient, develop a new treatment, or – on the other hand – generate discrimination, false information, a humiliating image, or an unfair situation.
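To make that definition concrete, below is a minimal sketch in Python (our choice of language, purely for illustration) of one of the oldest recorded algorithms: Euclid's method for finding the greatest common divisor of two numbers. The method is a fixed set of steps; only the data changes.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a methodical set of steps, not a single calculation.

    One simple rule is repeated until a stopping condition is reached;
    the same method works for any pair of positive integers.
    """
    while b != 0:           # Step: while a remainder is left...
        a, b = b, a % b     # ...replace the pair with (divisor, remainder).
    return a                # Stop: b == 0, so a holds the answer.


print(gcd(48, 18))  # prints 6
```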

OpenAI CEO Sam Altman, Turing Award winner Geoffrey Hinton, AI researcher Yoshua Bengio and Tesla CEO Elon Musk – among others – have called for regulation and urgent action to address the "existential risks" that AI poses to humanity. These include the amplification of misinformation, such as the false and malicious content that proliferates on social media platforms; biases that reinforce inequalities, such as the Chinese social credit system, or people being mechanically flagged as potential risks because of their ethnicity; and the end of privacy, with data being harvested to feed hidden algorithms.

The EU has begun to debate what is being called the first AI law in the world. It could be approved in the near future: its objective is to prevent uses of AI that result in what are considered to be “unacceptable risks,” such as indiscriminate facial recognition or the manipulation of people’s behavior. AI could be heavily regulated in critical sectors such as health and education, while sanctions and sales bans could be imposed on systems and firms that don’t comply with the legislation. UNESCO has developed a voluntary ethical framework… but this voluntary nature is precisely its main weakness. China and Russia – two countries that use this technology for mass surveillance of populations – have signed on to these principles.

“There are fundamental rights involved and it’s an issue that we have to tackle and worry about, certainly… but with balance,” Danesi cautions. Juhan Lepassaar – executive director of the EU Agency for Cybersecurity (ENISA) – is of the same opinion: “If we want to secure AI systems and also guarantee privacy, we must analyze how these systems work. ENISA is studying the technical complexity of AI to better mitigate cybersecurity risks. We also need to find the right balance between safety and system performance.”

One of the societal risks that has already manifested itself is the replacement of people by AI-operated machines. "The machines are going to replace us – and they are already doing so," Danesi affirms. "Many of them replace us, enhance our work, or complement us. The issue is in which [jobs] and [spaces] we want to be replaced… and what requirements these machines have to meet to make certain decisions. We first have to identify a problem or a need that justifies using or not using AI."

In the field of robotics, Asimov already anticipated this problem and established three principles: 1) A robot will not harm a human being or allow them to suffer harm through inaction; 2) A robot will obey the orders it receives from a human being, unless the orders conflict with the first law; and 3) A robot will protect its own existence, to the extent that such protection does not conflict with the first and second laws.
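Read as engineering rather than fiction, Asimov's laws amount to a strict priority ordering: a lower law can never override a higher one. The hypothetical Python sketch below makes that ordering explicit. The Action fields, and the world model that would have to fill them in truthfully, are invented for this illustration; filling them in reliably is exactly where, as Danesi notes next, practice diverges from the page.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical candidate action, as scored by some assumed world model."""
    name: str
    harms_human: bool      # First Law: would this action injure a person?
    obeys_order: bool      # Second Law: does it follow a human's order?
    preserves_robot: bool  # Third Law: does it keep the robot intact?

def choose_action(candidates: list[Action]) -> Action:
    """Pick an action under the three laws, read as a strict priority ordering.

    Sorting lexicographically means a lower-priority law can never override
    a higher one, mirroring the 'unless it conflicts' clauses.
    """
    return min(
        candidates,
        key=lambda a: (a.harms_human, not a.obeys_order, not a.preserves_robot),
    )

# Example: obeying the order would harm a human, so refusal wins.
options = [
    Action("carry_out_order", harms_human=True, obeys_order=True, preserves_robot=True),
    Action("refuse_order", harms_human=False, obeys_order=False, preserves_robot=True),
]
print(choose_action(options).name)  # refuse_order
```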

Permanent and preventative supervision

"It looks great. Done: artificial intelligence can never harm a human. Excellent. The problem is that, in practice, it's not so clear," Danesi explains. The researcher recalls "a case in which two machines were programmed to optimize a negotiation: the system understood that the best way forward was to create another, more efficient language. The program's designers couldn't understand that language, [so the machines] were disconnected. The system had been set up within the proper parameters… but AI can go beyond what is imagined." In this case, the machines didn't harm their programmers, but they excluded them from the solution and its consequences.

The key, for Danesi, is “permanent supervision [and] algorithmic audits of these high-risk systems, which can significantly affect human rights or security issues. They have to be evaluated and reviewed to verify that they don’t violate rights, that they don’t have biases. And this must be done on an ongoing basis, because systems – as they continue to learn – can acquire bias. And preventative action must be taken to avoid damage and create systems that are ethical and respectful of human rights.”
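What a single pass of such an audit might compute is easier to see with a simplified example. The Python sketch below checks a batch of logged decisions for one common bias signal: the gap in approval rates between demographic groups, sometimes called the demographic parity difference. The field names and the alert threshold are assumptions for illustration, not anything prescribed by regulation; a real audit would track many metrics and, as Danesi stresses, rerun them continuously as the system keeps learning.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Audit one batch of decisions for a simple bias signal.

    `decisions` is a list of (group, approved) pairs, e.g. taken from the
    logs of a loan or hiring model. Returns the largest gap in approval
    rates between groups: 0.0 means identical rates, 1.0 total disparity.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = [approvals[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Rerun the same check on every new batch: a model that keeps learning can
# drift, so a single pre-deployment audit is not enough.
batch = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(batch)
AUDIT_THRESHOLD = 0.2  # assumed alert level, for illustration only
if gap > AUDIT_THRESHOLD:
    print(f"ALERT: approval-rate gap {gap:.2f} exceeds threshold")
```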

Another great danger of the uncontrolled use of AI is its deployment for military purposes, a topic the proposed EU regulation excludes in its first draft. "It's one of the most dangerous uses of artificial intelligence. Oftentimes, the law prohibits something that, in practice, continues to operate and can do the most harm to people," the researcher laments.

“Should we fear machines? The answer is no! We must, where appropriate, fear people, for how they may use technology,” Danesi writes in her book.

Respect for citizens’ data

Manuel R. Torres – professor of Political Science at Pablo de Olavide University in Seville, Spain – speaks in similar terms. “The problem is the proliferation of a technology that must be prevented from ending up in the wrong hands.”

Torres mentions a flaw in the proposed EU regulations: “The conflict is in how this technology is developed in areas that don’t have any type of scruple or limitation regarding respect for the privacy of the citizens, who feed [the technology] with their data.” The political scientist mentions the case of China as an example: “[Beijing] has no problem in using its own citizens’ data to feed and improve those [AI] systems. As scrupulous as we want to be with the restrictions we put on our local [technology] developers, in the end, if this doesn’t happen globally, it’s also dangerous.”

Torres concludes: "We find ourselves in uncharted territory, where there are few references we can rely on to know how to address the problem. Additionally, there's a problem in terms of understanding the repercussions of this technology. Many of our legislators aren't exactly familiar with these developments."
