ARTIFICIAL INTELLIGENCE
Tribune

Exposed in the face of artificial intelligence

Only when we’re able to analyze the risks of a technology can we limit and manage them through regulation

A message published on Instagram regarding the dissemination of AI-generated photographs of minors in schools in Spain. ROBERTO PALOMO
Paloma Llaneza

Whenever I go into doomsday mode (becoming a kind of technological Cassandra), I think of a mansplaining memo that reveals a truth supposedly unknown to me, a woman. It reproaches me in forceful words: “The truth is that it’s not a knife that kills: a man kills.” Who could defend themselves against these words? Who dares contradict the engineers of progress?

What this memo tends to forget is that, if the instrument in question didn’t have a sharp edge, and if it weren’t available in the stores of any neighborhood, town or district, it wouldn’t be suitable for killing anyone at any time. Before the authors of the memo sneeringly tell me that “we’re not going to ban knives” or “erect gates around the countryside,” I would like to offer the following reflection (which will probably neither interest them nor change their minds).

The argument of “guns don’t kill people, people kill people” is used repeatedly by the NRA (along with its extreme interpretation of the Second Amendment) to avoid the imposition of any type of gun control. In order to continue making money, this organization is capable of blaming the country’s mental health problem (which, by the way, they’re not willing to spend a penny on) rather than recognizing that the only function of a weapon is to injure or kill. A gun isn’t useful when you’re trying to cut a steak or open a box. It’s only capable of causing thousands of deaths — 31,059 deaths in the United States in 2023 so far, according to the Gun Violence Archive, a website that counts American firearm deaths in real-time.

In Europe, since we do put gates up around the countryside (that’s how paranoid we are), the possession and use of firearms are strictly limited. This is precisely because we’re aware that they’re instruments meant for killing. What’s more, in Spain, according to the national laws that regulate weapons, an individual can only possess or carry knives that are less than four inches long and have just a single edge. Automatic and double-edged knives are prohibited; no citizen can “possess knives, machetes, and other bladed weapons that are duly classified as weapons by the competent authorities.”

Thanks to the cultural evolution of our laws, we’ve been able to prevent many people from dying, simply by limiting the availability of tools that have the capacity to kill. No one thinks of limiting the number of people capable of killing as a solution to the problem, because — if that were to occur — nobody would be left.

This same way of thinking should be applied to technology. Different types of tools are classified by their civilian and military uses, just as pharmaceutical drugs are: some can only be used in healthcare settings, under a doctor’s prescription and supervision, while others are sold over the counter. Certain areas of medicine are subject to even stricter international prohibitions, such as human cloning. When we’re able to analyze risks, we’re able to limit and manage them through regulation.

And then there’s data, social media and internet technology, things that many people have used since birth. Generations have grown up and matured around these instruments without recognizing any danger in them. After all, who would have frowned upon the evolution of the personal computer, or of the microchips that allowed man to step on the Moon? Nobody, of course. Technology is neutral, cold, dispassionate and, therefore, beneficial. At least, that’s what the tech giants would have you believe. That is, the same men who ask tech writer Douglas Rushkoff how they can protect themselves from their own robots.

Many people have been enriched by making available to eight-year-old boys online tools that teach them that sexual violence is a normal way of interacting with girls. They have created apps that let 11-year-old kids take pictures every 30 seconds and share them with billions of people. They’ve even created free babysitting services, in the form of the iPads that parents hand their toddlers.

The tech leaders are the ones to blame when, thanks to the “democratization” of AI, apps are used to turn innocent photos of young girls into child pornography via digitally generated nudes. They have given consumers total access to technologies that should never have left highly controlled environments, technologies that shouldn’t be operated by just anyone.

I could hide a snack in a nuclear briefcase and blame my dog for the extinction of humanity after he accidentally pushes a button while trying to get a hold of it. I could blame him, if I were a psychopathic billionaire, but since I’m merely a lawyer, what I’ll do instead is not leave anything lethal within my dog’s reach. I’ll work with his basic impulses, instead of blaming him for them. This is a reminder to the tech people who put out that memo: guns kill, and AI shouldn’t be accessible to teenagers raised on YouPorn.
