The harm caused by AI: From suicide to deepfake pornography
Existing legislation inadequately addresses the potential harm that artificial intelligence can cause to individuals
Fourteen-year-old Sewell Setzer took his own life last February after developing a romantic attachment to a character generated by artificial intelligence on the Character.AI platform, according to a lawsuit filed by his family against the company. The late Paul Mohney never saw combat, nor did Olena Zelenska, the wife of Ukraine’s president, buy a rare Bugatti Tourbillon sports car.
False information generated by artificial intelligence (AI) has been disseminated with the aim of profiting from advertisements on obituary pages or furthering Russian propaganda.
In Edinburgh, a school cleaner, a single mother of two, lost her benefits due to biases in the AI system, like many women in similar situations. A customer of a payment platform was erroneously alerted by the algorithm about a transaction that never occurred. A lawsuit is challenging the safety of a vehicle due to an alleged programming error, and thousands of users have had their data used without consent.
At the end of the AI chain are real people, yet the responsibility for the harm caused remains unclear. “We face an alarming legislative vacuum,” warns Cecilia Danesi, author of Consumer Rights at the Crossroads of Artificial Intelligence.
Profiting from the deaths of strangers
Making money off the deaths of strangers has become easy and inexpensive with artificial intelligence, even if it comes at the expense of spreading falsehoods that heighten the grief of the deceased’s relatives. This practice occurs on obituary pages, where AI generates information about the deceased using both real and fabricated details — such as Mohney’s alleged military history — to drive traffic and generate advertising revenue.
“There’s a whole new strategy in search rankings,” SEO expert Chris Silver Smith told Fast Company. “It’s based off of getting this information that someone has died, and seeing that there’s a little spike in traffic, perhaps in [a specific region], for that person’s name, and rapidly optimizing and publishing articles about the person to get these dribbles of search traffic.”
Misinformation and porn fakes
The website AI Incidents reports on dozens of alerts each month regarding incidents generated by artificial intelligence or cases of abuse. It has already identified over 800 complaints. Among its latest records are false reports about the attempted assassination of Donald Trump, misinformation concerning Democratic presidential candidate Kamala Harris, and realistic deepfake pornography involving British politicians.
Concerns about the impact of these creations and their potential to go viral in democratic processes are growing; a survey conducted for the European Tech Insights Report 2024 by the Center for the Governance of Change (CGC) found that 31% of Europeans believe AI has already influenced their voting behavior.
“Citizens are increasingly concerned about the role of AI in elections. And while there is still no clear evidence that it has caused substantial alterations in election results, the emergence of AI has increased concerns about disinformation and deepfake technology around the world,” said Carlos Luca de Tena, executive director of CGC.
“When it comes to creating a fake video or image using generative AI, it’s clear that AI serves as a medium — a tool — so the responsibility lies with the creator,” explained Danesi. “The main issue is that, in most cases, it is impossible to identify the creator. For instance, the case of porn fakes [AI-generated images with pornographic content] directly impacts the gender gap, as platforms often incentivize their use with images of women. The increased volume of such images leads to greater accuracy in mimicking women’s bodies, and the result is the greater marginalization and stigmatization of women. Therefore, in the era of misinformation and cancel culture, education is extremely important. As users, it is imperative that we double-check the content we encounter and verify it before engaging with it.”
Danesi — a member of UNESCO’s Women4Ethical AI and co-author of the report presented at the G20 Brazil on algorithmic audits — is also concerned about the effects of disinformation: “An algorithm can play a dual role: one in the creation of fake news through generative AI and another in amplifying false content via search engines or social media algorithms that make it go viral. In this latter case, it is clear that we cannot expect platforms to verify every piece of content published; it is simply not feasible.”
Automatic discrimination
Another concern about the misuse of AI is bias: in a Scottish benefits system, algorithmic bias adversely affected single-parent families, 90% of which are headed by women. “While the AI Act includes several provisions aimed at preventing bias [especially concerning the requirements that high-risk systems must meet], its lack of regulation regarding civil liability fails to provide victims with the means to receive compensation. The same applies to the Digital Services Act, which imposes certain transparency obligations on digital platforms,” explains Danesi.
Defective products
The AI Incidents page features an open court case regarding a potential defect in a vehicle’s programming that may affect safety. In this context, Danesi explains: “Regarding the reform of the Directive on Defective Products, it remains incomplete. The problem lies in the types of damages that can be claimed under the law, as it does not encompass moral damages, for instance. Attacks on privacy or instances of discrimination are excluded from the protections offered by the Directive.”
According to Danesi, these cases highlight the urgent need for legal reforms concerning civil liability in light of AI advancements. “Consumers are highly exposed to the potential damage that AI can cause. Without clear rules on how to proceed in the face of such damage, individuals are left unprotected. But clear civil liability rules provide legal certainty, promote innovation, and facilitate agreements in the event of harm,” says the researcher, adding that these rules also allow companies to make more informed investment decisions.
Danesi notes that the European Union is discussing initiatives aimed at addressing these issues, including the Artificial Intelligence Act, the Digital Services Act — which establishes measures affecting the algorithms of digital platforms, social networks, and search engines — the proposed AI Liability Directive, and a reform to the Product Liability Directive.
“This Directive had become obsolete. There was even a debate about whether it was applicable to AI systems, since the definition of a product was based on something physical rather than digital. The amendment extends the concept of a product to include digitally manufactured files and computer programs. The focus of the regulation is on individual protection, making it irrelevant whether the damage originates from a physical or digital product,” she explains.