Deepfake porn users have no qualms about viewing it but would report it if the victim were from their social circle
Consumers of fake images sexualizing celebrities claim that they engage in this form of sexist aggression out of curiosity, attraction to the celebrities, and the desire to visualize a fantasy
She was not yet Italian prime minister, but she was already well-known when, four years ago, a fake pornographic video featuring Giorgia Meloni’s face on another woman’s body was published. On July 2 she will testify in a lawsuit against those involved: a 40-year-old man, who produced the images, and his 73-year-old father, who provided the phone line used to post them. Meloni is demanding €100,000 from them as an “exemplary symbolic measure” that “contributes to the protection of women targeted by this type of crime,” lawyer Maria Giulia Marongiu says. Deepfakes, hyperrealistic fake audiovisual materials, have been doubling in number every year since the first complaint over non-consensual nudity was filed in 2017, and little has changed since then. An investigation by Home Security Heroes (HSH) confirms a situation that had already been identified: 98% of deepfakes are pornography, and 99 out of 100 victims are women, almost all of them well-known.
The most radical change has been technological. Whereas creating deepfakes once required computer and image-editing skills, one in three of the tools now available allows fakes to be made in less than 25 minutes at no cost. Google, which, as the leading search engine, serves as an indicator, has removed 8 billion links, according to its latest transparency report. Thousands of them are deepfake pages, concentrated in two portals, according to Harvard University’s Lumen database. Forced by new laws, technology companies are beginning to act.
The accessibility of the tools (60% online and 40% downloadable) is coupled with the motivations of abusers, who convince themselves that they are acting only out of curiosity, attraction to celebrities (as in the case of singer Taylor Swift), or the desire to visualize a fantasy, according to HSH. This naive self-perception leads 74% of users to say that they do not feel guilty about viewing such images (according to a survey of 1,522 male participants).
But this alleged naiveté is as false as the material they consume. “It’s a problem of male violence,” Adam Dodge, the founder of EndTAB, a nonprofit dedicated to education on technology-enabled abuse, tells MIT Technology Review. The EU directive on combating violence against women includes these creations as a form of aggression.
According to the HSH study, the perception of this aggression is so clear that the vast majority of deepfake users would report the images if the victim were someone close to them (73%) and would be “shocked and outraged” (68%) by the violation of their privacy, responses that lay bare the viewers’ hypocrisy.
The rise in non-consensual nudity has occurred despite laws that condemn these practices and protect victims against content creators’ claims of freedom of expression. In the United States, most claims are based on the Digital Millennium Copyright Act (DMCA) of 1998.
“The moment you take the real image of a person but modify it regardless of intent, there is an instrumental conduct that consists of treating their image without consent for an unlawful purpose,” explains Ricard Martínez, the director of the Department of Privacy and Digital Transformation at the University of Valencia (Spain). He qualifies that it is “another thing [when] a humorist generates an image satirically in a clear context.”
But these regulations have proved insufficient. That is why, in November 2022, Europe approved the Digital Services Act and Digital Markets Act to ensure that “the fundamental rights of all users are protected” and to “establish a level playing field [for companies]” (they went into effect last May). These regulations require large companies to collaborate in the risk assessment, identification, notification and removal of suspicious links.
“There are two important subjects: the party who offers the tool, who will always say that its application was not intended to commit a crime, and the party who disseminates the creation, the one who acts as a loudspeaker. The law imposes stricter collaboration responsibilities on the latter,” Martínez adds.
Google recognizes the new responsibilities and, in a terse written response to the increase in complaints, states: “We have policies for nonconsensual deepfake pornography, so people can have this type of content that includes their likeness removed from search results. And we’re actively developing additional safeguards to help people who are affected.” Moreover, the company says it has a removal process that allows “rights holders” to protect their work on the internet.
The situation with Meta is similar. On February 6, President of Global Affairs Nick Clegg announced: “It’s important that we help people know when photorealistic content they’re seeing has been created using AI. We do that by applying ‘Imagined with AI’ labels to photorealistic images created using our Meta AI feature, but we want to be able to do this with content created with other companies’ tools too.” He was referring to Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, as those companies implement plans to add metadata to images created with their tools.
Thus, the big tech companies are joining the legal crusade against deepfakes, spurred by the recent approval of the European artificial intelligence law, which makes it mandatory to unequivocally label content created with this technology. The U.S. government is also moving in this direction. “It can no longer be argued that the use of the system or its results respond to the exercise of freedom of expression and freedom of creation,” Martínez says, celebrating these developments.
“The concern is common, and we are beginning to see a confluence of interests from two different legal cultures. The message being sent to these companies is that not everything is permissible, that they can’t wash their hands and say, ‘hey, I’m just a platform and I can’t be responsible for everything.’ Information society service providers have a decisive influence on the virality of the content displayed. They are not neutral operators or a mere conduit. They are part of the operation, of the game,” concludes Ricard Martínez.