Mothers unite against AI-generated nude photos
Tech tycoons want regulation of artificial intelligence because it threatens the future of humanity, but some women just want to protect their daughters
French philosopher Paul Virilio believed that technology cannot exist without the potential for accidents, arguing that the invention of the locomotive also contained the invention of derailment. As politicians, technocrats, and the media ponder the concept of artificial intelligence (AI), a train has derailed in a small Spanish town. While everyone speculated about the end of the world brought on by the ever-growing technology, the lives of several girls from Almendralejo (southwest Spain) were under attack by quickly spreading AI-generated fake nudes.
The trainwreck happened on the same day that Elon Musk (X), Sam Altman (OpenAI), Mark Zuckerberg (Meta), Satya Nadella (Microsoft) and Sundar Pichai (Google) testified before the U.S. Senate about regulating AI. While tech royalty theorized with politicians, the real and painful consequences of AI were being felt in WhatsApp groups around a small Spanish town, and maybe beyond.
In recent months, the developers and promoters of this technology have expressed concerns about the potential dangers posed by thinking machines. Their open letters and manifestos talk about uncontrollable “powerful minds,” profound changes in the history of life on Earth, and risks comparable to nuclear wars. However, these lofty proclamations said nothing about involuntary sexual objectification of women. Geoffrey Hinton, one of the pioneers in this field, even left his job at Google to focus on raising awareness about the abstract risks of AI, but then downplayed the more immediate and tangible dangers.
However, the issue at hand is not insignificant, nor is it a recent development. In 2019, Barack Obama cautioned about the threat to democracy from deepfakes, manipulated recreations of real individuals. Their popularity surged with a fake video of Obama insulting Donald Trump. That same year, the developers of the DeepNude app chose to pull it from the market, despite the flood of cash it brought in from processing hundreds of thousands of images. “The probability of people misusing it is too high… We don’t want to make money this way.” It was a repentance compelled by the media, which had criticized DeepNude as a “horrifying app that undresses a photo of any woman with a single click.”
It is difficult to believe that the creator of DeepNude didn’t anticipate the tool being used to harm women. According to Sensity AI’s research in 2019, about 90-95% of deepfakes are non-consensual porn, with 90% targeting women. These figures are based on data from that year, and the technology has evolved significantly since then, with more realistic manipulations from generative AI models like Dall-E, Midjourney, and Stable Diffusion.
Stability AI, the company behind Stable Diffusion, is committed to open source, allowing the model’s basic architecture to be freely reused. That openness has done little to prevent the harmful use of deepfake technology to sexualize women without their consent: the internet is riddled with forums and channels that discuss exploiting these capabilities to create non-consensual pornographic content. Emad Mostaque, the company’s CEO, addressed these concerns in an interview with TechCrunch. “A percentage of people are simply unpleasant and weird, but that’s humanity… Indeed, it is our belief this technology will be prevalent, and the paternalistic and somewhat condescending attitude of many AI aficionados is misguided in not trusting society.”
To defend itself against criticism, Stability AI tweeted, “Don’t generate anything you’d be ashamed to show your mother.” Mostaque criticizes paternalism on one hand, but then urges mothers to safeguard their daughters, burdening them with all the responsibility. It is unsurprising that, in survey after survey, women tend to be more apprehensive about new technologies than men.
But these tools depend on the technological muscle of the big guys. Google (Pichai), Amazon (Jeff Bezos), X (Musk) and Microsoft (Nadella) have tools and platforms that are “supercharging AI deepfake porn,” according to Bloomberg. “Google, for instance, is the main traffic driver to widely used deepfake sites, while users of X, formerly known as Twitter, regularly circulate deepfaked content. Amazon, Cloudflare, and Microsoft’s GitHub provide crucial hosting services for these sites.”
Recently, there has been another concerning incident: some Real Madrid youth soccer players shared non-consensual sexual videos of at least two girls. The people who spread deepfakes in Almendralejo and these aspiring soccer players have a couple of things in common — objectifying girls they know, and their youth. It’s often assumed that kids this age are tech whizzes because they are “digital natives,” a term the media has ardently embraced. The truth, however, is that we all need to learn how to use these tools — no one is born knowing how to message somebody on WhatsApp, photoshop an image, or pornify content.
This generation has access to technologies that their parents may not even be aware of. It is crucial to teach them how to use these technologies — not just the technical skills, like editing photos, but also awareness of toxic behaviors and the importance of respect. These kids are growing up surrounded by technology, but that does not make them digital natives; they need guidance in using it responsibly, grounded in values, humanity, and respect for women. And the education cannot stop with the kids who share inappropriate images — it must also reach those who develop these apps and profit from them, because they too must be aware of the consequences and take responsibility.
Sign up for our weekly newsletter to get more English-language news coverage from EL PAÍS USA Edition