
Elon Musk limits Grok’s image editing features in response to surge in non-consensual sexual imagery

The owner of X responded to a flood of complaints about violations of the right to one’s own image, privacy, and honor, as well as threats of bans from numerous governments

On the right, an example of an image edited with Grok, X's AI program.

Elon Musk, owner of the X social network and the Grok artificial intelligence (AI) platform, which allowed users to create and recreate sexualized and violent images of women and girls, has decided to limit the application’s ability to generate and edit audiovisual material. This comes amid a flood of complaints about violations of the right to one’s own image, privacy, and honor, as well as threats of bans from numerous governments, including those of France and Spain. Musk announced the measure on his own social network, although he has left the application’s capabilities available to paying subscribers.

The furor first erupted when British book influencer and emerging writer Beth Eleanor posted a picture of herself on X that had become a meme. It showed her in a large library, turning to look at the camera and giving a thumbs-up. The following day, December 28, an AI consultant with 100,000 followers on X reposted the post, asking Grok to replace her clothes with a bikini.

Grok is the artificial intelligence model integrated into X. The AI carried out the request to the letter. A couple of days later, that order went viral and sparked a trend that highlights the vulnerability of thousands of women to sexist attacks enabled by the combination of social media and AI. The ability to create pornographic videos of women without their consent, as well as images simulating assaults and femicides, spread rapidly at the end of last year, outpacing the ability of regulatory bodies to prevent it.

Eleanor complained publicly on X: “How is this ok? If I wanted people to see me in a bikini, I’d post it myself not have some shit AI generate [a] version.” She didn’t report it on the platform, but others did and received an automated response stating that it didn’t violate their rules.

The AI itself explained its reasons: “The legality of AI-generated images depends on the country and how they are used. In the U.S., image rights laws protect against unauthorized commercial use, but non-commercial and transformative uses generally fall under fair use.”

The French government was the first to react, announcing on January 2 that it will file a lawsuit against Grok for creating and disseminating “sexist and sexual content,” EFE reported. In a statement, three ministries (those of Economy, AI, and Equality) accused Musk’s AI of generating and disseminating “sexist and sexual content, particularly in the form of deepfakes featuring people without their consent.” The French government has also asked the Audiovisual and Digital Communications Regulatory Authority (ARCOM) to investigate whether Grok violated its obligations under the European Union’s Digital Services Act, particularly regarding the prevention of the dissemination of illegal content.

On January 7, Spain’s Minister of Youth and Children Sira Rego formally requested in writing that the State Attorney General’s Office investigate X and its AI for alleged “crimes of disseminating material depicting sexual violence against children.” This measure joins those of other governments, primarily European, which have even proposed banning X, in addition to imposing sanctions.

At the end of last week, Musk responded. Grok’s X account now posts the following message to non-subscriber image requests: “Image generation and editing are currently limited to paying subscribers.”

The ability to create audiovisual content has therefore been restricted to paying users who are fully logged in. This decision stems from Musk’s belief that the responsibility for the content generated by his AI lies not with the platform, but with the user. “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” he posted on January 3.

Following the initial post featuring the British influencer, thousands of X users created images of women who simply posted a photo of themselves doing everyday activities. One of those affected in Spain was Paula Fraga, a criminal lawyer, who saw a photo she posted wishing everyone a happy new year turned into a source of sexual images: “It’s sexual violence and it causes a great deal of distress,” she told this newspaper. “As a public figure, I’m somewhat resilient, but it’s affecting me emotionally because it’s very unpleasant to see certain things; there are some completely degrading photos.”

Ricard Martínez, a professor at the University of Valencia and director of the Microsoft-UV Chair of Privacy and Digital Transformation, had already warned of the illegality of practices involving the unauthorized sexualization of content: “No image can be sexualized without consent, even if it is specified as a recreation. Each and every one of us has the recognized right to our own image, to privacy, and to honor. In digital environments, we must understand this from a practical and material point of view. Using someone’s image without permission to sexualize it constitutes illegal behavior.”

The current trend focused on images featuring bikinis, but Grok creates all sorts of variations, from sexualized poses to bodies covered in anything from micro bikinis to burkinis. The AI is trained not to show nudity or intimate parts. There are, however, other apps that do this, and the results can be uploaded to X, which allows pornography. As with any internet trend, some have taken advantage of it to make their own activity go viral, including women with accounts on OnlyFans or other adult content platforms. X’s algorithm shows more content of this type when it detects that it’s viral, regardless of whether that content is insulting or degrading.

Musk himself had initially joked about the trend and created an image of himself in a bikini inspired by a famous meme of actor Ben Affleck: “He is mocking and normalizing and trivializing it, while there are women who are very upset,” said Fraga.

The debate now centers on how to prevent photos uploaded to X from becoming material for AI, especially photos of women, but men are also being sexualized. Grok itself offers ways to prevent these images from being used, but it’s unrealistic to think they will work on a large scale for all users. There’s also the problem of placing the burden of responsibility on the victims: “They tell me not to upload the photos. It’s outrageous that we have to be the ones taking precautions instead of kicking these scumbags off the networks,” says Fraga. Spanish actress Sara Sálamo also denounced it with a post on X’s own network: “With AI and zero scruples, they can sexualize you again without your consent. Modify your image. Your body. Your expression. Turn you back into an object.”

In these cases, the remedy lies in the platform itself partially limiting this type of activity, especially when it’s about to become the main topic of online debate. Some of the photos created by Grok have been removed, although the specific reason is unclear. Fraga, for example, has reported all photos of her using every option the platform offers, including “report illegal content in the EU,” but X has only deleted one of the photos, which showed her in a particularly degrading sexual position.

In addition to violating privacy, honor, and personal image, sexualized images can fuel extortion campaigns, a criminal practice known as sextortion.

Since the end of the year, cybersecurity company Kaspersky has detected a surge in Stealerium, open-source malware accessible to anyone that combines data theft with automated sextortion.

The legal process is long and complicated. Fraga will report the case to the police and file a report so she can later contact X, she explained. It remains to be seen whether X will comply with the existing European regulations on deepfakes. Fraga believes a provision covering such cases is missing from the Spanish Penal Code; a law addressing cases involving minors is currently being drafted. A temporary solution to prevent the dissemination of these images once published — and if X refuses to delete them — is to contact the global organization StopNCII, which stands for Stop Non-Consensual Intimate Images. They explain what to do if someone shares intimate images.


