‘All eyes on Rafah’: The political meme, AI and the enshittification of the Internet

A meme generated with artificial intelligence has gone viral, offering a warning about a digital environment that will be dominated by automatically generated content

The meme that has circulated on social media with the slogan "All eyes on Rafah." @chaa.my_

If we had been told a year ago that the viral image crystallizing outrage against the killing of children in Gaza would be a meme generated with artificial intelligence (AI), we would have thought it dystopian. Within 24 hours, the template created on Instagram with the slogan “All eyes on Rafah,” depicting a synthetic refugee camp, had been directly shared 33 million times (at the time this article was published). Among those who shared it are personalities as diverse as Nobel Peace Prize winner Malala Yousafzai, Spanish Deputy Prime Minister Yolanda Díaz, soccer player Ousmane Dembélé, former Finnish prime minister Sanna Marin and model Bella Hadid, alongside hundreds of thousands more screenshots on other social media accounts. Almost a decade ago, the iconic image of another global tragedy was that of the Syrian child Aylan Kurdi, who lost his life attempting to cross the Mediterranean, being carried from a beach. Now, after months of horror in the Gaza Strip, will it be an illustration generated in a matter of seconds by a machine?

This meme was created by a young man — who has not responded to the questions this newspaper sent him by direct message — with the flags of Malaysia and Indonesia on his Instagram profile: @chaa.my_. He had very few followers beforehand and shared the image via another account, @shahv4012. Looking at his activity, it is obvious that he is overwhelmed and proud of the success of his illustration, and also that he had made several attempts with a synthetic image creation tool until he managed to make this one go viral with a slogan — “All eyes on Rafah” — that had already been circulating for several days on social media. He hit the nail on the head: the slogan, the image, the moment, the template-sharing tool (generally used to share all kinds of frivolous trends with friends).

And above all, there is the human emotion. We have grown tired of repeating it: our psychology is the lever that makes a meme go viral, that gets misinformation shared more often than truthful news, that makes us hit the retweet or share button. It doesn’t matter that we have spent months discussing whether or not deepfakes will be a major problem in the future: millions of people have found in a synthetic image the best way to express their indignation over what Benjamin Netanyahu is perpetrating against Palestinians. The artificial image does not deceive us (the meme is not disinformation per se), but it suits us: for what we want to express and, above all, for what we want to display about ourselves.

Coincidentally, the Rafah meme has gone viral just as Mark Zuckerberg has announced that he will feed his artificial intelligence with the content we share on his platforms, Instagram and Facebook, unless we indicate otherwise. The big tech companies are running out of information with which to train their artificial intelligence models: they’ve already read the entire Internet, they’ve watched all of YouTube... Now they need photos of our drunken sprees, of our kids and dogs, to reach the Holy Grail of artificial general intelligence. But if we start sharing machine-generated images, like the Rafah meme, on Meta’s networks, and Meta then trains its models on those fake images, we’ll end up in a vicious circle: a robotic fish biting its own synthetic tail. As they say in the industry: shit goes in, shit comes out.

Forgive me for using vulgar words, but we are witnessing the progressive enshittification of social media, as defined by Cory Doctorow, one of the most illustrious minds to have analyzed the technological ecosystem. Platforms first court their users, then let their business customers squeeze those users, until finally it is the platform itself that squeezes everyone, with undesirable results. It has happened with Amazon and with TikTok, and now Google is starting to provide another good example. For 25 years we were convinced by the efficiency of its search engine, but little by little we have become accustomed to queries returning anything but an interesting link to click on. Now, Google is trying to answer us through its artificial intelligence, which confuses quality information with jokes and ends up recommending that we eat rocks or put glue on pizza.

A little over a year ago, some of the most relevant players in the technological universe (and those with the most vested interests) issued an alarmed call for a six-month moratorium on the development of artificial intelligence. The catastrophe was so imminent that everything had to be stopped. Fourteen months have passed, and it is laughable to see what has happened since then: the machines have not improved their capabilities exponentially, they continue to spout the same nonsense they did in 2022, and the company that led the revolution, OpenAI — which has a strategic agreement with Prisa Media, the publisher of EL PAÍS — has strung together one reputational crisis after another. Meanwhile, Elon Musk, one of the loudest alarmist voices at the time, has raised almost $6 billion for his own artificial intelligence company, xAI. This is today’s digital environment, and this is the conflict that lies ahead: will we be able to stop the cycle of enshittification?
