‘Dead internet theory’ gains ground amid rise of AI-generated content

OpenAI CEO Sam Altman has expressed concern about the content created by bots, which facilitate misinformation

Raúl Limón

“I had desired it with an ardor that far exceeded moderation; but now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart.” This is Dr. Frankenstein’s reaction to his own creation in Mary Shelley’s 1818 novel Frankenstein; or, The Modern Prometheus.

Sam Altman, CEO of OpenAI, has experienced a similar vertigo. The head of the company behind one of the most sophisticated artificial intelligence (AI) developments has begun to take seriously the “dead internet” theory, which argues that automatically generated content will eventually surpass human-created content — multiplying the risks of manipulation, disinformation, and intentional behavioral conditioning.

Altman’s terse message has raised concerns: “I never took the dead internet theory that seriously, but it seems like there are really a lot of LLM [large language model]-run Twitter accounts now,” he posted on X (formerly Twitter, now owned by Elon Musk).

Aaron Harris, global chief technology officer (CTO) at Sage, a multinational specializing in AI applications, is cautious about labeling the phenomenon, though he does not deny the process. “I don’t know if I would call it ‘the dead internet,’ but it’s certainly changing rapidly. The rise of automated content and of interaction driven by bots [computer programs that mimic human behavior] makes it increasingly difficult to separate the authentic from the noise. The question is whether we allow that noise to overwhelm us, or focus on designing technology that restores trust. What matters now is how we filter, verify, and display information that people can trust.”

Altman’s specific reference to the social network is no coincidence. “This is critically important, as social media is now the primary news source for many users around the world,” write Jake Renzella, director of studies in computer science at UNSW Sydney, and Vlada Rozova, a machine learning researcher at the University of Melbourne, in an article published in The Conversation.

“As these AI-driven accounts grow in followers (many fake, some real), the high follower count legitimizes the account to real users. This means that out there, an army of accounts is being created,” the article continues. “Already, there is strong evidence social media is being manipulated by these inflated bots to sway public opinion with disinformation – and it’s been happening for years.”

Back in 2023, a study by security firm Imperva estimated that “nearly half of all internet traffic in 2022 was bots.”

And these bots are not only capable of creating unique content, but also of mimicking formulas to ensure massive, viral distribution. According to a new study published in Physical Review Letters, led by researchers from the University of Vermont and the Santa Fe Institute, “the thing being spread, whether a belief, joke, or virus, evolves in real time and gains strength as it spreads” following a mathematical model of “self-reinforcing cascades.”

According to this research, what spreads mutates as it propagates, and these changes help it go viral in a pattern similar to sixth-generation fires, which cannot be extinguished with conventional methods. “We were partly inspired by forest fires: they can become stronger when they burn through dense forests and weaker when they cross open gaps,” explains Sid Redner, a physicist, professor at the Santa Fe Institute, and co-author of the article. “The same principle applies to information, fake news, or diseases. They can intensify or weaken depending on the conditions.”
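For a feel of how such a model behaves, here is a minimal sketch in Python of a self-reinforcing cascade as a simple branching process. The function name, parameters, and mutation rule are illustrative assumptions for this article, not the actual model from the Physical Review Letters paper.

```python
import random

def simulate_cascade(p0=0.1, fanout=5, boost=0.02, decay=0.01, max_size=10_000):
    """Toy branching process with mutating transmission strength.

    Each active node exposes `fanout` neighbors; every successful
    transmission passes on a slightly perturbed strength `p`, so
    lineages that spread well tend to keep spreading. All numbers
    are illustrative, not taken from the PRL study.
    """
    frontier = [p0]        # strengths of currently active spreaders
    total = 1
    while frontier and total < max_size:
        p = frontier.pop()
        for _ in range(fanout):
            if random.random() < p:
                # Strength drifts as it passes along, like a fire
                # gaining force in dense forest or fading in a gap.
                child_p = min(1.0, max(0.0, p + random.uniform(-decay, boost)))
                frontier.append(child_p)
                total += 1
    return total

sizes = sorted(simulate_cascade() for _ in range(100))
print(f"median cascade size: {sizes[50]}")
```

Runs whose strength mutates upward keep feeding themselves, which is why such cascades can be far harder to contain than ones with a fixed transmission rate.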

Juniper Lovato, a computer scientist and co-author of the study, believes the work offers a better understanding of how ideas take shape and how misinformation and social contagion spread. “This gives us a theoretical basis to explore how stories and narratives evolve and spread across social media,” she says.

Researchers caution that AI tools greatly amplify the risks of viral content that supports manipulation or misinformation, and urge users to be more mindful of the threats posed by AI assistants and agents: these tools can not only create content and make it go viral, but also use information gathered from users’ interactions to influence individuals effectively.

The study Big Help or Big Brother? Auditing Tracking, Profiling, and Personalization in Generative AI Assistants, presented at the USENIX Security Symposium in Seattle, examines users’ vulnerability to such influence.

“When it comes to susceptibility to social media influence, it’s not just about who you are, but where you are in a network and who you’re connected to,” explains Luca Luceri, a researcher at the University of Southern California and co-author of the paper.

“Susceptibility Paradox”

In this regard, the researchers highlight a phenomenon they call the “Susceptibility Paradox,” which describes “a pattern in which users’ friends are, on average, more easily influenced than the users themselves.” According to the study, this behavior “may help explain how behaviors, trends, and ideas catch on — and why some corners of the internet are more vulnerable to influence than others.”

People who post because others do are often part of tightly knit circles exhibiting similar behavior. The study suggests that “social influence operates not just through direct exchanges between individuals, but is also shaped and constrained by the structure of the network.”

In this way, it becomes possible to predict who is most likely to share content — a goldmine for automatic virality based on personal data collected by AI. “In many cases, knowing how a user’s friends behave was enough to estimate how the user would behave,” the study warns.
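The intuition behind that claim can be illustrated with a toy simulation. The sketch below is hypothetical and uses none of the study’s data or methods: it simply assumes susceptibility grows with how connected a user is, then checks how often a user’s friends are, on average, more susceptible than the user, reproducing the paradox in a scale-free network.

```python
import random
import networkx as nx

random.seed(42)

# A toy scale-free network: its highly connected hubs drive the paradox.
G = nx.barabasi_albert_graph(n=2000, m=3, seed=42)

# Illustrative assumption (not from the paper): susceptibility rises
# with connectivity, since well-connected users see more content.
susceptibility = {
    v: min(1.0, 0.05 * G.degree(v) * random.uniform(0.5, 1.5)) for v in G
}

# For each user, compare their susceptibility with their friends' average.
friends_more_susceptible = 0
for v in G:
    friends = list(G.neighbors(v))
    avg = sum(susceptibility[u] for u in friends) / len(friends)
    if avg > susceptibility[v]:
        friends_more_susceptible += 1

share = 100 * friends_more_susceptible / G.number_of_nodes()
print(f"{share:.1f}% of users have friends who are, on average, "
      "more susceptible than they are")
```

Because hubs are both unusually susceptible (under this assumption) and disproportionately present in everyone’s friend lists, most users’ friends score higher than they do — the same structural effect the study describes.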

The researchers’ work outlines a series of principles, akin to Asimov’s laws of robotics, to keep AI within moral boundaries. In summary, AI must not manipulate users to serve its own interests or those of its developers, or create social harm such as misinformation; it must not allow users or developers to apply strategies that negatively affect society (e.g., domination, behavioral conditioning, or discrediting institutions); and it must not unduly restrict user freedom.

Aaron Harris, the CTO of Sage, believes an ethical internet is possible, “but it won’t happen by chance,” he says. “Transparency and accountability must determine how AI is designed and regulated. Companies developing it must make their results auditable and explainable, so that people understand where the information comes from and why it’s being recommended. In finance, for example, accuracy isn’t optional, and errors have real consequences. The same principle applies online: responsible training, clear labeling, and the ability to challenge results can make AI part of a more ethical and trustworthy internet.”

Harris advocates for protecting the “human internet,” “especially now that more and more content is being created by bots,” but not at the expense of forgoing technological advances. “I don’t think the solution is to go back to the pre-AI world and try to restrict or completely eliminate the content it has generated. It’s already part of how we live and work, and it can provide real value when used responsibly. The question is whether anyone is responsible for the content. That’s the principle all companies should follow: AI should enhance human capabilities, not replace them. A more human internet is still possible, but only if we keep people’s needs at the center and make accountability non-negotiable.”
