‘Hypnocracy’: The regime to numb critical thinking
Critics say Elon Musk and Donald Trump are the high ‘priests’ of this phenomenon, warning that the unchecked use of AI could spell the end of informed societies


It’s a warning that’s been made by a multitude of studies: memes are not harmless; for extremists, they are the most effective language for spreading their ideas. Social networks are tools of polarization and sophisticated interference. AI-generated hoaxes create a fake reality that is indistinguishable from the real one and threaten democracy. Artificial intelligence itself is inherently biased, and these biases are far from innocent.
Behind this arsenal is a strategy that Hong Kong philosopher Jianwei Xun defines as “hypnocracy,” a concept that Cecilia Danesi, a researcher at the Institute of European Studies and Human Rights, summarizes as “a digital dictatorship that allows the direct modulation of states of consciousness” through “manipulation via the stories we consume, share, and believe.” The goal is to eliminate a critical-thinking, informed citizenry. To achieve this, all democratic safeguards must be dismantled.
Xun, author of Hypnocracy: Trump, Musk, and the New Architecture of Reality, claims that this regime is the first to operate directly on consciousness. The book argues that it does not repress thought, but rather induces and manipulates emotional states. Xun claims the goal is to numb critical thinking by overwhelming the senses with constant stimuli, so that reality and simulation become synonymous.
For Danesi, who took part in the recent AI Action Summit held in Cannes, France, which addressed the situation, this fragmentation “erodes and radically changes the way citizens perceive reality and make political decisions, a situation that demands in-depth analysis and effective regulation.” “The first to suffer is, without a doubt, democracy,” she warns.
“The hypnocratic system doesn’t merely manipulate information or conduct surveillance; it fundamentally alters the architecture of perception itself,” Xun explains in a recent interview. “The most profound transformation isn’t in our political systems or social structures, but in our relationship with reality itself. We no longer inhabit a shared reality with competing interpretations.”
As Gianluca Misuraca, scientific director of the European initiative AI4Gov, pointed out at the French forum, the high “priests” of this new regime are U.S. President Donald Trump and his right-hand man, billionaire Elon Musk. Both lead what Xun identifies as “digital capitalism,” where “algorithms are not tools of calculation and forecasting, but rather mass hypnotic technology.”
According to Danesi, “hypnocracy allows for deeper and more silent interference; it manipulates our thinking without us realizing it, which is even more dangerous because it is more difficult to detect.”
But for the hypnotic power of this heightened digital liberalism to work, there must be no regulation. Social media companies, such as the Musk-owned X or Mark Zuckerberg’s Meta, have eliminated content moderation. Other AI platforms have begun to remove restrictions on responses to potentially harmful topics.
The U.S. National Institute of Standards and Technology (NIST) required scientists at the AI Safety Institute (AISI) — created by Joe Biden in 2023 — to anticipate potential AI-related problems. The goal was to develop tools “for authenticating content and tracking its provenance” and for “labeling synthetic content.”
“It’s a fallacy,” Danesi counters. “This idea that more regulation means less development or progress is a false idea because the most regulated sectors, such as pharmaceuticals or banking, are the most profitable. The problem arises when regulation is poorly implemented, and that does impede innovation. The key lies in how to regulate to guarantee supreme values, such as human and fundamental rights.”
According to the researcher, this lack of oversight and moderation has led to “the proliferation of AI-generated images that support deepfakes, the easy viralization of content, regardless of its veracity, and manipulated narratives.” “They have turned disinformation into one of the most serious threats to democratic systems,” she warns.
On average, 79% of respondents to an international survey on content moderation believe that online incitement to violence should be removed. Support is highest (86%) among Germans, Brazilians, and Slovaks, while in the U.S. it drops to 63%.
Only 14% of respondents believe threats should remain publicly visible so users can respond to them, while 17% believe offensive content should be allowed in order to criticize certain groups of people, and 20% in order to attract attention. The country with the highest level of support for this stance is the United States (29%), and the lowest is Brazil (9%).
When asked whether they prefer networks with unlimited freedom of expression or ones free from hate and misinformation, the majority of respondents chose platforms that are safe from digital violence and misleading information.
“Most people want platforms that reduce hate speech and abuse. This is also true in the United States, a country with a long-standing commitment to freedom of expression in the broadest sense,” says Yannis Theocharis, lead author of the study and professor of Digital Governance at the Munich School of Politics and Public Policy.
However, according to Spyros Kosmidis, co-author of the paper and professor of politics at the University of Oxford, “the results also show that there is no universal consensus regarding freedom of expression and moderation. People’s beliefs depend largely on the cultural norms, political experiences, and legal traditions of different countries. This makes global regulation more difficult.”
It’s also unclear who should be responsible for keeping the internet safe from harmful content. The percentages are split roughly equally among those who attribute this responsibility to the platforms, to governments, or to users themselves.
In any case, regardless of who is responsible, the majority of users (59%) consider offensive, intolerant, or hateful content to be inevitable, and most expect reactions of this kind whenever they post something (65% on average, and 73% in the United States).
“We’re noticing a widespread resignation. People have the impression that, despite all the promises to address offensive content, nothing is improving. This acclimatization effect is a huge problem because it’s gradually undermining social norms and normalizing hatred and violence,” warns Yannis Theocharis.
“With our democracies under threat, AI-driven interference requires swift and concrete action from leaders, both nationally and internationally,” warns Professor Florian Martin-Bariteau, director of the AI + Society Initiative. “Without a concerted global effort to align laws, build capacity, and develop processes to mitigate AI risks, democracies around the world remain vulnerable.”
Europe began down this regulatory path with the AI Act, but Danesi laments: “Given the international situation, the EU has applied the handbrake due to this idea that if we overregulate, we stifle innovation.” “But it’s not about stopping regulation, but rather about how we do it, about what values we have and want to promote,” she insists.