AI is crossing the frontier of intimacy before humanity has managed to understand it
Artificial intelligence bots can alleviate loneliness, but they can also isolate users and create dependency

From virtual assistants capable of detecting sadness in voices to bots designed to simulate the warmth of a bond, artificial intelligence (AI) is crossing a more intimate frontier. The fervor surrounding AI is advancing on an increasingly dense bed of questions that no one has yet answered. And while it has the potential to reduce bureaucracy or predict diseases, large language models (LLMs) trained on data in multiple formats — text, image, and speech — are capable of something more disturbing: they can behave as if they understand human feelings.
Perceiving and reading emotions is a tricky area for AI. Various studies indicate that AI chatbots can alleviate loneliness, but they can also isolate users and create dependency. An extreme case is that of 56-year-old Stein-Erik Soelberg, who ended up killing his mother and himself after months of using ChatGPT. OpenAI has acknowledged that more than a million people talk to ChatGPT about suicide every week.
It’s no longer just a matter of discussing whether machines can automate tasks, but of asking how far they are beginning to infiltrate critical areas such as emotions, identity, and even freedom of expression, all of which are gradually being shaped by algorithms. Daniel Innerarity, professor of political and social philosophy at the University of the Basque Country, believes that humanity is experiencing a hype, that is, a moment of strong (and perhaps exaggerated) expectation.
“I call it digital hysteria. There are great expectations and parallel fears. We are oscillating between those two extremes on an accelerating upward curve,” says this expert. Karen Vergara, a researcher on society, technology, and gender at the Chilean NGO Amaranta, shares a similar view. “We are in a process of adapting to and recognizing these technological and sociocultural advances,” she notes, adding an important nuance: while one part of society is incorporating this technology into daily life, another is left out, people for whom AI is not a priority, trapped in precarious contexts and cut off by access gaps that have yet to close.
The big question is not how sophisticated this technology, first developed in the last century, can become at discovering patterns of behavior, but rather how much trust is placed in it. A recent study by the MIT Media Lab in the United States identified user interaction patterns ranging from “socially vulnerable” subjects with intense feelings of loneliness, to technology-dependent users with a strong emotional connection, to “casual” users who engage with AI in a more balanced way.
For Innerarity, the thought that someone has taken their own life because “an algorithm recommended it” brings us back to a prior question: what goes on in the mind of a person who decides to trust a machine rather than another human being. “Surely the problem is prior,” the philosopher emphasizes.
Society, says Innerarity, has made a huge mistake by anthropomorphizing AI. “When I wrote A Critical Theory of Artificial Intelligence (Galaxia Gutenberg, 2025), I had to find a cover, and the only thing I knew for sure was that I didn’t want to use a human-shaped robot,” he recalls. He is completely against representations of AI with hands, feet, and a head: “99% of the robots we humans use don’t have an anthropomorphic form.”
A digital oracle that reproduces biases
Mercedes Siles, professor of algebra at the University of Málaga and a member of the Hermes Foundation Advisory Board, proposes a simple image. A metaphor. She asks us to imagine AI as a small box filled with folded papers. Something like a less crunchy version of fortune cookies. Every morning, a person takes out a piece of paper containing a phrase that, unbeknown to them, will guide their day. “What begins as a simple ritual gradually becomes a daily necessity. Over time, this practice creates an emotional dependency.”
So the box, which at first was just another object, becomes “an oracle. What no one realizes is that this box possesses neither the wisdom nor the power attributed to it,” she explains. According to Siles, the algorithm is still a language. And like all languages, it can reproduce sexist or racist biases. “When we talk about the ethics of language, we must also talk about the ethics of algorithms.”
From Latin America, where digital wounds are compounded by structural ones, Karen Vergara warns that the problem on that side of the map is even more pronounced. Another ethical conflict she observes is excessive complacency. These machine learning models attempt to associate questions, classify them, and, based on all the information, provide the most relevant answer.
However, they ignore cultural contexts, mixing academic information with self-help phrases. “If we disassociate ourselves from that, it’s more likely that these types of virtual assistants and chatbots will end up reinforcing only one way of seeing the world, and will give you that false sense of being the only friend who doesn’t judge you,” Vergara emphasizes.
Siles then returns to imagery. She compares human relationships to a forest. “If you look at what happens beneath the surface and the earth, there is interconnectedness, and we can’t break it; we have to strengthen it. We have to rethink the type of society we have.”
Regulation, a dilemma
In August 2024, Europe crossed a threshold. The European Regulation on Artificial Intelligence entered into force, becoming the world’s first comprehensive legal framework for AI. It serves as a reminder to European Union governments that security and fundamental rights are not optional, but it is also an invitation to develop a process of AI literacy. Its implementation is progressive, and in Spain the preliminary draft of the national AI law was given the green light last March.
But the political pace doesn’t always match the speed of technology, and among those observing the situation with concern is Professor Siles. She is alarmed by the lack of training, institutional neglect, and the carelessness with which some companies deploy models without fully understanding their consequences.
“How dare we just unleash these systems like that, just to see what happens?” she asks. The expert insists that people must be trained so they understand the limits. This view is echoed by Innerarity, who calls for going a step further: we shouldn’t discuss regulations without first asking ourselves what we’re really talking about when we talk about artificial intelligence.
“What kind of future are our predictive technologies shaping? What do we really mean by intelligence?” he asks. For Innerarity, as long as these basic questions aren’t resolved, any regulation runs the risk of being ineffective. Or, worse, arbitrary. “Without understanding, the brakes not only don’t work, they don’t even make sense,” he concludes.