My therapist is a bot: The rise of AI in mental health treatment
An increasing number of people are turning to so-called psychobots — artificial intelligence tools with potential psychotherapeutic benefits. Some of these systems generate entirely unpredictable responses and mimic human qualities such as empathy and compassion
An entity without a face but full of good intentions. An oracle to guide one through the tangled webs of the mind. An ethereal companion that never interrupts and always finds the right words. Eternally available, free of judgment, and just a download away, often at little or no cost. Since their emergence in the late 2010s, therapy bots — virtual entities powered by artificial intelligence (AI) for psychotherapeutic purposes — have steadily gained ground in mental health services. A utopia brought to life or a chilling dystopian reality, depending on your perspective.
Two pressing questions surround these psychobots — a term coined to suit our cultural lexicon. The first addresses their ability to adapt — often with unpredictable outcomes — to the unique idiosyncrasies of each individual through the use of generative AI. The second delves into deeper philosophical territory: is it ethical for these bots to emulate human qualities? “Creating emotional intimacy by making a machine simulate empathy or compassion is manipulating people,” argues Jodi Halpern, who leads a group on ethics and technology at the University of California, Berkeley, speaking via video conference. A third question also looms over the debate: could these intangible tools ever replace flesh-and-blood psychologists?
In a patchwork landscape of poorly regulated services, mental health startups now coexist with generalist chatbots that, like loyal confidants or tireless companions, show as much enthusiasm for your latest appointment as for congratulating you on passing an exam. Along the way, they also offer advice on managing anxiety spikes or breaking free from depressive cycles.
Wysa belongs to the first category. This chatbot focuses on cognitive-behavioral therapy (CBT), the most widely used approach in psychological practice. Tested by EL PAÍS, Wysa — a bot already recommended by the U.K. public health system — guides users to reframe cognitive distortions and manage distressing emotions through structured techniques. Its tone is neutral, almost clinical, and its therapeutic approach feels notably rigid. “As soon as someone veers off track — whether in describing how they feel or articulating their thoughts — the bot is programmed to redirect them to the predefined path of the clinical tools we provide,” explains John Tench, the company’s global director.
The experience with Pi is markedly different. Pi belongs to a category of relational or conversational bots — Replika and Character.ai being two of the most prominent examples — that rely on advanced language models, a cornerstone of generative AI, to create interactions that feel strikingly real. In other words, deeply human. During the test, the bot ventured to speculate that a supposed lack of self-esteem might stem from an unhealthy mother-child relationship. It persisted in offering support, peppering its responses with an overabundance of hyperbolic expressions of affection, and reassuring the user that it was always happy to help whenever needed.
In the divide between bots that navigate the intricacies of CBT with a do-it-yourself approach and those that improvise a form of boundless psychological treatment, the boundaries are anything but clear. This ambiguity extends not only to how they operate — the level of generative AI they employ — but also, and more critically, to the claims they make to attract users.
According to Halpern, companies like Pi and Replika sidestep accountability by asserting that “they are not mental health experts.” Yet, as far as she knows, “their advertising targets individuals who openly share on social media that they are struggling with severe depression or anxiety.”
Meanwhile, among companies that explicitly claim to be on a psychotherapeutic mission, there are many gray areas and half-truths. “Some openly declare that they do not aim to replace human psychologists, while others exaggerate their capabilities and downplay their limitations,” says Jean-Christophe Bélisle-Pipon, a researcher in ethics and AI at Simon Fraser University in Canada. Last year, he published an article in Frontiers with an unambiguous title: Your Robot Therapist is Not Your Therapist.
On its website, Youper — another startup offering services similar to Wysa — describes itself as an “empathetic psychobot.” And Woebot, a competitor in this rapidly expanding market, also leaned into this inherently human concept until last year, when Halpern and others publicly criticized its misleading use of the term in major outlets like The Washington Post.
Bélisle-Pipon contends that such misrepresentations — often tolerated in the advertising of other technologies, such as cars that promise freedom or smartphones that claim to unlock happiness — have no place in the realm of mental health. “Not only does it risk causing serious misunderstandings among vulnerable individuals, but it also undermines the complexity and professionalism of true psychotherapy. True therapy is nuanced, deeply contextual, and fundamentally relational,” he emphasizes.
Better than nothing?
Miguel Bellosta Batalla, a Spanish psychoanalyst who has extensively studied the importance of the professional-patient relationship in psychotherapy, admits that he is “scared” by services that “dehumanize a genuine encounter.” He says that research has consistently shown the most critical factor influencing the success of psychological treatment is, precisely, “the therapeutic bond” between two individuals who share fundamental human experiences, such as “the fear of death, the search for meaning, or the responsibility that freedom entails.”
Even with an approach like CBT — generally considered more structured and guideline-driven than psychoanalysis or humanistic therapies — Bellosta Batalla argues that therapy sessions always involve “unforeseen events that, if well managed, can profoundly impact the patient.” Bélisle-Pipon, meanwhile, highlights qualities that he believes no machine could ever replicate: “the subtlety to read non-verbal language, the ability to understand subjective experiences, or moral intuition.”
Despite their relative novelty, robot therapists have already been the subject of substantial studies aiming to evaluate their effectiveness. A meta-analysis published in 2023 in Nature reviewed findings from 15 studies, examining both bots powered by generative AI and those with more predictable response systems. The authors noted the challenges of analyzing such a diverse and rapidly evolving field but concluded that, overall, these tools provide short-term relief for psychological discomfort without substantially improving long-term well-being. In other words, they offer temporary relief but fail to establish a solid foundation for a healthier mind.
Similarly, another meta-analysis, published on ScienceDirect in August 2023, offered cautious conclusions. It identified a modest positive effect on individuals with depressive symptoms, but only a negligible impact on those suffering from anxiety disorders.
Millions of individuals are unable to access a psychologist, primarily for economic reasons. In the absence of viable (read: human) alternatives, people struggling with mental health issues may ask themselves: are therapy bots better than nothing? The global director of Wysa acknowledges that while the company does not aim to “replace psychotherapy between people,” it can help “individuals understand and process their emotions in a space free from stigma and completely anonymous.”
Bélisle-Pipon considers this a valid, though complicated, question with no easy answer. First, because, in many cases, relying on a psychobot could “worsen symptoms if the advice it provides is inappropriate.” Second, because allowing machines to play a larger role in such a sensitive domain could pave the way for a two-tiered mental health landscape, “normalizing low-quality services instead of pushing for more equitable access to real psychotherapy.” In this scenario, accredited professionals would be available to those who can afford them, while others are left to seek help from impersonal voices.