Widowed by computer death, or why it is dangerous for AI to talk like Scarlett Johansson
Replacing human interaction, with all its complexities, with a hybrid human-machine relationship carries many advantages but also numerous risks
Akihiko Kondo, who turned 41 on May 31, married the hologram of his favorite virtual singer, Hatsune Miku, in a symbolic ceremony six years ago. Just two anniversaries later, Gatebox, the company responsible for the avatar, discontinued the service behind it, and the young Japanese public school administrator was widowed by computer death. Kondo’s story, however extravagant it may seem, is but a foretaste of a reality with unpredictable consequences: the replacement of real personal relationships with robotic agents programmed to respond to users according to their expectations. In his book The Psychology of Artificial Intelligence, Tony Prescott, a robotics researcher and professor at the University of Sheffield, argues that AI can be a palliative for loneliness. But it comes with risks, as he and dozens of other researchers acknowledge.
That ChatGPT-4o was presented with a voice eerily similar to Scarlett Johansson’s is no coincidence. English speakers who had seen Her, the film written, directed, and produced by Spike Jonze that won the Oscar for Best Original Screenplay in 2014, took seconds to associate OpenAI’s new virtual assistant (or agent) with the actress, whose voice makes the film’s lonely protagonist fall in love.
The latest Ericsson Consumer & Industry Lab report finds that “50% of early adopters of artificial intelligence believe that people will simulate their marriages to anticipate changes or even foresee divorce.” An overwhelming 71% of these AI users believe such a capability would be beneficial.
Replacing human interaction, with all its complexities, with a hybrid human-machine relationship carries many advantages, but also numerous risks that are more real and immediate than those reflected in some episodes of Black Mirror. “Social robots are specifically designed for personal interactions involving human emotions and feelings. They can bring benefits, but also cause emotional harm at very basic levels,” warns Matthias Scheutz, director of the Human-Robot Interaction Lab at Tufts University.
Akihiko Kondo’s experience encapsulates this complexity, and it differs from cases rooted in artistic experiment, such as The Hybrid Couple, in which performance artist Alicia Framis simulates a marriage to a hologram as a form of reflection, or that of Meirivone Rocha, a 39-year-old Brazilian woman who boosted her social media following by broadcasting her supposed wedding to a doll.
In an interview with the BBC, Kondo speaks of having been bullied by his peers, admits that friends made through the internet and gaming are still his “community,” and confesses that he has never had a partner: “I have had some unrequited loves in which I was always rejected, and it made me rule out the possibility of being with someone,” he said. Loneliness, bullying, psychological and technological dependence, limited social skills, involuntary celibacy (the dangerous online movement that has been linked to male violence), artificial satisfaction of needs, virtual companions with the appearance of reality... Kondo’s extravagant story opens the door to an analysis of the virtues and dangers of AI’s interference in personal relationships.
Advantages
The benefits of humanized artificial intelligence are many, but they do not come without complications.
Loneliness. Prescott acknowledges the risks, but highlights one major advantage: “At a time when many people describe their lives as lonely, it can be valuable to have the company of AI as a form of reciprocal social interaction that is stimulating and personalized. Human loneliness is often characterized by a downward spiral in which isolation leads to lower self-esteem, which discourages further interaction with people. AI companionship could help break this cycle by bolstering self-esteem and helping to maintain or improve social skills. If so, AI relationships could help people find companionship, both human and artificial.”
Care. Joan Claire Tronto, a professor of political science at the University of Minnesota, extends the concept of care to everything “we do to maintain, continue, and repair our world so that we can live in it as well as possible.” Key to her work is “a commitment to meeting the needs of others.” And AI can do that relentlessly. Luis Merino, a professor at the Universidad Pablo de Olavide in Seville, heads research in social robotics, the discipline in which robots autonomously assist groups of people and learn from the emotions of those they serve: “The goal is for robots to understand our intentions and emotions and learn from them.”
Benefit or interest. OpenAI CEO Sam Altman describes his latest model as a “supercompetent colleague.” The second word speaks to its humanization; the first, to the benefits it brings by performing tasks on behalf of the user, which in turn leads to “individual well-being,” according to Brad Hooker, a professor of philosophy at the University of Reading. This interest is inherent in human interaction: profit is not always sought, but it is difficult for a relationship to thrive if the costs consistently outweigh the benefits.
Human development. AI can promote attitudes and behaviors that facilitate personal fulfillment and interaction with others. In an evaluation of ChatGPT, Gemini, and Meta’s Llama, researchers at the University of Illinois demonstrated the importance of this ability. “Agents can help increase, for example, awareness of healthy behaviors, become emotionally committed to change, and realize how their habits might affect the people around them,” explains Michelle Bak, one of the researchers who evaluated the models.
Autonomy. This refers to AI’s potential to provide relevant information so that individuals can act and decide according to their own motivations and interests.
Risks
Each of these categories of advantages branches into associated risks. These are some of those highlighted by researchers:
Physical or emotional harm. Anecdotes of early AI models issuing threats and insults or encouraging harmful or violent behavior are not new, even if they periodically fuel disproportionate and unsubstantiated backlash. A year ago, the Pak ‘n’ Save supermarket chain in New Zealand was warned that its meal-planning AI was recommending recipes for drinks that would produce chlorine gas, along with snacks laced with poison and glue. Obviously, no one followed this advice, because common sense prevailed, but there may be less obvious and extreme cases.
Amelia Glaese, a researcher formerly at Google DeepMind and now at OpenAI, is looking for formulas and systems to prevent these occurrences: “We use reinforcement learning from human feedback to make our agent more useful and harmless, and we provide evidence from sources that support its assertions.”
Humanizing the robot with empathy and with voice and video tools adds danger: the interaction becomes more realistic and immersive, and the user comes to believe they are with a trusted friend or interlocutor. An extreme application may be the temptation to maintain a virtual version of a deceased loved one, thus avoiding the mourning necessary to carry on with life.
Researchers are calling for these developments to be tested in closed environments (sandboxes) before they are brought to market, for them to be constantly monitored and evaluated, for the range of harm they could cause in different areas to be analyzed, and for formulas to mitigate that harm to be provided.
Limiting personal development. “Some users seek relationships with their AI partners that are free of the obstacles, opinions, preferences, and norms that may conflict with their own,” warns a study carried out by half a dozen universities for DeepMind. And these partners, furthermore, speak in flattering language.
Shannon Vallor, a philosopher specializing in the ethics of science and artificial intelligence, warns of the danger of new systems promoting relationships that are “frictionless” but also devoid of values: “They don’t have the mental and moral capacity that humans have behind our words and actions.”
According to these experts, this type of supposedly ideal relationship discourages us from questioning ourselves and advancing in our personal development, while encouraging us to renounce real interaction and generating dependence on machines willing to flatter and to provide short-term satisfaction.
Manipulation. Emotional dependence on a system capable of persuasion opens the door to interference in users’ behaviors, interests, preferences, beliefs, and values, and in their ability to make free and informed decisions. “The emotions users feel toward their assistants could be exploited to manipulate them or, taken to the extreme, coerce them into believing, choosing or doing something they would not otherwise have believed, chosen or done,” the DeepMind paper warns.
Material dependence. The end of Akihiko Kondo’s virtual marriage to a hologram is a clear example: it was the company that programmed and maintained the system that put an end to the solution the Japanese administrator had found to meet certain needs. Developers can generate dependency and then discontinue the technology, due to market dynamics or regulatory changes, without taking adequate measures to mitigate the potential harm to users.