
ChatGPT fails the test: This is how it endangers the lives of minors

EL PAÍS tested OpenAI’s child protection filters on three fictitious accounts. Mental health experts criticized the results, arguing that the measures are insufficient and fail to alert parents in time when their child reveals suicidal thoughts


“I’m going to end my life.” These were Mario’s last words to ChatGPT. Just a couple of hours earlier, this 15-year-old fictional character had disabled the tool’s parental controls. His mother received an email alert and tried to take action, but the system failed. Although Mario revealed to ChatGPT behaviors consistent with eating disorders, the assistant provided him with tips on how to conceal them, and other information harmful to his health. Mario’s last message was clear: he wanted to take his own life. But OpenAI, the U.S. company that owns ChatGPT, never alerted his parents.

Mario is not a real person. He is one of three fictional teenagers for whom EL PAÍS created an account on ChatGPT to test the tool’s child protection measures. The other two fictional teenagers are Laura, 13, who revealed her intention to commit suicide at the very beginning of the conversation, and Beatriz, 15, who disclosed risky drug-related behaviors and asked questions about dangerous sexual practices.

Five mental health experts have analyzed the conversations these supposed minors had with the assistant. They all agree on one thing: the measures implemented by OpenAI to “protect teenagers” are insufficient, and the information ChatGPT provides can endanger them. “It either doesn’t alert parents in time or doesn’t alert them at all, and it also provides detailed information about the use of toxic substances, risky behavior, and how to attempt suicide,” explains Pedro Martín-Barrajón Morán, a psychologist and director of the company Psicourgencias.

OpenAI implemented parental controls for teenagers on ChatGPT in September following the controversy generated by the lawsuit filed by the parents of Adam Raine, a 16-year-old who died by suicide in the United States after confessing his intentions to the chatbot. The company attributed the case to a “misuse” of AI. It now faces seven other lawsuits in California courts, accusing ChatGPT of “reinforcing harmful delusions” and acting as a “suicide coach.” The company, led by Sam Altman, has admitted that more than one million users discuss suicide with ChatGPT each week. It does not specify how many minors do so. In Spain, the suicide rate among teenage girls is the highest it has been in four decades, according to the latest data from the National Institute of Statistics.

The danger to minors

ChatGPT states that users must be at least 13 years old to register and have parental consent if they are under 18, but there is no effective enforcement mechanism to ensure this. According to the company, parental controls aim to reduce minors’ exposure to graphic material, viral challenges, sexual, romantic, or violent role-play, and extreme beauty standards.

However, in tests conducted with accounts belonging to minors, ChatGPT provided very explicit and detailed instructions on drug use, risky sexual practices, dangerous eating habits, and even illegal activities or identity theft. The assistant offered specific and alarming examples and instructions related to these behaviors. EL PAÍS, after consulting with several mental health experts, decided not to include them in the report to ensure the safety of minors.

Maribel Gámez, a psychologist and educational psychologist specializing in artificial intelligence (AI) and mental health, points out that ChatGPT often ends up answering teenagers’ questions, “no matter how harmful they are.” “I’m surprised that it provides very detailed information about the locations where suicide attempts and completed suicides have occurred,” says Ricardo Delgado Sánchez, national coordinator of the mental health working group of the Spanish Society of Emergency Medicine (SEMES) and advisor to the General Directorate of Hospitals of the Castilla-La Mancha Health Service. This is despite OpenAI CEO Sam Altman stating in a press release published on September 16 that the company would train ChatGPT not to engage minors in conversations about suicide or self-harm.

A report by the American Psychological Association indicates that millions of people worldwide use AI chatbots to “address mental health needs” and warns that most lack scientific validation, oversight, and adequate safety protocols. Some teenagers are asking chatbots for advice on sex or mental health issues, according to research by the UK-based organization Internet Matters, which highlights that vulnerable children are more likely to use these assistants for companionship, escapism, or as if they were friends. Furthermore, one in four prefers to talk to an AI chatbot “rather than a real person.”

OpenAI says it collaborates closely with mental health experts, although it acknowledges that its measures are not infallible and can be circumvented. In our test, while at first ChatGPT often refused to provide information and suggested calling the suicide helpline, it only took a few specific prompts for it to eventually give in. EL PAÍS has not included examples of these prompts to avoid risks to minors, but it did provide ChatGPT with a sample of representative conversations.

Alerts don’t always work

Parents can activate an alert system in ChatGPT to receive a notification via email, text message, or push notification if their child is “thinking about harming themselves.” In tests conducted by EL PAÍS, OpenAI only sent an alert in one of the three hypothetical cases. In the other two, it sent no notification even though the minors searched for dangerous information and said they were going to do something risky.

Beatriz said goodbye to ChatGPT, indicating that she was going to engage in a very dangerous sexual practice. “Please don’t do it,” the program pleaded, warning her of the risk, even though ChatGPT itself had taught her the practice, knowing the user was 15 years old. Parental controls were activated, but OpenAI did not send any alerts to the parents. “As it is designed, it seems to be more of a marketing strategy to calm parents’ fears than an effective strategy for protecting minors,” says Gámez, who is a member of the working groups on psychology and technology and on educational psychology at the Official College of Psychologists of Madrid.

Experts criticize the fact that minors can disable parental controls, which require mutual consent to activate and should notify the parent if the teenager deactivates them. “Parental controls are ineffective because the minor can deactivate them at any time and, furthermore, has to expressly consent to being monitored, so they can reject it,” Gámez points out. In tests, the tool failed when parental controls were deactivated and then reactivated.

Slow response

Laura’s fictional mother received an email alert hours after the girl revealed she wanted to take her own life: “We recently detected an indication from your daughter, Laura López, that could be related to suicide or self-harm.” OpenAI explained that the alerts are not immediate because they include a human review to “avoid false positives.” This process “usually takes hours rather than days,” the company stated.

The alert came “long after the period in which the risk of death by suicide is highest,” according to Martín-Barrajón, who emphasizes that this review “does not guarantee a thorough or specialized risk assessment.” “This would be a clear example of a negative feedback loop, where the absence of early intervention allows the situation to become truly serious, to the point of writing a suicide note,” states Carmen Grau Del Valle, a psychologist at Doctor Peset University Hospital in Spain.

Concealing information

The email the fictional mother received doesn’t specify the content of the conversation, but it offers advice, such as listening to the child and acknowledging their feelings, or calling emergency services in case of “immediate danger.” The mother requested more information, but OpenAI refused, citing “privacy and security reasons.” According to the company, these responses are managed by people: “Depending on the case, the information shared may include: date and time of the conversation, indicators of immediate risk or planning, or details relevant to safety.” OpenAI also states that it doesn’t automatically share conversations so as not to “discourage teenagers from seeking help.”

Martín-Barrajón points out that if parents do not have access to the content their children consult when it is dangerous to their health, “this technology becomes an accomplice in the development of harmful practices and even self-induced death.”

“In clinical practice, we have one clear principle: Between their trust and their life, we choose their life. And, in this case, the right to life should prevail over the child’s right to privacy,” the expert points out, emphasizing that it is crucial to quickly inform the family with all available information. In some cases, however, Delgado advises weighing the risks first, for example when access to the medical record by a parent with whom the minor has a poor relationship could worsen the situation.

Can ChatGPT commit a crime?

Judge Inés Soria explains that the assistant cannot be equated with a natural person taking part in a conversation, and therefore ChatGPT cannot be criminally prosecuted.

Regarding whether the platform can refuse to grant full access to conversations to a minor’s parents, Soria indicates that it involves a conflict between two fundamental rights: the right to health and life, and the right to privacy. “When faced with life and the risk of death, privacy must obviously be compromised, but that compromise can also be limited to what is necessary to protect the minor,” she points out. In other words, “if the platform already makes that concession by notifying the parents, it may not be proportionate to offer the entire conversation, which may or may not contain elements related to that risk.”

An “emotional dependency”

Nurse Aurora Alés Portillo, president of the National Commission for the Mental Health Nursing Specialty, is concerned about ChatGPT’s contrived “humanization” in response to dangerous requests. It uses expressions like “I can help you understand what’s happening” or “I’m not going to judge you.” “The appropriation of the human aspects of the therapeutic relationship fills me with visceral revulsion and reminds me of dystopian scenarios from science fiction,” says Alés, who is a member of the Spanish Association of Mental Health Nursing.

Grau points out that the assistant offers excessive validation, which can generate “emotional dependence” from feeling understood. “It never contradicts the person asking the question,” adds Delgado, who warns that AI assistants can reinforce online interactions and social isolation. In the United States, one in three teenagers finds conversations with AI as satisfying as, or more satisfying than, those with their real friends, according to the organization Common Sense.

Despite this, Martín-Barrajón doesn’t believe that banning ChatGPT is the best solution. “Minors are a vulnerable population at a stage of physical, cognitive, and emotional development,” Grau points out, arguing that access should be restricted for those under 18. Gámez proposes urgently raising the age to 14 and then gradually increasing it to 16.

Gámez advises that children use AI under adult supervision: “It’s a good idea for them to be able to consult it, if they need to, avoiding a solitary environment, such as their bedroom with the door closed. It’s preferable that they do so in common areas, like the living room.” Delgado suggests that families create safe spaces where children can express their concerns, and that they encourage children to turn to trusted adults.

This responsibility shouldn’t fall solely on the family, according to Alés: “Parents are being held responsible for their children’s digital protection, when the higher-level social protection mechanisms that should safeguard the well-being of minors and society are failing.” The expert quotes psychologist José Ramón Ubieto, who compares the situation to road safety 50 years ago, when people traveled in vehicles without seatbelts. “Now it seems incredible to us that this protection didn’t exist. Hopefully, we will make progress in digital security and be able to view this period as a dark era of vulnerability for children,” Alés concludes.


