
Trusting ChatGPT helps to improve it

A study shows that a person’s prior beliefs about artificial intelligence greatly influence how they rate their conversations with the tool

Natalia Ponjoan

When writer and journalist Juan José Millás had a conversation with ChatGPT in September, he pretended to have a psychoanalysis session with the tool. He wanted to apply the Turing test to find out whether the chatbot could talk to him like a real person, specifically a psychoanalyst, and not like a computer. The journalist told the artificial intelligence about his dreams and fears, expecting the AI to guide him through the therapy, but, among other things, the chatbot kept reminding him that the situation was imaginary and explaining that it was a language model. Millás called his virtual psychoanalyst narrow-minded and forgetful, and ultimately concluded that the AI had failed the test.

In conversations like Millás’, a person’s prior beliefs about an artificial intelligence (AI) agent such as ChatGPT shape both the conversation itself and their perception of the tool’s trustworthiness, empathy and effectiveness, according to researchers from the Massachusetts Institute of Technology (MIT) and Arizona State University, who recently published a study in the journal Nature Machine Intelligence. “We have found that artificial intelligence is the viewer’s intelligence. When we describe to users what an AI agent is, it doesn’t just change their mental model; it also changes their behavior. And since the tool responds to the user, when people change their behavior, that also changes the tool’s behavior,” says Pat Pataranutaporn, a graduate student in the Fluid Interfaces group at the MIT Media Lab and a co-author of the study.

“Many people think AI is just an engineering problem, but its success is also a human-factors problem,” says Pattie Maes, an MIT professor and co-author of the study. How we talk about AI, she argues, can have a huge impact on the effectiveness of these systems. “We attribute human forms and qualities to AI, making it seem more human or personal than it really is,” adds Ruby Liu, another co-author.

The study included 310 participants, whom the researchers randomly assigned to three groups, giving each group different background information about AI. Each participant then discussed their mental health with an AI agent for approximately 30 minutes, decided whether they would recommend it to a friend, and rated it. The first group was told that the agent had no intentions in the conversation, the second that the AI had benevolent intentions and cared about their well-being, and the third that it had malicious intentions and would try to trick them.

Half of the participants in each group spoke to an artificial intelligence agent based on the GPT-3 generative language model, a deep learning model that can generate human-like text. The other half talked with an implementation of the ELIZA chatbot, a less sophisticated, rule-based natural language processing program developed at MIT in the 1960s.

The study’s results revealed that users’ predisposition toward the tool was decisive: 88% of the participants who received the positive description believed that the AI was empathetic, and 79% of those who received the neutral description believed that it was neutral. Ángel Delgado, an AI engineer at Paradigma Digital, believes that the positive majority is also the result of using GPT-3, which he considers the first language model to pass the Turing test: “It consists of letting a person interact with the AI [tool] without telling them whether it is AI or not, to see if they can guess. GPT-3 is the first language model that has had such good results that it seems like a human.”

People who were told that the tool was caring tended to talk to it in a more positive way, which made the agent’s responses more positive as well. Ramón López de Mántaras, the director of the Spanish National Research Council’s Artificial Intelligence Research Institute, explains that the more you talk to the tool, the more it learns: “The interlocutor teaches artificial intelligence. You can correct, confirm and qualify its responses,” he adds.

From a fear of the Terminator to a lack of criticism

Negative priming statements (i.e., unfavorable information given to someone just before interacting with the AI agent) had the opposite effect: only 44% of participants who received unflattering information about the tool trusted it. “With the negative statements, instead of priming them to believe something, we were priming them to form their own opinion. If you tell someone to be suspicious of something, they’re likely to be even more suspicious in general,” says Ruby Liu.

The influence of science fiction is a key factor in negative thinking about AI, Pattie Maes explains. “Movies like The Terminator and The Matrix depict scenarios in which AI becomes self-aware and brings about the downfall of humanity. These fictional accounts contribute to the fear that AI could take over and surpass human intelligence, which would pose a threat to our existence.”

According to the study’s findings, prior beliefs about language models can have such a strong impact that they could be used to make an agent seem more capable than it is, leading people to trust it too much or to follow incorrect advice. López de Mántaras puts it bluntly: “The tool you are interacting with is not an intelligent person. People believe that the machine is intelligent and listen to what it says without any critical thinking… We are becoming less and less capable of critical thinking.”

Experts agree that people must be aware of how artificial intelligence works and understand that it is a programmed system. “We should prepare people to be more careful and to understand that AI agents can hallucinate and are biased. How we talk about AI systems will have a major effect on how people respond to them,” says Maes.
