The pros and cons of saying ‘thank you’ and ‘good morning’ to AI
While some experts see little value in being nice to non-sentient entities, others say the way we interact with artificial intelligence could influence the quality of its responses
Having good manners and being polite requires a certain amount of equanimity and balance. There are times when we are so overflowing with optimism that we don't just greet our neighbor and the newspaper vendor near our house; before closing our laptop, we also thank ChatGPT for having helped us with our daily tasks, and, why not, wish it a good day as well. The idea that one should be polite to a machine is as unusual as the fact that machines are programmed to be nice to users. Of course, it's easier for them, because they never have a bad day. But is there really any point in being courteous to an AI system? And what about when we're straight-up rude to it?
In 1996, researchers Byron Reeves and Clifford Nass developed the concept of “the media equation.” The term suggests that people, sometimes without even realizing it, interact with technological systems like computers and televisions as if they were human beings. Together, the two researchers carried out several experiments, with varying results.
In one, for example, participants worked on a computer and then were asked to evaluate the machine's performance. Interestingly, when the evaluation was carried out on the computer itself, the ratings tended to be more positive, as if the participants did not want to speak badly about the machine to its face. In another experiment, a computer praised people for performing a task well. Participants gave higher marks to the machine that had praised them, even though they were aware that the praise had been automatically generated.
Since these results were published, numerous studies have shown that, on one hand, human beings tend to anthropomorphize machines and, on the other, that when a technological system imitates human qualities, like having good manners, users perceive it as working better. Interesting, to be sure, but the findings do not resolve the debate over whether we should be nice to technology.
Ethics and utility
This is a discussion that initially focused on our interactions with voice assistants like Siri and Alexa (one might ask why they always have a woman's voice and name) and, more recently, has turned to our relationship with advanced language models like ChatGPT, Gemini and Claude. The debate splits into two fundamental areas: ethics and practice. On one hand, it asks whether or not it is appropriate to be polite to a technological system, and whether it makes sense to consider entities like ChatGPT as moral beings. On the other, it asks whether courteous treatment influences how well the system works.
The first area is reminiscent, at least on its surface, of the longstanding ethical discussion surrounding the moral status of animals and how we should interact with them. But there are many biological and cognitive differences between animals and machines. In contrast to technological systems, many animals have nervous systems that allow them to experience pain and pleasure, indicating that they can be positively or negatively affected by the actions of others. In addition, many show signs of having some level of consciousness, which implies a subjective experience of the world.
These beings also can experience emotions that, although they are different from those of humans, reveal an emotional complexity that affects their well-being and behavior. Since machines do not possess these biological and emotional capabilities, they lack the necessary criteria to be considered similar to animals, let alone humans.
Better responses or a waste of time?
Enrique Dans, a professor of innovation and technology at IE Business School in Spain, is not against being nice to machines. But he underlines the importance of knowing that a machine, which has no perceptions, emotions or awareness, cannot understand or value the courtesy or gratitude one expresses toward it. "No one is against being polite to them, but being polite to a machine has little value, because it can't perceive it," he says.
One of the arguments against this opinion is that future generations of AI will reach levels of complexity that could allow them to develop consciousness or even emotions. “Some people have jokingly told me that they prefer to say ‘please’ and ‘thank you’ in case, in the future, it’s necessary to get along well with artificial intelligence. Honestly, that belongs to the realm of science fiction, because right now we are quite far from reaching that point,” says Dans.
The other aspect of the debate is whether behaving politely toward a machine hinders or improves our interactions with it. Dans stresses the importance of understanding that behind every response from a machine there is a complex system of data processing, patterns and algorithms, rather than a human being with emotions and intentions. "To try to treat an algorithm politely is to anthropomorphize it, and anthropomorphizing an algorithm is inappropriate. Machines need clarity, well-defined goals and the imposition of constraints. Expressions such as 'please' and 'thank you' only add superfluous information for the system to process, unnecessarily consuming computing resources," he says.
Julio Gonzalo, director of the Research Group in Natural Language Processing and Information Retrieval at Spain's National University of Distance Education (UNED), says that, in reality, with certain systems users can receive better responses if they are more polite. This is not because the machine processes emotions or feels inclined to offer better service when it is respected. The real explanation is that, when we communicate politely, user messages more closely resemble the samples of polite interactions that the assistant analyzed during its training. Since these samples are often associated with better-quality responses, politeness can indirectly improve the quality of the responses.
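Gonzalo's explanation can be illustrated with a small experiment that anyone with API access can run. The sketch below is hypothetical: it assumes the official openai Python client and an OpenAI-compatible chat model (the model name gpt-4o-mini and the prompts are placeholders, not drawn from the research described here). It simply sends the same request in a terse and a polite formulation so the two answers can be compared side by side.

```python
# A minimal sketch: send the same request twice, once terse and once polite,
# and compare the answers by eye. Assumes the official `openai` Python client
# and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

prompts = {
    "terse": "Summarize the plot of Don Quixote in three sentences.",
    "polite": (
        "Hello! Could you please summarize the plot of Don Quixote "
        "in three sentences? Thank you very much."
    ),
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise so differences come from the prompt
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Setting the temperature to zero reduces sampling noise, so any difference between the two outputs is more likely to come from the phrasing itself than from chance.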
Gonzalo explains that when we use language models like ChatGPT, Gemini or Claude, it's crucial to keep in mind that they are "very sensitive to the formulation of the query, to surreal extremes." Seemingly minor changes to the structure of a query, such as punctuation or the inclusion of certain motivational phrases, can have a dramatic impact on the effectiveness of the response. "Separating with a colon or a space or using more or fewer parentheses in formatting can make the accuracy of the response jump from 8% to 80%," he says.
It has also been shown that adding "take a deep breath and think step by step" greatly improves the accuracy of responses that require reasoning. This happens not because the model "thinks" logically, but because these instructions steer it toward response patterns that, in its training, were associated with greater clarity and detail. Even statements that should have no bearing on the response, such as indicating the time of year ("it's May" or "it's December") or complimenting the model ("you're very smart"), can alter the quality of its answers. "We reached the height of surrealism when it was recently discovered that answers about mathematics improve if the system is asked to express itself as if it were a Star Trek character," Gonzalo says.
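The "step by step" phenomenon is just as easy to probe informally. The sketch below is again hypothetical, assuming the openai Python client with a placeholder model name and made-up arithmetic questions; it mimics the kind of before-and-after comparison Gonzalo describes rather than reproducing any actual study's methodology.

```python
# A minimal sketch of measuring how a prompt suffix changes accuracy on a
# tiny arithmetic set. The questions, expected answers and model name are
# all illustrative placeholders.
from openai import OpenAI

client = OpenAI()

PROBLEMS = [
    ("What is 17 * 24?", "408"),
    ("What is 1000 - 387?", "613"),
    ("What is 45 + 78 + 12?", "135"),
]

SUFFIXES = {
    "plain": "",
    "step-by-step": (
        " Take a deep breath and think step by step."
        " End with only the final number on the last line."
    ),
}

for label, suffix in SUFFIXES.items():
    correct = 0
    for question, expected in PROBLEMS:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": question + suffix}],
            temperature=0,
        )
        text = (reply.choices[0].message.content or "").strip()
        # Crude check: does the expected number appear on the last line?
        last_line = text.splitlines()[-1] if text else ""
        if expected in last_line:
            correct += 1
    print(f"{label}: {correct}/{len(PROBLEMS)} correct")
```

With only three toy questions this proves nothing statistically; the point is simply that the harness holds everything constant except the suffix, which is the same logic behind the prompt-sensitivity findings Gonzalo cites.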