Humans are already repeating words learned from ChatGPT, such as ‘delve’ and ‘meticulous’

An analysis of 300,000 conference talks reveals that the influence of generative artificial intelligence goes beyond the written word and is now impacting what we say

Stills from some of the conference talks used to analyze the growing use of words encouraged by ChatGPT.

Researcher Ezequiel López was recently at an academic conference and was surprised to hear speakers repeatedly use certain words, such as “delve.” Another researcher at the Max Planck Institute for Human Development in Berlin had the same feeling: suddenly, words that had hardly been used before were cropping up constantly in presentations.

Research had already shown that unusual words were increasingly making their way into scientific articles, in sentences and paragraphs generated by ChatGPT and other artificial intelligence tools. But could humans now be unconsciously repeating words popularized by these machines? The two researchers set out to investigate. The first challenge was to gather a sufficient number of recent presentations. They collected around 300,000 videos of academic talks and developed a model to track the frequency of specific words over the past few years: “Our question is whether there could be an effect of cultural adoption and transmission, whether machines are changing our culture, and these changes then spread,” says López.
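In outline, the kind of frequency tracking the team describes can be sketched as a per-year count of target words across transcripts. This is a minimal illustration only: the word list, the toy data, and the function names here are assumptions, not the study’s actual pipeline.

```python
from collections import Counter
import re

# Illustrative list of words the article says surged after 2022.
TARGET_WORDS = {"delve", "meticulous", "realm", "adept"}

def word_frequencies(transcripts_by_year):
    """For each year, compute occurrences of each target word
    per million words spoken across all transcripts."""
    rates = {}
    for year, transcripts in transcripts_by_year.items():
        counts = Counter()
        total = 0
        for text in transcripts:
            # Crude tokenization: lowercase alphabetic runs.
            tokens = re.findall(r"[a-z']+", text.lower())
            total += len(tokens)
            counts.update(t for t in tokens if t in TARGET_WORDS)
        rates[year] = {w: counts[w] / total * 1_000_000 for w in TARGET_WORDS}
    return rates

# Made-up example: "delve" appears only in the post-2022 transcript.
data = {
    2021: ["we examine the results in detail"],
    2023: ["let us delve into the data and delve into the methods"],
}
print(word_frequencies(data)[2023]["delve"])
```

A jump in the per-million rate of a word after a given date is the sort of signal the researchers looked for across the 300,000 talks.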

The answer is yes. In 2022, they identified a turning point, noting a rise in the use of previously rare English words like “delve,” “meticulous,” “realm,” and “adept.” Iyad Rahwan, a professor at the Max Planck Institute and co-author of the study, remarked: “It’s surreal. We have created a machine that can speak, that learned to do so from us, from our culture. And now we are learning from the machine. This is the first time in history that a human technology can teach us things so explicitly.”

It is not that odd for humans to adopt and repeat new words they’ve just learned, especially if they are non-native speakers, who made up a significant portion of the sample in this study. “I don’t think it is a cause for alarm because, in the end, it is democratizing the ability to communicate. If you are Japanese and you are a world leader in your scientific field, but when you speak in English at a conference you sound like an American kindergartner, this creates biases regarding your authority,” says López.

ChatGPT allows these non-native speakers to better capture nuances and incorporate words they didn’t use before. “If you’re not a native English speaker, and you go to the cinema tomorrow and there’s a new word that surprises you, you’re likely to adopt it too, as with ‘wiggle room’ in Oppenheimer, or ‘lockdown’ during the pandemic,” says López. But there is one caveat, this researcher points out: strikingly, the words adopted at these academic conferences are not nouns that help describe something more precisely, but rather instrumental words such as verbs and adjectives.

There are two curious consequences of this adoption. First, since it has become widely recognized in academic circles that these words originated with ChatGPT, they have become tainted, and using them is now often frowned upon: “I am already seeing this in my own lab. Every time someone uses ‘delve,’ everyone instantly catches on and makes fun of them. It has become a taboo word for us,” says Rahwan.

The second consequence may be worse. What if, instead of making us adopt words at random, these machines were able to put more loaded words into our heads? “On the one hand, what we found is fairly harmless. But this shows the enormous power of AI and the few companies that control it. ChatGPT is capable of having simultaneous conversations with a billion people. This gives it considerable power to influence how we see and describe the world,” says Rahwan. A machine like this could determine how people talk about wars like those in Ukraine or the Middle East, or how they describe people of a particular race or apply a biased view to historical events.

At the moment, due to its global adoption, English is the language where it is easiest to detect these changes. But will it also happen in Spanish? “I have wondered. I suppose something similar will happen, but the bulk of science and technology is in English,” says López.

Impact on collective intelligence

Generative AI may have unexpected consequences in many areas beyond language. In another study, published in Nature Human Behaviour, López and his co-authors found that the mass use of AI threatens collective intelligence as we understand it. Collaborative coding sites such as GitHub and Stack Overflow could lose their role if programmers use a bot to generate code and no longer need to consult what colleagues have done before them, or to improve on and comment on that code.

Stills from the conference talks analyzed for their use of words encouraged by ChatGPT and other generative AI.

For Jason Burton, a professor at Copenhagen Business School and co-author of the paper, “Language models don’t mean the end of GitHub or Stack Overflow. But they are already changing how people contribute to and engage with these platforms. If people turn to ChatGPT instead of searching for things on public forums, we’re likely to continue to see a decline in activity on those platforms, because potential contributors will no longer have an audience.”

Programming is just one possible victim of AI. Wikipedia and its writers may become mere reviewers if everything is written by a bot. Even education could be impacted, according to López: “Let’s imagine that, in the current educational system, teachers and students are increasingly relying on these technologies; some to design questions and others to find the answers. At some point we will have to rethink what function these systems should have and what our new, efficient role alongside them should be. Above all, so that education does not end up with students and teachers pretending on both sides, performing a play for eight hours a day.”

These language models are not only a threat to collective intelligence; they are also capable of summarizing, aggregating, or mediating complex processes of collaborative deliberation. But, as Burton points out, caution is needed in these processes to avoid the risk of groupthink: “Even if each individual capacity is enhanced by using an app like ChatGPT, this could still lead to poor results at the collective level. If everyone starts relying on the same app, it could homogenize their perspectives and lead to many people making the same mistakes and overlooking the same things, rather than each person making different mistakes and correcting each other.”

For this reason, the researchers in their study call for reflection and possible political interventions to encourage a more diverse field of language model developers and thus avoid a landscape dominated by a single model.
