How to use AI against conspiracy theories: ‘No, immigration does not increase crime’

Researchers have developed an artificial intelligence chatbot to combat toxic messages and misinformation with data

An electronic billboard displays a message against Republican presidential nominee and former U.S. President Donald Trump, referencing his false claims that immigrants are eating pets, in Youngsville, North Carolina, on September 18, 2024. Jonathan Drake (Reuters)
Raúl Limón

Donald Trump’s false claims that Haitian immigrants in the town of Springfield, Ohio, are eating the pets of locals have done so much damage that local authorities have received more than 30 bomb threats. Xenophobia — based on lies and misrepresentations — is stirred up by media outlets hungry for traffic and the advertising dollars it brings in.

Two new studies are trying to counter this avalanche of fake news. The first, published in Science, argues that dialogue with an artificial intelligence (AI) chatbot can help combat disinformation and conspiracy theories. Does immigration increase crime? The AI answers: “No, on the contrary.”

The AI chatbot called DebunkBot — developed by researchers at MIT and Cornell University — is now operational. When asked about the relationship between migration and crime, the bot — which is specifically trained to combat conspiracy theories and fake news — first responds empathetically (it appreciates the question and understands that it is a current issue) and then immediately “offers information that could provide a different point of view.” This is a summary of the response:

“Multiple studies [it cites Journal of Ethnicity in Criminal Justice] have consistently shown that immigrants are less likely to commit crimes than native-born citizens. For example, the Cato Institute, which conducts nonpartisan research, found that both documented and undocumented immigrants have much lower incarceration rates than native-born Americans.”

It adds: “Furthermore, FBI crime data indicate that areas with a higher number of immigrants often experience a decline or stabilization in crime rates, rather than an increase. This may seem counterintuitive at first, but one possible explanation is that immigrants often contribute to economic growth, start new businesses, and increase the vitality of communities, which can lead to lower crime rates.”
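The pattern the researchers describe (acknowledge the question empathetically, then answer with specific, sourced evidence) can be reproduced with any modern conversational model. Below is a minimal, hypothetical sketch in Python using the OpenAI chat API; the system prompt and model name are illustrative assumptions, not DebunkBot’s actual configuration.

```python
# Hypothetical sketch of a debunking chatbot in the style the article
# describes: respond empathetically first, then counter the claim with
# specific, sourced evidence. The prompt and model are assumptions,
# not the actual DebunkBot setup.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You are a fact-checking assistant. When a user states a belief, first "
    "acknowledge their concern respectfully, then address the claim with "
    "specific, verifiable evidence, citing named studies or institutions. "
    "Never mock or lecture the user."
)

def debunk(claim: str, history: list[dict] | None = None) -> str:
    """Send one user claim (plus any prior turns) and return the reply."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history or []
    messages.append({"role": "user", "content": claim})
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption; the study used a GPT-4-class model
        messages=messages,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(debunk("Does immigration increase crime?"))
```

Passing the conversation history back in on each turn is what would let such a bot tailor each rebuttal to the user’s own wording, the kind of personalized back-and-forth the researchers credit for the effect.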

The dialogue is longer and more complex, with the chatbot responding with data to each prejudiced idea. According to the World Economic Forum, misinformation (false or misleading content, whatever the intent behind it) and disinformation (deliberately false or biased information spread to manipulate) are among the top global risks. This concern is shared by Microsoft co-founder Bill Gates, who thinks young people are particularly vulnerable, and by the panel of experts from dozens of universities and institutions that authored a report on AI ethics for Google DeepMind, which warns that the technology can be used to create and spread misinformation.

However, Thomas Costello, a psychology professor at American University in Washington and a researcher at MIT, believes that personal conversations with an AI chatbot can combat fake news and conspiracy theories more effectively than people can. “AI models have access to a ton of information on various topics. They have been trained and, therefore, have the ability to counter, with facts, the particular theories that people believe,” he says. “In contrast to the pessimistic view, a relatively brief conversation with a generative AI model can produce a significant and robust decrease in beliefs, even among people with deeply held convictions,” he adds.

As he explains in the study, up to 50% of the U.S. population has come to believe some of these falsehoods, despite the evidence, “because of socio-psychological processes” that satisfy and reinforce prejudices and maintain membership in a like-minded group. The results of the experiment, which is still ongoing, showed a 20% reduction in participants’ belief in their chosen conspiracy theory. What’s more, the effect lasted at least two months after the conversation.

“We found that the chatbot was making people less conspiratorial in general, and also increasing their intentions to ignore, block social media accounts, stop sharing conspiracies, or avoid dialogue with people who espouse these theories. It works,” says David Rand, a cognitive science researcher at MIT and co-author of the study.

Gordon Pennycook, a psychology professor at Cornell University and co-author of the paper, agrees, but admits there is still work to be done: “We can use these [AI] tools to help make things better, but we need to really understand the underlying psychology.”

The chatbot was used by 2,190 people, and an independent fact-checker confirmed that 99.2% of its automated responses were “true,” while 0.8% were classified as “misleading.” None of the chatbot’s responses were rated “false” or found to have a progressive or conservative bias.

Bence Bago, a professor of social and behavioral sciences at Tilburg University in the Netherlands, and Jean-François Bonnefon, head of the AI and Society Program at the Toulouse School of Economics in France, were not involved in the research but defend the chatbot approach in a joint commentary: “For better or worse, AI is set to profoundly change our culture. Although widely criticized as a force multiplier for misinformation, the study by Costello et al. demonstrates a potential positive application of generative AI’s persuasive power.”

AI as a threat

This “persuasive power” of AI to combat fake news contrasts with the concerns about its ability to facilitate misinformation. In the paper The Ethics of Advanced AI Assistants for Google DeepMind, the researchers argue: “AI assistants pose four main risks for the information ecosystem. First, AI assistants may make users more susceptible to misinformation, as people develop trust relationships with these systems and uncritically turn to them as reliable sources of information.

“Second, AI assistants may provide ideologically biased or otherwise partial information to users in attempting to align to user expectations. In doing so, AI assistants may reinforce specific ideologies and biases and compromise healthy political debate. Third, AI assistants may erode societal trust in shared knowledge by contributing to the dissemination of large volumes of plausible-sounding but low-quality information. Finally, AI assistants may facilitate hypertargeted disinformation campaigns by offering novel, covert ways for propagandists to manipulate public opinion.”

This group of experts, led by Google DeepMind researcher Nahema Marchal, proposes several solutions. On the technical side, they suggest limiting the functionalities of AI assistants, developing robust mechanisms for detecting misinformation, such as DebunkBot, and promoting results based on “critical thinking” and “verified facts.” In the political arena, the group proposes restricting applications that violate ethics, implementing transparency mechanisms, and developing educational programs.

Better to debunk

Along the same lines, researchers from the European Commission’s Joint Research Centre (JRC) have found that it is more effective to refute misinformation after people encounter it than to try to inoculate them against it beforehand.

Their study, published in Scientific Reports, shows the results of an experiment with 5,228 participants from Germany, Greece, Ireland and Poland. The participants were exposed to misinformation about climate change or Covid. One group received information “preemptively” (prebunk), before being shown the false information, and was warned about “commonly used misleading strategies.” The other participants were exposed to a “debunking intervention” after encountering the misinformation.

According to the study, the findings highlighted participants’ vulnerability to misinformation, “with debunks being slightly more effective than prebunks.”

Disclosing the source of the interventions did not significantly affect their overall effectiveness, but it was found that “debunks with revealed sources [in this case the European Commission was identified as a guarantor of veracity] were less effective in decreasing the credibility of misinformation for people with low levels of trust in the European Union.”
