Fake news: The most useful strategy for combating it is to correct before censoring

A study reveals that, in the battle against disinformation, proving the falsehood of a news item is more effective than eliminating it. X has recently incorporated that approach for tweets

Elon Musk at a symposium on antisemitism, organized by the European Jewish Association, in Krakow, Poland, on January 22, 2024. (NurPhoto via Getty Images)

Elon Musk has had a good idea, according to a study published in the journal Information Systems Research on February 28. Community Notes on X (formerly Twitter) are designed to combat disinformation and have proven to be more effective than censorship in preventing the spread of fake news, according to the study’s findings. That’s a point in favor of the mogul, who, since he acquired the social network, has been criticized for his approach to handling fake news, including making it difficult to evaluate his policies by limiting access to the platform’s data.

The analysis is based on 1,468 news articles spreading false information about health-related topics. The main conclusion is that the implementation of X’s policy, first launched in 2021 under the name Birdwatch, succeeds in reducing the spread of such fake news. Posts that include links to misinformation articles are less likely to be retweeted, cited or commented on. This effect is most noticeable in that there are fewer instances of real people sharing links to dubious information. However, it does not affect accounts managed by bots. In 2022, an academic paper from Purdue University analyzed this system and reached a similar conclusion, adding that this measure could reduce the number of posts and perhaps total user activity on the platform.

Community Notes are a mechanism through which users can provide additional context, point out errors, or highlight verified information that contradicts or explains the content of a post. Others on the platform can rate these notes in terms of their usefulness and accuracy. This is done to ensure that the notes added are of high quality and reliable. Users vote on whether the note is useful or not, and these votes determine the visibility of the correction, which appears directly below the tweet. According to the company, this mechanism seeks to foster an environment of transparency and collaboration where the community plays an active role in content moderation.
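The voting mechanism described above can be illustrated with a brief sketch. Note that X’s actual system uses a more sophisticated “bridging” algorithm based on matrix factorization over rater viewpoints; the class, names, and thresholds below are hypothetical simplifications meant only to show the basic idea that aggregated user ratings decide whether a correction becomes visible under a post.

```python
# Simplified, hypothetical illustration of a community-rating mechanism
# in the spirit of X's Community Notes. The real system weighs raters'
# viewpoints via matrix factorization; this sketch only shows the core
# idea: enough ratings plus a high enough "helpful" share => note shown.

from dataclasses import dataclass, field

@dataclass
class Note:
    text: str
    ratings: list = field(default_factory=list)  # True = "helpful", False = "not helpful"

    def rate(self, helpful: bool) -> None:
        self.ratings.append(helpful)

    def is_visible(self, min_ratings: int = 5, threshold: float = 0.7) -> bool:
        # A note appears below the post only after a minimum number of
        # users have rated it and the share of "helpful" votes clears
        # the threshold (both values here are arbitrary assumptions).
        if len(self.ratings) < min_ratings:
            return False
        helpful_share = sum(self.ratings) / len(self.ratings)
        return helpful_share >= threshold

note = Note("This claim contradicts verified health guidance; see linked sources.")
for vote in [True, True, True, False, True]:
    note.rate(vote)
print(note.is_visible())  # True: 5 ratings, 80% rated helpful
```

In this toy model, a note with too few ratings, or one the community mostly rejects, simply never surfaces, which mirrors the article’s point that visibility is earned through community consensus rather than imposed by a moderator.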

A late-arriving improvement

However, several fact-checking experts have pointed out that Elon Musk’s changes to the platform’s security and content moderation policies after he bought Twitter have made it considerably less reliable than before. Raúl Magallón, the author of Unfaking News: How to Combat Disinformation, believes that these notes are a good move, but he doubts that they will have a significant impact on the social network’s current situation. “When you take five steps back and one step forward, I don’t know how much you can talk about an improvement,” he observes.

Several studies support the experts’ view, although they note that it is now more difficult to draw conclusions about the proliferation of fake news than it was on the old Twitter. According to a report prepared by Health Feedback, the X accounts of disinformation super-spreaders have increased their interactions by 42% since the Tesla owner purchased the social network, while those of highly credible accounts have dropped by 6%. In the week after Musk acquired the company, accounts deemed untrustworthy experienced a 57% increase in interaction levels, measured in “likes” and retweets, according to a study by NewsGuard.

Myriam Redondo, the author of the book Digital Fact-Checking for Journalists, sees this tool as a positive, provided that X users know that this policy is in place. “They should also be offered this option as a ‘route’ that they can take or not take, that is, ultimately they are aware of what is happening when they navigate it,” she says. In her opinion, the best course of action would be to give a nudge in those cases where disinformation affects essential matters such as health or public safety and “when it reaches a level that the platform has previously established as alarming.”

For example, if a social media platform identifies a news story that has been disproven or is unfounded, instead of removing or censoring it, it could “redirect” it by displaying a message or link to verified articles or reliable sources that counter or clarify the information presented. “It is essential that platforms clearly state and communicate their criteria for determining what is considered alarming misinformation, ensuring that these measures do not appear to have a bias or hidden agenda,” Redondo adds.

A 2018 MIT study concluded that fake news spread significantly faster on Twitter than real news. The analysis is notable for being the largest longitudinal study ever conducted on the spread of fake news online. It used data from Twitter, spanning from 2006 to 2017, involving approximately 126,000 stories tweeted by about 3 million people, more than 4.5 million times. According to the research findings, fake stories were 70% more likely to be retweeted than true ones. The researchers used information from six independent fact-checking organizations to classify news stories as true or false.

Distrust of AI

In January 2024, the World Economic Forum’s Global Risks Report 2024 identified misinformation as one of the main risks in the current and future global context. It expressed particular concern about artificial intelligence-generated fake news, which has the potential to intensify social polarization and deteriorate public discourse.

In recent years, this phenomenon has raised particular alarm around elections in a number of countries. The Economic Forum report emphasizes the importance of implementing digital literacy campaigns that equip people with the necessary tools to discern and dismiss misinformation and incorrect information. It also highlights the need for collective and coordinated action both locally and internationally to counter misinformation. According to the document, collaboration among different sectors, including the public and private spheres, is essential for developing effective strategies to mitigate the impact of misinformation on society.

Along the same lines, Jesús Miguel Flores Vivar, professor of journalism at the Complutense University of Madrid, Spain, emphasizes the importance of media literacy. “It is crucial that people learn to differentiate false news from truthful information as soon as possible.” How can that be accomplished? Flores Vivar suggests creating an independent body, distinct from platforms and media outlets, that awards quality seals to reliable news sources.

“People should be informed that these types of quality ratings exist. A good example is the Trust Project, which, through its Trust Indicators, offers a method to evaluate the integrity and transparency of journalistic organizations, thus promoting a more truthful and trustworthy information environment.” Flores Vivar also points out the increasing trend toward using artificial intelligence that, through algorithms, facilitates the design and development of bots and platforms dedicated to combatting information toxicity.

The three experts consulted agree that the current performance of digital platforms in the fight against disinformation is insufficient. Redondo believes that “much of what they do in this area is aimed at improving their image, but not at solving the real problem, since doing the latter would imply too great a reduction in their income.”

For his part, Magallón is blunt in his assessment of these companies and gives them a failing grade: “Although each social network presents a different scenario, the measures implemented so far are clearly insufficient.” The expert believes that misinformation has taken root permanently. “It has become an essential component of international geostrategy, affecting critical areas such as climate change. The truth is that we are considerably behind in media literacy, which should be a fundamental component of secondary education.” The good news is that the solution is in our hands.


