Ending fact-checking on social media fuels hate speech and harassment, experts warn
Meta’s decision to reduce content controls amplifies the most damaging effects of its platforms, as users are left without the necessary tools to combat harmful misinformation
The purchase of Twitter by Elon Musk, a close ally of future U.S. president Donald Trump, transformed the platform, which Musk rebranded as X, into a lawless jungle in the name of supposed freedom of expression. A study conducted by the School of Science and Technology at City St George’s, University of London, covering nine countries, found that in just two years X has become the hub of political abuse, where adversaries, dissenters, and moderates are increasingly treated as “enemies.”
Meta’s platforms (Facebook, Instagram, and Threads) are following suit: the company is ending its third-party fact-checking program and easing content moderation. “The consequences of these decisions will be an increase in harassment, hate speech, and other harmful behaviors across platforms with billions of users,” warns Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech. Few organizations defend Meta’s decision.
Mantzarlis, who helped found the International Fact-Checking Network, emphasizes that the shift taken by Mark Zuckerberg’s company is twofold: not only has Meta stopped verifying data to identify falsehoods, shifting that control to users, but it has also opened the door to content that is more likely to fuel hate speech. This was confirmed by Joel Kaplan, Meta’s new director of global affairs: “We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.”
Mantzarlis is highly critical of the move: “In addition to ending the fact-checking program, Zuckerberg has also announced a more lax approach to content moderation, so Meta will no longer proactively seek out potentially harmful content across a broad range of domains.”
Meta has argued that its content moderation and fact-checking program has led to “censorship,” bias, and limitations on freedom of expression — claims strongly denied by the international network of fact-checkers.
The Cornell researcher also rejects these arguments, pointing out that Zuckerberg has eight years of data supporting the anti-misinformation program and the benefits of controlling harmful messages. “However,” he laments, “instead of sharing compelling evidence, he has chosen to imitate Musk and promise freedom of expression for all.”
“The [moderation] program was by no means perfect, and fact-checkers have undoubtedly gotten some percentage of their labels wrong [3.5%, according to Meta’s audits]. But we should be clear that Zuckerberg’s move to get rid of fact-checkers is a political decision, not a policy one,” Mantzarlis says.
This statement refers to Meta’s radical shift following Trump’s election victory and Musk’s appointment as the incoming president’s right-hand man. In fact, X has been one of the first to welcome its competitor into a social media ecosystem where populism has flourished worldwide.
Angie Drobnic Holan, a journalist, writer, and director of the International Fact-Checking Network, agrees with Mantzarlis: “It is regrettable that this decision comes as a result of extreme political pressure from a new administration and its supporters.”
For Drobnic Holan, fact-checkers have been impartial and transparent, so questioning that objectivity, in her view, “comes from those who feel they should be able to exaggerate and lie without refutation or contradiction.”
“This decision [to end the fact-checking program] will hurt social media users who seek accurate and reliable information to make decisions about their daily lives and interactions with friends and family. Fact-checking journalism has never censored or removed posts; it has provided additional information and context to controversial claims and debunked false content and conspiracy theories,” concludes Drobnic Holan.
Tal-Or Cohen Montemayor, founder and director of CyberWell, an organization dedicated to combating hate online and specializing in the fight against antisemitism, is also against the measure. In her view, Meta’s decision represents “the intentional deterioration of best practices for trust and security” in an environment where there is “growing evidence of how hate speech, inflammatory content, and harassment cause harm in the real world.”
“The change [at Meta] signals one thing, very much in line with the trends we’ve seen on X since Musk acquired Twitter, in both the quantity and quality of content: more hate speech, more politicized content, more niche content, and less effective responses from the platforms,” adds Cohen Montemayor.
Cohen Montemayor also rejects Meta’s argument about facilitating the right to express opinions and eliminating alleged censorship. “This is not a victory for freedom of expression. The only way to avoid censorship and data manipulation by any government or corporation is to introduce legal requirements and reforms for big tech companies to modify social media and comply with transparency standards,” she says. “The answer cannot be less responsibility and less investment on the part of the platforms.”
Support for Meta’s decision
Although most social media experts have opposed the new measure, Meta’s decision has garnered some support, particularly from X, its competitor in the social media market, which sees it as aligned with Elon Musk’s ideological stance.
In this regard, the Foundation for Individual Rights and Expression (FIRE) has praised Zuckerberg’s move. “Meta is giving its users what they want: a social media platform that does not suppress political content or rely on top-down fact-checkers. Hopefully, these changes will lead to less arbitrary moderation and greater freedom of expression on Meta’s platforms,” the foundation says.
FIRE advisor Ari Cohn argues that the decision is in line with the First Amendment to the U.S. Constitution, which protects the editorial choices social media companies make about the content on their platforms. “It’s good that they are voluntarily trying to reduce bias and arbitrariness when deciding what content to host, especially when they promise users a culture of free speech,” Cohn argues.
Cycle of hate and outrage
However, various studies suggest that this conception of unmoderated free expression creates a vicious cycle that amplifies the most “outrageous” content while sidelining the most “reliable” information. This bears out the predictions of many social media experts, who foresee an increase in hate speech and disinformation.
One such study, published in Science, warns that social media posts containing misinformation provoke more “moral outrage” than posts with reliable information. That outrage, according to the study, facilitates the spread of fake news because users are “more likely to share them without reading them, reinforcing their moral positions or loyalty to political groups,” explains Killian L. McLoughlin, a researcher in the Department of Psychology at Princeton University.
This is where social networks’ automated content-ranking systems come into play. “Because outrage is associated with increased engagement online, outrage-evoking misinformation may be likely to spread farther in part because of the algorithmic amplification of engaging content,” the researchers write.
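To make the dynamic the researchers describe concrete, here is a minimal toy sketch of an engagement-optimized feed. It is a hypothetical model, not any platform’s actual ranking code: the Post fields, the 0.8/0.2 weights, and the predicted_engagement function are all illustrative assumptions. It shows only that when ranking rewards predicted engagement and outrage drives engagement, the least reliable post can rise to the top.

```python
# Toy model of engagement-based feed ranking (illustrative only; the
# weights and fields below are assumptions, not any platform's code).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    outrage_score: float  # 0.0-1.0: how much moral outrage the post evokes
    reliability: float    # 0.0-1.0: how trustworthy its claims are

def predicted_engagement(post: Post) -> float:
    # Hypothetical engagement model: outrage drives clicks and shares,
    # while reliability barely registers in the signal.
    return 0.8 * post.outrage_score + 0.2 * post.reliability

def rank_feed(posts: list[Post]) -> list[Post]:
    # Order the feed by predicted engagement, as an
    # engagement-optimizing recommender might.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Calm, well-sourced explainer", outrage_score=0.1, reliability=0.9),
    Post("Outrage-bait conspiracy claim", outrage_score=0.9, reliability=0.1),
])
for post in feed:
    print(f"{predicted_engagement(post):.2f}  {post.text}")
# The unreliable, outrage-evoking post scores 0.74 and ranks first; the
# reliable explainer scores 0.26 -- the feedback loop in miniature.
```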