Kicking toxic people off social media reduces hate speech on the internet
A Facebook study shows that deleting just 100 accounts run by the leaders of hate communities had a positive impact on their audience
Controlling hate speech on the internet is one of the great challenges of the information age. Everyone agrees that it matters, but how effective are the efforts to contain it? Some platforms have chosen to remove the individual accounts that disseminate toxic content. An internal Facebook study, which analyzed interactions among 26,000 users, shows that excluding the leaders of extremist communities is an effective way to curb hate speech on social media, particularly over the long term. Removing just 100 accounts had a noticeable impact, because it took the microphone away from proponents of hate speech and ultimately improved the broader social media environment.
Earlier studies had suggested that deleting harmful accounts on platforms like Twitter, Reddit and Telegram helped reduce unwanted activity, including overall levels of hate speech. But a cause-and-effect relationship was only recently demonstrated by researchers at Meta (Facebook’s parent company) in a study published in the Proceedings of the National Academy of Sciences (PNAS), a peer-reviewed journal.
Daniel Robert Thomas and Laila A. Wahedi examined how the removal of the most active representatives of six Facebook communities affected their audiences. The Meta researchers wanted to measure how much the audience continued to view, post, and share harmful content after the instigators were removed. They found that, on average, “the network disruptions reduced the consumption and production of hateful content, along with engagement within the network among audience members.”
After the accounts were deleted, users saw 10% less hateful content on average. Since they consumed around five toxic posts a day, that works out to roughly half a post less per day, or one fewer every two days. Furthermore, users who stopped interacting with members of the toxic communities were then shown other content, groups, or communities that were not explicitly linked to violent behavior. However, Facebook’s privacy guidelines prevented the researchers from tracking specific user accounts throughout the study.
Organizations that propagate hate may retain a loyal audience for a while, but the expulsion of their leaders drives some viewers away. Meanwhile, audience members who are only loosely attached to those leaders become even less likely to engage with the content afterward. This is a positive finding, since this peripheral group is the one most susceptible to the influence of malicious communities. “The results suggest that strategies of targeted removals, such as leadership removal and network degradation efforts, can reduce the ability of hate organizations to successfully operate online,” the study concludes.
But there is no silver bullet that can kill this particular werewolf. People who are kicked off a platform can easily create new accounts and build new networks. They can also migrate to other platforms. Additionally, the authors suggest that other toxic organizations could take over and attract sympathizers of the deleted accounts. To increase the effectiveness of the deletion strategy, the authors propose simultaneous removal of multiple accounts, as this hinders an organization’s ability to find its members and regroup.
Hate speech or toxic speech?
But if account deletion decisions are left to the platforms, will they really want to make them? Sílvia Majó-Vázquez, a research associate at the Reuters Institute for the Study of Journalism at the University of Oxford (U.K.) and a professor at Vrije Universiteit Amsterdam, said that content moderation on social networks must “be done by seeking a balance between freedom of expression and the preservation of other rights,” which makes it essential to distinguish between hate speech, toxic speech and incivility.
Majó-Vázquez explains that incivility, such as disrespectful or sarcastic comments, is the mildest form of negative language. But when it becomes more extreme and “people are chased away from participating in a conversation,” it turns into toxic speech, which can in turn become violent. “From a democratic perspective, this is very harmful because it discourages the democratic ideal of public debate,” she said.
To preserve freedom of expression on social media platforms, decisions to suspend or delete accounts must be made carefully. According to Majó-Vázquez, the suspension process must take these conceptual distinctions into account and rely on manual review mechanisms that adequately balance the right to freedom of expression against the preservation of other fundamental rights. She advises applying the same exercise to political figures. Automated mechanisms for deleting messages and suspending accounts must be continuously scrutinized, giving priority to expert evaluation of messages, along the lines of the external advisory boards some platforms have already established.
According to a recent Reuters Institute study conducted in seven countries, the correlation between toxicity and engagement is not always direct and varies with the topic and severity of the content. The study analyzed Twitter data from the pandemic and found that the most toxic tweets were often unpopular with audiences. “In fact, we see that the most toxic tweets lose popularity and messages with low levels of toxicity increase in popularity,” said Majó-Vázquez. The study could not conclusively determine whether this was because audiences rejected toxic content or because of the platform’s moderation techniques. “We can’t answer this question with the data from our study, but this result challenges the premise that toxicity is always the most popular online currency,” she said.