Regardless of the platform or algorithm, it’s humans that make social media toxic
A study published in ‘Nature’ analyzes 500 million messages over three decades to better understand bad manners on the internet
Social media changes over the years, but toxic human behavior persists. A persistent debate in academia today concerns social media’s impact on our lives and democracies, especially whether it has contributed to making public debate more toxic. A new study published in Nature isolates several behaviors to better understand where online toxicity begins and ends. The research analyzes more than 500 million threads, posts and conversations across eight platforms over 34 years.
The central finding is that toxicity is linked to humans themselves and did not emerge specifically as a result of social media: “The study indicates that, despite changes in social media and social norms over time, certain human behaviors persist, including toxicity,” says Walter Quattrociocchi, a professor at Sapienza University of Rome and a co-author of the study, along with other academics from his university, City University and the Alan Turing Institute in London. “This means that toxicity is a natural outcome of online discussions, regardless of the platform.”
The platforms from which the English-language posts under study originated are Facebook, Gab, Reddit, Telegram, Twitter, YouTube, Usenet (a forum created in 1979) and Voat (an American news aggregator). The authors defined toxicity as “a rude, disrespectful or unreasonable comment likely to make someone leave a discussion.”
Toxicity does not drive people away
Another novel aspect of this study is that it contradicts the usual understanding of social media: it finds that toxicity does not drive users away. Leaving is assumed to be a natural human reflex in an environment where users cannot perceive other attitudinal cues, such as gestures or tone of voice. “The study’s findings challenge the common belief that toxicity diminishes a platform’s attractiveness,” says Quattrociocchi. “[This study] shows that user behavior in toxic and non-toxic conversations has nearly identical patterns in terms of engagement, suggesting that the presence of toxicity may not deter participation, as is commonly assumed.”
Academic research on online behavior has struggled to find good data for distinguishing which behavior is inherently human and which is caused by social media design and its notorious algorithms. This paper on toxicity attempts to partly unravel that difference. The study found that toxicity on social media is more a product of human nature than of technology: “Toxicity in online conversations does not necessarily stop or encourage people’s interactions. It is more a reflection of human behavior itself as seen across platforms and contexts,” says Quattrociocchi.
The study also found that polarization and a diversity of opinion may contribute more to hostile online discussions than toxicity itself. Users may prolong a conversation and disrespect a political opponent because of conflicting opinions rather than in response to rude or hostile comments. “We can conclude that polarization tends to reinforce participation on platforms by promoting argumentative interactions between users of different opinions,” Quattrociocchi asserts. “Interactions caused by controversy and debate may have a greater impact on maintaining user activity than toxicity,” he adds.
This finding may help social media platforms approach content moderation differently and better filter toxic content so that such behavior becomes less prevalent online. “Systems could be designed to encourage healthy discussions without falling into toxicity, and content moderation could be sensitive to the complexities of human behavior,” the researcher explains.
Although the study notes that some toxicity is linked to human behavior on social media, that does not mean that all online interactions are doomed to be toxic or that efforts to mitigate such encounters are futile. “The most effective way to reduce online toxicity is to make people aware of our online behavior, and for that, above all, we need cognitive media training,” says Quattrociocchi.