Hate speech soared on Twitter after Elon Musk’s acquisition and its impact is deeper than expected
The number of toxic messages rose by 50% in the months following the purchase of the social media platform, now named X, and they also received 70% more likes
The number of hate messages on Twitter (now X) rose by 50% between the time that Elon Musk bought the social media platform in October 2022 and June 2023, when he blocked researchers' and journalists' access to the network's data. Hate messages also received 70% more likes in that period. By comparison with the months before the purchase, overall activity increased by only 8% and overall likes by 4%, showing that hate content grew far faster than the platform as a whole, according to a paper published this Wednesday by researchers from the University of California, Berkeley, UCLA, and the University of Southern California.
“The increase in hate speech just before Musk bought X persisted until at least May of 2023, with the weekly rate of hate speech being approximately 50% higher than the months preceding his purchase, although this increase cannot be directly attributed to any policy at X,” reads the study. “The increase is seen across multiple dimensions of hate, including racism, homophobia, and transphobia. Moreover, there is a doubling of hate post ‘likes,’ indicating increased engagement with hate posts.”
This data contrasts with Musk’s public statements. The tech magnate said in December 2022 that “hate speech impressions (# of times a tweet was viewed) continue to decline, despite significant user growth!” An average X user would have been able to observe this trend with little effort in recent years, but this research work is the first to give a specific figure on this increase in hate speech on the network.
Despite the figures, the authors do not know the specific reason for this growth in hate speech. “Given Musk’s comments about reducing moderation on the platform, coupled with the fact that he fired many employees of the trust and safety team and dissolved the safety advisory board, I am not surprised by this increase,” says Dan Hickey, a professor at the University of California, Berkeley. “But we cannot say for sure why it increased,” he adds.
Another phenomenon the researchers observed is that bot activity has not decreased and may even have increased. Eliminating bots and fake accounts was one of Musk’s stated priorities and the reason he promoted paid subscriptions. He has not been able to keep his word on this front either.
“While the trends we reported on X are concerning, platforms do not have to have such high levels of hate speech or inauthentic activity,” Hickey says. “It is always the platforms that decide what type of content is acceptable and how they design their recommendation algorithms. They have an opportunity to improve the information environment by promoting content that encourages cooperation rather than division.”
Hate creeps into heads
These changes in X may seem to be linked only to a social media platform and have little impact on our real lives. But according to another new study, this is not the case: the amount of hate we see on social networks and media is more important than it seems because it affects citizens without them being fully aware of it.
How does a user react to so much hate speech about minorities? Does it provoke anxiety or disgust? Does it provoke resistance and a desire to act? Very little, it turns out. “It is an extremely interesting result because it shows that there is nothing significant; the content itself does not provoke a reaction,” says Pablo Madriaza, a Chilean-Canadian professor at the University of Quebec (Canada) who has just published a study that reviews, analyzes and compares dozens of scientific articles on hate speech. “It is as if it were trivialized, normalized” — and the more of it there is, the less sensitive people become to its presence.
But this reaction of bland acceptance is not the most surprising consequence of hate speech in individuals. The most extraordinary thing is that what is seen and read actually has an influence: personal opinions about the insulted minority worsen. “This rhetoric produces changes in people who are exposed to it without the content itself provoking an obvious reaction,” says Madriaza. “People change their attitude towards minorities without considering the content itself to be necessarily negative,” he adds.
These results are scientific confirmation that the old saying “lie and something will stick” is literally true. “It is very surprising and it is sad too. It is not voluntary. I do not decide that I am going to hate minorities,” explains Madriaza, but that is what ends up happening after consuming these types of messages. In these experiments, people exposed to hate content are compared with others who have not seen it. Days or weeks later, the impact of the original messages is measured, and the studies confirm that the exposure has an influence.
Madriaza cannot infer from these studies that such exposure causes broader social changes or shifts in the voting intentions of the individuals studied: “I am desensitized to the rhetoric. I do not necessarily see it as something positive or negative. It happens, you see it and then your opinion of gay people is worse. But linking that to the political sphere or to an election or to the way you will vote, that’s a stretch,” he adds.
Surprising as the finding may seem, it is not unprecedented: it confirms classic studies of the influence of this type of rhetoric in other societies and in media that predate the internet and social media platforms. Madriaza has also observed a way of countering this type of discourse: the so-called counter-discourse. Just like hate speech, empathetic messages have concrete consequences for what people think about racial, sexual or religious minorities.
“In an experiment where a message like ‘you don’t realize the pain you’re causing’ was shown, appealing to people’s empathy, the number of insulting tweets decreased, which suggests a moderating effect,” says Madriaza. “This means that what fuels hate speech also works in the other direction. It’s not unidirectional. That would be the hope, as long as this is led above all not by governments, but by people.”