‘Beating,’ ‘shit,’ ‘machete’: How the tide of hate on social media soared during the Torre Pacheco crisis amid the passivity of platform owners
A monitoring system run by the Spanish Ministry of Inclusion found a 1,500% increase in racist content targeting North Africans after an unprovoked assault on an elderly resident
On any given day in 2025, around 2,000 hate messages are posted on social media in Spain. On July 12, that number jumped to 33,000, an increase of roughly 1,500%. That Saturday marked the peak of the rumors, hoaxes, and hate speech that fueled the riots in Torre Pacheco (Murcia). It is further proof of the connection between the digital world and what happens, both simultaneously and afterward, on the streets.
The bulk of this spike in hateful posts was concentrated in a single day, which accounted for nearly 30% of the messages detected throughout the week by the Spanish Observatory on Racism and Xenophobia of the Ministry of Inclusion (Oberaxe). Most were directed at people of North African origin and featured the words “beating,” “shit,” and “machete.” These posts also focused on public safety and sought to capture the attention of the average citizen.
Despite this dramatic spike in hate speech, the companies that own these platforms took no additional action to remove such content; their activity remained routine. Since July 11, X, Facebook, Instagram, TikTok, and YouTube have removed only a handful of messages from their users, even fewer than in previous days. For context, throughout 2024 the Spanish government notified social media platforms of 2,870 pieces of content considered hateful that could be criminal or that violated the networks’ rules of conduct. The companies removed only 35%, and just 4% of reported hate messages were deleted within 24 hours.
The social network X received the most notifications (26% of the total) and yet removed the least content (14%). According to the Observatory, the difference in the volume of notifications is due to “the varying degree of difficulty in identifying content on each social network.” TikTok was the network that most complied with removal requests, at 69%. Oberaxe’s data clearly show that while hate messages skyrocketed, the number of removals remained stable.
Telegram, the controversial messaging platform, was one of the main sources of hate speech during the Torre Pacheco crisis, but it is not on the list because it cannot be monitored like an open network. There are open groups for dissemination to the general public, but also private groups where strategies are hatched. Telegram requires a mobile number to gain access, and Oberaxe cannot rely on the same methods as the security forces, according to Ministry sources.
The Ministry of Inclusion denounced this lack of response from the social networks in a report last week and, to address the problem, has created a monitoring group with the companies, following a meeting held last Wednesday in Madrid. “This collaboration is unprecedented,” said Minister of Inclusion Elma Sáiz. “The government will not turn a deaf ear, because what happens on social networks translates into reality, and we have seen this in Torre Pacheco.” Telegram will be invited to these meetings as an interested platform, according to government sources.
The lack of response from some of the companies responsible for moderating content was heavily criticized in the report: “This circumstance makes it easier for messages that dehumanize, stigmatize, or incite violence to remain visible, particularly affecting different target groups such as people of North African origin, and thus contributing to the normalization of online hate speech.”
Oberaxe has been analyzing hate speech on social media since 2020, but in October 2024, it activated a monitor called Faro, thanks to the transfer of a tool that the Spanish soccer league had been using for years. The system was adapted by the company Séntisis Intelligence. Monitoring is automated, but the Observatory has a team of eight people who observe and analyze the content.
One of the tool’s challenges is examining messages that include more than just text. The X network is where hateful content is most easily found because the Faro system captures text much better than video or audio: message collection “is primarily based on textual analysis, although work is underway to expand it to image, video, and audio-based posts,” says a report on the system.
Meta and Google respond
Some of the social media and communications companies mentioned argue against generalizing about every platform’s response, urging instead that the figures be analyzed with caution and that a distinction be drawn between messages these companies consider acceptable within the framework of freedom of expression, “even if they don’t coincide with a particular way of thinking and could be considered questionable,” and communications that violate the law, which can be acted upon.
In this regard, the companies point out that those with hate speech prevention policies have their own protocols for action, including immediate ones. However, they warn that what a particular institution considers objectionable is one thing, and how each platform views and resolves the messages in question is another. A warning from an outside entity, they note, does not automatically amount to a report of a policy violation. “A report is not the same as a complaint,” explains a spokesperson for Meta, Facebook’s owner.
The company did agree to comment directly on Oberaxe’s findings: “As our Community Standards state, we have strict policies against hateful conduct, incitement of violence, and harassment.” Meta adds: “We always take enforcement of our policies very seriously, as we have done during the recent incidents [in Torre Pacheco].”
Google also responded to the mentions of YouTube, defending its actions both during and after the events. According to the company, its intervention regarding hate speech was proportionate and immediate.
“YouTube has strict policies against hate speech, and we rigorously enforce them. However, content that does not violate these guidelines will remain on the platform. We take our responsibility in this area very seriously, balancing freedom of expression with robust safety measures,” the platform responded to this newspaper, echoing Meta’s statement.
Regarding the incidents analyzed by Oberaxe, Google responds: “After reviewing [the content], we have found that the majority of the content (videos and comments) reported by Oberaxe does not violate YouTube’s hate speech guidelines. The majority of the reported content consisted of comments that did not violate our guidelines.”
The company questions the official report’s failure to highlight content that YouTube did remove “proactively,” and notes that it is working with the government to clarify reporting mechanisms and explain the company’s policies. Google defends its protocols: “YouTube’s policies make clear what is allowed on our platform and ensure community protection from harmful content while balancing freedom of expression. We developed these clearly defined policies in collaboration with outside experts and intentionally designed them to prevent serious real-world harm.”
Regarding hate speech, YouTube states that it is prohibited and that this guideline, updated in 2019 and endorsed by a 2020 report from the Institute for Strategic Dialogue, is rigorously enforced (in the first quarter of 2025, the platform removed more than 192,000 videos for violating its hate speech policies) and applies sanctions to both audiovisual content and comments.
X, the platform most implicated in the Torre Pacheco incidents along with Telegram, has closed its official press channels, so it was not possible to obtain a comment.