The war in Gaza has exposed how X has failed to tackle disinformation since Elon Musk bought it
The changes the businessman has made to verification and content moderation policies since acquiring Twitter have turned the platform into a far less reliable source of information than before.
The renewal of the armed conflict in Gaza has highlighted the inability of X (formerly known as Twitter) to stop the spread of misinformation, which has multiplied in recent days. Fact-checking experts have linked the surge to the arrival of Elon Musk. Over the last year, the company has relaxed the platform’s rules, fired most of the people in charge of fact-checking, and restored accounts that had been suspended for breaking the rules. Moreover, a few months before the conflict, X cut off free academic access to a data tool that researchers used to identify accounts spreading false information. Fact-checkers say they have never seen such an avalanche of falsehoods.
“Whenever there is some type of crisis, whether national or international, there is a spike in misinformation. But in this case we are seeing that there is generally more misinformation and that it goes further,” says Clara Jiménez, co-founder and CEO of Maldita, a Spanish non-profit fact-checking foundation. Jiménez explains that the new X payment model adds to the confusion. Posts by users with a premium subscription — who pay €8 ($8) per month for the blue check, among other services — are promoted by the platform and appear among the first tweets when searching for information about the war. “Twitter Blue accounts have better positions on the wall and in responses, and peddlers of misinformation know this and take advantage of it,” Jiménez complains. Before Musk’s arrival, verification was reserved for the accounts of famous and influential people, including journalists at well-known media outlets with many followers, which lent credibility to the news they shared.
Since the summer, the platform’s moderation system has depended on “community notes,” comments written and evaluated by a group of previously approved volunteers. The notes are published after reaching a certain threshold of useful votes from “people with different points of view,” as X explains, but the company does not clarify what criteria are used to join this community, nor how the posts to be verified are chosen. The platform only publishes notes that have received enough votes from users who usually disagree with each other’s assessments, in order to guarantee an ideological cross-section. Once published, the note can continue to be rated, even by unregistered users who view it, and a note that has already appeared often ends up disappearing later.
In a post on the Community Notes account, the company acknowledges that it has displayed more than 500 notes about the Israel-Hamas conflict. It also claims to have removed newly created accounts affiliated with the Hamas militant group and to have taken action “against tens of thousands of publications for sharing graphic media, violent speech, and hateful behavior.”
Videos and photos of other conflicts
In recent days, however, instead of encountering verified information, X users have witnessed an unprecedented number of images and videos taken out of context that have nothing to do with what is happening in Gaza and Israel. For example, a video purporting to show a Hamas militant firing a shoulder-mounted weapon and downing an Israeli helicopter went viral. However, as BBC journalist Shayan Sardarizadeh pointed out in a thread compiling lies about the conflict, it is a clip from the video game Arma 3. Sardarizadeh asserts that on X “there’s always plenty of misinformation during major events,” but “the deluge of false posts in the last two days, many boosted via Twitter Blue, is something else.”
The videos of Israeli children in cages, supposedly kidnapped by Hamas, are also false. The Maldita team traced the images back to TikTok — they are no longer on that platform, but they are still on X — demonstrating that the video was shared days before Hamas launched its offensive against Israel. Other hoaxes spreading like wildfire are videos of bombings and building collapses from other wars, which are being reused to make it look like they were recorded in recent days.
Elon Musk himself has only made the situation worse. As he has done regularly since buying Twitter, the businessman recommended following news about the Hamas attacks and Israel’s war against the Islamist militia through accounts that have been shown to spread false content. “For following the war in real-time, @WarMonitors and @sentdefender are good,” Musk wrote in a post on Sunday morning in which he invited his 150 million followers to contribute more suggestions.
In May, both accounts spread the lie that there had been an explosion near the White House, and the @WarMonitors account often posts anti-Semitic comments on X, as different users pointed out below Musk’s post. For example, last year @WarMonitors thanked Kanye West in a Twitter thread and claimed that “the overwhelming majority of people in the media and banks are zi0nists.” Musk removed his recommendation shortly afterward, although in the short time it was up it reached more than 11 million views. The same day, Musk attempted to correct the mistake with another tweet: “As always, please try to stay as close to the truth as possible, even for stuff you don’t like.”
The widespread misinformation circulating on X has not gone unnoticed by the European Union, either. In an urgent letter published on X, Internal Market Commissioner Thierry Breton warned that, in the wake of the Hamas attacks against Israel, the social network is being used to spread fake news and illegal content. “We have, from qualified sources, reports about potentially illegal content circulating on your service, despite flags from the relevant authorities,” says the letter from Breton, one of the harshest critics of Musk and of Twitter’s constant non-compliance. Under the businessman’s direction, the company withdrew from the EU’s voluntary code of practice against misinformation, which would oblige it to comply with community standards requiring social media companies to respond to complaints about illegal content within 24 hours.
Musk’s response to the commissioner was swift: “Our policy is that everything is open source and transparent, an approach that I know the EU supports. Please list the violations that you allude to on X, so that the public can see them.” The next day, Breton responded ironically with a tweet announcing his newly launched account on Bluesky, the social network created by Twitter’s former CEO, Jack Dorsey.