Misleading headlines in mainstream media are more dangerous than outright fake news

A study has revealed that links without context go viral on social networks with the help of malicious actors, elevating the risk of misinformation

A user holds a cell phone with the Facebook application open in Moscow in 2021. Pavel Golovkin (AP / LAPRESSE)
Jordi Pérez Colomé

Headlines such as “A ‘healthy’ doctor died two weeks after getting a COVID-19 vaccine; CDC is investigating why” from the Chicago Tribune or “A surprising number of healthcare workers, including physicians and registered nurses, refuse COVID vaccinations” from Forbes were two of the most viral posts on Facebook in early 2021, and among those that most affected the U.S. vaccination rate against Covid. Outright fake news is more persuasive per view, but its reach was much smaller.

This is the main conclusion of a new article published in the journal Science that analyzes the impact of links about vaccines on Facebook between January and March 2021. Links flagged as false were viewed 8.7 million times, just 0.3% of the 2.7 billion views of vaccine-related content in that period, according to data from the platform. In contrast, headlines that were not flagged as misinformation but implied that vaccines were harmful, many of them from mainstream media, were viewed hundreds of millions of times. The difference in reach is so great that, by comparison, outright fake news had far less impact.

“Our analysis suggests that Facebook fact-checkers identify the most damaging misinformation; in that sense, Facebook was doing a ‘decent’ job,” says Jennifer Allen, a researcher at the Massachusetts Institute of Technology (MIT) and co-author of the paper. “But other stories can go viral on social networks, and malicious actors can use rigorous stories to promote misleading narratives, something that platforms should do a better job of addressing. Media outlets should also be mindful when writing headlines, as their content can be presented out of context,” she adds. A headline like the Chicago Tribune’s, shared in an anti-vaccine group with that loaded context, can be devastating.

This finding shifts the traditional focus on fake news and misinformation, and serves as a reminder that mainstream media also have to watch what they publish, especially in an era when a story can go viral on the strength of its headline alone. “Competition for clicks is a challenge,” Allen says, “but I don’t think that relieves the media of responsibility. Journalists should keep in mind that on social media often only the headlines are read, and stories can be taken out of context. They should strive to avoid possible misinterpretations of their work.”

The authors calculated the real impact of these headlines based on the number of users who saw them. The negative impact of the misleading headlines on people who should have been vaccinated was 46 times greater than that of the more blatant fake news. In a commentary on the article published in Science, Cambridge University researcher Sander van der Linden estimates that these headlines prevented at least 3 million people from being vaccinated, based on the fact that 233 million Americans use Facebook. “It’s a rough estimate,” Allen cautions.

The study estimates that vaccine-skeptical content on Facebook reduced vaccination intention in the U.S. by 2.3 percentage points, though a person’s intention may differ from their final decision. “We assume, based on other research, that vaccine uptake is 60% of vaccination intention, and from that we get the 3 million number. That number is speculative, but it suggests that the potential impact of these headlines could be considerable.”
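
As a rough sanity check on that figure, the short sketch below runs the arithmetic implied by the numbers quoted above; it is a back-of-the-envelope calculation under those stated assumptions, not the paper’s actual model.

```python
# Back-of-the-envelope check of the "at least 3 million" estimate described above.
# Inputs are the figures quoted in the article; the paper's own model is more involved.
us_facebook_users = 233_000_000   # Americans who use Facebook
intention_drop = 0.023            # 2.3-percentage-point drop in vaccination intention
uptake_share = 0.60               # assumed share of intention that translates into uptake

people_not_vaccinated = us_facebook_users * intention_drop * uptake_share
print(f"{people_not_vaccinated:,.0f}")  # ~3,215,400, consistent with "at least 3 million"
```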

The article focuses on the effects of headlines about vaccination. But Van der Linden believes the approach could easily be replicated in other areas, such as politics, and “just depends on continued access to relevant data,” he says.

Female, older and conservative

Amid the debate between fake news with little reach and seemingly serious headlines that go viral, Science has published a second article on a well-known but rarely measured phenomenon: supersharers. These are the small group of users who turn their accounts into machines for retweeting disinformation or biased information. This new research finds that their real impact on public debate is greater than it seems.

The study looked at a panel of more than 664,000 registered voters during the 2020 U.S. presidential election. Of this number, a small group of 2,107 users, representing 0.3%, shared 80% of the fake news. They are the supersharers. That group alone managed to reach 5.2% of registered voters on Twitter. “These findings highlight a vulnerability of social media for democracy, where a small group of people distort the political reality for many,” the authors of the article write.

The research also identified some personal characteristics of this group: they are mostly older women who are conservative. This profile coincides with a Nature study from the summer of 2023, with data from Facebook, which showed that the overwhelming majority of fake news is consumed by conservatives.

The new study is limited to Twitter due to the lack of data from other social networks, according to Nir Grinberg, a researcher at Ben-Gurion University of the Negev (Israel) and one of the co-authors. “I would have liked to have the ability to answer questions about other networks with empirical evidence, but the availability of data from social networking platforms limits this type of research.”

Twitter (today X) users who followed these accounts tended to be more exposed to misinformation, and repeated exposure makes people more prone to believing lies, according to the article. The impact of supersharers is not to be underestimated: if a candidate had wanted the same level of reach, they would have had to spend $20 million. “Supersharers not only found a sizable audience online but were found to be influential members of their networks that provide approximately a quarter of the fake news to their followers,” the article says.

These advances in understanding misinformation open up new ways of thinking about how to limit its reach. “Content moderation is a balance between freedom of expression and potential harm,” Allen says. “But it’s hard for platforms to measure how harmful content can be.” There have been cases, for example, in which platforms have overlooked this other, more harmful type of content that is misleading but does not strictly violate their rules.

“Our methodology allows platforms to first identify content that potentially has a negative impact and then craft policies,” says Allen, who worked at Meta before starting her Ph.D. at MIT. “As a first step, Facebook could prioritize sending content to fact-checkers based on its potential for harm: its persuasiveness multiplied by its potential audience, so that pages with many followers are prioritized first,” she explains.
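
To make that triage rule concrete, the minimal sketch below ranks posts for review by a persuasiveness score multiplied by the size of the audience likely to see them. The post data and field names are hypothetical illustrations, not Meta’s actual systems or the paper’s code.

```python
# Minimal sketch of the triage idea Allen describes: queue content for fact-checking
# in order of estimated harm, defined here as persuasiveness x potential audience.
# All posts, scores, and field names are hypothetical.

posts = [
    {"id": "a", "persuasiveness": 0.8, "follower_count": 2_000_000},   # large page, persuasive claim
    {"id": "b", "persuasiveness": 0.9, "follower_count": 5_000},       # persuasive but tiny reach
    {"id": "c", "persuasiveness": 0.2, "follower_count": 10_000_000},  # huge page, weak claim
]

def estimated_harm(post):
    # Potential harm grows with how convincing the claim is and how many people may see it.
    return post["persuasiveness"] * post["follower_count"]

review_queue = sorted(posts, key=estimated_harm, reverse=True)
for post in review_queue:
    print(post["id"], f"{estimated_harm(post):,.0f}")
```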

As an alternative, Allen also points to measures similar to X’s Community Notes, where users collaboratively fact-check information. “It can be a way to mitigate the impact of damaging stories that pass a fact check but lack relevant context,” Allen says.
