‘Right-wing bias’: A macro study confirms that Facebook disinformation is consumed by conservatives

According to an analysis of data from 208 million U.S. users, which Meta allowed a team of academics to access, 97% of news flagged as false is seen by right-wing users. The investigation did not find clear links between social media and political polarization

Donald Trump supporters during the attack on the U.S. Capitol in January 2021. LEV RADIN / ZUMA PRESS / CONTACT
Jordi Pérez Colomé

Facebook is a network dominated by conservative news, and its right-wing users are the ones who overwhelmingly consume information labeled as false. The data confirming these two hypotheses comes from academic research that had unprecedented access to internal Facebook data provided by Meta. The findings are based on the aggregated activity of 208 million U.S. users over several months around the 2020 U.S. elections. The study, led by Spanish researcher Sandra González-Bailón of the University of Pennsylvania, is part of a series of four papers analyzing the impact of Meta’s social networks on political polarization, which were published Thursday in the journals Science and Nature.

“I didn’t expect to find some of the results we found, with such radical patterns,” González-Bailón tells EL PAÍS via videoconference from Philadelphia. “But this is what the data say,” she adds. The article examines how the combination of user behavior and the platform’s algorithm segregates information consumption between progressives and conservatives. Although these two groups exist, they are not symmetrical, as was previously believed: “Audiences who consume political news on Facebook are, in general, right-leaning,” the article says. But the most striking figure is the difference in the reach of news labeled as false by Meta’s fact-checkers (such links account for only 3% of the total shared on Facebook): 97% of those links circulate among conservative users.

“It is true that it is the most controversial article,” David García, a professor at the University of Konstanz (Germany), admits to EL PAÍS; he had access to the embargoed papers to write a brief commentary in Nature. “But it is very important. The evidence we had [before] was not as solid. There was a 2015 study, but it had problems. They got it right, as we all would have wanted to do.”

The impact of the investigation as a whole goes beyond that: “It’s not that much of a surprise. Facebook is more conservative. But what is impressive is that someone was able to verify it from outside Facebook with access to [the company’s] internal data, although the results are not very [unflattering to] Facebook,” he explains, referring primarily to the other three investigations published at the same time. That research analyzes the problems with algorithmic timelines (feeds) on Instagram and Facebook, the risks of virality and sharing posts, and content received from ideologically like-minded people. None of the three found clear results that point to easy solutions or culprits.

A complicated answer

For years, experts, technologists and academics have been trying to understand how social media affects our societies. In little more than a decade, the way we inform ourselves has changed dramatically: that must have consequences, but what are they? Although these articles try to answer that question, it is not easy to create a parallel world to compare and see where we would be today without Facebook, Twitter (X as of this Monday) and YouTube. “These findings can’t tell us what the world would have been like if we hadn’t had social media for the last 10 to 15 years,” Joshua Tucker, a professor at New York University and one of the academic leaders of the project, admitted at a virtual press conference.

“We can’t decouple the algorithmic from the social.”
Sandra González-Bailón, University of Pennsylvania.

“The question of whether social media is destroying democracy is very complicated. It’s a puzzle, and each of these articles is a piece [of it],” says González-Bailón. These four articles are just the first of a total of 16, which will continue to be published in the coming months as additional pieces of that huge puzzle. The project stems from an August 2020 agreement between Meta and two professors, who then selected the rest of the researchers. “I have never been part of a project where the standards of analytical rigor, fact- and code-checking have been so robust and, therefore, a project [that ensures] that quality control and… the results are genuine,” adds González-Bailón. The authors include academics, who are completely independent of Meta, as well as Meta employees. Meta’s internal leader for this investigation is Spanish researcher Pablo Barberá.

What if we removed the algorithm?

The other three articles looked at what would happen on Facebook and Instagram if three features often blamed for political polarization and the creation of information bubbles were changed. The researchers received permission from 20,000 participants to alter their timeline content and then compared them to a control group. The experiments ran for three months, between September and December 2020, around the time of the election in which Joe Biden won the presidency. While the numbers may seem small, the researchers highlighted both the sample and the duration of the experiment as unusual and quite valuable.

The first of these papers measures the impact of replacing Facebook’s and Instagram’s algorithms (which decide the order of what we see on our screen) with a simple chronological order: the last post published is the first we see, and so on; the algorithm that places the most “interesting” items at the top is removed. That is an obvious way to measure whether the famous “algorithm” is confusing us. For example, it looks at whether seeing the most extreme political content more often (because it provokes more interest than moderate content) polarizes us. The result: the change hardly affects polarization or users’ political knowledge.

That doesn’t mean the change has no other consequences. The reduction in algorithmic content causes users to spend less time on the two Meta networks, presumably because the content is more boring or repetitive, prompting them to go to TikTok or YouTube. In addition, users with chronological feeds saw more items from unreliable and political sources.
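In essence, the experiment swapped the platform’s ranking model for a plain sort by date. A minimal illustrative sketch in Python makes the difference concrete (the Post fields and the engagement score are invented for the example; Meta’s actual ranking system is far more complex and not public):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    published: datetime
    predicted_engagement: float  # hypothetical score from a ranking model

def algorithmic_feed(posts: list[Post]) -> list[Post]:
    # Ranked feed: items the model expects to interest the user come first.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Experimental feed: newest post first, with no ranking model at all.
    return sorted(posts, key=lambda p: p.published, reverse=True)
```

The same set of posts reaches the user either way; only the ordering, and therefore what gets seen first (and most), changes.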

Avoiding shared stories

In another article, the authors removed some of the shared content on Facebook. The intent was to reduce the importance of virality. Again, there were no substantial changes, but there were “unexpected consequences” that were difficult to foresee, according to Andrew Guess, the Princeton University professor who led the study: “People become less able to distinguish between things that happened last week and things that didn’t. Why? Most of the news people get about politics on Facebook comes from sharing, and when you remove those posts, they see less virality-prone and potentially misleading content, but they also see less content from trusted sources,” Guess adds. The change decreased users’ awareness of current events without affecting other variables, which does not seem to be a positive change.

“Users have their own initiative and their behavior is not completely determined by algorithms.”
David García, University of Konstanz

The third paper, the only one published in Nature, tried to reduce the presence of content from ideologically like-minded users. Again, the results of the study do not reveal substantial changes, but users whose like-minded content was cut back ended up interacting more with the like-minded content they did see: “Users found other ways to read like-minded content, for example, through groups and channels or by scrolling down in the Facebook feed. This shows that users have their own initiative and that their behavior is not completely determined by algorithms,” García writes. While the researchers rule out what they call “extreme echo chambers,” they did observe that 20% of Facebook users receive 75% of their content from like-minded accounts. Reducing that like-minded content, the authors write, does not cause substantial variations in polarization or ideology.
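Both of these last two interventions amount to filtering the feed before it reaches the user. A toy sketch, purely illustrative (the is_reshare and source_leaning fields and the one-in-three retention rule are invented here, not Meta’s method):

```python
def remove_reshares(posts):
    # Second experiment: drop posts that reach the user via resharing.
    return [p for p in posts if not p["is_reshare"]]

def reduce_like_minded(posts, user_leaning, keep_every=3):
    # Third experiment: let through only every third post from
    # ideologically aligned sources; other posts pass unchanged.
    kept, aligned_seen = [], 0
    for p in posts:
        if p["source_leaning"] == user_leaning:
            aligned_seen += 1
            if aligned_seen % keep_every != 0:
                continue  # suppress most like-minded posts
        kept.append(p)
    return kept

# Example: a user labeled "right" sees fewer posts from right-leaning pages.
feed = [
    {"text": "a", "is_reshare": False, "source_leaning": "right"},
    {"text": "b", "is_reshare": True,  "source_leaning": "left"},
    {"text": "c", "is_reshare": False, "source_leaning": "right"},
    {"text": "d", "is_reshare": False, "source_leaning": "right"},
]
print(reduce_like_minded(remove_reshares(feed), "right"))
# Prints only post "d": the reshare is gone, and two of the
# three remaining like-minded posts are suppressed.
```

As the experiment found, filters like these change what appears in the feed, but they cannot stop users from seeking out the suppressed content elsewhere.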

How do you separate life from social media?

There are several potential problems with these studies: the responses were self-reported, and the period and country they examined may have produced results that cannot be duplicated as-is in other circumstances. The most obvious conclusion? It is difficult to isolate and measure a phenomenon with as many ramifications as political polarization, although the work does show that tinkering with social media does not, by itself, change its impact (positive or negative) on democracy.

Even in the González-Bailón-led paper, which has the strongest evidence and focuses on observing the entire network, the scholars detect that liberals and conservatives consume different information diets. But they don’t know whether that diet is caused by the algorithm or by individuals’ prior opinions. “Our paper’s major contribution is that we used everything that happened on the platform, focusing on political news links, and that’s a very strong point,” González-Bailón says. “But we can’t decouple the algorithmic from the social. Ultimately, algorithms learn from user behavior, and that’s the loop we can’t quite break,” she adds.

Would users be less polarized without the algorithm? Maybe, but that is not certain. González-Bailón’s study has discovered another interesting aspect of Facebook’s information diet: groups and pages are more important than the users you follow. They also found that specific links have more weight than domains: pages and groups create a specific diet that favors their ideology. For example, they may share a lot of traditional media, but they only choose the articles that favor their views or that they want to criticize: “Facebook has created an information ecosystem where groups and pages are particularly efficient machines for creating this kind of buffet [of options] to choose from,” says González-Bailón.

In his response to the articles, García draws an analogy with climate change to illustrate the near-impossible challenge that these articles face: “Imagine a policy that reduces carbon emissions in some cities. Compared to a control group of cities, we are unlikely to find an effect on temperature anomalies, but the absence of an effect would not be evidence that carbon emissions do not cause climate change,” he writes. The same may be true for social media, because these experiments do not rule out the possibility that social media algorithms have contributed to polarization; the experiments “show that there is a limit to the effectiveness of individual solutions when it comes to modifying collective behavior. These limits must be overcome by using coordinated approaches, such as regulation or collective action,” García adds.

The studies’ authors call for regulation so that these types of experiments can be repeated and broadened, rather than depending on the goodwill and interest of companies like Meta (as in this particular case). “A proactive approach is needed to establish these collaborations in the future, so that technology’s effects on political behavior can be investigated without having to spend a decade worrying about it first,” writes García. He adds that the new EU Digital Services Act is a “feasible” framework for such collaborations between the industry and academics.
