Algorithms do widen the divide: Social media feeds shape political polarization

A study shows that the order in which platforms like X display content affects users’ animosity toward other ideological groups

A team of U.S. researchers has shown that the order in which political messages are displayed on social media platforms does affect polarization — one of the most debated issues since the rise of social media and the social divides it has amplified. The phenomenon is equally strong regardless of the user’s political orientation, the academics note in an article published on Thursday in Science.

Social media is an important source of political information. For hundreds of millions of people worldwide, it is even the main channel for political engagement: they receive political content, share it, and express their opinions through these platforms. Given the relevance of social media in this sphere, understanding how the algorithms that operate on these platforms work is crucial — but opacity is the norm in the industry. That makes it extremely difficult to estimate the extent to which the selection of highlighted content shapes users’ political views.

How did the researchers overcome algorithmic opacity to alter the order of posts that social media users see? Tiziano Piccardi from Stanford University and his colleagues developed a browser extension that intercepts and reorders the feed (the timeline of posts) of certain social networks in real time. The tool uses a large language model (LLM) to assign a score to each piece of content, measuring the extent to which it contains “antidemocratic attitudes and partisan animosity” (AAPA). Once scored, the posts were reordered to make that content either more or less visible — without any collaboration from the platform or reliance on its algorithm.
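The score-then-reorder pipeline described above can be sketched in a few lines. This is a minimal illustration, not the study’s actual code: the function and class names are hypothetical, and a crude keyword heuristic stands in for the LLM that rated AAPA in the real experiment.

```python
# Sketch of the reranking idea: score each post for "antidemocratic attitudes
# and partisan animosity" (AAPA), then reorder the feed so high-scoring posts
# sink (low-animosity condition) or rise (high-animosity condition).
# All names are hypothetical; the real study used an LLM as the scorer.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    aapa_score: float = 0.0  # 0 = benign, 1 = high partisan animosity

def score_post(text: str) -> float:
    """Placeholder for the LLM scoring call: a toy keyword heuristic."""
    hostile_terms = {"enemy", "traitor", "destroy"}
    words = [w.strip(".,!?") for w in text.lower().split()]
    hits = sum(1 for w in words if w in hostile_terms)
    return min(1.0, hits / 3)

def rerank(posts: list[Post], downrank_aapa: bool = True) -> list[Post]:
    """Reorder a feed in place of the platform's ranking.

    downrank_aapa=True pushes high-AAPA posts to the bottom (the
    reduced-animosity feed); False simulates the amplified condition.
    """
    for p in posts:
        p.aapa_score = score_post(p.text)
    return sorted(posts, key=lambda p: p.aapa_score, reverse=not downrank_aapa)
```

In the study itself this logic ran inside a browser extension that rewrote the rendered timeline in real time; the sketch only captures the scoring-and-sorting step.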

The experiment involved 1,256 participants, who had all been duly informed. The study focused on X, as it is the social network most used in the U.S. for expressing political opinions, and it was conducted during the weeks leading up to the 2024 presidential election to ensure a high circulation of political messages.

Participants in the experiment were randomly exposed for one week to two types of feeds: one with a lot of polarized content (AAPA) and another with very little. “We measured the effects of these interventions on affective polarization (participants’ feelings toward the political outgroup) and emotional experience (anger, sadness, excitement, or calm) by using both in-feed and post-experiment surveys,” Piccardi and his co-authors explain in the study.

The results were compared with a control group whose feed was not altered. The researchers found that reordering the content “significantly influenced affective polarization,” with no significant differences based on political preferences. “Changes to the feed algorithm caused in-feed but not post-experiment changes to participants’ negative emotions,” the authors note.

The experiment also demonstrates that it would be possible to reduce the level of polarization on social media simply by reordering posts so that those with antidemocratic content are less visible. Michael Bernstein, a computer science professor at Stanford University and co-author of the study, believes the tool could “open ways to create interventions that not only mitigate partisan animosity, but also promote greater social trust.”

Changes to platforms

In recent years, social media platforms have undergone significant changes that affect the dissemination of political content. Content moderation teams, responsible for filtering toxic, illegal, or hateful messages, have been reduced, as in the case of Meta, or eliminated entirely, as X has done. This task has been left to community notes. That leaves an enormous gap through which problematic content can spread, and several studies show that fewer filters increase the amount of hate and harassment circulating on platforms.

Furthermore, the dynamics of social media themselves have changed significantly. Whereas users once saw the most commented-on or most liked posts from their contacts, the algorithm now has total priority: it decides what each user sees and, therefore, what can or cannot go viral. This makes it crucial to measure the algorithm’s influence in shaping or reinforcing political ideas.

“Researchers are facing unprecedented limitations as social media platforms are choosing not to share data. Hence the importance of Piccardi and colleagues presenting a research methodology that does not require explicit collaboration from the platforms,” says Jennifer Allen, a professor in the Department of Technology, Operations, and Statistics at New York University, who was not involved in the study.

Allen also believes the model developed by Piccardi and his team could be replicated for other social networks, and that the experiments could be repeated at different times to test their validity. In her view, the approach by Piccardi’s team “is a form of creative research with a methodology that adapts to the current situation.”
