ANALYSIS

How algorithmic recommendations can push internet users into more radical views and opinions

Correcting such biases and providing better information to users of technology platforms would go a long way in promoting better societal outcomes, argues Anjana Susarla, a professor of Responsible AI

Social media company logos. Jaap Arriens

The promise offered by social media was that of enabling better connections between people, while expanding the speed, scale and spread of digital activism.

Before social media, public personalities and organizations could use mass-broadcast channels such as television to convey their message to large audiences. News media operated as gatekeepers, enabling information to be disseminated to a mass audience and using established criteria to decide which stories got priority and the manner in which they were covered. At the same time, we had citizen-centered communication – or peer communication – which was more informal and organic. Social media blurs the boundaries between these two and gives an opinion-making role to well-connected individuals.

Twenty years ago, we did not have the means to either raise awareness or mobilize for causes at the speed and scale enabled by social media, where a hashtag such as #deleteuber can go viral and result in 200,000 Uber accounts being closed in a single day. In the pre-social media era, successful citizen activism (such as that prompted by the Exxon Valdez oil spill) involved protracted negotiations over years between companies and activists. By contrast, in today’s world, a single viral tweet can wipe out millions of dollars in the stock valuation of firms or result in governments changing policies.

Polarization, misinformation and filter bubbles

While such an opinion-making role allows for unfettered civic discourse that may be good for political activism, it also makes individuals more susceptible to misinformation and manipulation.

The algorithms that underlie news feeds of social media platforms are designed for constant interaction and to maximize engagement. Most “Big Tech” platforms operate without the gatekeepers or filters that govern traditional sources of news and information. This, when combined with the vast swathes of data that these companies have, gives them enormous control over the flow of information to individuals.

Studies show that falsehoods diffuse faster than truth on social media. This is often because we find news that triggers emotions more engaging, which makes us more likely to share it, and that sharing is then amplified through algorithmic recommendations. What we see in our social media feeds, including paid advertisements, is matched to our individual likes and our political and religious views. Such personalization can have a host of negative consequences for society, from digital voter suppression and the targeting of minorities with disinformation to discriminatory ad targeting.
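
To make that dynamic concrete, here is a minimal, hypothetical sketch of an engagement-maximizing ranker. The weights, the post attributes and the scoring rule are illustrative assumptions, not any platform's actual algorithm; the point is only that ranking by predicted engagement systematically promotes emotionally charged, identity-matched content.

```python
# A minimal, hypothetical sketch of engagement-based feed ranking.
# The scoring weights and post attributes are illustrative assumptions,
# not any platform's actual algorithm.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    emotional_intensity: float   # 0..1, e.g. from a sentiment model
    match_to_user_views: float   # 0..1, similarity to the user's profile

def engagement_score(post: Post) -> float:
    # Emotionally charged, identity-matched content is assumed to be
    # clicked and shared more, so it is weighted more heavily.
    return 0.6 * post.emotional_intensity + 0.4 * post.match_to_user_views

posts = [
    Post("Measured policy analysis", 0.2, 0.5),
    Post("Outrage-bait headline", 0.9, 0.8),
]

# Ranking by predicted engagement pushes the outrage-bait to the top,
# regardless of accuracy -- the dynamic described above.
feed = sorted(posts, key=engagement_score, reverse=True)
for p in feed:
    print(f"{engagement_score(p):.2f}  {p.text}")
```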

The algorithmic design of Big Tech platforms prioritizes new and micro-targeted content, leading to almost unchecked proliferation of misinformation. This was echoed by Apple CEO Tim Cook, who recently said: “At a moment of rampant disinformation and conspiracy theories juiced by algorithms, we can no longer turn a blind eye to a theory of technology that says all engagement is good engagement – the longer the better – and all with the goal of collecting as much data as possible.”

Examples abound of misinformation circulated on social media, such as engineered voter suppression. The Senate investigation of the 2016 disinformation campaign concluded that “these operatives used targeted advertisements, intentionally falsified news articles, self-generated content, and social media platform tools” to intentionally manipulate the perceptions of millions of Americans.

The dark side of these engagement-driven models is online radicalization and political polarization. While social media provides a sense of identity, purpose and connection, the individuals posting conspiracy theories and spreading misinformation also understand the virality of the medium, in which disturbing content garners more engagement.

We are grappling with coordinated actions on social media that could disrupt the collective workings of society, from financial markets to electoral processes, where meme stocks and meme wars – from #StopTheSteal to r/WallStreetBets – mark “a coup for the ’gram”.

The danger is that such viral phenomena, when combined with algorithmic recommendations and echo chamber effects, end up producing a reinforcing cycle of filter bubbles in which users can be pushed toward more radical views and opinions.
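
That reinforcing cycle can be illustrated with a toy simulation: a recommender that shows more of whatever the user last clicked will, over a handful of rounds, drift a balanced feed toward a single viewpoint. Everything here – the two viewpoints, the learning rate, the update rule – is an illustrative assumption, not a model of any real system.

```python
# Toy simulation of a filter-bubble feedback loop: the recommender
# oversamples whatever the user engaged with last round, so exposure
# drifts toward one viewpoint. All numbers are illustrative assumptions.

import random

random.seed(42)

# Share of two viewpoints, A and B, in the user's feed (starts balanced).
exposure = {"A": 0.5, "B": 0.5}
LEARNING_RATE = 0.2  # how strongly one click shifts future recommendations

for step in range(10):
    # The user clicks in proportion to what they are shown.
    clicked = random.choices(["A", "B"], weights=[exposure["A"], exposure["B"]])[0]
    other = "B" if clicked == "A" else "A"
    # The recommender then shows more of whatever was clicked.
    exposure[clicked] = min(1.0, exposure[clicked] + LEARNING_RATE * exposure[other])
    exposure[other] = 1.0 - exposure[clicked]
    print(f"step {step}: A={exposure['A']:.2f}  B={exposure['B']:.2f}")
```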

Algorithmic awareness and media literacy are key

Correcting algorithmic biases and providing better information to users of technology platforms would itself go a long way in promoting better societal outcomes.

Some of this misinformation could be addressed through a mix of governmental directives and self-regulation by technology companies: better content curation and labeling of misleading information, partnerships between technology companies and news organizations, and a hybrid of AI and crowdsourced misinformation detection. Employing better bias-detection strategies and providing greater transparency about how algorithmic recommendations are generated for users could also address some of these issues.
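
As a rough illustration of the hybrid AI-plus-crowdsourcing idea, the sketch below combines a hypothetical classifier's score with crowdsourced reviewer votes before deciding how to label a post. The thresholds, the 50/50 combination rule and the function name are all assumptions made for the example, not a description of any platform's moderation pipeline.

```python
# A minimal sketch of hybrid misinformation detection: combine a
# (hypothetical) model's misinformation probability with crowdsourced
# fact-check votes before labeling a post. Thresholds and the
# combination rule are illustrative assumptions.

def label_post(model_prob: float, crowd_votes: list[bool]) -> str:
    """model_prob: classifier's probability the post is misleading (0..1).
    crowd_votes: True = flagged as misleading by a crowd reviewer."""
    crowd_share = sum(crowd_votes) / len(crowd_votes) if crowd_votes else 0.0
    combined = 0.5 * model_prob + 0.5 * crowd_share
    if combined >= 0.7:
        return "label as misleading"
    if combined >= 0.4:
        return "send to professional fact-checkers"
    return "no action"

print(label_post(0.85, [True, True, False]))   # -> label as misleading
print(label_post(0.60, [True, True, False]))   # -> send to professional fact-checkers
```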

What is also required is more media literacy, including algorithmic awareness about how personalization and recommendations engineered by Big Tech companies shape our information ecosystem.

Most individuals are not sophisticated enough to comprehend how algorithms impact their information ecosystem. For instance, a Pew Research Center survey in the United States found that adults who mainly got their news from social media knew less about politics and current events. In the era of Covid-19, such misinformation has been dubbed an “infodemic” by the World Economic Forum.

It is important to understand how platforms are exacerbating pre-existing digital divides, creating the potential for active harm to users of search and social media. In my own research I found that, based on how digital platforms respond to search queries, a user with greater health literacy is more likely to discover usable medical advice from a reputable healthcare provider, such as the Mayo Clinic. The same digital platform will steer a less-literate user toward fake cures or misleading medical advice.

Social media has evolved from its initial promise of being a “utopia” of rich democratic online debate to the complexity of filter bubbles and the propagation of hate speech. Big Tech companies wield societal power on an unprecedented scale. Their decisions about what behaviors, words and accounts are allowed govern billions of private interactions, shape public opinion and affect people’s confidence in democratic institutions. It is time to acknowledge that technology platforms can no longer be seen as mere profit-making entities; they bear a responsibility to the public. We need a conversation about how society is impacted by the pervasiveness of algorithms, and we need greater awareness of the algorithmic harms accruing from over-reliance on Big Tech.

Anjana Susarla is Endowed Professor of Responsible AI at the Michigan State University’s Eli Broad College of Business.


