
No more free pass: Regulation starts to crack down on social media platforms

The arrest of Telegram’s CEO in France and the closure of X in Brazil are two of the latest signs that times are changing, with networks beginning to be held more accountable

X owner Elon Musk and Telegram owner Pavel Durov. Getty
Manuel G. Pascual

Russian-born tycoon Pavel Durov, founder and CEO of Telegram, was arrested on August 24 outside Paris as soon as he got off his private jet. He is accused of complicity in the dissemination of child pornography on Telegram, which is widely used for criminal activities. Just a week later, a judge ordered the closure of X in Brazil due to the social network’s “repeated failure to comply with court orders.” Its owner, Elon Musk, refuses to block profiles that contribute to the “massive dissemination of Nazi, racist, fascist, hateful and anti-democratic speech.”

These two actions are symptomatic of how times are changing. During the first decade of this century, the world was seduced by social media, so much so that today more than half of the global population — around 4.5 billion people — use these platforms every day. During the second decade, social media companies grew into omnipresent corporate giants. The world also got a glimpse of the dark side of social media, with the Cambridge Analytica scandal serving as the first big wake-up call. And in the third decade, action is being taken to curb social media.

The crackdowns on Telegram and X are part of this latest wave. The arrest of the Russian-born tycoon — beyond its geopolitical ramifications — sends a message to the top executives of technology companies: they too can be held personally responsible for what happens on their platforms. Meanwhile, the closure of X in Brazil shows that governments are no longer hesitating to take action against social media companies. “It is the first example that Latin American countries can decide their own future in the digital economy, open a new regulatory toolbox against technology companies and, without aligning themselves with the United States or China, decide their own path towards technological sovereignty,” says Ekaitz Cancela, a researcher at the Internet Interdisciplinary Institute (IN3) of the Universitat Oberta de Catalunya in Spain.

Meta CEO Mark Zuckerberg before testifying in the U.S. Senate on protecting minors from online sexual exploitation on January 31. Tasos Katopodis (EFE)

Paloma Llaneza, a lawyer specializing in digital law, believes that the trend has clearly shifted: platforms have gone from facing no regulation at all to a regulatory frenzy. According to Llaneza, this shift is “very common in all technological revolutions.” She believes social media shows that allowing a phenomenon to rise before regulating it doesn’t work.

“I think we are seeing a new stage,” agrees Rodrigo Cetina, a law professor at the Barcelona School of Management at Pompeu Fabra University in Spain, and an expert on social networks and the U.S. legal system. “The European Union is more active than ever and the DSA [Digital Services Act] is beginning to be applied. What happened in Brazil is a powerful sign that some countries are not prepared to tolerate everything.”

The response to the big social media platforms has been in the works for some time. The EU has deployed an ambitious legal architecture that began in 2018, when the General Data Protection Regulation (GDPR) came into force. This regulation requires companies to declare what information they are going to use from internet users and for what purpose.

Meanwhile, the DSA — created at the same time as the Digital Markets Act (DMA) — has been fully applicable since February of this year. It establishes specific obligations for digital platforms, such as improving transparency and fighting the dissemination of illegal content, which must be quickly removed to avoid fines. In just half a year, the European Commission has used the DSA to open investigations against X, for breaching rules on illegal content, and against Meta, the parent company of Facebook and Instagram, to assess possible harmful effects on minors. It also forced TikTok to withdraw an application that rewarded users for watching videos.

The latest addition to this regulatory framework will arrive in 2026, when the bulk of the AI Act’s obligations become applicable. This regulation sets out different requirements and obligations for AI applications depending on their level of risk.

In the U.S., most of the backlash against social media platforms’ apparent impunity is being felt in the courts. A series of lawsuits filed by families and educational institutions accuse the major social networks of designing platforms in a way that knowingly harms the mental and physical health of minors. Meanwhile, a privacy law similar to the GDPR came into force in California last year, and work is underway on new laws to ban deepfakes during election periods and to establish safeguards for the future development of AI models.

There have also been attempts to reform Section 230, the U.S. regulation that exempts technology companies from being held responsible for third-party content shared on platforms. But they have not yet been successful. “It is very difficult to bring about changes if, to begin with, each party diagnoses the problem in a very different way from the other. For Republicans, the problem with social media is censorship; for Democrats, it is that they spread disinformation and hate content,” says Cetina.

A global, but uncoordinated, response

When did the backlash against platforms begin? The first significant movements took place about a decade ago. The year 2016 was key for two reasons: the European Union approved the GDPR, which would come into force in 2018, and Donald Trump won the U.S. presidential election against all odds, in a victory that was helped by the spread of fake news supporting the Republican candidate’s campaign.

The call for change intensified due to social media’s own failures, says Carissa Véliz, an expert in applied ethics in technology and a professor at the Centre for Ethics and Humanities at the University of Oxford. “We are where we are, mainly, because of the buildup of bad experiences: from Cambridge Analytica to the damage to teenagers revealed by [Facebook whistleblower] Frances Haugen, and the social polarization caused by social media. Governments are able to exert pressure on the platforms, partly because citizens are upset, but also for national security reasons.” However, these government actions have been uncoordinated, which has diminished the strength of the measures.

Former Facebook employee Frances Haugen testified in the U.S. Senate in October 2021 about her leaks published by 'The Wall Street Journal,' in which she showed official Instagram documents proving that the company's executives were aware of the harmful effects of the social network on teenage users. Pool (Getty Images)

Even Ireland — which typically opposes the EU’s measures to curb big tech — appears to be responding to the public backlash against social media. “There are signs that Ireland’s Data Protection Commission [DPC], which has responsibility for enforcing EU digital law on most U.S. and Chinese tech companies operating in the EU [because their European headquarters are in Ireland], is shifting gears,” says Johnny Ryan, director of the digital rights section of the Irish Council for Civil Liberties. “The DPC recently used an urgent procedure it has never used before to stop X from using its users’ data to train its AI model, and also intervened to stop Meta from doing the same thing.”

Another factor has also contributed to the sudden interest in regulating social media: artificial intelligence. “When social media was born, there was a very interesting legal debate about how to regulate it. Should they be treated as a means of communication or not? If so, they should be held responsible for the content they publish, just like a newspaper. It was decided that they shouldn’t be, and this was reflected in the U.S. Communications Decency Act of 1996,” explains Borja Adsuara, a jurist and consultant specializing in digital law.

But social media evolved and, little by little, algorithms began to play a bigger role in the equation. “When a platform is not neutral, but recommends some content and relegates other content based on certain criteria, then it is acting with editorial powers, and it must be held to the same responsibility as a publisher,” the jurist stresses. This is the same conclusion that many countries have reached.

Technology and geopolitics

There are strong geopolitical overtones to the arrest of Durov, who recently reconciled with Vladimir Putin after the Russian president made him sell his first startup, vKontakte, and leave Russia. It is not the first time that politics and technology have collided. In 2018, Huawei’s vice president and the daughter of its founder, Meng Wanzhou, was arrested in Canada at Washington’s request. She was held for nearly three years after being charged with violating U.S. economic sanctions on Iran. The episode was part of the trade dispute between the United States and China at the time, which led then-president Donald Trump to declare war on Huawei.

Tech CEOs often warn governments against regulation, but rarely follow through on their threats. OpenAI CEO Sam Altman said last year that EU efforts to regulate AI could lead his company to leave the continent. The EU ignored the warning and this year approved the AI Act. OpenAI stayed, and Altman now says he is committed to upholding the law.

Huawei Vice President Meng Wanzhou in October 2021 after leaving court in Vancouver, Canada. Bob Frid (EFE)

A similar situation played out years ago, when the GDPR, which gives European citizens the right to know how and for what purposes companies manage their data, was being drafted. Facebook founder and CEO Mark Zuckerberg met with members of the European Parliament and visited the European Commission. He threatened to leave Europe, considering Brussels’ demands excessive. The GDPR came into force in 2018 and Facebook, now Meta, is still operating on the continent.

A new horizon

Whether uncoordinated or not, government action is having an effect on social media companies. Platforms are now increasingly careful about what they do. “Activists have been calling for changes for a long time and, gradually and in a non-radical way, progress is beginning to be made in this direction,” says Simona Levi, a technopolitical strategist and founder of the Xnet collective. She has just published Democratic Digitalization: Digital Sovereignty for the People, a book that calls on institutions to guarantee that large platforms respect the law.

The EU has put in place a legal framework that is starting to be applied. In the U.S., lawsuits and new laws that are underway will also greatly affect the future of large social media platforms. And, as seen in Brazil, countries are no longer cowering in the face of technology companies that flout the law.

The heavy cogs of the system are starting to turn. Society is responding and wants to curb the dangers of social media platforms, from the dissemination of misinformation and incitement to hate to the addiction and harm caused to young people. But, just as 2016 marked a turning point in this race towards regulation, 2024 could also be a game changer. Almost all of Silicon Valley, with Elon Musk at the helm, has placed its bet on the U.S. presidential election in November. Their man is Trump, and the candidate takes good care of his supporters.
