
European elections: Which parties are trying to influence the vote through Facebook or Instagram?

The threat of misinformation and the spread of fake news is now common in all elections, at least since the 2016 U.S. presidential election. The danger has increased considerably with generative artificial intelligence.

Some of the ads used on Facebook and Instagram during the 2016 United States presidential election. They were sent to users who were expected to be most receptive to them. AP Photo/Jon Elswick
Manuel G. Pascual

The European elections on June 9, in which more than 370 million citizens are eligible to vote, have become fertile ground for disinformation and political manipulation. The EU Agency for Cybersecurity (ENISA) issued a warning about it in October, and there is great concern in the organization about the effect that generative artificial intelligence (AI) may have on the process. This technology is able to produce compelling texts and hyperrealistic videos, which could be used to propagate false information and influence citizens’ votes.

But the spread of hoaxes and biased messages is not the only problem that voters face. There are political parties that use digital advertising tools provided by social media to personalize and segment their message with the aim of influencing the electorate. This is what Cambridge Analytica did in the 2016 U.S. presidential election, in that case using data from 80 million users fraudulently harvested through Facebook.

Audience segmentation, that is, dividing voters into groups that share certain characteristics, is a legal practice widely used in political marketing. Political microtargeting, on the other hand, which analyzes the interests of individuals rather than groups, is not allowed in the EU. Article 9.1 of the General Data Protection Regulation prohibits the processing of personal data that reveals the political opinions of citizens. And that is exactly what the ideological profiles created by microtargeting do. The practice amounts to keeping a kind of political file on individuals, compiled from information available in their browsing history or their reactions on social media.
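The line the article draws can be made concrete with a small sketch. All data, field names, and voter records below are hypothetical, invented for illustration only; the point is that segmentation groups people by shared, non-ideological traits, while microtargeting builds a per-person ideological profile of the kind Article 9.1 of the GDPR prohibits.

```python
# Hypothetical voter records (invented data for illustration).
voters = [
    {"name": "A. Pérez", "age": 34, "region": "Madrid",
     "liked_pages": ["GreenFuture", "BikeCity"]},
    {"name": "B. Costa", "age": 61, "region": "Sevilla",
     "liked_pages": ["LowTaxNow"]},
]

# Audience segmentation (legal): bucket voters by a shared, non-ideological
# trait such as an age band, without inferring anyone's political opinions.
segments = {}
for v in voters:
    band = "under_45" if v["age"] < 45 else "45_plus"
    segments.setdefault(band, []).append(v["region"])

# Political microtargeting (prohibited in the EU): derive an ideological
# profile tied to a named individual from their online behavior --
# the "political file" the article describes.
profile = {v["name"]: v["liked_pages"] for v in voters}

print(segments)  # groups only, no individual opinions
print(profile)   # per-person ideological inference linked to real names
```

The decisive difference is not the math but what is stored: the first structure never links an opinion to a person; the second does exactly that.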

A summary sheet from the Who Targets Me tool, which shows the number of times the user has been exposed to personalized political advertising.

Despite being prohibited, microtargeted political advertising is still a common practice in Europe. The privacy protection group NOYB (None of Your Business), led by Austrian activist Max Schrems, filed a series of complaints last year against several German political parties for having resorted to this technique in the 2021 federal elections.

In Spain, all parties tried to reform the Electoral Regime Law (LOREG) through the Data Protection Law (LOPD, 2018) to allow parties to collect “personal data regarding the opinions of citizens” from the web and social media ahead of the 2019 elections. A group of jurists and associations pressured the Ombudsman to appeal this change to the Constitutional Court, which struck it down.

“That was the biggest victory of my career,” recalls Borja Adsuara, one of the lawyers who put forward the appeal. “We managed to stop some parties that had given themselves permission to use websites and social media to collect the political opinions of citizens linked to their personal data. In other words, matching them to the names and surnames of real people,” he points out.

However, there are parties that continue to rely on this technique, even though it is banned. The digital rights activists’ network Xnet has launched the “Who Targets Me?” campaign in coordination with a coalition of European groups and organizations with the same concerns. Its aim is to analyze how Facebook and Instagram, Meta’s two flagship social networks, exploit user data to build individualized profiles for political purposes.

The campaign pivots around the Who Targets Me tool, a browser extension that allows users to collect, catalog, and display the personalized electoral advertising targeted at them while they browse Facebook. The tool gathers anonymized data from campaign ads and posts on the platform, stores it, and processes it later.

The more users download the extension, the more representative the data that analysts can extract from it will be. The objective is to find out which parties resort to microtargeting and at what points during the campaign. Xnet will prepare a report with this data, which it will publish once the electoral period ends.
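The kind of aggregation such a report relies on is simple to picture. The sketch below is a hypothetical illustration, not Who Targets Me’s actual pipeline: each record stands for one anonymized ad impression (no user identity, only the advertiser, the date, and whether targeting criteria were attached), and the counts answer the campaign’s two questions: which parties target, and when.

```python
from collections import defaultdict

# Hypothetical anonymized impression records (invented party names and dates).
impressions = [
    {"party": "Party A", "date": "2024-05-20", "targeted": True},
    {"party": "Party A", "date": "2024-06-01", "targeted": True},
    {"party": "Party B", "date": "2024-06-01", "targeted": False},
]

# Count only targeted impressions, per party and per campaign day.
by_party = defaultdict(int)
by_day = defaultdict(int)
for imp in impressions:
    if imp["targeted"]:
        by_party[imp["party"]] += 1
        by_day[imp["date"]] += 1

print(dict(by_party))  # which parties resort to targeting
print(dict(by_day))    # at what points in the campaign
```

Because the records carry no user identity, the aggregate can reveal a party’s targeting behavior without itself building the individual profiles the GDPR prohibits.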

A person holds a phone with the Instagram app on the screen. Unsplash

Experts and legislators agree that microtargeting is a practice that threatens the proper functioning of democracy. These techniques, which use digital data analysis to serve users information specially tailored to their profile, risk unduly influencing voters. “After influencers, political parties are the second-largest group of clients in the information manipulation industry. They buy bots, user profiles, etc.,” explains Simona Levi, founder and coordinator of Xnet. “The parties’ microtargeting strategies seek to manipulate users psychologically. They are based on sending us the information we want to see, which creates information bubbles. Telling us what we want to hear, and not what they think, is not convincing, it is manipulating.”

“Any data about a person’s political opinions is strictly protected by the [EU General Data Protection Regulation],” says Felix Mikolasch, a privacy lawyer at NOYB. “Not only is that data extremely sensitive, but it also allows for large-scale manipulation of voters, as Cambridge Analytica has demonstrated,” he notes.

Disinformation and manipulation in the AI era

Two weeks ago, the European Commission asked X, TikTok, Facebook, and other large platforms to take steps to stop the circulation of suspicious content that seeks to influence voters. Fearing a barrage of interference and disinformation, Brussels has published a series of guidelines for platforms with more than 45 million active users in the EU, which is aimed at combating harmful AI-powered content and misleading political advertising. Google, Meta, and TikTok have set up specially focused teams to combat misinformation around the elections.

In Europe, there are 24 official languages to monitor, and mastery of so many languages is not a common feature among content moderators. Hence, the European Commission has a special interest in strengthening this area. According to a report filed by X and picked up by Euronews, the social network has just one content moderator each for Bulgarian, Croatian, Dutch, Portuguese, Latvian, and Polish in its global team of 2,294 people. No one covers 17 of the EU’s official languages, including Greek, Hungarian, Romanian, and Swedish: everything in those languages is entrusted to AI.

The threat of misinformation and the spread of fake news is now common in all elections, at least since the 2016 presidential election that brought Donald Trump to the White House. The danger has increased considerably with generative AI. There are now particular fears that deepfakes made by AI could have a direct influence on the votes of millions of citizens. This technology allows bad-faith actors to generate videos in which any politician can appear in any situation, saying anything.

A recent Microsoft report warns that China will try to influence the U.S. presidential elections in November, as well as the South Korean and Indian elections, with content generated using AI. The technology company expects that several cyber groups associated with Beijing and Pyongyang are already working on it, as they did in Taiwan. “Although the impact of this content remains limited, China’s growing experimentation with memes, videos, and audio will continue, and may prove effective in the future,” the study concludes.

“Critically, confidence in the EU electoral process will depend on our ability to rely on secure cyber infrastructure, as well as the integrity and availability of information. It is up to us to ensure that we take the necessary steps to achieve this sensitive but essential objective for our democracies,” said Juhan Lepassaar, ENISA’s executive director.
