
The algorithms that could elect the next president

The Obama and Trump campaigns and the massive email hack against Macron show how data science can influence democratic elections

The use of artificial intelligence and data science is becoming more widespread in democratic elections. (Boston Globe via Getty Images)

Franchise is a short story by Isaac Asimov, first published in a science fiction magazine in 1955. It imagines a United States that has converted to an "electronic democracy": the world's most advanced computer, Multivac, selects a single person to answer a battery of questions, then uses the answers to determine the outcome of a vote, obviating the need for an actual election.

While we have not yet reached this disturbing future, the role of artificial intelligence and data science in democratic elections is becoming increasingly important. The election campaigns of Barack Obama and Donald Trump, Denmark’s Synthetic Party, and the massive data theft from the Macron campaign are good examples.

Sentiment analysis

One of the first successful examples of using big data and social network analysis techniques to fine-tune an election bid was Barack Obama’s US presidential campaign in 2012. This campaign and many others that followed used traditional polling methods supplemented with social media analysis.

These analytical techniques offer inexpensive and near real-time methods for measuring voter opinion. Natural language processing (NLP) techniques such as sentiment analysis are often used to analyze messages in tweets, blogs and other online posts, and measure whether the opinions expressed are positive or negative with respect to a particular politician or election message.
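To give a sense of how simple the core idea can be, here is a minimal sketch of lexicon-based sentiment scoring in Python. The word lists and posts are invented for illustration; real campaign analytics rely on far richer models and vocabularies.

    # Minimal lexicon-based sentiment scoring (illustrative only).
    # A post's score is its count of positive words minus negative words.
    POSITIVE = {"great", "strong", "honest", "hope", "win", "support"}
    NEGATIVE = {"corrupt", "weak", "liar", "fear", "lose", "scandal"}

    def sentiment_score(post: str) -> int:
        """Return a crude polarity score: >0 positive, <0 negative."""
        words = post.lower().split()
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    posts = [
        "A strong and honest candidate, I support her",
        "Another corrupt liar, what a scandal",
    ]
    for p in posts:
        print(f"{sentiment_score(p):+d}  {p}")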

The main problem with this approach is sampling bias, since the most active social media users tend to be young and tech-savvy, and are not representative of the population as a whole. This bias limits its ability to accurately predict election results, although the techniques are very useful for studying voting trends and opinions.

The 2016 Trump campaign

While social media sentiment analysis may be disturbing, it’s even more disquieting when used to influence opinion and voting outcomes. One of the most well-known examples is Donald Trump’s 2016 campaign for the US presidency. Big data and psychographic profiling had a lot to do with a victory that traditional polls failed to predict.

The Trump campaign's effort was not a case of mass manipulation but of micro-targeting: individual voters received different messages based on predictions about their susceptibility to various arguments. They often received information that was biased, incomplete and sometimes contradicted other messages from the same candidate. The Trump campaign contracted Cambridge Analytica for this effort, the same company that was sued and forced to close after it was caught harvesting the personal information of millions of Facebook users. Cambridge Analytica's approach was based on psychometric methods developed by Dr. Michal Kosinski, which can build a comprehensive user profile from a small number of social media likes.
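As a rough illustration of the underlying idea, the sketch below treats each user's likes as a binary feature vector and fits a model to predict a personality trait. Everything here is synthetic and hypothetical; Kosinski's actual studies used millions of real Facebook likes and more sophisticated dimensionality-reduction and regression pipelines.

    # Synthetic sketch of like-based psychometric profiling (illustrative only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_users, n_pages = 200, 50

    likes = rng.integers(0, 2, size=(n_users, n_pages))  # user x page "like" matrix
    hidden = rng.normal(size=n_pages)                     # invented trait structure
    trait = (likes @ hidden > 0).astype(int)              # e.g. extravert: yes/no

    model = LogisticRegression(max_iter=1000).fit(likes, trait)
    print("training accuracy:", model.score(likes, trait))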

The problem with this approach is not the technology itself, but how campaigns secretly use it to psychologically manipulate vulnerable voters by appealing directly to their emotions and deliberately disseminating fake news through bots. This happened during Emmanuel Macron's bid for the French presidency in 2017, when his campaign suffered a massive email theft just two days before the election. A large number of bots were then deployed to spread alleged evidence of crimes described in the emails, allegations that later proved false.

Political action and government

Another worrisome thought is the possibility of a government driven by artificial intelligence (AI).

In the run-up to Denmark's 2022 general election, a new political party called the Synthetic Party emerged, led by an AI chatbot named Leader Lars, which sought a seat in the country's parliament. Of course, there are real people behind the chatbot, specifically the MindFuture Foundation. Leader Lars was trained on the manifestos of Denmark's fringe political parties since 1970, with the goal of developing a platform that appeals to the roughly 20% of the country's population that never votes.

While the Synthetic Party may have outlandish ideas like a universal basic income of almost $15,000 a month, it stimulated debate about the potential for AI-driven government. Can a well-trained and resourced AI application really govern people?

We are currently seeing one AI breakthrough after another at a blistering pace, particularly in natural language processing, following the introduction of a new, simple network architecture: the Transformer. The resulting systems are giant artificial neural networks trained to generate text, but they can also be easily adapted to many other tasks. These networks learn the general structure of human language and develop an understanding of the world through what they have "read."

One of the most advanced and impressive examples is ChatGPT, developed by OpenAI. It is a chatbot capable of coherently answering almost any question posed in natural language. It can generate text and perform complicated tasks, such as writing complete computer programs from just a few user instructions.
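For readers curious what querying such a model looks like in practice, here is a hedged sketch using the open-source Hugging Face transformers library with GPT-2, a small, publicly available predecessor of the models discussed here. It is illustrative only; ChatGPT itself is accessed through OpenAI's own service rather than code like this.

    # Text generation with a small, public Transformer model (GPT-2).
    # Requires: pip install transformers torch
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator("Artificial intelligence in elections", max_new_tokens=40)
    print(result[0]["generated_text"])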

Immune to corruption, but opaque

There are several advantages to using AI applications in government. Their ability to process data and knowledge for decision-making is far superior to that of any human. Theoretically, they would also be immune to the influence of corruption and would have no personal interests.

Right now, chatbots can only react to the information that someone feeds them. They cannot really think spontaneously or take initiative. Today's AI systems are better viewed as answer machines, oracles that can respond to "what do you think would happen if..." questions; they should not be thought of as agents that can take action or exercise control.

There are many scientific studies on the potential problems and dangers of this type of intelligence based on large neural networks. A fundamental problem is their lack of transparency – they don’t explain how they arrived at a decision. These systems are like black boxes – something goes in, and something comes out – but we can’t see what’s going on inside the box.

We shouldn’t forget that there are people behind these machines who may consciously or unconsciously introduce certain biases through the learning texts they use to train the systems. Moreover, as many ChatGPT users have learned, AI chatbots can also spit out incorrect information and bad advice.

Recent technological advances offer a glimpse of future AI capabilities that may one day be able to "govern," though for now essential human control remains indispensable. The debate should soon shift from technological questions to ethical and social ones.

This article first appeared in The Conversation. Read the original here.
