
What are deepfakes, their risks and how to spot them

This type of synthetic media can spread disinformation that could affect politics worldwide, and that is not its only risk

Images created using artificial intelligence, posted on X in March 2023, show a fictitious skirmish between Donald Trump and New York City police officers. J. David Ake (AP)
Alonso Martínez

Ever since the term "deepfake" appeared online in 2017, it has grown in popularity because of the innovative way the technology creates artificial videos and the dangers it poses. More recently, the term entered the mainstream after fake nude photographs of American singer Taylor Swift proliferated on X (formerly known as Twitter), which led to calls in Congress for new legislation.

These AI-generated images of real people, which appear authentic, have garnered significant attention in light of Swift's targeting. Some states have already enacted laws targeting deepfakes, while others are considering measures to combat their proliferation. Efforts include deepfake detection algorithms and embedding codes in content to identify misuse. Model legislation proposed by the American Legislative Exchange Council focuses on criminalizing the possession and distribution of deepfakes depicting minors and on allowing victims to sue over the nonconsensual distribution of sexual content.

However, ensuring effective enforcement and navigating free speech concerns remain significant challenges. Federal legislation has also been introduced to provide individuals with property rights over their likeness and voice, allowing them to sue for misleading deepfakes. States such as Indiana and Missouri are pushing for legislation criminalizing the creation and distribution of sexually explicit deepfakes without consent.

But deepfake pornography is just the tip of the iceberg when it comes to the risks this technology poses. Deepfakes have several potential uses that could cause different kinds of harm, such as fake news, hoaxes, financial fraud, and other forms of abusive pornography, like revenge porn and child sexual abuse material.

What are deepfakes?

Deepfakes are videos, photos or audio recordings of real-life people that seem authentic but have been manipulated with artificial intelligence, according to the U.S. Government Accountability Office (GAO). The name comes from the type of machine learning used to generate this kind of media: deep learning.

The GAO says deepfakes are tools that can be used for exploitation and disinformation. They could influence elections and cause damage to public and private figures, “but so far have mainly been used for non-consensual pornography”, as was the case with Taylor Swift.

How do deepfakes work?

Deepfakes use advanced AI techniques, such as autoencoders and generative adversarial networks (GANs), to create realistic synthetic media. Both are examples of deep learning, in which a model takes in certain types of data and learns to produce new media that resembles its examples.

An autoencoder is an artificial neural network (designed to loosely replicate how the human brain learns information) trained to recreate its input from a compressed representation, which means it can reconstruct an image or a video from a much simpler encoding. GANs consist of two competing artificial neural networks: one tries to produce fakes, while the other tries to detect them. The two train against each other constantly, resulting in an increasingly "realistic" or "accurate" portrayal. According to the GAO, "GANs create more convincing deepfakes, but are more difficult to use".
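
To make that two-network tug-of-war concrete, here is a minimal sketch in Python using PyTorch. It is a toy illustration under our own assumptions (random stand-in data, tiny networks); it shows only the generator-versus-discriminator loop the GAO describes, not the architecture of any actual deepfake tool.

# Minimal, illustrative GAN sketch. Real deepfake systems use much
# larger, face-specific models; this only shows the competing-networks idea.
import torch
import torch.nn as nn

# Generator: turns random noise into a small fake "image" (flattened 8x8).
generator = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.Tanh(),
)

# Discriminator: scores an image as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    # Stand-in for a batch of real images; a real system would load face data.
    real = torch.rand(32, 64)
    noise = torch.randn(32, 16)
    fake = generator(noise)

    # 1) Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()

Each round, the discriminator gets better at spotting fakes, which forces the generator to produce more convincing ones: the same dynamic that makes GAN-based deepfakes so hard to detect.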

Improvements in these technologies are making deepfakes harder to detect. In the past, viewers could easily spot fraudulent content, but that may no longer be the case, considering how realistic some images, videos and audio recordings now seem.

Risks of deepfakes

As mentioned, deepfake technology can be used to create several types of content, such as pornography that uses a celebrity's (or any person's) face without their consent, or fake news built on altered videos of politicians saying things they never said or doing things they never did.

A report by the Department of Homeland Security states that "the threat of Deepfakes and synthetic media comes not from the technology used to create it, but from people's natural inclination to believe what they see", and highlights that deepfakes and synthetic media are effective at spreading misinformation or disinformation even when they are not "advanced or believable".

The department also highlights how divided expert opinion is on the urgency of the threat posed by synthetic media and deepfakes. It says the spectrum of concerns ranges from "an urgent threat" to "don't panic just be prepared".

How to detect deepfakes

Technological detection of deepfakes relies on extensive and diverse datasets for training detection tools, but current datasets are insufficient and require constant updates to effectively detect manipulated media. Automated detection tools are still under development, with ongoing research aiming to automatically identify deepfakes and assess the integrity of digital content. However, detection techniques often spur the development of more sophisticated deepfake methods, so regular updates to detection tools are necessary.
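
As an illustration of what "training a detection tool" means in practice, here is a minimal sketch in the same vein as the one above: a tiny binary classifier that learns to label images as real or fake. The random stand-in data and small network are assumptions made for brevity; actual detectors are trained on large, curated datasets of known manipulations and use far deeper models.

# Minimal sketch of a deepfake *detector*: a binary classifier trained on
# labeled real/fake images. Illustrative only; not a production tool.
import torch
import torch.nn as nn

# Tiny convolutional classifier: image in, "probability of fake" out.
detector = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)

for step in range(100):
    # Stand-in batch: 16 random 64x64 RGB "images" with real/fake labels.
    # In practice these would come from a dataset of known manipulations.
    images = torch.rand(16, 3, 64, 64)
    labels = torch.randint(0, 2, (16, 1)).float()

    opt.zero_grad()
    loss = loss_fn(detector(images), labels)
    loss.backward()
    opt.step()

The catch, as noted above, is that any such classifier is only as good as its training data: as generation methods evolve, the dataset and the detector must be retrained to keep up.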

Even with effective detection, disinformation spread through deepfake videos may still be impactful due to audience unawareness or lack of verification. Social media platforms have inconsistent standards for moderating deepfakes, and proposed legal regulations raise concerns about freedom of speech, privacy rights, and enforcement challenges.

As for human detection, in the past it could be easy to spot a fake video thanks to common visual mistakes, such as inconsistent eye blinking or a lack of definition in certain areas of the image. As the technology advances, it is becoming ever harder to spot fake content.
