Beware of ChatGPT’s evil twin and other generative AI dangers

The technology has a dark side: it can be used to create sophisticated scams, deepfake pornography and biochemical weapons

Experts warn about AI-enabled phishing and misinformation campaigns. Михаил Руденко (Getty Images/iStockphoto)
Isabel Rubio

Meet FraudGPT, the evil twin of ChatGPT. This generative AI tool lurks on the dark web, capable of creating messages that look like they come from your bank, crafting malicious software and building sham websites to trap unsuspecting victims. Netenrich, a data-driven tech security platform, clearly spells out the dangers of the FraudGPT AI bot. But it’s not the only one: cybercriminals have an accomplice in WormGPT, a tool that automates the creation of highly convincing fake emails personalized to the recipient. It’s a chilling reminder that generative artificial intelligence can be harnessed for malicious purposes, from elaborate scams to non-consensual pornography, disinformation campaigns and even biochemical weapons.

“Although it’s still a relatively new technology, criminals have quickly exploited the capabilities of generative artificial intelligence,” said Josep Albors of the cybersecurity company ESET. Albors points to sophisticated, targeted phishing campaigns, misinformation blitzes and deepfakes: AI-manipulated videos that alter or replace a person’s face, body or voice.

According to Proofpoint’s Fernando Anaya, generative AI is an evolutionary step rather than a revolutionary one. “Users are no longer told to look for obvious grammar, context and syntax errors to spot malicious emails,” he said. Now attackers can simply ask one of these tools to create a persuasive, urgent email requesting that people update their bank account and routing information, and they can quickly and easily produce emails in multiple languages. “An LLM [a powerful model trained on vast amounts of data using deep learning techniques] can read an organization’s LinkedIn profiles and craft tailored emails for each employee. The emails are personalized in flawless English or Dutch, catering to the recipient’s specific interests,” warns the National Cyber Security Center of the Netherlands.

According to Philipp Hacker, a professor at the European New School of Digital Studies, generative artificial intelligence can be utilized to develop highly effective malware that is difficult to detect and capable of targeting specific systems and vulnerabilities. “Although human expertise remains crucial for the development of advanced [computer] viruses, artificial intelligence can assist in the initial stages of creating malware.”

The use of such techniques is not yet widespread, said Albors. However, tools like FraudGPT or WormGPT could become a significant problem in the future. “With the help of these tools, even people with limited technical know-how can create malicious campaigns that are likely to succeed. This means individual users and companies will have to deal with an even larger number of threats.”

AI-generated audio, video and pictures

The more convincing a scam, the higher its likelihood of success, and some scammers are using AI to synthesize audio. “The ‘pig butchering’ crypto scam may eventually transition from messages to calls, boosting its effectiveness,” said Anaya. The scam got its name because criminals “fatten up” victims by gaining their trust and then steal everything they have. While commonly associated with cryptocurrencies, it can also be applied to other financial transactions.

Proofpoint has also found cybercriminals using this technology to deceive government officials. Its investigation into the Russia-aligned TA499 group revealed how the group’s email campaigns convince high-profile North American and European government officials, as well as CEOs of prominent companies and celebrities, to participate in recorded phone calls or video chats. TA499 leverages AI and other methods to impersonate people on these video calls and elicit information that can later be used against the participants on social media.

Generative AI is also used to modify images and videos. Television personalities and high-profile figures such as Elon Musk and Spanish politician Alberto Núñez Feijóo have had their voices cloned. “Deepfakes are often used to promote cryptocurrency investments in which many people lose money,” said Albors.

From pornography to biochemical weapons

The use of generative AI to create non-consensual pornography is especially alarming. “It’s targeted at women and causes significant personal and professional harm,” said Philipp Hacker. Last year, Spanish schoolgirls reported deepfake nudes circulating on social media, and celebrities like Rosalía have also been victims of similar attacks.

AI technology has also been used “to create misleading images of immigrants, influencing public opinion and elections, and to execute large-scale disinformation campaigns that are effective and convincing,” said Hacker. Following the devastating wildfires on Maui in August 2023, certain publications made baseless claims that they were caused by a secret “climate weapon” being tested by the United States. The New York Times reported that this misinformation campaign was led by China and used AI-generated images.

Generative artificial intelligence has potential far beyond its current applications. An article in Nature Machine Intelligence explores how AI technologies for drug discovery could be misused for the design of biochemical weapons. Other cybersecurity experts warn that algorithms can infiltrate critical infrastructure software, “blurring the lines between traditional attack scenarios and making them challenging to predict and counter within existing laws and regulations.”

Defending against FraudGPT and malicious AI

Machine learning and other techniques can detect and block sophisticated attacks. However, Fernando Anaya emphasizes the need to educate users to recognize phishing emails and other threats. Philipp Hacker says addressing the risks of the malicious use of AI requires regulatory measures, technological solutions and ethical guidelines.

Other defensive measures include deploying independent teams with tools to detect vulnerabilities and prohibiting certain open-source models. “Addressing these risks is complex because of the competing ethical objectives in the AI space, and the feasibility issues related to certain defensive measures,” said Hacker.

