Hacked ChatGPT accounts for sale: More than 100,000 are available on the dark web
The internet’s black market is full of stolen credentials that grant access to OpenAI’s artificial intelligence technology and can expose users’ confidential information
In recent months, more than 100,000 hacked ChatGPT accounts have been put up for sale on the dark web. The cybersecurity firm Group-IB descended into the depths of the internet and found usernames and passwords for a range of web services, among them credentials for the OpenAI artificial intelligence (AI) platform, which is widely used in professional settings and can therefore hold confidential information from the companies that use it.
Since ChatGPT took off late last year, its adoption has been massive: it reached 100 million users in just two months, and its meteoric growth continues today. Companies like Microsoft allow their workers to use it to automate tasks, albeit with caution.
But not everyone is so enthusiastic. Some giants, including Apple and Samsung, have banned the use of this and other AI applications for fear of leaking internal information. In this context, a survey by Fishbowl, an app that hosts group discussions for professional environments, indicates that 68% of those who use ChatGPT or other AI tools do so without their bosses’ knowledge.
The dizzying growth of ChatGPT suggests that some companies have rushed to adopt the application without protocols or usage guidelines. That carries risks, as the tool stores every question the user asks and every answer the AI gives. “Many companies have started using ChatGPT in their day-to-day processes. Some senior managers and sales managers use it to improve their emails, which are then sent externally. Obviously, this correspondence can contain sensitive data, such as internally handled prices, figures, information about products and innovations, invoices and other critical information,” says Dmitry Shestakov, Threat Intelligence product manager at Group-IB.
In total, the cybersecurity firm found 101,134 ChatGPT account credentials on the black market. The cybercriminals gathered the data with malicious programs known as information stealers, a class of Trojan. They then sold it in packages called stealer logs: compressed files containing folders and text documents with the usernames and passwords stolen from a device. The average price of one of these files is $10, although Group-IB notes that it is not known how many have been purchased.
ChatGPT’s stored chat history may contain internal information that companies do not want circulating freely. But the data can also be mined for targeted attacks against a company’s own employees. Attackers could work an employee’s name, or details about projects the company is involved in, into a malicious email. The result is a more credible message, making it easier to convince a manager to click on a link or download a file.
Another major risk associated with leaked ChatGPT accounts relates to the tool’s use in programming. Shestakov explains the problems this can lead to: “Sometimes code from products developed within the company is shared with ChatGPT, creating the risk that malicious actors could intercept, replicate and sell this code to competitors. Additionally, this code can be used to search for vulnerabilities in the company’s products, leading to potential security breaches.”
Armando Martínez-Polo, the partner in charge of Technology Consulting at PwC, encourages companies to explore generative artificial intelligence, but with certain safeguards that he considers necessary, including usage policies that clearly define what is off limits. “The first thing is to establish that personal data and companies’ confidential and intellectual property data are not shared with generative artificial intelligences,” Martínez-Polo points out.
“The big problem with OpenAI is that everything you do with it is uploaded to the cloud and, in addition, OpenAI uses it to train its own models,” explains Martínez-Polo, who advises running AI within a private cloud service. “It is important to create a secure work environment with ChatGPT, so that when you provide information about your company for training, you know that everything stays within your protected environment.”
For now, the data leaks show no sign of slowing down. Quite the opposite: Group-IB has observed that the number of files for sale containing ChatGPT credentials grew steadily over the past year, with a sharp rise in the last six months. In December 2022, 2,766 hacked accounts of the artificial intelligence tool were found; by May, there were 26,802. “We expect more ChatGPT credentials to be included in the stealer logs, given the increasing number of users registering with the chatbot,” Shestakov says.