According to data from dark web marketplaces, more than 101,000 ChatGPT user accounts have fallen victim to information-stealing malware over the past year.

Group-IB, a cyberintelligence firm, has identified over a hundred thousand info-stealer logs containing ChatGPT accounts on various underground websites. The peak of these attacks was observed in May 2023, when threat actors posted 26,800 new ChatGPT credential pairs.

The Asia-Pacific region was the most targeted, with almost 41,000 compromised accounts between June 2022 and May 2023, followed by Europe with nearly 17,000, and North America with 4,700.

Victims distribution (Group-IB)

Information stealers are a type of malware that targets account data stored in applications such as email clients, web browsers, instant messengers, gaming services, and cryptocurrency wallets. These malware families are known for stealing credentials saved in web browsers by extracting them from the program’s SQLite database and abusing the Windows CryptUnprotectData function to decrypt the stored secrets. The credentials and other stolen data are then packaged into archives called logs and sent back to the attackers’ servers for retrieval.
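To illustrate why saved browser credentials are such an easy target, here is a minimal Python sketch. It uses a simplified in-memory mock of the `logins` table found in Chromium-based browsers’ "Login Data" SQLite database (the real file has more columns, and `password_value` holds a DPAPI-encrypted blob that a stealer running with the user’s privileges can decrypt via CryptUnprotectData). The URL, username, and blob below are made-up sample data.

```python
import sqlite3

# Mock of the simplified "logins" table schema used by Chromium-based
# browsers in their "Login Data" SQLite file. Real databases contain
# additional columns; the sample row here is entirely fictional.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE logins (origin_url TEXT, username_value TEXT, password_value BLOB)"
)
con.execute(
    "INSERT INTO logins VALUES "
    "('https://chat.openai.com/', 'alice@example.com', X'01000000D08C')"
)

# Once malware runs under the user's account, no exploit is needed:
# it simply iterates the table and ships the rows off in a "log".
for origin, user, enc_pw in con.execute(
    "SELECT origin_url, username_value, password_value FROM logins"
):
    print(origin, user, f"({len(enc_pw)} encrypted bytes)")
```

The encrypted blob is tied to the logged-in user’s Windows credentials via DPAPI, which is exactly why stealers decrypt it on the victim’s machine rather than exfiltrating it raw.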

The presence of ChatGPT accounts in these logs, alongside email accounts, credit card data, cryptocurrency wallet information, and other more traditionally targeted data types, signals the rising importance of AI-powered tools for users and businesses. Because ChatGPT lets users store conversations, accessing an account could give attackers insight into proprietary information, internal business strategies, personal communications, software code, and more.

“Many enterprises are integrating ChatGPT into their operational flow,” comments Group-IB’s Dmitry Shestakov. “Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials.”

Due to these concerns, tech giants like Samsung have banned staff from using ChatGPT on work computers, going as far as threatening to terminate the employment of those who fail to follow the policy.

Group-IB’s data indicates that the number of info-stealer logs containing ChatGPT accounts has grown steadily over time, with almost 80% of all logs coming from the Raccoon stealer, followed by Vidar (13%) and RedLine (7%).

Compromised ChatGPT accounts (Group-IB)

If you enter sensitive data into ChatGPT, consider disabling the chat saving feature in the platform’s settings menu, or manually delete those conversations as soon as you are done using the tool. Note, however, that many information stealers take screenshots of the infected system or perform keylogging, so even if you do not save conversations to your ChatGPT account, a malware infection could still lead to a data leak.

Unfortunately, ChatGPT has already suffered a data breach in which users saw other users’ personal information and chat queries. Therefore, those working with extremely sensitive information should avoid entering it into any cloud-based service, restricting it instead to secured, locally built, self-hosted tools.
