Your ChatGPT account may not be as safe as you think. According to research published by Group-IB, a Singapore-headquartered cybersecurity company, over 100,000 ChatGPT account credentials have been compromised and are being traded on the dark web.
Group-IB discovered 101,134 stealer-infected devices with saved ChatGPT credentials. The Asia-Pacific region accounted for the largest share (40.5%) of ChatGPT accounts stolen by malware between June 2022 and May 2023. Other heavily affected countries include the U.S., Vietnam, Brazil, and Egypt.
The company’s Threat Intelligence platform uncovered these compromised credentials in the logs of info-stealing malware traded on illicit dark web marketplaces over the past year. The number of affected accounts peaked at 26,802 in May 2023, an alarming figure for users.
Group-IB noted that the growing adoption of ChatGPT in business communication and software development means sensitive information is increasingly shared on the platform. This makes it an attractive target for attackers seeking illicit gain. As Group-IB adds, ChatGPT accounts have gained significant popularity in various underground communities.
The company analyzed these communities and found that the majority of the ChatGPT accounts were compromised by the Raccoon info stealer. After Raccoon, the Vidar and RedLine malware families account for the largest numbers of compromised hosts with saved ChatGPT access.
Info stealers are a type of malware that gathers credentials saved in browsers, bank card details, crypto wallet information, cookies, browsing history, and other data from browsers installed on infected computers, and then sends it all to the malware operator. Because this type of malware operates indiscriminately, it infects as many computers as possible in order to harvest as much data as it can.
Dmitry Shestakov, Head of Threat Intelligence at Group-IB, commented on the severity of the situation:
“Many enterprises are integrating ChatGPT into their operational flow. Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials. At Group-IB, we are continuously monitoring underground communities to promptly identify such accounts.”
To mitigate the impact of such attacks, experts suggest users enable two-factor authentication (2FA), which requires an additional verification code before access to a ChatGPT account is granted. While this makes the login process slightly longer, it is an essential way to strengthen account security.
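ChatGPT's 2FA is enabled through the account's security settings, but for readers curious how the mechanism behind most authenticator-app codes works, the short sketch below illustrates time-based one-time passwords (TOTP) using the third-party pyotp library. The secret, account name, and service name are illustrative placeholders, not values used by ChatGPT or any real provider.

```python
# Minimal TOTP sketch using pyotp; all names and secrets below are illustrative.
import pyotp

# When 2FA is enabled, a service generates and stores a per-user base32 secret.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The secret is shared with the user's authenticator app, usually as a QR code
# that encodes a provisioning URI like this one.
uri = totp.provisioning_uri(name="user@example.com", issuer_name="ExampleService")
print(uri)

# At login, the app displays a six-digit code that changes every 30 seconds...
code = totp.now()

# ...and the service verifies the submitted code against the stored secret.
print("Code accepted" if totp.verify(code) else "Code rejected")
```

Because the one-time code is derived from a secret that never accompanies the password at login, a stolen password alone is not enough to access an account protected this way.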