It's not exactly a shock that organized cybercriminals are beginning to use generative AI tools and services for new kinds of attacks. Today, we got a lot more information about how state-sponsored hackers are using large language models.
In a blog post, Microsoft, in partnership with OpenAI, announced new research on how hackers are using LLMs. It includes information on specific groups that have been identified as using generative AI tools.
The groups are based in Russia, North Korea, Iran, and China. One of them, labeled Forest Blizzard, is linked to Russian GRU Unit 26165. Microsoft says the group has been actively targeting organizations related to the ongoing invasion of Ukraine.
Microsoft claims Forest Blizzard has been using LLMs "to understand satellite communication protocols, radar imaging technologies, and specific technical parameters." Its research adds that this group has been trying to automate tasks like file selection and manipulation with LLMs as well.
Another cybercriminal group that Microsoft says is using AI has been labeled Salmon Typhoon. The China-based group has a history of targeting the US government and defense contractors. According to Microsoft, Salmon Typhoon has used LLMs in a variety of ways, including finding coding errors and translating technical computing papers.
Microsoft says it will continue to partner with OpenAI to research the use of AI by cybercriminals.
In the post, Microsoft wrote: "As part of this commitment, we have taken measures to disrupt assets and accounts associated with threat actors, improve the protection of OpenAI LLM technology and users from attack or abuse, and shape the guardrails and safety mechanisms around our models."
The company added that it will use its own Copilot for Security service to track and counter these LLM-based cyber threats.