Microsoft has had its own issues with China-based hackers, who accessed information from government email accounts that used Outlook. However, the company's Microsoft Threat Analysis Center (MTAC) has also been monitoring actors based in China who have been using social networks to spread false information aimed at US citizens.
The MTAC previously reported on these activities in September 2023. In an updated report released this week, the group says China-affiliated actors have continued their efforts to sow division in the US as the 2024 presidential election gets closer.
In a blog post, the MTAC showed several examples of China-based actors posting false information, often with AI-generated content, on social networks in the US. Some of the posts were trying to push fake conspiracy theories about major news events.
For example, the massive fires on the island of Maui in Hawaii in August 2023 were a target of a group Microsoft calls Storm-1376, which posted false claims that the fires were caused by "weather weapons" from the US government. The same group also generated false posts about the causes of a train derailment in Kentucky in November 2023.
Storm-1376 was also responsible for AI-generated content that used false information to try to influence Taiwan's presidential election in January 2024. Microsoft says this was the first time a nation-state-backed actor used AI-generated content in an attempt to influence the results of a foreign election.
Microsoft says that while these efforts by China-based actors to influence upcoming elections around the world, including in the US, may have little impact for now, "China's increasing experimentation in augmenting memes, videos, and audio will likely continue – and may prove more effective down the line."
The new MTAC report also showed that North Korea-based cyber actors have stepped up efforts to steal cryptocurrency funds, among other operations. One such group, labeled Emerald Sleet, was identified as using large language models to help improve its efforts. However, Microsoft says that, in a joint effort with its partner OpenAI, it shut down accounts and assets that were being used by that group.