Microsoft has initiated legal proceedings against a network of cybercriminals it alleges are misusing generative AI technology, including its own Azure OpenAI Service. In an amended complaint relating to recent civil litigation, the tech giant has named four principal developers behind malicious tools designed to bypass the guardrails of its AI services. The named defendants include:
- Arian Yadegarnia (alias "Fiz") – based in Iran
- Alan Krysiak (alias "Drago") – based in the United Kingdom
- Ricky Yuen (alias "cg-dot") – based in Hong Kong, China
- Phát Phùng Tấn (alias "Asakuri") – based in Vietnam
These individuals are central to what Microsoft has labelled Storm-2139, a global cybercrime network. Members of this network allegedly exploited publicly available customer credentials to gain unauthorised access to generative AI services. They then modified these services and resold access to other malicious actors, providing detailed instructions for producing harmful content, including non-consensual intimate images of celebrities and other explicit material.
Microsoft’s investigation outlines Storm-2139 as an organisation structured into three key tiers:
- Creators: The developers who created the tools enabling the abuse of AI services.
- Providers: Those who modified, supplied, and offered these tools under various service tiers and pricing structures.
- Users: The end users who employed these tools to generate prohibited synthetic content, often targeting celebrities or producing sexually explicit imagery.
Microsoft’s Digital Crimes Unit (DCU) initially filed the lawsuit in the Eastern District of Virginia in December 2024, targeting ten unidentified "John Does" suspected of contravening both U.S. law and Microsoft's Acceptable Use Policy and Code of Conduct. Following that filing, the court granted a temporary restraining order and a preliminary injunction.
This allowed Microsoft to seize a critical website used by the cybercrime network, significantly impairing its operational capacity. The unsealing of the legal filings in January triggered an immediate reaction within the network. In monitored communication channels, members began speculating on the identities of the “John Does” implicated in the case and, in some instances, attempted to cast blame on other members of the operation.
Microsoft’s legal team also received several emails from suspected Storm-2139 members, each attempting to shift responsibility and point fingers at other operatives. Members were additionally observed doxing Microsoft’s counsel, circulating personal information and photographs online, a tactic that can lead to severe real-world consequences such as identity theft and harassment.
Microsoft’s efforts are part of a broader commitment to curb the abuse of generative AI. While the Redmond giant acknowledges that dismantling such an entrenched cybercriminal network is an ongoing battle, the legal actions and operational disruptions aimed at unmasking these malicious actors mark a significant step forward. By shining a light on the covert activities of Storm-2139, the company intends not only to dismantle the current network but also to deter future attempts to weaponise AI technology.
Overall, the case underscores the challenges posed by cybercriminals in the digital age and the need for persistent, coordinated efforts to safeguard innovative technologies from misuse.