In an amended complaint to an existing civil lawsuit, Microsoft is naming the primary developers of malicious tools designed to bypass the guardrails of generative AI services, including Microsoft’s Azure OpenAI Service. We are taking this legal action against the named individuals now to stop their conduct, to continue dismantling their illicit operation, and to deter others intent on weaponizing our AI technology.
They are: (1) Arian Yadegarnia, also known as “Fiz,” of Iran; (2) Alan Krysiak, also known as “Drago,” of the United Kingdom; (3) Ricky Yuen, also known as “cg-dot,” of Hong Kong, China; and (4) Phát Phùng Tấn, also known as “Asakuri,” of Vietnam. These individuals are central figures in Storm-2139, a global cybercrime network that Microsoft tracks. Members of Storm-2139 unlawfully accessed accounts on certain generative AI services by using exposed customer credentials scraped from public sources. They then altered the capabilities of those services and resold access to other malicious actors, providing detailed instructions on how to generate harmful and illicit content, including non-consensual intimate images of celebrities and other sexually explicit material.
This activity is prohibited under the terms of use for our generative AI services and required deliberate efforts to bypass our safeguards. We are not naming specific celebrities to keep their identities private, and we have excluded synthetic imagery and prompts from our filings to prevent the further circulation of harmful content.
A global network of creators, providers, and users
In December 2024, Microsoft’s Digital Crimes Unit (DCU) filed a complaint in the Eastern District of Virginia against 10 unidentified “John Does” for activity that violated U.S. law as well as Microsoft’s Acceptable Use Policy and Code of Conduct. This initial filing allowed us to gather more information about the criminal enterprise’s operations.
Storm-2139 is organized into three main categories: creators, providers, and users. Creators developed the illicit tools that enabled the abuse of generative AI services. Providers then modified and supplied these tools to end users, often with varying tiers of service and payment. Finally, users employed these tools to generate violating synthetic content, often centered on celebrities and sexual imagery.
Below is a graphical representation of Storm-2139, showing the online aliases we identified during our investigation along with the countries where we believe the associated individuals reside.

As part of its ongoing investigation, Microsoft has identified several of the individuals behind these aliases, including but not limited to the four named defendants. While we have identified two actors located in the United States, specifically in Illinois and Florida, we are not disclosing their names to avoid interfering with potential criminal investigations. Microsoft is preparing criminal referrals to law enforcement in the United States and abroad.
Criminals react to Microsoft’s website seizure
As part of our initial filing, the Court granted a temporary restraining order and preliminary injunction enabling Microsoft to seize a website instrumental to the criminal operation, effectively disrupting the group’s ability to operationalize its services. The actors reacted quickly to the website seizure and the unsealing of the legal filings in January, and in several instances group members turned on and began accusing one another. We observed chatter about the lawsuit on the group’s monitored communication channels, with members speculating on the identities of the “John Does” and the potential consequences.

Some members went so far as to “dox” Microsoft’s counsel of record on these channels, posting their names, personal information, and in some instances photographs. Doxing can lead to real-world harms, including identity theft and harassment.

Microsoft’s legal team also received a number of emails, including several from suspected Storm-2139 members, attempting to shift blame onto other members of the operation.
This reaction underscores the impact of Microsoft’s legal actions and demonstrates how they can dismantle a cybercriminal network by seizing its infrastructure while also having a strong deterrent effect on its members.

Combating generative AI abuse
We take the misuse of AI very seriously, recognizing the serious and lasting harm that abusive imagery inflicts on victims. Microsoft remains committed to protecting users by embedding robust AI guardrails and safeguarding our services from harmful and illegal content. Last year, we committed to continuing to innovate on ways to keep users safe by outlining a comprehensive approach to combating abusive AI-generated content. We also published a whitepaper with recommendations for U.S. policymakers on modernizing criminal law, so that law enforcement has the tools it needs to hold bad actors accountable, and we provided an update on our approach to intimate image abuse, including the measures we take to protect our services from such harm, synthetic or otherwise.
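For readers who want a concrete picture of what a guardrail can look like in practice, the sketch below shows one way a service operator might screen an incoming prompt with the Azure AI Content Safety SDK before forwarding it to a generative model. This is a minimal illustration only, not Microsoft’s internal implementation; the endpoint, key, prompt text, and severity threshold are placeholder assumptions, and the field names follow the generally available azure-ai-contentsafety Python package.

```python
# Illustrative sketch: pre-screen a prompt with Azure AI Content Safety
# before it reaches a generative model. Endpoint, key, and the severity
# threshold are placeholders, not values from any real deployment.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

def prompt_is_allowed(prompt: str, max_severity: int = 2) -> bool:
    """Return False if any harm category exceeds the severity threshold."""
    result = client.analyze_text(AnalyzeTextOptions(text=prompt))
    return all(
        (item.severity or 0) <= max_severity
        for item in result.categories_analysis
    )

user_prompt = "Example prompt text to screen before generation."
if prompt_is_allowed(user_prompt):
    print("Prompt passed screening; forward it to the model.")
else:
    print("Prompt rejected by content safety screening.")
```

In a production pipeline, a rejection like this would typically be logged and surfaced to the user with an explanation, and repeated violations could feed into account-level enforcement.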
As we have said before, no disruption is complete in a single day. Going after malicious actors requires persistence and ongoing vigilance. By unmasking these individuals and exposing their malicious activities, Microsoft aims to set a precedent in the fight against the abuse of AI technology.