Microsoft claims its servers were illegally accessed to make unsafe AI content

  • Microsoft’s December 2024 complaint pertains to 10 anonymous defendants
  • “Hacking-as-a-service” operation stole legitimate users’ API keys and circumvented content safeguards
  • The Eastern District of Virginia complaint has led to a GitHub repository and a website being pulled

Microsoft has accused an unnamed collective of developing tools to intentionally sidestep the safety programming in its Azure OpenAI Service, which provides access to the AI models behind tools like ChatGPT.

In December 2024, the tech giant filed a complaint in the US District Court for the Eastern District of Virginia against 10 anonymous defendants, whom it accuses of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and federal racketeering law.

Microsoft claims its servers were accessed to aid the creation of “offensive” and “harmful and illicit content”. Though it gave no further details as to the nature of that content, it was clearly enough to prompt swift action: the company had a GitHub repository pulled offline, and said in a blog post that the court allowed it to seize a website related to the operation.

ChatGPT API keys

In the complaint, Microsoft stated that it first discovered the abuse in July 2024, when users were found producing illicit content with Azure OpenAI Service API keys, the credentials used to authenticate customers. An internal investigation then revealed that the API keys in question had been stolen from legitimate customers.

“The precise manner in which Defendants obtained all of the API Keys used to carry out the misconduct described in this Complaint is unknown, but it appears that Defendants have engaged in a pattern of systematic API Key theft that enabled them to steal Microsoft API Keys from multiple Microsoft customers,” reads the complaint.
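Stolen keys are damaging because, for services like Azure OpenAI, the API key is typically the only credential a request carries: whoever holds the key can act as the customer it belongs to. A minimal sketch of how such a request is authenticated (the endpoint, deployment name, key, and API version below are illustrative placeholders, not real resources):

```python
import json
import urllib.request

def build_request(endpoint: str, deployment: str, api_key: str, prompt: str):
    """Build (but do not send) a chat-completions request.

    Authentication is carried entirely by the api-key header, which is
    why a stolen key grants the thief the victim's full API access.
    """
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version=2024-02-01")
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]})
    return urllib.request.Request(
        url,
        data=body.encode("utf-8"),
        headers={"api-key": api_key, "Content-Type": "application/json"},
    )

# Anyone who obtains the key can build an identical, fully authorized request.
req = build_request("https://example-resource.openai.azure.com",
                    "example-deployment", "STOLEN-OR-LEGITIMATE-KEY", "hello")
```

Because nothing else in the request identifies the caller, "systematic API Key theft" of the kind the complaint alleges is sufficient on its own to impersonate multiple customers.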

Microsoft claims that, with the ultimate goal of launching a hacking-as-a-service product, the defendants created de3u, a client-side tool built around these stolen API keys, plus additional software that allowed de3u to communicate with Microsoft’s servers.

De3u also worked to circumvent the Azure OpenAI Service’s built-in content filters and its subsequent revision of user prompts, allowing DALL-E, for example, to generate images that OpenAI wouldn’t normally permit.

“These features, combined with Defendants’ unlawful programmatic API access to the Azure OpenAI service, enabled Defendants to reverse engineer means of circumventing Microsoft’s content and abuse measures,” it wrote in the complaint.

Via TechCrunch
