A hacker claimed to have stolen personal details from millions of OpenAI accounts, but researchers are skeptical, and the company is investigating.
OpenAI says it is investigating after a hacker claimed to have stolen login credentials for 20 million of the AI firm's user accounts and put them up for sale on a dark web forum.
The pseudonymous hacker, known as emirking, posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering potential buyers what they said was sample data containing email addresses and passwords. As reported by Gbhackers, the full dataset was being offered for sale "for just a few dollars."
"I have over 20 million gain access to codes for OpenAI accounts," emirking composed Thursday, according to a translated screenshot. "If you're interested, reach out-this is a goldmine, and Jesus agrees."
If legitimate, this would be the third major security incident for the AI company since the public release of ChatGPT. Last year, a hacker gained access to the company's internal Slack messaging system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."
Before that, in 2023, a simpler bug involving jailbreaking prompts allowed hackers to obtain the personal information of OpenAI's paying customers.
This time, however, security researchers aren't even sure a hack occurred. Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence (suggests) this alleged OpenAI breach is legitimate. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted too."
No evidence this alleged OpenAI breach is legitimate.
Contacted every email address from the purported sample of login credentials.
At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted too. https://t.co/yKpmxKQhsP
- Mikael Thalen (@MikaelThalen) February 6, 2025
OpenAI takes it 'seriously'
In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.
"We take these claims seriously," the representative said, including: "We have not seen any evidence that this is linked to a compromise of OpenAI systems to date."
The scope of the alleged breach raised concerns due to OpenAI's massive user base. Millions of users worldwide rely on the company's tools like ChatGPT for business operations, educational purposes, and content generation. A legitimate breach could expose private conversations, commercial projects, and other sensitive data.
Until there's a final report, some precautionary measures are always a good idea:
- Go to the "Settings" tab, log out from all connected devices, and enable two-factor authentication (2FA). This makes it virtually impossible for a hacker to access the account, even if the login and password are compromised.
- If your bank supports it, create a virtual card number to handle OpenAI subscriptions. That way, it is easier to detect and prevent fraud.
- Always keep an eye on the conversations stored in the chatbot's memory, and be wary of phishing attempts. OpenAI does not ask for personal details, and any payment update is always handled through the official OpenAI.com link.
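To see why 2FA blunts a leaked password, here is a minimal sketch of how time-based one-time passwords (TOTP, per RFC 6238) are generated. This is illustrative only and says nothing about OpenAI's actual implementation: the code derives a six-to-eight-digit value from a shared secret and the current 30-second window, so a stolen password alone is useless without the secret stored on the user's device.

```python
import base64
import hmac
import struct
import time


def totp(secret_b32, interval=30, digits=6, now=None):
    """RFC 6238 TOTP: derive a short-lived code from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of `interval`-second windows since the Unix epoch.
    counter = int((now if now is not None else time.time()) // interval)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 s.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, digits=8, now=59))  # -> 94287082
```

Because the code changes every 30 seconds and is computed from a secret that never travels with the password, credentials leaked in a dump like the one advertised here would still fail the login check.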