OpenAI’s 2023 Data Breach Incident Goes Unreported

OpenAI, founded in 2015, focuses on advancing artificial intelligence through research in natural language processing and machine learning. Known for tools such as its GPT models, OpenAI collaborates globally on the development of AI technologies.

On July 4, 2024, The New York Times reported that in early 2023 a hacker had breached OpenAI’s internal messaging systems and stolen details about the design of the company’s AI technologies, lifting them from an online forum where employees discussed OpenAI’s latest work. The hacker did not, however, penetrate the systems where OpenAI houses and builds its AI, including its source code.

OpenAI executives disclosed the incident to employees during an all-hands meeting at the company’s San Francisco offices in April 2023 and informed the board of directors. However, the executives opted not to announce the breach publicly, since no customer or partner information had been compromised. They also assessed that the incident did not pose a national security threat, because they believed the hacker was a private individual with no known ties to a foreign government.

Following the breach, Leopold Aschenbrenner, an OpenAI technical program manager focused on mitigating potential harms from future AI technology, sent a memo to OpenAI’s board of directors arguing that the company’s security measures were insufficient to guard against theft of its intellectual property by the Chinese government and other foreign adversaries. Aschenbrenner was subsequently fired, the newspaper reported.

OpenAI has contested Aschenbrenner’s claims about its operations and security practices, saying it had already addressed the incident and alerted the board. The company maintains that the risks posed by current AI technologies are minimal, and it advocates code sharing among engineers and researchers across the industry as an effective way to identify and address potential issues.

What to do if you or your vendors have active relationships with OpenAI

In light of the incident that occurred early last year, we recommend the following best practices to help keep your organization safe:

  • Consider implementing redundancy measures in AI deployments to minimize dependency on a single provider like OpenAI (see the failover sketch after this list).
  • Assess and strengthen internal data access controls to limit the sensitive information that leaves your network for OpenAI (see the outbound-filter sketch below).
  • Advocate for transparency in OpenAI’s code-sharing practices to help ensure the robustness and security of AI applications built on its models.
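
To make the first recommendation concrete, here is a minimal Python sketch of a failover wrapper that tries OpenAI’s documented chat completions endpoint first and falls over to a second vetted provider if the request fails. The fallback_complete() function, FALLBACK_URL, FALLBACK_API_KEY, and the fallback response shape are hypothetical placeholders; the real call will depend on whichever secondary vendor your organization chooses.

```python
import os

import requests

# A minimal failover sketch, assuming two HTTP completion providers.
# OPENAI_URL is OpenAI's documented chat completions endpoint; FALLBACK_URL,
# FALLBACK_API_KEY, and the fallback response shape are hypothetical and
# stand in for whatever secondary vendor your organization has vetted.
OPENAI_URL = "https://api.openai.com/v1/chat/completions"
FALLBACK_URL = "https://api.secondary-provider.example/v1/complete"  # hypothetical


def openai_complete(prompt: str, timeout: float = 15.0) -> str:
    """Primary path: call OpenAI's chat completions API."""
    resp = requests.post(
        OPENAI_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=timeout,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


def fallback_complete(prompt: str, timeout: float = 15.0) -> str:
    """Hypothetical secondary provider; the real call shape will differ."""
    resp = requests.post(
        FALLBACK_URL,
        headers={"Authorization": f"Bearer {os.environ['FALLBACK_API_KEY']}"},
        json={"prompt": prompt},
        timeout=timeout,
    )
    resp.raise_for_status()
    return resp.json()["completion"]


def complete_with_redundancy(prompt: str) -> str:
    """Try OpenAI first; fail over to the secondary provider on any error."""
    try:
        return openai_complete(prompt)
    except requests.RequestException:
        return fallback_complete(prompt)
```

In practice you would also log which path served each request, so that an outage or incident at one provider shows up in your monitoring rather than being silently absorbed.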
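
For the second recommendation, the sketch below shows one way to enforce data access controls at the point of egress: a filter that redacts or blocks likely-sensitive strings before a prompt is sent to OpenAI. It assumes your organization routes all OpenAI-bound traffic through a single internal gateway where such a check can run, and the regex patterns are illustrative rather than exhaustive.

```python
import re

# A minimal outbound-filter sketch, assuming all OpenAI-bound traffic passes
# through one internal gateway where this check can run. The patterns are
# illustrative, not exhaustive; real deployments pair pattern matching with
# data classification and allow-listing.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "secret_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}


def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt leaves the network."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt


def guard(prompt: str, block_on_hit: bool = False) -> str:
    """Redact by default; optionally refuse to send the prompt at all."""
    cleaned = redact(prompt)
    if block_on_hit and cleaned != prompt:
        raise ValueError("prompt contained sensitive data and was blocked")
    return cleaned
```

Whether to redact or to block outright is a policy decision: blocking is safer for high-sensitivity environments, while redaction keeps day-to-day workflows moving.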

By following these recommendations, you can mitigate potential risks associated with the OpenAI incident and reinforce your overall cybersecurity strategy.

Sign up to try VISO TRUST today

Try the VISO TRUST platform for free to view the OpenAI risk advisory in the context of your TPRM program and determine whether it impacts your vendors or your nth parties.