The New York Times has reported that early in 2023, a hacker gained access to OpenAI's internal messaging systems and stole information about the company's AI technologies. The company revealed the incident to employees but not to the public or law enforcement.
The hacker managed to steal information from discussions in an online forum where OpenAI employees talked about the company's latest technologies. Luckily for the company, the hacker didn't manage to break into the systems where GPT models are housed and trained.
Two sources who brought this information to The New York Times said that some OpenAI employees had raised concerns that such attacks could be used by countries like China to steal AI technology, which could eventually endanger US national security.
When told about the incident, some employees also raised questions about how seriously the company was taking security. Divisions are said to have emerged among employees over the risks of artificial intelligence as well.
Following the incident, Leopold Aschenbrenner, then an OpenAI technical program manager, wrote a memo to the company's board arguing that it wasn't doing enough to stop foreign adversaries from stealing its secrets. Aschenbrenner, who later alluded to the breach on a podcast, was let go by OpenAI for leaking information outside the company, a dismissal he argues was politically motivated.
The revelation that OpenAI was breached and that the breach caused division among employees just adds to the growing list of issues at the company. We've seen CEO Sam Altman fight with a previous board and come out on top. Most recently, several AI safety researchers have left the company over disagreements about superalignment, the effort to develop methods that let humans control superintelligent AI.
According to prominent figures in the AI field, such as Anthropic co-founder Daniela Amodei, the theft of today's generative AI designs would not pose a great national security threat. However, that could change as the technology becomes more capable.