OpenAI says it is making a new effort to keep its generative AI systems safe to use, even as it confirms it has begun training its next AI model. This morning, the company announced that its Board of Directors has formed a Safety and Security Committee that will examine the company's safety procedures in its product development.
In a blog post, OpenAI said its CEO, Sam Altman, will sit on the committee alongside board members Bret Taylor, Adam D'Angelo, and Nicole Seligman. Other OpenAI employees on the committee include its chief scientist, Jakub Pachocki, and its head of security, Matt Knight. The committee will also retain outside safety and security experts as consultants.
The blog post stated:
A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days. At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.
The new committee has been formed in the wake of reports that OpenAI dissolved its superalignment team earlier in May. That team had been formed to find ways to keep humans in control of generative AI systems that could, in theory, one day become smarter than humans.
Speaking of which, the same OpenAI blog post today confirmed that the company had "recently begun training its next frontier model." It didn't name that model, but it's likely to be called GPT-5. OpenAI said it will "bring us to the next level of capabilities on our path to AGI." However, it added that it also wants "a robust debate at this important moment" about keeping systems safe to use.
Earlier this month, OpenAI announced GPT-4o, the latest version of its GPT-4 AI model, which features more realistic voice interactions. It is available to both free and paid ChatGPT users.