There's been a lot of concern about how OpenAI handles safety in its models like ChatGPT. Early on Thursday, the company's co-founder and CEO Sam Altman announced plans for a new agreement that, in theory, should make OpenAI's next major AI model safer than it would be otherwise.
In a post on his X account, Altman stated that OpenAI is talking with the US Artificial Intelligence Safety Institute, a body of the US government that's part of the National Institute of Standards and Technology (NIST).
Altman said the two groups are working on an agreement that, if and when it's launched, would give the Institute "early access to our next foundation model so that we can work together to push forward the science of AI evaluations."
a few quick updates about safety at openai:

as we said last july, we're committed to allocating at least 20% of the computing resources to safety efforts across the entire company.

our team has been working with the US AI Safety Institute on an agreement where we would provide…

— Sam Altman (@sama) August 1, 2024
Altman did not say whether a final agreement between OpenAI and the US AI Safety Institute has been reached, nor did he offer any additional details on what the deal would entail.
This new development follows OpenAI's announcement in late May that its board of directors had formed a new Safety and Security committee. That group's first job is to review "OpenAI's processes and safeguards" over 90 days. That period should end sometime in late August, after which the committee will present its findings and recommendations to the full board. The company will then publicly reveal which recommendations it plans to adopt "in a manner that is consistent with safety and security."