
The U.S. AI Safety Institute will have early access to new models from OpenAI and Anthropic


The U.S. AI Safety Institute, established at the National Institute of Standards and Technology (NIST) in 2023, aims to advance the science of AI safety and mitigate the risks posed by advanced AI systems. The government body focuses on testing, evaluating, and developing guidelines to accelerate safe AI deployment both domestically and internationally.

Today, the U.S. Artificial Intelligence Safety Institute announced formal agreements with two prominent AI startups, Anthropic and OpenAI, for collaborative research, testing, and evaluation in AI safety.

Under the memoranda of understanding, the U.S. AI Safety Institute will receive access to major new models from OpenAI and Anthropic both before and after their public release. That access will allow the Institute to evaluate each model's capabilities and safety risks and to develop methods for mitigating those risks. The Institute will then provide feedback to both companies on potential safety improvements, working closely with the U.K. AI Safety Institute on these evaluations.

Elizabeth Kelly, director of the U.S. AI Safety Institute, said:

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we eagerly anticipate commencing technical collaborations with Anthropic and OpenAI to further AI safety science. These agreements mark a crucial step as we strive to responsibly guide the future of AI.”

Google, another leading AI developer, is notably absent from such agreements despite continuing to ship cutting-edge models. Meta, meanwhile, releases its Llama models openly, so early access is less of a concern for the Institute.

These collaborations signal a positive step towards ensuring the safe and responsible development of AI technologies, highlighting the increasing importance of prioritizing safety in AI innovation. However, the lack of involvement from other major players underscores the ongoing challenges in achieving comprehensive AI safety standards.

Recently, OpenAI's board formed a Safety and Security Committee to make recommendations to the full board on critical safety and security decisions for the company's projects and operations. OpenAI has also said it will publicly share updates on the committee's recommendations.
