In July 2023, four tech giants (OpenAI, Microsoft, Google, and Anthropic) announced an industry body called the Frontier Model Forum. The Forum aims to ensure the safe and responsible development of generative AI, especially frontier AI models.
Today, the companies jointly announced the appointment of Chris Meserole as the first Executive Director of the Frontier Model Forum and the establishment of an AI Safety Fund, a more than $10 million initiative to promote research in the field of AI safety. Additionally, the Frontier Model Forum is sharing its first technical working group update on red teaming.
According to Google’s blog post, Chris Meserole most recently served as Director of the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution.
Now, as Executive Director, his role is to identify and share best practices for frontier models, advance knowledge among academics, policymakers, and civil society, and support efforts to leverage AI to address society’s biggest challenges.
Commenting on his appointment, Meserole said:
“The most powerful AI models hold enormous promise for society, but to realize their potential we need to better understand how to safely develop and evaluate them. I’m excited to take on that challenge with the Frontier Model Forum.”
As for the AI Safety Fund, major contributions have come not only from the four tech titans but also from the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Eric Schmidt (former CEO of Google and philanthropist), and Jaan Tallinn (Estonian billionaire).
The announcement also mentioned:
“The Forum is excited to work with Meserole and to deepen our engagements with the broader research community, including the Partnership on AI, MLCommons, and other leading NGOs and government and multi-national organizations to help realize the benefits of AI while promoting its safe development and use.”
Previously, the members of the Forum agreed to voluntary AI commitments at the White House, including a pledge to report vulnerabilities in AI systems and to support their third-party discovery. The Forum has since been working on establishing definitions of terms and processes to ensure common ground in future discussions.
As stated, the fund will prioritize the development of new model evaluations and techniques for red-teaming AI models, helping to develop and test evaluation methods for potentially dangerous capabilities of frontier systems.
In its first working group update, the Forum introduced a definition of red teaming as “A structured process for probing AI systems and products for the identification of harmful capabilities, outputs, or infrastructural threats.” It also shared case studies that illustrate the practice.
The blog post also outlined some of the Forum’s ongoing projects. It is developing a new responsible disclosure process that would allow frontier AI labs to share information about the discovery of vulnerabilities or potentially dangerous capabilities within frontier AI models, along with their mitigations. It also plans to establish an Advisory Board to help guide its strategy and priorities.
The fund, administered by the Meridian Institute, will issue a call for proposals in the next few months, with grants expected to follow shortly thereafter.