Former OpenAI safety lead Jan Leike has joined rival firm Anthropic, where he will continue his work on superalignment and look for ways for humans to keep control of powerful AI systems.
OpenAI has announced that members of its Board of Directors have formed a new Safety and Security Committee, after reports earlier in May that the company had shut down its superalignment team.
Sam Altman and Greg Brockman have issued a response to Jan Leike, who raised questions over OpenAI's commitment to safety. The pair said they do care about safety. You can read the response here.
The superalignment team at OpenAI has been disbanded less than a year after it was created to ensure superintelligent AI doesn't slip out of human control. The move follows the departure of senior figures from the firm.
OpenAI is building a new Superalignment team to develop methods to train and steer artificial intelligence that surpasses human intelligence, as it worries existing training methods won't work for such systems.