Major technology companies have made formal pledges to advance AI development safely. At a virtual global summit co-hosted by the leaders of South Korea and the UK, representatives from OpenAI, Microsoft, Google, and others committed to transparency and oversight of their work.
Sixteen companies acknowledged both the huge potential benefits of AI and the risks if it is misused or harms people. The practical steps they agreed to include publishing safety frameworks, declining to develop or deploy models whose risks cannot be adequately managed, and coordinating internationally with regulators. The goal is to ensure AI systems behave ethically and to avoid unintended harm as the technology evolves.
Joining the Western tech giants were companies from China, the Middle East, and South Korea, including Zhipu.ai (a Chinese firm backed by Tencent, Meituan, and Xiaomi), the UAE's Technology Innovation Institute, and Samsung. Their researchers and engineers will evaluate systems for bias and other issues that could disadvantage particular groups, and all of the signatories intend to monitor their AI models closely and gather multiple perspectives on risks before deployment.
Meanwhile, political leaders from the US, the EU, Australia, and elsewhere endorsed the pledges and planned further meetings to maintain progress. While voluntary commitments are a good start, politicians believe some regulation will also be needed down the line.
This step is especially noteworthy for OpenAI, which has faced controversy over AI safety in recent days. Just last week, it disbanded its "superalignment" team, which was focused on making sure AI acts safely.
This news comes after two key figures, Ilya Sutskever and Jan Leike, left OpenAI. Leike said on X that he was leaving because he felt the company wasn't taking safety seriously enough, and that the safety team had struggled to get the resources it needed to do its work.
Sam Altman and Greg Brockman subsequently responded to Leike's concerns about AI safety, assuring the public that the company is committed to "rigorous testing, careful consideration, and world-class security measures."
Source: Reuters