In July, representatives from seven tech giants – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – met with the Biden-Harris administration to discuss the responsible development of AI. Today, the White House announced that eight more tech companies have volunteered to work on AI risks.
Adobe, IBM, Nvidia, Palantir, Stability AI, Salesforce, Scale AI, and Cohere joined the existing signatories to the principles put forward by President Biden. The commitments require signatory companies to take steps such as watermarking or labeling AI-generated media so that people can tell it was not created by a human.
The AI companies also commit to promoting fairness, non-discrimination, transparency, privacy, and security when working with AI.
White House chief of staff Jeff Zients praised the additional companies for joining the effort, saying Biden has made harnessing AI's benefits while managing its risks a top priority.
Today, these eight leading AI companies are making the following commitments:
- The companies commit to internal and external security testing of their AI systems before their release.
- The companies commit to sharing information across the industry and with governments, civil society, and academia on managing AI risks.
- The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.
- The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems.
- The companies commit to developing robust technical mechanisms to ensure that users know when content is AI-generated, such as a watermarking system (see the illustrative sketch after this list).
- The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use.
- The companies commit to prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy.
- The companies commit to developing and deploying advanced AI systems to help address society's greatest challenges.
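One of the commitments above calls for technical mechanisms, such as watermarking, that signal when content is AI-generated. As a rough illustration of the simplest form of labeling (not a robust watermark, and not any specific company's method), the following Python sketch attaches an "ai-generated" provenance tag to a PNG's text metadata using Pillow. The file names and tag keys are hypothetical; real deployments would lean on standards such as C2PA content credentials or cryptographic watermarking rather than plain, easily stripped metadata.

```python
# Minimal illustration of labeling AI-generated media via image metadata.
# This is ordinary metadata, not a tamper-resistant watermark; the file names
# and tag keys below are hypothetical examples for illustration only.
from PIL import Image, PngImagePlugin


def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image, adding text chunks that mark it as AI-generated."""
    img = Image.open(src_path)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai-generated", "true")
    meta.add_text("generator", generator)
    img.save(dst_path, pnginfo=meta)


def is_labeled_ai_generated(path: str) -> bool:
    """Check whether an image carries the 'ai-generated' text chunk."""
    return Image.open(path).text.get("ai-generated") == "true"


if __name__ == "__main__":
    label_as_ai_generated(
        "model_output.png", "model_output_labeled.png",
        generator="example-image-model",
    )
    print(is_labeled_ai_generated("model_output_labeled.png"))  # True
```

Because such labels can be removed by simply re-encoding the file, the commitment speaks of "robust" mechanisms, which in practice means embedding signals in the content itself or signing provenance data cryptographically.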
However, many people remain wary of the rapid rise of AI services, and the voluntary nature of these commitments is widely viewed as a stopgap until formal regulation is in place.
In late August, Brad Smith, Microsoft's vice chair and president, echoed that view by calling for a "regulatory scheme" to ensure that AI remains under human control. The company also released a blueprint for how it believes AI should be governed.