Representatives from seven tech giants – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – met with the Biden-Harris administration to discuss the responsible development of artificial intelligence. The tech companies sought to reassure the administration that they are taking ethical considerations seriously as they race to make progress in AI.
The White House meeting comes amid growing concern that advances in AI could be misused or have unintended negative consequences.
Powerful AI systems are now capable of generating text, images, audio, video, and code in very convincing ways. While this technology promises many benefits, some fear it could also enable the spread of misinformation and be used to manipulate people.
Following the meeting, Microsoft emphasized the importance of building AI that is safe, secure, and beneficial to society. It aims to ensure its AI systems are fair, reliable, trustworthy, inclusive, transparent, and accountable. To achieve this, the company has established principles and guidelines for AI development.
In a blog post, Microsoft stated:
By moving quickly, the White House’s commitments create a foundation to help ensure the promise of AI stays ahead of its risks. We welcome the President’s leadership in bringing the tech industry together to hammer out concrete steps that will help make AI safer, more secure, and more beneficial for the public.
Google also recently published a proposal for designing inclusive AI systems that avoid unfair bias. The company believes in taking a bold yet thoughtful approach to AI innovation.
OpenAI has emphasized a cautious approach to releasing its technology and thinking through potential dangers. Even so, regulators at the FTC have urged the company to be more transparent about its safeguards.
These seven leading AI companies are committing to:
- Internal and external security testing of their AI systems before their release.
- Sharing information across the industry and with governments, civil society, and academia on managing AI risks.
- Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.
- Facilitating third-party discovery and reporting of vulnerabilities in AI systems.
- Developing robust technical mechanisms to ensure that users know when content is AI-generated.
- Publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use.
- Prioritizing research on the societal risks that AI systems can pose.
- Developing and deploying advanced AI systems to help address society’s greatest challenges.
Meanwhile, Meta and Microsoft collaborated on the next generation of Llama. In its blog post, Microsoft said that "Llama 2 is designed to enable developers and organizations to build generative AI-powered tools and experiences."
The White House has made it clear that it wants to work closely with tech companies to promote responsible AI innovation. It has called for guardrails to be built into AI systems to protect privacy, civil rights, and national security.