Microsoft and Andreessen Horowitz (A16z) today published a joint philosophy and policy statement regarding AI. The statement was attributed to Satya Nadella (Chairman and CEO, Microsoft), Brad Smith (Vice-Chair and President, Microsoft), Marc Andreessen (Co-founder and General Partner, Andreessen Horowitz), and Ben Horowitz (Co-founder and General Partner, Andreessen Horowitz). Both companies believe that small tech companies and large tech companies can work together to build a broader innovation ecosystem and collaborate on public policy initiatives.
Additionally, Microsoft and A16z believe in open-source AI. This is a surprising stance from Microsoft, given its close relationship with the closed-source AI vendor OpenAI. The companies want regulators and decision-makers to embrace a regulatory framework that protects open-source AI. Furthermore, Microsoft and A16z expect governments to participate in and lead Open Data Commons efforts by releasing datasets required for AI in the public interest.
The US government plans to restrict AI models amid growing concerns about the potential misuse of the technology. While the policy aims to safeguard national security, it's important to consider the potential impact on startups and the broader AI ecosystem. Striking the right balance between security and innovation will be crucial for the future of AI development in the US.
Microsoft and A16z offered the following policy ideas to help AI startups thrive, collaborate, and compete with big tech companies such as Google and Amazon.
- Regulation that promotes opportunity for U.S. businesses: U.S. AI laws and regulations should support the global success and proliferation of U.S. technology companies by promoting access and opportunity. This can be done through a science- and standards-based approach, with regulatory frameworks focused on the application and misuse of technology. Regulation should be implemented only if its benefits outweigh its costs, and in accounting for costs, policymakers should assess the burden that unnecessary bureaucracy places on startups. As the new global competition in AI evolves, laws and regulations that mitigate AI harm should focus on the risk of bad actors misusing AI and avoid creating new barriers to business formation, growth, and innovation.
- Competition and choice: enabling choice and broad access fosters AI innovation and competition. Regulators should permit providers to offer a broad array of models, proprietary and open source, large and small, and should give developers and startups the flexibility to choose which AI models to use wherever they are building solutions, without tilting the playing field to advantage any one platform. Developers should have the freedom to choose how to distribute and sell their AI models, tools, and applications for deployment to customers.
- Open-source innovation: open-source software provides immense value to our economy by catalyzing the innovation ecosystem. It gives tech companies, big and small, the ability to build the next innovation quickly, along with a wide array of tools for developing software safely, securely, and competitively. We believe the same is true for open-source AI models. They increase choice and allow startups to more easily develop fine-tuned systems and applications. The free availability and performance of these models allow startups to access, use, and benefit from AI by modifying it to suit their conditions and diverse needs. Open models also offer the promise of safety and security benefits, since they can be more widely scrutinized for vulnerabilities. Regulators and decision-makers should embrace a regulatory framework that protects open source and secures the ability of entrepreneurs, startups, and companies to create, build, transform, and win the future.
- Open data commons: data is a critical input for all AI developers. There is a role for government to enable and craft policies that support a thriving and growing ecosystem of data around the globe through Open Data Commons: pools of accessible data managed in the public's interest. Governments should participate in and lead this effort by releasing data sets in ways that are useful for AI, together with cultural institutions and libraries, and should ensure that startups can easily access these data pools.
- The right to learn: copyright law is designed to promote the progress of science and useful arts by extending protections to publishers and authors to encourage them to bring new works and knowledge to the public, but not at the expense of the public’s right to learn from these works. Copyright law should not be co-opted to imply that machines should be prevented from using data—the foundation of AI—to learn in the same way as people. Knowledge and unprotected facts, regardless of whether contained in protected subject matter, should remain free and accessible.
- Invest in AI: the U.S. government should invest in AI to accelerate American innovation, strengthen our national security, and create economic opportunity. As part of this investment strategy, the government should examine its procurement practices to enable more startups to sell technology to the government.
- Help people thrive in an AI-enabled world: building a new AI economy that supports startups and American entrepreneurship will require public policy that cultivates technical talent and engages digital citizens. To that end, policy should fund digital literacy programs that help people understand how to use AI tools to create and access information. It should also support workforce skill development and retraining programs to help people secure jobs in an AI-driven economy.
The collaboration between Microsoft and A16z signals a significant push for policies that foster AI innovation while addressing potential risks. Their focus on open source and accessibility could shape the future of AI development and deployment.