Microsoft"s main theme for 2023, in case you have been away from Earth in the past several months, is developing AI products and services. That includes its Bing Chat chatbot AI, and its Copilot features that are coming to both Microsoft 365 and Windows 11, among other products.
However, there is growing uncertainty and even fear about how the rise of AI services in businesses could affect jobs, along with concerns about the misuse of AI to create "deep fake" content. Today, Microsoft announced a set of AI commitments for its third-party customers that it claims will make using these services better for businesses and enterprise users.
The blog post states these new commitments will come in three parts. The first part is sharing Microsoft's research into creating responsible AI systems:
We are committed to sharing this knowledge and expertise with you by publishing the key documents we developed during this process so that you can learn from our experiences. These include our Responsible AI Standard, AI Impact Assessment Template, AI Impact Assessment Guide, Transparency Notes, and detailed primers on the implementation of our responsible AI by design approach.
The second AI commitment is making sure businesses that run AI systems on Microsoft's platforms comply with government rules for running AI responsibly. That will include help with engaging government regulators and more.
The third AI commitment from Microsoft says the company will help develop responsible AI programs for its many third-party partners:
We will create a dedicated team of AI legal and regulatory experts in regions around the world as a resource for you to support your implementation of responsible AI governance systems in your businesses.
Microsoft is definitely one of the biggest researchers and leaders in creating AI products. These new commitments to help its many partner companies create responsible AI rules and products may go a long way toward easing fears about what AI might bring in the future.