
Microsoft releases its first annual AI transparency report

Microsoft logo lit up by its halo

Microsoft has been at the forefront of generative AI development for the past year and a half. Working with partners such as OpenAI, the company has been adding its Copilot generative AI assistant to many of its products and services, with more to come. It has also been developing its own large language models, such as the recently announced Phi-3 family of lightweight LLMs.

All of this activity has also raised concerns that AI tools such as the ones Microsoft has developed could be used for unethical or even illegal purposes. Today, Microsoft announced in a blog post that it has issued the first of what is planned to be an annual transparency report on its responsible AI practices.

Brad Smith, the president of Microsoft, stated in the blog post:

This report enables us to share our maturing practices, reflect on what we have learned, chart our goals, hold ourselves accountable, and earn the public’s trust. We’ve been innovating in responsible AI for eight years, and as we evolve our program, we learn from our past to continually improve.

The report itself, available in PDF format, covers a number of areas where Microsoft has put responsible AI practices in place for its services, along with the training it gives employees to apply them. It states:

The 2023 version of our Standards of Business Conduct training, a business ethics course required companywide, covers the resources our employees use to develop and deploy AI safely. As of December 31, 2023, 99 percent of all employees completed this course, including the responsible AI module.

Microsoft also has a Responsible AI Council that meets on a regular basis, with the goal of continually improving the safety of its AI services. There's also a Responsible AI Champion program at the company, whose members are asked to find and solve problems with its AI products and to issue guidance to others on responsible practices.

The report also offers examples of how Microsoft's responsible AI programs affect the creation and development of its products. One of those examples involved Microsoft Designer, the AI image creation app that was reportedly used by unknown parties to make explicit deepfake images of pop singer Taylor Swift, which later went viral on the internet.

Microsoft said it asked the journalism group NewsGuard to enter prompts into Designer that would "create visuals that reinforced or portrayed prominent false narratives related to politics, international affairs, and elections." Initially, 12 percent of the images Designer created "contained problematic content." After Microsoft made changes to Designer to try to keep such images from being generated, that figure dropped to just 3.6 percent.
