Microsoft has been quite busy launching AI-based software and services. In March, it officially unveiled Bing Image Creator, which lets users make art with just a few text prompts. In April, it launched the full public preview of Microsoft Designer, which lets people create projects such as blog posts, websites, and more using text prompts and its AI model.
However, there are widespread concerns that AI-based art creation tools could be used by hostile groups, and even countries, to spread misinformation. With more and more "deepfake" images and videos being made, Microsoft is taking a proactive stance to make sure AI art from its programs can be identified.
Today, as part of Microsoft's Build 2023 developers conference, the company announced that it will soon add a feature to Bing Image Creator and Microsoft Designer that gives anyone a way to check whether a piece of art or a video clip was made by AI. It stated:
The technology uses cryptographic methods to mark and sign AI-generated content with metadata about its origin. Microsoft has been a leader in R&D on methods for authenticating provenance and co-founded Project Origin and the Coalition for Content Provenance and Authenticity (C2PA) standards body. Microsoft’s media provenance will sign and verify generative content in accordance with the C2PA standard.
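For readers curious about how cryptographic provenance signing works in principle, the sketch below illustrates the core idea: a small manifest describing the content's origin is bound to a hash of the file bytes and digitally signed, so any later tampering with the content or the manifest invalidates the signature. This is only a simplified illustration using Python's cryptography package, not the actual C2PA implementation; the real standard defines its own manifest format (JUMBF containers, CBOR assertions) and certificate-based trust chains, and Microsoft has not published implementation details.

```python
# Simplified, illustrative provenance signing -- NOT the real C2PA format.
import json
import hashlib

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Hypothetical signing key; a real C2PA signer would use an X.509
# certificate issued to the generator (e.g., the AI service).
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

def sign_content(content: bytes, origin: str) -> dict:
    """Build and sign a minimal provenance manifest for `content`."""
    manifest = {
        "origin": origin,  # e.g., "AI-generated"
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = private_key.sign(payload, ec.ECDSA(hashes.SHA256()))
    return {"manifest": manifest, "signature": signature.hex()}

def verify_content(content: bytes, record: dict) -> bool:
    """Check the manifest matches the content and the signature holds."""
    manifest = record["manifest"]
    if hashlib.sha256(content).hexdigest() != manifest["content_sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]),
                          payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

image_bytes = b"...generated image bytes..."
record = sign_content(image_bytes, origin="AI-generated")
print(verify_content(image_bytes, record))          # True
print(verify_content(image_bytes + b"x", record))   # False: tampered
```

The key property, which C2PA shares with this toy version, is that verification needs only the public key and the embedded manifest, so any viewer or platform can check a file's origin without contacting the service that generated it.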
Microsoft said the feature will support "major image and video formats" in both of its AI content creation tools, though it did not specify which formats will carry the metadata. The company also did not give a specific launch date, saying only that the feature will be added in the coming months.
This news comes after Microsoft announced earlier this month that Bing Image Creator now supports over 100 languages. The company also noted that over 200 million images have been created with the tool.
If you want to read more, you can find the rest of the Build 2023 coverage here.