Microsoft has made several announcements today related to generative AI safety features. These include a new tool in Azure AI Content Safety designed to detect and correct hallucinated content in AI-generated output. Microsoft also announced that its Azure OpenAI Service would start adding invisible watermarks to images created in the service via DALL-E 3.
Microsoft is not yet done with AI safety announcements. Today, it also revealed that it has begun a public preview of the new Multimodal API in its Azure AI Content Safety service. The API detects harmful or inappropriate content in apps and services, whether that content was created by humans or by AI tools.
In a blog post, Microsoft stated:
The multimodal API accepts both text and image inputs. It is designed to perform multi-class and multi-severity detection, meaning it can classify content across multiple categories and assign a severity score to each one. For each category, the system returns a severity level on a scale of 0, 2, 4, or 6. The higher the number, the more severe the content.
The new Multimodal API is designed to detect harmful, unsafe, or inappropriate content in text and images, including emojis. That covers categories such as hate speech, violence, self-harm, and sexual content. Microsoft also says the API can catch that kind of content even when it arises only from the combination of text and image, where neither would appear harmful if viewed on its own.
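To make the request shape concrete, here is a minimal sketch of calling the preview API from Python with the requests library. The endpoint path, api-version, field names, and category list here are assumptions based on the preview announcement and the existing Azure AI Content Safety REST conventions, not a confirmed contract, so check Microsoft's documentation for the exact details:

```python
import base64
import requests

# Placeholders: substitute your own Azure AI Content Safety resource
# endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-content-safety-key>"

def analyze_image_with_text(image_path: str, text: str) -> dict:
    """Send a combined image + text payload for multimodal analysis."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    response = requests.post(
        # Assumed preview route and version; the actual path may differ.
        f"{ENDPOINT}/contentsafety/imageWithText:analyze",
        params={"api-version": "2024-09-15-preview"},
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={
            "image": {"content": image_b64},
            "text": text,
            # Also read any text embedded inside the image itself.
            "enableOcr": True,
            "categories": ["Hate", "Sexual", "Violence", "SelfHarm"],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

result = analyze_image_with_text("meme.png", "Caption shown next to the image")
# Assumed response shape: one entry per category, each with a severity
# of 0, 2, 4, or 6, where higher means more severe.
for item in result.get("categoriesAnalysis", []):
    print(item["category"], item["severity"])
```

The single request carrying both the image and its accompanying text is what lets the service score the combination rather than each part in isolation.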
Microsoft added:
By addressing these objectives, the multimodal detection feature ensures a safer and more respectful user environment where content generation is creative yet responsible.
Microsoft also says the new Multimodal API can detect harmful content quickly, so it can be blocked before it reaches users of apps or services.
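A sketch of what that blocking step might look like on the app side, built on the response shape assumed above; the severity threshold of 2 is an arbitrary app-level policy choice for illustration, not a Microsoft recommendation:

```python
# Hypothetical publishing gate: withhold anything scored at or above
# severity 2 in any category.
BLOCK_AT = 2

def is_safe_to_publish(analysis: dict) -> bool:
    return all(
        item["severity"] < BLOCK_AT
        for item in analysis.get("categoriesAnalysis", [])
    )

# Example with a mocked response: Violence at severity 4 trips the gate.
mock = {"categoriesAnalysis": [
    {"category": "Hate", "severity": 0},
    {"category": "Violence", "severity": 4},
]}
print(is_safe_to_publish(mock))  # False -> do not release to users
```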