OpenAI"s DALL-E models can generate images based on text prompts. Azure OpenAI Service provides REST API access to OpenAI"s DALL-E models, allowing developers to create images programmatically within their applications. Microsoft today announced a new built-in feature in Azure OpenAI Service called "Watermarks." The new Watermarks feature will add invisible watermarks to all images generated using DALL·E to offer improved transparency and protection for AI-generated images.
The recent rise of image generation models has accelerated the spread of disinformation and AI-generated deepfakes, making the ability to identify AI-generated content essential. The new invisible watermark embedded in AI-generated images can be identified by specialized detection tools but is not visible to the naked eye, preserving the image's fidelity. The watermark also remains intact even if someone resizes or crops the image.
The invisible watermarks carry information about the origin of an image, represented as a manifest attached to the image. The manifest is cryptographically signed by a certificate that traces back to Azure OpenAI Service and includes the following details (a sketch of the resulting record follows the list):
- "description" - This field has a value of "AI Generated Image" for all DALL-E model generated images, attesting to the AI-generated nature of the image.
- "softwareAgent" - This field has a value of "Azure OpenAI DALL-E" for all images generated by DALL-E series models in Azure OpenAI Service.
- "when" - The timestamp of when the Content Credentials were created. Watermarks in other Azure AI services
## Watermarks in other Azure AI services

This is not the first Microsoft service to embed watermarks. Last year, Microsoft introduced watermarks for voices created with the Azure AI Speech personal voice feature, allowing users to identify whether speech was synthesized using Azure AI Speech.
Microsoft is also working with other major AI players, including Adobe, Truepic, and the BBC, to ensure that watermarking, cryptographic metadata, and other detection mechanisms work across platforms.
This initiative by Microsoft marks a significant step towards responsible AI deployment and combating the misuse of AI-generated content.