Recently, explicit deepfake images of pop star Taylor Swift flooded the internet. Some reports claimed the images were created with Microsoft Designer. Microsoft officially stated it had seen no evidence of this, but added that it had made changes to the service "to strengthen our text filtering prompts and address the misuse of our services."
Today, Microsoft President Brad Smith wrote a post on the company's official blog. While he did not reference the Taylor Swift deepfake images or the alleged use of Microsoft Designer in creating them, Smith did state that the company sees "a rapid expansion in the abuse of these new AI tools by bad actors."
Smith outlined six specific areas Microsoft will focus on to combat this kind of "abusive AI-generated content." The first is strengthening the safe use of its AI tools through measures like blocking specific text prompts, ongoing testing, and quickly banning users who abuse the tools.
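Microsoft has not published how its prompt filtering works, so the following is only a minimal sketch of the general idea: screening a prompt against a blocklist before it ever reaches the image model. The terms and function name below are illustrative assumptions, not Microsoft's actual filter.

```python
import re

# Illustrative blocklist only; a production filter would combine far
# larger term lists with ML classifiers and human review. (Assumption:
# these terms and names are not from Microsoft's actual system.)
BLOCKED_TERMS = ("explicit", "nude")

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt matches any blocked term."""
    # Lowercase and strip non-alphanumerics so trivial obfuscation
    # such as "e.x.p.l.i.c.i.t" is still caught.
    normalized = re.sub(r"[^a-z0-9]", "", prompt.lower())
    return not any(term in normalized for term in BLOCKED_TERMS)

if __name__ == "__main__":
    for prompt in ("a watercolor of a lighthouse", "an expl icit photo"):
        verdict = "allowed" if is_prompt_allowed(prompt) else "blocked"
        print(f"{verdict}: {prompt}")
```

In practice this kind of keyword check is only the cheapest first layer; obfuscated prompts that defeat it are what the "ongoing testing" and user bans are meant to catch.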
Another area Smith said Microsoft is working on is identifying AI-generated content. He stated:
We are already using provenance technology in the Microsoft Designer image creation tools in Bing and in Copilot, and we are in the process of extending media provenance to all our tools that create or manipulate images.
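Microsoft's provenance work is built on the C2PA Content Credentials standard, which cryptographically binds a signed manifest (which tool created the image, and when) to the exact image bytes. The sketch below illustrates only that core bind-and-verify idea; it uses an HMAC stand-in rather than the X.509 certificate chains the real standard requires, and every name in it is an assumption for illustration.

```python
import hashlib
import hmac
import json

# Stand-in secret; real Content Credentials are signed with X.509
# certificates so anyone can verify them, not with a shared key.
SIGNING_KEY = b"demo-signing-key"

def attach_manifest(image_bytes: bytes, tool: str) -> dict:
    """Bind a provenance claim to the exact bytes of an image."""
    claim = {
        "tool": tool,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Reject the manifest if it was forged or the image was altered."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    untampered = (manifest["claim"]["image_sha256"]
                  == hashlib.sha256(image_bytes).hexdigest())
    return hmac.compare_digest(manifest["signature"], expected) and untampered

image = b"...raw image bytes..."
manifest = attach_manifest(image, "image creation tool")
print(verify_manifest(image, manifest))            # True
print(verify_manifest(image + b"edit", manifest))  # False: bytes changed
```

The key property this models is that any edit to the image invalidates the manifest, so a viewer can distinguish an intact, credentialed image from a stripped or tampered one.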
He added that the company is looking into "watermarking and fingerprinting techniques" for the future. Smith also said the company is working to remove abusive AI-generated content from its services such as LinkedIn and from its gaming services like Xbox.
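Smith did not describe which watermarking or fingerprinting techniques are under consideration. One common fingerprinting approach, shown here purely as a generic illustration rather than Microsoft's method, is a perceptual hash: visually similar images (resized or recompressed copies) produce hashes that differ by only a few bits, so known abusive content can be matched even after simple edits. The sketch assumes the Pillow imaging library; the threshold and names are illustrative.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Compute a 64-bit 'average hash' fingerprint of an image."""
    # Shrink to an 8x8 grayscale thumbnail, then record which pixels
    # sit above the mean brightness; this fingerprint survives
    # resizing and recompression, unlike an exact byte hash.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (pixel > mean)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two fingerprints disagree."""
    return bin(a ^ b).count("1")

# Usage sketch: quarantine an upload whose fingerprint is within a few
# bits of one already recorded in a (hypothetical) abuse database.
# if hamming_distance(average_hash("upload.png"), known_fingerprint) <= 5:
#     quarantine_upload()
```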
Smith said Microsoft also wants to collaborate with others in the tech industry, along with law enforcement, to combat AI-generated deepfakes. In addition, he said the company wants to work with governments around the world to establish new laws banning this kind of content.
Finally, Smith said Microsoft wants to help educate the public on identifying deepfake content made to deceive. He feels this will "require new public education tools and programs" to help people tell real content from fake.