Earlier this week, a large number of AI-generated deepfake images of pop artist Taylor Swift, most of them sexually explicit, began to flood X (formerly known as Twitter), further fueling fears of AI misuse.
As a result, the social network took the unprecedented step of blocking all searches for Taylor Swift for the time being, starting on Saturday.
The Wall Street Journal got confirmation of this decision from X's head of business operations, Joe Benarroch. He told the WSJ:
This is a temporary action and done with an abundance of caution as we prioritize safety on this issue.
The story adds that after the fake pictures started showing up on the social network, Swift's fans flooded X with real images of the singer and tried to report the accounts spreading the deepfake photos.
Before X's decision to block all searches for Taylor Swift, Microsoft CEO Satya Nadella weighed in on the flood of AI-made images of the singer in an interview for NBC News. He said:
Yes, we have to act . . . I think we all benefit when the online world is a safe world. And so I don’t think anyone would want an online world that is completely not safe for both content creators and content consumers. So therefore, I think it behooves us to move fast on this.
404 Media reports that the deepfake pictures of Taylor Swift spread on X were created in part by a group using Microsoft Designer, an AI-based image creator. Microsoft told NBC News that its own investigation of the situation has "not been able to reproduce the explicit images in these reports." However, it added, "Out of an abundance of caution, we have taken steps to strengthen our text filtering prompts and address the misuse of our services."