The Oversight Board, a global body of experts that reviews Meta’s most difficult and significant decisions related to content on Facebook, Instagram, and Threads, picked two new cases for consideration. They both involve explicit AI-generated images of female public figures and their sharing on Meta’s popular social media platforms.
Today’s announcement suggests the Board will examine whether content moderation on these platforms is applied to the same standard worldwide, or whether significant regional differences call for a policy change. Specifically, the Oversight Board selected the following two cases for consideration:
“The first case involves an AI-generated image of a nude woman posted on Instagram. The image has been created using artificial intelligence (AI) to resemble a public figure from India. The account that posted this content only shares AI-generated images of Indian women. The majority of users who reacted have accounts in India, where deepfakes are increasingly a problem.
“The second case concerns an image posted to a Facebook group for AI creations. It features an AI-generated image of a nude woman with a man groping her breast. The image has been created with AI to resemble an American public figure, who is also named in the caption. The majority of users who reacted have accounts in the United States.”
Meta’s content moderation has long faced criticism for double standards, with more mistakes and longer review delays in countries outside the U.S. and in non-English-speaking regions.
The selection of these particular cases is therefore hardly coincidental. In the first case, a nude AI-generated image of an unnamed Indian female public figure was reported by a user for pornography. The report was closed automatically because it was not reviewed within 48 hours, and the subsequent appeal was automatically dismissed as well.
Meta removed the post only after the case was picked up by the Oversight Board. The company admitted that the image indeed violates its Community Standards related to bullying and harassment.
The second case is the mirror image of the Indian one. An AI-generated image posted by an American user was deleted, and the decision was upheld even after the user’s appeal. In fact, this was not the first removal of that particular image: a different user had posted it earlier. After the initial removal, the image was added to Meta’s Media Matching Service Bank, which automatically removes rule-violating images when they are reposted.
The Board now accepts public comments on the matter (until Tuesday, April 30) before it reaches its final decision.
The Oversight Board is an independent entity, yet its considerable authority, which has earned it the label of Meta’s “supreme court,” allows it to overturn the company’s decisions in individual cases. On top of that, the Board can issue policy recommendations to Meta. These recommendations are not binding, but Meta must officially respond to them within 60 days.