Facebook has released its Community Standards Enforcement Report, which details the actions the firm has taken against content and accounts that aren't allowed on its platform, covering graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts.
Discussing the report, Alex Schultz, VP of Analytics, said:
“Today’s report gives you a detailed description of our internal processes and data methodology. It’s an attempt to open up about how Facebook is doing at removing bad content from our site, so you can be the judge. And it’s designed to make it easy for scholars, policymakers and community groups to give us feedback so that we can do better over time.”
Some of the highlights of the report include:
- 837 million pieces of spam were removed in Q1 2018, all of which were found and flagged by Facebook’s systems before anyone even reported them.
- The firm disabled about 583 million fake accounts, which were taken down within minutes of registration. Current estimates by the firm suggest that 3-4% of active Facebook accounts on the site between October 2017 and March 2018 were fake.
- It took down 21 million pieces of content featuring adult nudity and sexual activity in Q1 2018; the company’s systems had flagged 96% of it before it was reported.
- With regard to graphic violence, 3.5 million items were removed in Q1 2018, 86% of which were flagged before being reported.
The last statistic that Facebook highlighted was hate speech; the firm admitted its technology isn’t yet very good at detecting it, so such content is still checked by human review teams. Some 2.5 million pieces of hate speech were removed in Q1 2018, only 38% of which were flagged by automated systems.
Facebook admitted that the release of the report was a trust-building exercise, an attempt to restore trust that has been sorely lacking since the Cambridge Analytica scandal.
Source: Facebook