YouTube has been working to change its policies and community guidelines over the past few years in order to help keep the community healthy and safe for everyone. Those efforts have produced a number of measures, including a recent change to the platform's stance on hate speech, which was announced in early June.
Today, the company shared a history of the changes it has made since 2016, including updates to policies, the creation of new review teams, and the implementation of machine learning to detect content that violates YouTube's guidelines. Specifically, the company focused on measures to reduce hateful content on the platform.
Google recently published the enforcement report for YouTube's community guidelines for the second quarter of 2019, which ended in June. Notably, even though the updated hate speech policy only took effect in early June, the number of channels, videos, and comments removed for that reason skyrocketed compared to the previous quarter.
The number of channels deleted for hateful content rose more than fivefold, from just 3,379 to 17,818. Similarly, the number of individual videos removed for hateful content increased more than fivefold, from 19,927 to 111,185. The number of comments deleted nearly doubled, surpassing 500 million, a jump YouTube attributes to the expanded removal of hateful comments. The company also said it has gotten faster at detecting and responding to these issues, with more than 80% of offending videos removed before receiving any views.
Of course, hateful content isn't the only thing YouTube has been attempting to reduce: the company says it deleted about 50% more spam channels in the second quarter than in the first, thanks to an update to its automated detection systems.
Looking forward, Google is also promising to update its harassment policy in the coming months. The company first promised work on this front back in April.